From time to time I use a certain service: I need to upload some files there (its name doesn't matter because, frankly, they are all the same). Basically, I point it at a folder on my hard disk and its contents get copied to a remote server, which is presumably doing something database-related: the files are given names, and checks are made on who downloads them.
The service is owned by a large company, so its processes are large-scale. It probably gets attacked a lot, so some protection is required, as is checking that no one has modified the files between the upload from my PC and their arrival on the server. I understand all this.
... but in essence, all you need to do is register a few files, read them, upload them, then close the connection and write to a log whether everything went well, and if not, what exactly happened. There is nothing complicated about this, and I once wrote similar code from scratch using the WinInet API and PHP on a server connected to my MySQL database. Perhaps my system was not as reliable as an enterprise-level one, but it handled hundreds of thousands of uploaded files, their verification, downloading, and logging. That's a job for one coder for two or three weeks, isn't it?
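To give a sense of scale, the receiving side of what I'm describing boils down to something like the sketch below. This is a simplified illustration of the idea, not my original code; the endpoint, table, and field names are made up:

```php
<?php
// Sketch of a minimal upload endpoint: receive a file over HTTPS, verify its
// checksum, store it under a server-assigned name, and log the outcome in MySQL.
// (Hypothetical names and schema; real code would add auth and size limits.)

$pdo = new PDO('mysql:host=localhost;dbname=uploads', 'user', 'password');

$file         = $_FILES['file'] ?? null;
$expectedHash = $_POST['sha256'] ?? '';          // checksum computed by the client

if ($file === null || $file['error'] !== UPLOAD_ERR_OK) {
    http_response_code(400);
    exit('upload failed');
}

// Check that no one has modified the file between my PC and the server.
$actualHash = hash_file('sha256', $file['tmp_name']);
$ok = hash_equals($expectedHash, $actualHash);

if ($ok) {
    // Give the stored file a name and move it out of the temp directory.
    $storedName = bin2hex(random_bytes(16));
    $ok = move_uploaded_file($file['tmp_name'], __DIR__ . '/storage/' . $storedName);
}

// Log whether everything went well, and if not, what exactly happened.
$stmt = $pdo->prepare(
    'INSERT INTO upload_log (original_name, stored_name, sha256, status) VALUES (?, ?, ?, ?)'
);
$stmt->execute([
    $file['name'],
    $ok ? $storedName : null,
    $actualHash,
    $ok ? 'ok' : 'upload rejected',
]);

http_response_code($ok ? 200 : 422);
echo $ok ? 'ok' : 'upload rejected';
```

The client side is the same story in reverse: open a connection, send the file and its hash, read the response, log the result.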
The particular file upload tool I use today ships a total of 230 MB of client files and uses 2,700 files to manage this process.
You might think that's a typo, but there is no mistake: two thousand seven hundred files and 230 MB of executables and supporting files to copy a handful of files from the client to the server. This is no longer bloatware or overengineering; it is absolute, obvious, visible madness.
The problem is that, most likely, this uploader is no different from any similar modern software created by any other large company. And by the way, it gives error messages and does not work at the moment.
I have seen coders doing this. I know how it goes. It's not just that coders don't write low-level, efficient code to achieve their goal: they've simply never seen low-level, efficient, well-written code. How can we expect them to create something better if they don't even realize it's possible?
You can write a program that uploads files to a server securely, quickly, and reliably in a twelfth of that amount of code. It could be just one file, a single small .exe. It doesn't need hundreds of DLLs. Not only is it possible, it's easy, and the result is more reliable, more efficient, and easier to debug, and it actually works.
You might think that old programmers in their fifties (like my father) complain about bloated code because they are obsolete and grouchy. And I get that. But the obsolete and grouchy complain about code that is 50% slower than it should be, or 50% larger than it should be. The situation has gone far beyond that. We've reached a point where I sincerely believe that 99.9% of the code in the files on our PCs is completely useless and never gets executed. The code just sits there in a package of 65 DLLs, simply because a coder wanted to do something trivial like store a bitmap, had no idea how easy that could be, and imported a whole pile of bloatware to solve the problem.
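To show what I mean by "how easy it could be", here is a sketch, purely my own illustration with a made-up function name, of writing an uncompressed 24-bit BMP by hand in PHP. No image library, no DLLs, just the file format:

```php
<?php
// Minimal sketch (hypothetical helper): write a 24-bit uncompressed BMP by hand.

function write_bmp(string $path, array $pixels, int $width, int $height): void
{
    // Each row is padded to a multiple of 4 bytes; pixels are BGR, rows bottom-up.
    $rowSize   = ($width * 3 + 3) & ~3;
    $imageSize = $rowSize * $height;
    $fileSize  = 54 + $imageSize;                 // 14-byte file header + 40-byte info header

    $header  = 'BM';
    $header .= pack('V', $fileSize);              // total file size
    $header .= pack('V', 0);                      // reserved
    $header .= pack('V', 54);                     // offset of pixel data
    $header .= pack('V', 40);                     // BITMAPINFOHEADER size
    $header .= pack('V', $width);
    $header .= pack('V', $height);
    $header .= pack('v', 1);                      // colour planes
    $header .= pack('v', 24);                     // bits per pixel
    $header .= pack('V', 0);                      // no compression (BI_RGB)
    $header .= pack('V', $imageSize);
    $header .= pack('V', 2835) . pack('V', 2835); // ~72 DPI in pixels per metre
    $header .= pack('V', 0) . pack('V', 0);       // palette info (unused for 24-bit)

    $body = '';
    for ($y = $height - 1; $y >= 0; $y--) {       // BMP stores rows bottom-up
        $row = '';
        for ($x = 0; $x < $width; $x++) {
            [$r, $g, $b] = $pixels[$y][$x];
            $row .= chr($b) . chr($g) . chr($r);
        }
        $body .= str_pad($row, $rowSize, "\0");   // pad row to a 4-byte boundary
    }

    file_put_contents($path, $header . $body);
}

// Usage: a 2x2 test image (red, green / blue, white).
write_bmp('tiny.bmp', [
    [[255, 0, 0], [0, 255, 0]],
    [[0, 0, 255], [255, 255, 255]],
], 2, 2);
```

That's the whole "bitmap library": about thirty lines, most of them header fields.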
Like I said, I really shouldn't get mad at young programmers for this. That is how they were taught. They have no idea what high performance or development under constraints is. It may seem strange that a girl of 25 is talking about this, but I had enough wise mentors to show me really beautiful code. My father told me that the first Elite in 1984 had a huge galaxy, 3D space combat, a career progression system, trading, and thousands of planets to explore, and the whole game was 64 KB. Modern programmers may hear this, but they don't grasp the gulf between that and what we have today.
Why is this important to me?
This worries me for a lot of reasons, not least because if you need two thousand times more code to complete a task, then at the very least it should work. But more importantly, I realize that 99.9% of the CPU time on my huge, powerful PC is simply wasted. Computers today are so fast that ten years ago they would have seemed like absolute magic. Anything you can imagine should happen in 1/60th of a second. Yet when I press the volume icon on my Microsoft Surface laptop, I see a delay: the machine gradually creates a new user interface element, figures out which icons to draw, and only then do they appear and become interactive. It takes about half a second, which is close to a billion years on a processor's time scale.
If right now (conservatively) 99% of our PC's resources are wasted, then we're wasting 99% of the computer's energy. This is absolutely criminal. And what is it all spent on? If you look in the task manager, you'll see bloated software nonsense doing god knows what. I'm just typing this blog post, and Windows has 102 background processes running. My NVIDIA graphics card currently owns six of them, and some of them have subtasks. To do what? I'm not playing a game; I'm using almost the same set of video card driver functions my father did twenty years ago, but for some reason it takes six processes.
Microsoft Edge WebView also needs 6 processes, just like Microsoft Edge itself. And I don't even use Microsoft Edge. It seems I opened an SVG file yesterday, and there you go: 12 useless pieces of code are wasting memory and probably polling the CPU as well.
This is absolute madness. It's the reason nothing works, everything is slow, and you have to buy a new smartphone every year and a new TV just to run bloated streaming apps that hide equally bad code.
Personally, I think it's only going to get worse, because big tech companies like Facebook, Twitter, Reddit, etc. are the worst examples of this trend. Soon, each of the thousands of "programmers" working for these companies will be using machine learning to copy-paste bloated, buggy, sprawling GitHub stuff into their code. Just to add two numbers, they'll need 32 DLLs, 16 Windows services, and a billion lines of code.
Twitter has 2,000 developers. More precisely, it did until Elon Musk came along. TweetDeck sometimes refuses to load the user column. This has been going on for 4 years now. I'm sure none of the coders has any idea why it happens. And the code at its core, as my dad says, is just a bunch of bloated, copy-pasted ****.
When suggesting a post title from a link, Reddit can't handle the ampersand, the semicolon, or the pound sign. It's 2022. The company probably also has 2,000 developers, and apparently none of them can get a text parser to work correctly. What are all these people doing?
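Decoding those HTML entities is a single standard-library call in most languages. A minimal sketch in PHP (my illustration, not Reddit's code; the example title is made up):

```php
<?php
// Hypothetical title scraped from a linked page, full of HTML entities.
$scrapedTitle = 'Fish &amp; Chips &#8211; now only &#163;5';

// One call decodes ampersands, numeric references, and the rest.
$suggested = html_entity_decode($scrapedTitle, ENT_QUOTES | ENT_HTML5, 'UTF-8');

echo $suggested; // Fish & Chips – now only £5
```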
Once upon a time, there was a “golden age” of programming when there were limits on memory and CPU. Today, we live in an ultra-wasteful pit of inefficiency. This is very sad.
Thanks for reading! I hope you have found my reflections interesting and that you now have some questions to consider. Feel free to leave any comments and write if you agree with my opinion.
Top comments (104)
I don't know what it's like now, but when I ran "hello world" in React a couple of years ago it extracted a quarter of a gigabyte of files.
I mostly agree with you (and as I'm in your father's age range that might not be surprising) but I do recognise that everything we do is standing on the shoulders of those who came before. We have things like Electron to genericise making applications and while everyone knows it's bloated, it works, and it works for almost everyone because it does everything you don't need in case someone else does.
I find the weird background tasks applications use upsetting as well. I have no idea why every proprietary application needs to run at startup and have "update" and "maintenance" processes running when I'm not trying to use it. It's another reason to steer clear of proprietary software wherever possible; free software almost never does this, or if it does it asks you first.
The answer to this is simple, whether you like it or not: it's the users. They expect the software to be updated automatically. They never do it manually, but they are the first to complain that something isn't working on their not-updated-for-two-years version.
Isn't that what package management is for, though? It's not the job of an application to handle things like window decorations or storage quotas... or updates.
It is. Does Windows have a package manager that is popular among its users? Isn't the most popular way to install things on macOS to just download a dmg from a website?
On Mac, it's the App Store or Homebrew (third party) (or MacPorts, if that's still maintained). On Windows, it's the MS Store or Chocolatey (third party) (I think?)
I don't think many people download .exe or .dmg files any more.
Literally nobody is using Chocolatey or the MS Store. Some people use the Mac App Store, but most applications aren't there. Only developers use Homebrew or MacPorts. Most people download from websites.
@moopet When there are tens of thousands of software packages in a distribution, updating a program with hundreds of dependencies becomes quite difficult, especially for non-free software. Yes, things like snap and flatpak are used to solve this problem, but they completely eliminate the advantages of shared libraries, which are designed to avoid code duplication.
@katafrakt it doesn't matter for the purposes of this whether many people use it or not, it's the only practical way for applications to behave unless (as @mariamarsh notes) we all move to statically-linked behemoths.
Oh yeah, of course, if reality doesn't matter, keep wondering why every application has this kind of background process.
The question is, "what's wrong with code in 2022" and the culture of downloading separate things for everything is part of that.
It works, but most of the time it sucks. And these days the user does not care what the software is written in, as long as it works. For users, the quality bar has already dropped through the floor, so when they find something that runs quickly even on old hardware and doesn't take up a hundred megabytes, they are genuinely surprised.
Imagine if almost all popular instant messaging clients were written in Word macros and shipped with a copy of Word. In my opinion, Electron is just as ridiculous.
But despite the general disappointment that so much software is built on Electron, it really is still the leader in cross-platform desktop development... and there are certain reasons for that which other frameworks haven't solved.
Electron got so popular because it meant you could write an application once for the web and then never have to care about cross-platform compatibility because that stuff is all taken care of for you. It had a great developer experience and a good-enough user experience.
I personally regret the rise of Electron because it's led to slow, bloated applications that take half a gigabyte of memory instead of a few megabytes, but at the same time I understand why it's gotten so popular.
It's kind of like money -- the less you have, the more conservative you are with it. The more you have, the more you get conditioned to irresponsibly spend it and you start to care less about how much you are spending. It's not responsible, necessarily, but you've got enough to the point where you don't really care that you could be spending less; and I think that's kind of what's happened with the current problem of software bloat.
Yes, exactly as you say
Reasonable alternative to Electron:
github.com/cubiclesoft/php-app-server
Produces similar results at a fraction of the size (e.g. 85KB app size for Linux). And PHP is available to write code for the server side of the app.
How popular is this product? I hadn't heard of it before, but it looks interesting. Are there any examples of software built on it? Which features did you personally use when working with it?
P.S. Sorry, I did not notice that you are a representative of this software.
That's okay. It's obviously not as popular as Electron but a number of people have clicked star/fork, so it's slowly gaining traction. I don't advertise my software very often and just expect people to run into my stuff. As to software that uses PHP App Server, there's this:
file-tracker.cubiclesoft.com/
A commercial project I wrote and actively use/maintain.
Thanks for sharing 😊
I'll be sure to explore your products, and hope you'll find your audience 😌
I checked my "hello world" / "MMORPG PoC" application written in Next.js (React + a backend): its memory footprint was 2.5 MB. That still counts as huge, but I think it's quite fine in our memory-gobbler times.
./server size: 256 KB
./static (graphic files, maybe a few unwanted ones still left): 1.3 MB
The worst part is the node_modules folder: 350 MB.
This way of programming is a far cry from my Z80 assembly game code written around 1987 on a Videoton TV Computer.
We could use far less code if browsers supported other languages besides JavaScript
Actually, you can write for the browser in any language by compiling it to WASM/JS/HTML. By the way, even a plain JS/HTML application also needs to be compiled to a proper JS target, mainly ES5, for compatibility reasons.
I think today's browsers are such complex applications, with tons of unused APIs (surprise!), that any extra language would only raise this complexity unnecessarily.
The good question is:
Why can't WASM reach these APIs and the HTML page without any JS glue code?
Thanks for sharing 🌈
Just look at the calculator, which used to require less than a megabyte, but now it eats up more than 150 MB of RAM at a time with the same functionality.
I don't know, maybe in Photoshop this is justified: there is genuinely new, useful, and diverse functionality, even if the code implementing it is bloated. But in the case of a calculator, it's really just indecent.
A 20-year-old calculator did not have to think about HiDPI, 4K, multi-monitor configurations, touch control, support for a dozen interface themes, cloud synchronization, loading exchange rates from the Internet, and a whole lot of functionality that an individual user may not need but other users do, and writing millions of versions for each of them is unprofitable.
The thing is, a modern calculator does not have to think about that either. All those themes/touch/HiDPI concerns are handled by the operating system's libraries. And in terms of functionality it hasn't gone far beyond a 20-year-old calculator, yet it eats resources like AutoCAD or MATLAB did 20 years ago.
It is precisely the libraries that are part of the calculator that occupy those megabytes
They are part of the OS, why separate them? And it is highly doubtful that touch control support will increase the code by 40-50 MB.
This is the characteristic mindset of some modern programmers, justifying bloat with supposed complexity: all that "touch/HiDPI/Unicode" talk is just a mantra to stop thinking. "What are you doing? Don't go there! It's difficult, there's touch, HiDPI, currency conversion and some other important things!"
SpeedCrunch: 4MB RAM. Has a portable version in the Portable Apps platform.
It does way more than Windows Calculator. The only thing it doesn't do is graph equations. For graphing, I will occasionally fire up:
desmos.com/calculator
I still run Calculator though when I need "as soon as I type it in" conversions between bases (binary, decimal, octal, and hex). That's something SpeedCrunch doesn't do very well.
They look like really useful tools, thanks for the recommendation 🤗
4MB are much better than 150
I have a better example. To turn off the RGB lighting of my Aorus graphics card (Gigabyte vendor), I need to install 0.5 GB of software. It's that kind of shitty software... 400 megabytes just to turn off the RGB.
It's funny. How much does the installer weigh? And how much space does the installed utility take up?
The RGB Fusion 2.0 installer is 253 MB.
During install and uninstall, the program says it takes up 170 MB.
In fact, two utilities are needed there, and as I understand it, the second one used to be separate and then became a plugin for the first. 0.5 GB is the total weight of the installers.
Can't you physically disconnect the RGB cable?
In my case, the card must be disassembled for this.
It would suck to lose the warranty.
It's a pity... on some of them, the wire can be traced and unhooked without unscrewing a single screw.
I have a Gigabyte card like this too. I'm pretty sure you can install the software, use it, then uninstall it and the light settings will remain (not really the content of this thread, I know).
When I mentioned .25GB for React, I was mostly talking about the total size of files it added to my
node_modules
directory. The "installer" was technically only a few MB, the compressed files it downloaded might have been (guessing) 50MB, but after unpacking all of them to text files, using up a zillion inodes, and creating directory structures you need a rope and a torch to explore, the end effect is a lot more than the initiator.

I think it'd be beneficial to talk about all these things separately (size of download, size of installation, amount of resources used when running), because if the installation process only downloaded the files it needed, and everything else was run with libraries existing on the system, then that would be... great? Right? But you can have an application which behaves that way during installation and turns out to chew its way through all your available RAM when running, for example. And these things are different, and if they were the responsibility of different actors, they might be better optimised?
This is a related topic, but I think you're right that it's better to consider it separately, since my page won't withstand that much discussion in the comments 😁
There's no need to install a 500MB application to turn off RGB lighting. A piece of electrical tape works wonders. It's non-conductive and comes in a variety of colors with black being the most popular.
I use painter's tape, electrical tape, and even the sticky part of sticky notes to cover up the blindingly-bright LEDs that all modern technology gadgets seem to come with. How much light I want to let through decides which route I go with to cover up the LEDs. When I want nothing showing, electrical tape gets used to great effect.
Using electrical tape for such purposes is incredible bullshit; I'd rather install 0.5 GB of software. My video card is not a 4090, of course, but you'd have to really disrespect your hardware to do such things to it.
Here I can support you: I wouldn't do such "modifications" to my hardware either; to me it looks like a backyard hack. But for someone whose PC is just a work machine, it's quite an effective option.
That is, you must first install Aorus Engine so that it installs RGB Fusion. If you install RGB Fusion separately, it doesn't work for some reason. Moreover, with that utility installed separately, Aorus Engine won't install it. Such crap...
500 MB is about the size of the Windows 2000 installation files, which contain "a little" more functionality 🙃
This meme comes to mind:
programmerhumor.io/programming-mem...
Two elite astronauts landed on the Moon with 2 KB of RAM, and now over 20 million untalented people run Slack with their 15 GB of RAM. I'm not sure which is better, but certainly the former is more inspiring 😄
Now the first one just blows my mind. 🤯
I always complain because we sent man to the Moon with waaay less memory, but my IDE and other daily software freeze from time to time for no reason...
Well, don't use an IDE. Also, run Portable Apps wherever possible.
I run Crimson Editor (last updated in 2004) + Command Prompt + Windows File Explorer. That's my "IDE." Never freezes on me. Well, the first two don't. File Explorer likes to barf on network shares from time to time necessitating a reboot once every 6-8 months.
What OS are you using?
Lol that's right 🤣
Modern computers can absolutely waste a ton of CPU and memory, but at the highest levels, organizations just don't care about solving these "minor" issues. Why does my laptop need 15 gigabytes of RAM to have a spreadsheet, Microsoft Teams, and one Chrome tab open? Couldn't tell you. I absolutely hate Windows for how useless it feels. I can't even open Outlook without it spinning up to 100% CPU usage.
But, things aren't always so obvious; software is objectively a challenging problem that requires a lot of well-thought out plans.
For instance: how do you write one project and compile it to work on other people's computers? If you were still writing C, you would have to write a C codebase and use a compiler to support a given architecture and hope the operating system can run your code. Then you have to test it and all that other fun stuff.
But then suddenly, you need to target an entirely different platform that you've never seen before with a different compiler. Suddenly your C code doesn't work, because the platform is different; compiler types are of different sizes, the endianness is a different direction, certain standard library functions don't exist or take different parameters, and now you're stuck writing C preprocessor macros hoping that your pains go away.
This was what coding was like in the awful days, and it can still be like that if you write C/C++. No compiler will make you happy. Everyone either acknowledged C/C++ was awful and moved on to other things during this time period, or simply stuck around and said "this isn't so bad, guys" as they wrote high-performance software dealing with the issues that C/C++ brings. Java won a lot of fans because it was very portable with its "write once, run anywhere" mantra, while C++ is still hated to this day by many.
With Java came other languages that offered more dynamic and flexible programming, like Python and Ruby, which most people scoffed at when thinking about building full-fledged software in. The performance metrics of these two languages aren't great, but sometimes people write Python/Ruby code that can interop with C-world and get decent performance. Fast forward one or two decades and now we have insane machine learning libraries that you can use with Python and are used at Fortune 500 companies.
The web is popular, but golly does it suck to write things for it. HTML pages aren't dynamic, so you need a language to be able to create dynamic pages that can retrieve information from database, so in comes PHP, which sells itself as a solution to a problem web devs were having with FastCGI and Perl. PHP proved itself as an okay solution, and somehow companies around the world threw their million-dollar industries at it and it got us decently far. But the browser was the real pain in the butt, so in came JavaScript, and JavaScript took off to the moon.
Where am I getting with all this? I'm mostly stating that things that alleviate headaches from the programmers are far more popular than things that don't. Writing and maintaining separate language codebases with different purposes is not necessarily better to some people who would prefer to have a "monolithic" repository of code that can do everything and not make things complex. JavaScript can now control the front end, the back end, it can be used to design games, GUI applications, and so much more without ever having to leave the comfort of the JavaScript language itself.
Facebook, Google, Apple and all the other tech companies are the foremost "leaders" of the technical world, and when Facebook publishes a library called "React" where the goal is to make the web easier to develop, what happens? Everyone's going to write React. Google makes Go and uses Go? Go devs will pop up in random places. Increasing your surface vector of being a potential hire at a Fortune 500 is a very promising idea for many aspiring programmers.
But this doesn't mean everything will appear pretty at the bottom of the totem pole. Organizationally, no one gets promoted for fixing memory leaks. Sometimes I have to close my Discord because it leaks memory after several hours and gets slow. Is anyone going to fix it at Discord? No, because it's an Electron problem (probably), and Discord isn't there to fix Electron problems, they're around to fix Discord problems.
YouTube, a site owned and run by Google, has to polyfill in a ton of extra JavaScript for non-Chrome browsers, making the performance of YouTube on non-Chrome browsers not all that great. Google runs and upgrades Chrome with non-standard libraries so they can move fast, and in turn makes other browsers perform worse on their sites. I can admit that sometimes other browsers are very slow at upgrading their features (Firefox), but it's not likely that Google will care about the non-Google browsers, as Chrome is included in every single Android device and is renowned for being the most popular browser on the internet. Why? Probably because it's a Google browser lol, remember that part about everyone using React?
So in summation, we live in a society where poorly-built software is so common-place that people have to upgrade computers to go on Facebook of all things. The lowest common denominators of computer hardware are not the targets of big business, and probably never will be all that important. Our ten-year old laptops are deemed unimportant, and everyone is expected to upgrade their smartphone every two years. Why? Probably because nobody wants to write fast software made with slow phones in mind!
/rant
@sleibrock This is valuable insight, you really show how complicated and ugly things get and fast... and it's true, our world of software and technology is a mess and you really bring clarity to a messy topic
But to talk to @mariamarsh real quick, there's a couple things I want to point out:
I assume you wrote this in reference to Windows computers. Windows is my background and it's honestly a frustrating and messy OS, for many reasons I won't get into. It's not Linux, and definitely not the prime pick to run containers in. But it is an extremely complicated software platform where I've touched things I didn't even know existed that affect things I didn't even know X component could. However, since Windows 7, they've cleaned up their act and it's a much more reliable platform.
Where are you getting your 99.9% figure from? How do you calculate resource waste? Is this in regard to software that runs on Windows, or are you talking about the OS itself? Or both?
I don't think file size is necessarily the be-all and end-all indicator of resource usage/waste. For example, compare the file size of the exact same data as a .csv and as an .xlsx (Excel) file: which one is smaller? File compression and other factors play into this.
Libraries. Yes, there are built-in libraries in the OS, but that's not the end of the story. What about different versions of the same library/software? I think it was Windows XP where you had to download different versions of .NET and .NET components for every piece of software. But the same thing applies to every language and dependency version.
Version mismatch and Dependency Hell are very real things that cause issues, from personal to corporate environments to this day. All software is built on software before it, and once you dive deep into what dependencies everything is built on, you'll never reach the bottom. Do you remember how one developer removed left-pad from NPM and broke the internet? That's the situation we are in with all of our software on any operating system. This is a joke tweet, but also, so very true:
"the most consequential figures in the tech world are half guys like steve jobs and bill gates and half some guy named ronald who maintains a unix tool called 'runk' which stands for Ronald's Universal Number Kounter and handles all math for every machine on earth" - twitter.com/6thgrade4ever/status/1...
This is a big problem with any modern operating system, whether it's Linux or Windows.
Xubuntu eats up 500-800 MB just after startup, and it needs 1.5-2 GB for any significant work. Win2K started up and ran on 128 MB, WinXP on 256 MB. Well, that's not entirely fair, because they were 32-bit: a pointer to a full address now needs 64 bits instead of 32, but even compared with WinXP, Xubuntu uses 8 times more. In fairness, a Linux workstation still doesn't eat up more than the conventional 1-2 GB after startup, while Win10/11 can easily eat 2-3 GB.
On Linux, all the same problems manifest themselves in exactly the same way: just try to build any open source project and it will immediately pull in billions of other open source libraries, many of which are needed only for the sake of one or two functions.
If it were not for the SSD, the available RAM, and the hardware instructions in the processors and their multithreading, the operation of computers running any modern OS would be a sad sight.
The main resources are eaten not by the bare machine but by the applications on it. Websites are almost entirely crap code, and a single page "weighs" a hundred megabytes. This has to be optimized on the server side; there is no other way. IDEA, VSCode, and a bunch of other applications eat about the same (a lot) almost regardless of the OS. Another example for you: JetBrains Toolbox, a little application for downloading and updating the IDE, eats up 200-500 MB of RAM. What? How? Why?
Dependency hell also exists on Linux: I couldn't install two different versions of openssl or libjpeg without serious contortions. Look at the NPM and Composer dependencies of any site. jQuery used to be all the JS you needed, but what about now? The node_modules folder can easily reach several gigabytes, and then the bundler chokes on the sheer number of files and falls over. Great!
As for the 99.9%, maybe I'm exaggerating, but the utterly irrational waste of resources applies to both the software and the OS.
Thank you for your questions 🤗
I also advise you to read the comments of other users, there are a lot of interesting thoughts and opinions 🌈
"If it were not for the SSD, the available RAM, and the hardware instructions in the processors and their multi-threading, the operation of computers running any modern OS would be a sad sight." Well, yes, things are written for the current hardware standards of the day. People(developers, commuters, pedestrians) will "fill the space" of where they are. People naturally use the tools at their fingertips.
Nothing you've described is particularly new to me, but it feels like you are just describing the state of software in 2022. So, since I'm not sure what you are comparing everything to, I have to ask:
P.S. - Linux Dependency hell is particularly frustrating because if you try to update your packages, and one of those packages was installed by pip / is dependent on something installed by pip, the package manager could fail to update anything.
It feels like you are a very curious young man 😏
$500 and we'll face you in a 1v1 Discord battle to see who wins, the Dark Side or the Light Side 🔴⚔️💚You will be in the role of Darth Vader 👾
But I have a condition: I will take my father Chewbacca with me 🤣
Some of these conversations can be tagged under the "static linking versus dynamic linking" category and others probably file under "software bloat". What do you think your approach to application development is with respect to static/dynamic linking? Ship with deps, or ship targeting deps on a host environment?
We can assemble a separate blog on this topic from the comments under my post haha 🤣
With each new comment some new information is added and it's really cool 👍
Thanks for sharing your thoughts 🤩
I was doing "premature optimizations" at the beginning of my career, optimizing and minimizing whatever I could, and later I learned that it's not something that is appreciated in our space: they won't pay you more for more optimal code, won't thank you, and most likely won't even notice. Clients will appreciate it if you finish a task in the shortest time, no matter how it's implemented and how much RAM and CPU it consumes. So being a good developer requires sacrificing performance for development velocity. Code quality matters a lot so we can maintain the project instead of rewriting it in the future, and this also has its price in performance.
Clients and companies are not the ones to blame either: they are focused on building valuable projects with a limited budget in a reasonable time, and they have much more important things to consider than resource usage.
I hate Reddit from a technical perspective: it's slow, buggy, inconvenient, and fails with 500s on a regular basis, but I keep using it because I simply don't know another place with a large community for IT topics. We, as users, want something that serves its purpose at least somehow and is easy to find. Maybe there are much better alternatives to what we use, but we just don't want to spend time searching for them. Gmail, for example, seems improved now, but it was a masterpiece of bloatware a few years ago. Maybe there is a lightweight and even free Slack alternative, but no one is using it for work.
Nobody is really to blame in this situation. I think team leads should try to convince the business to allocate time (money) for optimization, but it's not always possible.
I'm learning Rust and there is a little hope that it will keep becoming more and more popular. The desktop framework Tauri uses Rust and promises 600 KB executables with the same HTML/CSS/JS abilities as Electron. But to be realistic, Rust is much more damn complex than JS. Between a 600 KB executable and faster development plus easier-to-find developers, business will pick the latter.
Thank you for such an extensive feedback. 🥰
I agree with you, and I also mentioned in one of my comments that companies value cheapness and speed of execution, but not optimization. And that users use low-quality products because there are simply no others on the market.
I hope my post encourages more developers to try to write cleaner code, and I'm glad it's gaining popularity.
Have a nice day 😊
If that were true, if a better alternative were really missing from the market, someone would definitely create it. And they have: I'm sure there are alternatives to FB, Reddit, Twitter, Gmail, and Slack, but people just don't need something better, they use what others use. Some people were still using IE only a couple of years ago. And this works even in development: people use popular libs and frameworks even when better ones have existed for a long time.
That's a great motivation for writing; developers should strive to build better products, so thank you for posting!
Yes, there are probably many projects that do not receive due attention, and this is sad 😔
Thank you for your support, I really appreciate it 😊
You're absolutely right! I think the root of the problem is that large companies that were once not very big started making development tools that intentionally make life easier for programmers outside those companies, primarily for marketing purposes. The idea was that developers outside the big companies would put in the least amount of effort and get the most out of it, while the developers inside the big companies would stay very experienced and keep building those development tools. After a while, this led developers outside the large companies to demand that development be as simple as possible. The simpler the development environment (and the languages and technologies, of course), the more outside developers a large company attracted and, as a result, the more profit it made (Flash is a prime example).

New developers now arrive in development environments where simplicity is above all, and they are simply afraid to go down a level: they think black magic is going on there, and when they run into a problem at a lower level, they assume it's enough to go to Stack Overflow or open an issue on GitHub, where dark lords of the lower levels, versed in black magic, will just solve the problem. In fact, software development has been swallowed up by the same problem that has existed and will continue to exist in many industries. At first the bubble is small and invisible, but it grows larger and larger, and its bursting is only a matter of time.
A very interesting thought. You almost wrote a whole article, thanks for such a detailed feedback. 😍 Have a good weekend 😊
honestly, I work as a dev for a mid-sized company and it's pretty apparent why this happens.
it's a spiral of the business value/impact of your software versus the time crunch. you need to deliver a very impactful feature within unrealistic deadlines. you lean towards one and you lose the other (getting a feature within the deadline might mean losing a bit of its impact, and conversely getting the highest impact means spending more time on it).
Combine this with the fact that most companies ironically care about time (deadlines) the most, and that you need to work with multiple other devs who might not be on the same page as you, and devs are never pushed the other way: they're never asked to push themselves to explore, to truly be passionate and care for the product, and to create something that maximizes impact and performance. That's what causes our current industry situation.
the best way around this is for devs to keep honing their own skills, work on their own little side projects, explore the crazy advancements in web, and tinker with things trying to get lower latency and faster TTFB.
I agree, it is often not the programmers who are to blame, but the business. These days the market value of an employee and his chances of landing a well-paid position are determined not by the efficiency of his code, but by how many trendy frameworks and libraries he knows.
Beyond that, business decisions are driven by consumers: businesses do what drives sales and growth.
But consumers do not know the limits of what is possible and do not understand that the product they are offered is bad, because they have never seen a good one.
And in this closed loop, programmers (and only a few of them) are the only ones who understand what a product could be.
Inexperienced devs with too little knowledge, using garbage frameworks and libraries, resulting in bloatware on top of bloatware. Devs are getting spoiled. They have these monster CPUs and terabytes of memory, and believe the purpose is to consume as much as possible instead of conserving as much as possible. I made a point of ensuring Magic can run on "dust" when I implemented its backend parts and Hyperlambda, optimising it into oblivion, to the point where you can have the thing running on a Raspberry Pi if you wish. The result is that it performs 5x faster than PHP and 10x faster than Python, even though purely technically it should run slower due to its internal architecture ...
The irony ... :/
There is a chain reaction: new software increases the amount of code, so the user has to upgrade the hardware, usually with a surplus over what the software requires. New software is then developed for that surplus (why optimize when there are so many resources?), and at some point the resources are exhausted and the user buys new hardware again.
And if the user doesn't buy it and stays on older versions, compatibility problems with formats and scripts begin to arise: you have to install some incomprehensible garbage to view or play a file, and it either can't be installed on the old hardware/OS or eats up resources like crazy.
In fact, even in 2022, Microsoft itself sells "modern" Surface Go laptops that start with 4 GB of RAM. The user puts Slack, Discord, and other "masterpieces" on it, and that's it.
And what you wrote about Hyperlambda is very cool.
I would like to correct the author: the first Elite occupied not 64 but 48 kilobytes; the other 16 KB was the computer's ROM with some analogue of an operating system.
And yes, in 2004 we had a 533 MHz Pentium III at work, and a browser game in Flash ran perfectly on it. Flash Player had a bad habit of updating a couple of times a month, and after each update it ran slower and slower. Five years later that game was already barely running on a Celeron 1700.
To be even more precise, the original version of Elite took only 22 KB. Its ZX Spectrum version almost doubled in size, to 40 KB, but on the other hand many different enemy starships were added.
How is that even possible? I just can't believe 🤯
Thank you for this remark. This makes it even sadder 😔
Yes, we've come a long way from the days when thousands of developers around the world would reinvent ten thousand wheels every day, and every single one of them was convinced their wheel was the best wheel. Today we have embraced the concept of open source, where code libraries are reviewed and commented on and improved by thousands of developers who see the value in contributing to the best wheel we can build with our collective knowledge. Sure, there are inexperienced devs who choose (often for bad reasons) to use code libraries that are not well maintained. The other reason software has more files and bytes now is that the capabilities it can make use of have increased dramatically. For example, web browsers can render 3D animations now. No computer could render anything in 3D in the 1970s.
Yes, you are right, this is the problem with libraries, where everyone is trying to add something of their own. And when the business requires a ready-made solution here and now, of course it is easier for a developer to pull in such a ready-made library, of which he needs one feature out of 5,000 🤪
Here we can smoothly switch to browsers: the problem is not new features, because most sites do not use 3D, and in fact browsing ordinary text pages hasn't moved very far from the 2000s. The problem is that sites are almost entirely composed of crappy code, which is why a single page eats up hundreds of megabytes.
Web browsers themselves are no better - they, like many applications, grow over time until they turn into monsters that contain another operating system of their own (sometimes two or more). It seems that in the end they must die under tons of their own unsupported code. But for some reason they never die. 😵
I propose to call this problem the "Problem of the French Scribes". There is a legend explaining why French words have many more letters in writing than are pronounced (sometimes five letters stand for a single sound): in old times, when very few people were literate and paper documents were already in circulation, scribes charged clients for each letter. So they milked decent sums out of writing simple texts. Smart-ass puffed-up turkeys 😄 This is only speculation, of course, but it is quite plausible.
By the way, as far as I remember, when writing The Three Musketeers, Dumas was paid by the line (like some programmers). So he deliberately invented monosyllabic characters with lots of terse dialogue, for example Grimaud, Athos's servant.
In the end, the writer was told that lines that took up less than half a column would not be paid. Then he even thought about removing Grimaud from the story.
But ten years later, the writer was paid by the word, and Grimaud became more talkative. So the French connection is confirmed, and this problem could also be called "Dumas syndrome" 😲
This is a curious fact 🧐
This name sounds interesting and mysterious heh 🤖