The market standard in web hosting is this: your website runs on shared resources on a more or less crowded server, and your priority on the machine and the number of parallel processes you're allowed depend on how much you pay.
If you have a cheap plan, performance can be low and traffic peaks can be cut off.
What if, in each hosting plan...
- Your website runs on a containerized setup that gives you all the resources you need to cover huge traffic peaks
- Every project receives the same prioritization on the server
- You get an awesome time to first byte, no matter what
- You still pay typical shared hosting prices
- But: your plan includes a fixed number of monthly visits, and you pay per visit once that number is reached
Would you like that? What would be your concerns?
I launch a thing expecting it to be a flop, it actually goes viral, and then I owe 1.2b for my CSS cat art generator. That would be my fear.
Maybe users could set a "max budget" to prevent them from going broke if things go viral unintentionally.
Or maybe a feature that allows fans of the CSS cat art generator to crowdfund your hosting bills? :-)
That, and a plethora of alerts, SMS and email, telling the user that their budget limit is nigh and they should pay or we'll pull the plug :P
Otherwise purrfect
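Putting the max-budget and alert ideas together, here's a minimal sketch of how such a guard could work (Python; `notify` and `suspend_site` are hypothetical placeholders, not any real provider API):

```python
# Hypothetical sketch of a monthly budget guard with escalating alerts.
# `notify` and `suspend_site` are illustrative placeholders, not a real
# provider API; a real system would also remember which alerts were sent.

ALERT_THRESHOLDS = (0.5, 0.8, 0.95)  # fractions of the budget that trigger alerts

def check_budget(spent: float, max_budget: float, notify, suspend_site) -> None:
    """Warn the user as spending approaches max_budget; suspend at the cap."""
    if spent >= max_budget:
        notify(f"Budget of {max_budget:.2f} reached -- pulling the plug.")
        suspend_site()
        return
    for threshold in reversed(ALERT_THRESHOLDS):  # highest crossed threshold wins
        if spent >= threshold * max_budget:
            notify(f"Spending is at {spent / max_budget:.0%} of your budget.")
            break
```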
What about a hosting setup like this:
By default, the throughput of a hosting plan is limited, just like in regular shared hosting. Non-technical users and users that don't wish to scale don't need to worry about scaling costs.
Users can remove this limitation if they want, allowing the platform to scale up (similar to serverless hosting as mentioned by @hunghvu ). Everything consumed above the former throughput limit is added up and billed based on total CPU or RAM minute consumption. If bots hit parts of your website that don't consume much, it would not drastically increase your bill. Users could define max budgets.
Would that way of introducing scalability to web hosting make more sense?
@adam_cyclones @hunghvu @cicirello @dagnelies @devjour_app @fnh
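To make the overage idea concrete, here's a rough sketch with made-up plan values and prices (not an actual offer):

```python
# Illustrative overage billing: everything consumed above the plan's included
# resource minutes is billed per CPU minute, capped by a user-defined
# max budget. All plan values and prices below are made up for the example.

INCLUDED_CPU_MINUTES = 10_000   # hypothetical monthly allowance of the plan
PRICE_PER_CPU_MINUTE = 0.0004   # hypothetical overage price in EUR

def monthly_overage(cpu_minutes_used: float, max_budget: float | None = None) -> float:
    """Return the overage charge for one month of CPU-minute consumption."""
    overage_minutes = max(0.0, cpu_minutes_used - INCLUDED_CPU_MINUTES)
    charge = overage_minutes * PRICE_PER_CPU_MINUTE
    if max_budget is not None:
        charge = min(charge, max_budget)  # scaling stops once the budget is hit
    return charge

print(monthly_overage(25_000))                # 6.0 EUR of overage
print(monthly_overage(25_000, max_budget=5))  # capped at the user's 5 EUR budget
```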
Well, it's pretty much how all other hosting providers do it. However, I really wonder why you are focused on RAM and CPU... that's definitely not the bottleneck. You can serve millions of page views per day with modest CPU/RAM. The bottleneck with normal hosting is usually bandwidth and latency. You don't need an "always online or priority" thing; what you need is CDNs distributing the content globally. Don't get me wrong, but I have the feeling you have your focus on the wrong things. Resources like CPU/RAM are irrelevant for static hosting; they matter for running code, serverless functions, intensive databases and so on.
Really? Do other hosting providers have a pay-per-use scaling path in a managed hosting product?
I totally agree with you when it comes to static sites. But I'm talking about managed hosting for any kind of web application or backend service - not Deploy Now alone, if that is what you had in mind :)
It's also unclear to me what you want to provide. Plain HTML/PHP hosting? Root access to VMs? Build/deploy pipelines for various languages?
...and yeah, plenty of providers scale up with "pay as you go", especially if they are in the "serverless" landscape. Actually, I think all of them do it: AWS, Azure, Google, Cloudflare, Vercel... just to name a few.
From a price perspective, I don't see that a variable price (especially without a hard upper cap) would make sense for an individual. I certainly would not be comfortable exposing anything publicly without tight control over the spending.
BTW I'm a customer of your current company (most of my domains and the 1 euro VPS)
Maybe for businesses the calculation might look different, depending on SLAs with their customers, but I assume those are already on one of the big three clouds or have their own dedicated servers in a data center.
One could make a hard upper cap configurable for users. But I get the argument of not having enough control.
It would be hard to make a comparison unless there is a detailed plan to look at. For example, if your plan separates traffic cost (e.g., download) from visitor cost, then it may make a difference. Traditionally, depending on how optimized the website is, 1 GB of traffic can serve many more users than it would for an average site.
Otherwise, if the plan doesn't separate those costs but there is an artificial visitor cap, then I would say a big NO to that. Besides, how can you precisely separate a real user visit from a bot visit?
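To illustrate the point about page weight, a quick back-of-the-envelope calculation (the per-visit page weights are assumptions):

```python
# Back-of-the-envelope: visits served per GB of traffic for different page
# weights. The page weights per visit are assumptions for illustration.

GB = 1024  # MB per GB

for label, mb_per_visit in [("lean site", 0.2), ("average site", 2.0), ("heavy site", 5.0)]:
    print(f"{label}: ~{GB / mb_per_visit:,.0f} visits per GB")

# lean site: ~5,120 visits per GB
# average site: ~512 visits per GB
# heavy site: ~205 visits per GB
```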
So you would prefer a pricing model that rewards lean/optimized websites? Would billing per bandwidth as a single metric be fair from your perspective? @hunghvu
One way to filter out bots would be blacklisting known bots. Of course, that's not bulletproof.
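A naive version of that blacklist could look like this (the marker list is a tiny illustrative sample; real lists are much longer, and this is easily evaded by bots that spoof a browser user agent):

```python
# Naive bot filter based on a blacklist of known crawler user-agent substrings.
# Illustrative only: the list is a tiny sample, and spoofed user agents slip through.

KNOWN_BOT_MARKERS = ("googlebot", "bingbot", "ahrefsbot", "semrushbot", "crawler", "spider")

def looks_like_known_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(marker in ua for marker in KNOWN_BOT_MARKERS)

# Requests flagged this way could be excluded from the billable visit count
# (and optionally reported to the user).
```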
Let's assume your competitor is as below (GCP free tier). Specs can be found here
CPU: 0.25 vCPU (Thread) Intel Skylake
RAM: 1 GB
HDD: 30 GB
Network: 1 GB egress
I remember seeing people showing their record of 10000+ monthly visits using the spec above (or even weaker, as Google upgraded their free tier spec at one point IIRC).
Now, if your plan has 4 times the resources, but the monthly visit count is capped at 2500, is it worth it? Maybe; it depends on your client's goal (e.g., a lean showcase website that is prone to "viral" traffic spikes vs. a heavy web app with multiple concurrent users that requires resources). Assuming you can define what a *visit* is, using a monthly cap can be a better language to advertise your product to non-technical people, though. However, this can be more vulnerable to non-DDoS bot visits (hard to detect). Using a single metric is certainly not fair. From a hosting provider's perspective, I suppose the choice is based on what your targeted customer demographic is. From a developer's perspective, I choose a plan based on my technical goal.
If you have fixed resources, it's certainly unfair to limit what you can do with them. But if resources are not fixed, and instead you provide whatever it takes to handle the volume, you need to measure and bill consumption in some way. As you said, "visits" are the metric most likely to be understood by non-technical people.
Is it like a serverless platform (infinite automatic scalability), but with a pricing model of a fixed payment per amount of visits per month? That would be an interesting thing to see; no provider off the top of my head has this pricing model.
A competitor can drive up your costs with bot visitors. The web host itself, if unscrupulous, can drive up your costs with bot visitors. I don't think it would be very difficult to implement a bot that "visits" often enough, while appearing to be different visitors, to impact a site's costs under such a model, all while evading detection as a DDoS (e.g., by avoiding overwhelming the system itself).
True. Any idea how countermeasures could be designed? @cicirello
Not really. That's just one of the concerns I'd have. I would need to be convinced by the host that they are able to counter this.
Of course, we would hope that users trust the web host enough to believe it wouldn't drive up its own customers' visit counts. Attacks from competitors are a fair point; however, regardless of how hosting is designed, competitor fake traffic always harms you if it's not identified as a DDoS. A bot blacklist might help; however, this would require bots to be identified as bots (and maybe even reported to the user).
If all you are trying to do is increase the number of visits with a bot, you can probably do it without looking like a DDoS. Pick random times to visit, perhaps modeled on common traffic times. For each such visit, visit the home page, pause an appropriate time to simulate reading, follow a link, etc., to make it look real.
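For what it's worth, the bar really is low. Here's a sketch of that visit pattern, e.g. for testing whether your own bot detection would catch it (URL and timings are made up):

```python
# Sketch of the "human-looking" visit pattern described above -- useful for
# testing whether your own bot detection would flag it. The URL and timing
# parameters are illustrative assumptions; point it only at your own site.

import random
import time

import requests  # third-party: pip install requests

BASE_URL = "https://example.com"  # your own site, for testing detection

def simulated_visit() -> None:
    """One visit: home page, a reading pause, then one followed internal link."""
    requests.get(BASE_URL)
    time.sleep(random.uniform(5, 60))          # simulate reading the page
    requests.get(f"{BASE_URL}/some-article")   # follow an internal link
```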
If you allow me to ask these questions:
Sure :)
AWS Amplify might be the most similar service. I would like to have a less steep learning curve and simpler and more predictable pricing.
I think web hosting has two target groups: SMBs (mostly playing around with CMSs like WordPress) and web pros (agencies/freelancers/developers). It would be great to make both sides happy.
You mean how hosting limitations affect your visitors? Well, depending on factors like hardware quality and the number of other hosting packages on the same server, your website can be slow on average. If other hosting packages go through the roof, your website can be especially slow at certain points in time. If your own traffic scales up quickly, the host will either queue additional requests or cut them off. Display marketing tends to increase baseline traffic rather than cause huge peaks, so the latter shouldn't be a problem.
Pricing for web hosting is so low nowadays. Either you are a small website and you even get hosted for free, or you are a big fish and you directly pay for consumed resources. I also wonder if a "visit" is a fair metric. A visit that displays a readme and a visit that downloads gigabytes of data dumps are slightly different.
Regarding the first point: Don't you think that there is a group in between? Or that it would be nice to have a seamless growth path when starting with (almost) free hosting?
True, a "visit" can mean different things. One could, for example, use CPU or RAM minutes instead. However, those are hard to estimate, especially for less technical audiences. Would it be a compromise to measure CPU/RAM minutes but provide estimates of how many "visits" of an average website that would correspond to? @dagnelies
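A rough sketch of that translation (the average CPU cost per visit is a made-up number a provider would have to calibrate from real fleet-wide measurements):

```python
# Sketch of translating CPU-minute billing into an estimated "visits" figure
# for non-technical users. AVG_CPU_SECONDS_PER_VISIT is an assumed value,
# not a measured one.

AVG_CPU_SECONDS_PER_VISIT = 0.05  # hypothetical fleet-wide average

def estimate_visits(cpu_minutes: float) -> int:
    """Roughly how many average-website visits a CPU-minute budget covers."""
    return int(cpu_minutes * 60 / AVG_CPU_SECONDS_PER_VISIT)

print(estimate_visits(100))  # 100 CPU-minutes ~ 120,000 average visits
```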
The thing is: plain web hosting is already free or cheap. It is rarely a relevant factor... except if you have a busy website. In that case, the bottleneck will likely be the traffic, not the CPU/RAM! ...perhaps storage if you are archiving massive data. And you will soon turn to a CDN to have your content distributed with low latency.