Most of our partners don't care about anything but our ability to deliver kick-ass AI chatbots. However, Magic Cloud is actually a complete low-code and no-code software development automation platform - And some of our partners require more than just an AI chatbot, such as complex integrations with their existing systems - At which point Hyperlambda and our low-code and no-code features become interesting.
In addition, according to Supabase themselves, roughly 1 million people have a PostgreSQL database hosted by Supabase, without the ability to use it for anything intelligent whatsoever without resorting to "edge functions" - Which of course completely eliminates all low-code features, while also ensuring you end up with something that can only be described as spaghetti.
In this article I will therefore explain the first touch point in Magic, specifically the URL resolver, and help you understand why Magic is a solution to your Supabase scalability and code quality problems.
One Endpoint to Rule Them All
Magic actually has only one single endpoint. It's written in C#, and here's roughly what it looks like.
```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace magic.endpoint.controller
{
    public class EndpointController : ControllerBase
    {
        [HttpGet]
        [HttpPut]
        [HttpPost]
        [HttpPatch]
        [HttpDelete]
        [Route("{*url}")]
        [RequestFormLimits(ValueLengthLimit = int.MaxValue, MultipartBodyLengthLimit = int.MaxValue)]
        public async Task<IActionResult> Execute(string url)
        {
            // Dispatches the request according to its HTTP verb and URL.
            return await HandleRequest(Request.Method?.ToLowerInvariant(), url);
        }
    }
}
```
Ignoring the fact that C# on .NET 8 is "a bajillion" times faster, more scalable, and less resource demanding than Python, PHP, and NodeJS - Let's look at how the above code actually does what it does.
The wildcard in the route ensures that this method is invoked for every single HTTP request towards the server, for the five most commonly used verbs. This allows us to use the path of the URL as a "parameter", resulting in loading up a Hyperlambda file matching the specified URL and parsing this file into a lambda object - Which is then executed, returning some result to the client.
Internals
I'm not going to give you all the gory details of how it was implemented, but the above HandleRequest invokes a dependency-injected service according to the following logic.
```csharp
services.AddScoped<IHttpExecutorAsync>(provider =>
{
    var context = provider.GetRequiredService<IHttpContextAccessor>().HttpContext;

    // API requests are prefixed with "/magic", everything else is a file request.
    if (context.Request.Path.StartsWithSegments("/magic"))
        return provider.GetRequiredService<HttpApiExecutorAsync>();
    return provider.GetRequiredService<HttpFileExecutorAsync>();
});
```
Basically, it's got two service implementations: one service for API requests and another for file requests. API requests require the URL to start with "/magic/", while everything else is handled by the default file server.
The File Server
The default file server is a Hyperlambda web server, allowing us to serve static content such as HTML, CSS, and JavaScript - While also applying "server-side mixin logic" to HTML files. Mixin logic is Hyperlambda code dynamically substituting parts of the HTML with the result of executing Hyperlambda code.
The latter allows us to have Hyperlambda "code-behind files" that are executed when some URL is requested.
The file server service appends ".html" by default to requests without an extension, prepends "/etc/www/" to the path, and then simply returns whatever it finds at this specific location, serving it with the correct MIME type and applying HTTP headers in the process. To understand the beauty and flexibility of this approach, realise that our website was entirely created as a Hyperlambda website.
Not only did we build our own CMS based upon GitHub and GitHub Actions workflows, publishing blogs as Markdown dynamically parsed by the backend as they're being served - But we even built our own programming language and web server to serve the frikkin' thing.
But please, don't tell your senior dev, he'll think I'm crazy ... 😂
Our website is in fact a cloudlet by itself, allowing you to use the cloudlet to serve your own websites, including sites built on ReactJS, Angular, or VueJS. So not only is Magic a low-code and no-code software development alternative to your existing software development platform - It's also, in theory, a replacement for WordPress and Joomla if you wish.
This allows me to write blogs using Visual Studio Code, push towards GitHub, and have a workflow action triggered that moves the updated code into our website, which is simply a Magic Cloudlet being resolved from ainiro.io.
Once the code makes it to our ainiro.io cloudlet, it will dynamically execute any associated Hyperlambda code behind files as some URL is being requested.
Since Magic Cloud is therefore, by the very definition of the term, a "web server", we can also (duh!) serve other types of web apps, such as React, Angular, static HTML, etc.
This is in fact how we deploy the AI Expert System, which is just an Angular app, built on push towards GitHub, zipped, transferred to our internal "AppStore", and served through plugins in your cloudlet. Every time I push towards the master branch in GitHub, a new "version" of the AI Expert System is automatically built and uploaded to our "AppStore".
The API Server
The API service is a bit different. First of all, it'll only kick in on requests that start with "/magic/". Then it will only resolve URLs going towards either your "/system/" folder or your "/modules/" folder. This allows you to have private files that aren't directly exposed to the web, unless some piece of Hyperlambda code explicitly returns the file of course.
Basically, unless your file is in one of these three folders, it's not even possible in theory to access it from the outside of your cloudlet.
- /etc/www/
- /modules/
- /system/
The first thing the service does is remove the "/magic" part, then append the verb in lowercase, and finally end the request with ".hl". So an HTTP GET request resembling "/magic/modules/foo/bar" will end up being resolved by a Hyperlambda file at the path "/modules/foo/bar.get.hl".
This is often referred to as "programming by convention", since there's no need for a configuration object declaring which files are served by which URLs. Need a new endpoint? Choose a verb, create a new Hyperlambda file, and you're done!
The Hyperlambda file is parsed and executed after having parameters passed into it - For then to return the result of the execution to the client. Since Hyperlambda is a Turing complete environment, this allows you to do literally anything you wish within such a file, including reading from your database, invoking 3rd party services, or accessing the file system to read and write files.
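For reference purposes, below is a minimal sketch of what such a Hyperlambda endpoint file might look like - The path, the argument name, and the greeting logic are illustration-only assumptions, not taken from an actual Magic module.

```
// Hypothetical "/modules/foo/bar.get.hl" file, resolved by GET /magic/modules/foo/bar.
.arguments
   name:string

// Concatenates "Hello " with the "name" argument.
strings.concat
   .:"Hello "
   get-value:x:@.arguments/*/name

// Forwards the concatenated string into [return], which hands it back to the client.
unwrap:x:+/*
return
   result:x:@strings.concat
```

Invoking the endpoint with `?name=John` would hence return a JSON payload containing the greeting to the client.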
And yes, the system is sprinkled with AI features, such as using AI to automatically generate documentation for files, in its help system, and even an integrated code support AI chatbot that will to some extent even generate code for you!
Since only files with an HTTP verb as the second to last part of their filename can be resolved, this allows you to "hide" internals of your application that are impossible to resolve through the API web server.
And if you're missing something in Hyperlambda, adding C# code and interacting with it from your existing endpoint is literally as easy as creating a new C# class as follows.
```csharp
using magic.node;
using magic.signals.contracts;

[Slot(Name = "foo.bar")]
public class FooBar : ISlot
{
    public void Signal(ISignaler signaler, Node input)
    {
        // Returns the value 42 to the caller by decorating the invocation node.
        input.Value = 42;
    }
}
```
Once your cloudlet is initialised, the above C# class allows you to invoke a slot named [foo.bar] that simply returns the value 42.
This feature of Hyperlambda allows you to create C# code and "mix it in" with your low-code software development automation parts, making it literally impossible to get stuck with something you cannot do!
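To illustrate, here's a minimal sketch of how such a slot might be invoked from Hyperlambda once it's registered - Assuming the [foo.bar] slot from the C# class above.

```
// Invokes our C# slot, which sets the node's value to 42.
foo.bar
```

After execution, the [foo.bar] node itself holds the value the C# code assigned to it, and can be referenced by expressions further down in your Hyperlambda.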
If Supabase is a steam locomotive, then Hyperlambda is an inter-galactical spaceship with warp drive from the future!
The process to port your existing Supabase solutions is as follows.
- Take your existing Supabase PostgreSQL database and point Magic towards it
- Click a button and you're now done with 80% of your job
- Solve 99% of the rest of the problem with Hyperlambda, typically using declarative programming constructs, without even requiring that you understand Hyperlambda
- Solve the remaining parts with C# and integrate with Hyperlambda
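To give you an idea of what the "80% of your job" step produces, below is a hypothetical sketch of a generated CRUD read endpoint wrapping a PostgreSQL table - The database name, table name, and arguments are assumptions for illustration purposes only.

```
// Hypothetical generated read endpoint, e.g. "/modules/mydb/customers.get.hl".
.arguments
   limit:long
   offset:long

// Opens a connection towards the database and reads records from the table.
data.connect:[generic|mydb]
   data.read
      table:customers
      limit:x:@.arguments/*/limit
      offset:x:@.arguments/*/offset

   // Returns the records to the client.
   return-nodes:x:@data.read/*
```

Adding business logic is then a matter of injecting additional Hyperlambda before or after the [data.read] invocation.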
Porting any Supabase database this way is typically a 5 minute to 10 hour job, at which point you'd end up with something 1,000x faster, 1,000,000 times more scalable, and 1,000,000,000 times more flexible!
Wrapping Up
Magic Cloud is a web server, similar to PostgREST. The difference is that PostgREST doesn't generate code files; it only parses query parameters and JSON payloads, and uses these as its foundation to interact with your database, dynamically building SQL in the process.
Hyperlambda is also a web server, but a web server that allows you to add code to your endpoints. Most of this code is automatically generated by the machine, which explains our slogan: "Where the Machine Creates the Code". However, the ability to dynamically add business logic to your database endpoints is what makes Magic a real low-code framework, while Supabase is a toy low-code framework.
This allows you to "intercept" database access in your cloudlet with business logic, allowing Magic to do "a bajillion" things that PostgREST and Supabase can simply never do.
So now you understand why, once you're stuck with your Supabase solution and need help, we at AINIRO can help you "port" your existing backend solution to Magic in a couple of hours, and add whatever amount of business logic you want on top of your database CRUD endpoints.
The funny thing is that you can still keep your PostgreSQL database hosted by Supabase, using your Magic cloudlet as an intermediary and API "gateway" to your database, resulting in zero downtime during the porting phase. And when you're done, you simply swap out a simple CNAME DNS record, and you've painlessly upgraded to a modern low-code web application solution.
Have a najs day 😊