Mass messaging in messengers is a task that's hard to solve well. I won't be discussing any hacks or workarounds here; instead, I'll share an approach that fully complies with the platform's usage rules (in this case, Telegram). And yes, I'm a fan of TypeScript + Deno. The latter is actively evolving and keeps gaining new features; one of the recent additions is queues built on Deno KV, which is why I chose Deno for this article. Intrigued? Then let's dive in!
Limitations
The legal way to send messages on Telegram is through bots. However, they come with loosely defined restrictions that may change depending on the bot's load, Telegram itself, the position of the stars, and magnetic storms. In practice it's roughly 28 messages per second. What happens if you exceed this limit? Telegram returns status 429 and tells you, via the Retry-After header (and the retry_after field in the response body), how many seconds you need to wait before you continue sending.
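To make that concrete, here is a minimal sketch of what honoring such a response could look like with plain fetch, before we bring in any libraries. BOT_TOKEN and sendOnce are my own placeholder names, and the retry_after parsing follows the shape of the Bot API error body:
// Minimal sketch: one sendMessage call that honors a 429 response.
// BOT_TOKEN is a placeholder; retry_after comes from the error body Telegram returns.
const BOT_TOKEN = Deno.env.get("BOT_TOKEN");

async function sendOnce(chat_id: number, text: string): Promise<unknown> {
  const res = await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id, text }),
  });
  if (res.status === 429) {
    const body = await res.json();
    const retryAfter = body?.parameters?.retry_after ?? 1; // seconds to wait
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
    return sendOnce(chat_id, text); // naive retry; the queue-based solution below handles this more gracefully
  }
  return res.json();
}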
Requirements
Based on the limitations above, I propose implementing a mechanism for mass message distribution on Telegram with the following characteristics:
- Strict adherence to a limit on the number of messages sent per second.
- Detecting when the limit is exceeded and pausing for the number of seconds Telegram specifies.
- Resending a message a certain number of times if delivery fails.
- Aborting the wait for a response after a specified timeout.
- Option for delayed sending.
- Script failures and crashes must not result in data loss, and sending must resume once the script is back up.
Whew, sounds complicated? Well, not really!
Tools
>> Deno KV
Let's start by choosing our tools. The first thing I want to draw your attention to is Deno KV, the built-in key-value database in the Deno runtime. As mentioned earlier, Deno KV now includes a queue mechanism. It's quite straightforward to use and comes with a delay feature. Here's a brief example where a message will be taken from the queue after 1 minute:
const db = await Deno.openKv();

// The handler receives each queued value once it becomes due
db.listenQueue(async (msg) => {
  // post() is a placeholder for whatever actually delivers the message
  await post(msg.chat_id, msg.text);
});

// The value will be handed to listenQueue roughly one minute from now
await db.enqueue({ chat_id: -23423423423, text: "Hello, World!" }, {
  delay: 60000,
});
It's crucial to understand that Deno KV is persisted to disk (locally it's backed by SQLite), so queued messages survive a restart. In other words, we get data persistence and deferred processing through a queue right out of the box!
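As a side note, and assuming the current Deno KV API, you can point openKv at an explicit file and use the keysIfUndelivered option of enqueue to keep a copy of any value the queue ultimately gives up on; the file name and key below are arbitrary:
// The path is arbitrary; without it, Deno picks a default database location
const db = await Deno.openKv("./mailer.sqlite");

// If the queue exhausts its delivery attempts, the value is written
// under this key instead of disappearing silently
await db.enqueue({ chat_id: -23423423423, text: "Hello again!" }, {
  delay: 5000,
  keysIfUndelivered: [["undelivered", crypto.randomUUID()]],
});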
>> Fetchify
Next up are the requests themselves: we need to limit how many we send per unit of time, retry on errors, and back off when we receive a 429 status. Yes, all of this could be implemented by hand, but I'll use my fetchify package, which I've already written notes about: one, two. It more than covers the needs of this case. Here's a quick example:
import fetchify from "https://deno.land/x/fetchify@0.2.8/mod.ts";

const api = fetchify.create({
  limiter: {
    // Number of requests per second
    rps: 3,
    // You can handle a 429 response here and return
    // the time in ms that the request loop should wait
    "429": (response) => 1000,
  },
  baseURL: "https://jsonplaceholder.typicode.com",
  headers: {
    "hello": "world",
  },
});

for (let i = 30; i--;) {
  console.log(`send ${i}`);
  // All basic methods are supported: get, post, put, delete, head, patch
  api.get(`/posts/${i}`)
    .then((data) => console.log(`${i} ${data.status}`))
    .catch((err) => console.log(`${i} ${err}`));
}
You can find more details here: GitHub, deno.land/x, npm.
Implementation
Now we have everything we need. Ready? Let's look at roughly fifty lines of code and rejoice:
import fetchify from "https://deno.land/x/fetchify@0.2.8/mod.ts";

class TelegramMailer {
  private tgApi;
  private queue!: Deno.Kv;

  constructor(token: string) {
    this.tgApi = fetchify.create({
      baseURL: `https://api.telegram.org/bot${token}/`,
      limiter: {
        rps: 28, // 28 requests per second
        // On 429, wait as long as Telegram asks (Retry-After is in seconds),
        // falling back to one second
        "429": (res) =>
          (Number(res.headers.get("Retry-After")) || 1) * 1000,
      },
      headers: {
        "Content-Type": "application/json",
      },
    });
  }

  async start() { // call before sending!!!
    this.queue = await Deno.openKv();
    this.queue.listenQueue(async (msg) => {
      const { chat_id, text } = msg as { chat_id: number; text: string };
      try {
        const res = await this.tgApi.post("sendMessage", {
          timeout: 10000,
          body: JSON.stringify({ chat_id, text }),
        });
        console.log(
          `message delivered to ${chat_id}; ${res.status} ${res.statusText}`,
        );
      } catch (e) {
        console.log(`the message was not delivered to ${chat_id}`);
        console.log(e);
      }
    });
  }

  async notify(chat_id: number, text: string, delay?: number) {
    await this.queue.enqueue({ text, chat_id }, {
      delay,
    });
  }
}
That's it! Nothing more is needed. Now we can call notify, which saves messages to the queue, and they'll be sent to Telegram with the delays and limits applied. Of course, this code can be broken under extreme load, but for most tasks it'll do just fine :)
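For completeness, usage could look like this; the bot token comes from an environment variable and the chat ID is a placeholder:
// Hypothetical usage of the class above; BOT_TOKEN and the chat id are placeholders
const mailer = new TelegramMailer(Deno.env.get("BOT_TOKEN")!);
await mailer.start(); // opens Deno KV and attaches the queue listener

// Send as soon as the limiter allows
await mailer.notify(-23423423423, "Hello right now!");

// Send in one hour
await mailer.notify(-23423423423, "Hello in an hour!", 60 * 60 * 1000);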
Conclusion
In the end, I managed to implement a fairly involved message distribution mechanism for Telegram that respects the rate limits and is resilient to crashes and network issues. And all of this in roughly fifty lines of TypeScript for Deno!
PS: I'd be happy if you subscribe to my channel IT NIGILIZM and join the Russian-speaking Deno runtime community ❤️!