node-telegram-bot-api
🔥 How to solve slowdowns under huge traffic
Hello, I have a problem: my bot gets huge traffic, ~90,000 users per hour.
When traffic is heavy, the bot starts to slow down.
Is there any solution to this? Thank you in advance.
Hi, I don't have an answer to your question, but I am seriously impressed with how much user traffic your bot gets!
What on earth is your bot doing that warrants so much traffic?
Do you mind sharing the bot's name here so I can have a look?
Thanks
P.S. I am also interested in an answer to your question, because if my dreams come true for my bots (not finished yet), I wish to also see lots of traffic.
@NeroxFB That's a general question, but you should investigate any problem in your project first and then upgrade to a dedicated server.
Hi, it's a bot selling clothes. I can share the bot's name by email. Glad to hear that you are interested.
> @NeroxFB That's a general question, but you should investigate any problem in your project first and then upgrade to a dedicated server.
I've done a lot :) Upgraded the server, increased RAM for the node script, optimized many subsystems. But sometimes it still begins to slow down.
Could frequent requests to MySQL be the reason?
Maybe. You should check and test the speed of your commands with and without the MySQL query. I prefer a NoSQL MongoDB database for most of my projects, with the pm2 process manager.
> Hi, it's a bot selling clothes. I can share the bot's name by email. Glad to hear that you are interested. @NeroxFB
Could you send it to this email address, please: [email protected]
Thanks
Done 👌
The reasons here might be multiple.
You are using a webhook, aren't you? The problem might also be related to the JavaScript runtime: since JavaScript does not run on multiple threads, an application can suffer if it has to do a lot of work and process a lot of requests. You could try Node.js clusters, but this library does not work with them. You may try "worker_threads", even though they are considered experimental, to execute some operations in parallel, but you should not use them for I/O, as the official documentation says.
You should also check your server's resource monitors to see whether there is any hardware-related issue.
Also, do you perform a lot of database operations? All asynchronous, right?
Thanks for your reply. Yes, the bot is full of database operations. But no, they are not asynchronous...
Can you show me a piece of code that performs a DB operation? And which library are you using for those? Database operations should always be asynchronous. Better yet, your whole app should work async (sync operations should be allowed only at startup or in very specific cases).
All the code is like this:

```js
let BotMenu = (msg) => {
  connection.query('SELECT * FROM users WHERE ID = ? LIMIT 1', [msg.chat.id], (error, results) => {
    if (error) return 1;
    if (results.length != 0) {
      let lang = results[0].Lang;
      let botmenu = {
        parse_mode: 'markdown',
        reply_markup: JSON.stringify({
          keyboard: [
            [`${ptext.acts.MenuCreatePost[lang]}`, `${ptext.acts.MenuMyChannels[lang]}`],
            [`${ptext.acts.MenuStatistic[lang]}`, `${ptext.acts.MenuSettings[lang]}`]
          ],
          resize_keyboard: true,
          one_time_keyboard: true
        })
      };
      bot.sendMessage(msg.chat.id, ptext.acts.Menu[lang], botmenu);
    }
  });
  connection.query('UPDATE users SET isNow = "NULL" WHERE ID = ?', [msg.chat.id]);
  return 1;
}
```
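For comparison, here is a hedged sketch of the same handler rewritten with async/await. The `pool` parameter is assumed to be a `mysql2/promise` connection pool, and `bot`/`texts` stand in for the original `bot`/`ptext` objects; passing them in as parameters keeps the sketch self-contained and makes the call order explicit (SELECT, then send, then UPDATE).

```javascript
// Sketch: menu handler with awaited queries. All names are illustrative.
async function botMenu(pool, bot, texts, msg) {
  // SELECT and UPDATE now run in a known order, and errors propagate.
  const [rows] = await pool.query('SELECT * FROM users WHERE ID = ? LIMIT 1', [msg.chat.id]);
  if (rows.length === 0) return false;
  const lang = rows[0].Lang;
  await bot.sendMessage(msg.chat.id, texts.Menu[lang], {
    parse_mode: 'Markdown',
    reply_markup: JSON.stringify({
      keyboard: [
        [texts.MenuCreatePost[lang], texts.MenuMyChannels[lang]],
        [texts.MenuStatistic[lang], texts.MenuSettings[lang]]
      ],
      resize_keyboard: true,
      one_time_keyboard: true
    })
  });
  await pool.query('UPDATE users SET isNow = "NULL" WHERE ID = ?', [msg.chat.id]);
  return true;
}
```

With the original callback version, the UPDATE fires before the SELECT callback has run; awaiting each step removes that race and lets one try/catch cover the whole handler.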
Okay, the operations are asynchronous. How many of these do you have?
Are you working with a webhook?
About 5-6k lines. No webhook; polling, using forever.
Polling is way slower than a webhook. A webhook is always recommended for apps with big traffic. The main reason is this: with polling, your app makes a request to Telegram's servers to retrieve message data and then has to process it.
This means that if you have 90k users per hour, and we suppose every user sends 1 request (message, action, and so on), we might suppose you receive
90,000 / 3,600 = 25 requests per second
But it will never be exactly 25; it will be much more, since we should also count all the steps a user takes to order an item of clothing (or several), so I won't try to be exact.
If you have left the polling interval at 300ms, each request will retrieve about 8 messages every 300ms, if my calculations are right.
With a webhook, instead, you don't wait 300ms for updates: your bot receives a request with the message data as soon as a user sends a message.
So consider moving to webhooks first. Here's a guide provided by Telegram: https://core.telegram.org/bots/webhooks
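As a rough sketch of the switch, the snippet below wires up a webhook with this library. The `webHook` constructor option and `setWebHook` call follow the library's README; the helper name, domain, and port are placeholders, and Telegram only accepts HTTPS webhooks on port 443, 80, 88, or 8443.

```javascript
// Sketch: switch from polling to a webhook. `makeWebhookBot` takes the
// bot constructor as a parameter so the wiring itself is testable.
function makeWebhookBot(TelegramBot, token, publicUrl, port = 8443) {
  // Instead of `{ polling: true }`, let the library start an HTTP
  // server for incoming updates...
  const bot = new TelegramBot(token, { webHook: { port } });
  // ...and tell Telegram where to push them, instead of polling getUpdates.
  bot.setWebHook(`${publicUrl}/bot${token}`);
  return bot;
}

// Real usage (not run here):
// const TelegramBot = require('node-telegram-bot-api');
// const bot = makeWebhookBot(TelegramBot, process.env.BOT_TOKEN, 'https://your.domain.example');
// bot.on('message', (msg) => bot.sendMessage(msg.chat.id, 'got it'));
```

Including the token in the webhook path is the convention Telegram's guide suggests, so that only Telegram (which knows the token) can hit the endpoint.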
Thank you very much 🔥. I'll try your suggestion. But how can I restart a script that uses a webhook?
I'm not sure I've understood your question correctly. When you decide to use webhooks, you provide your address to Telegram every time you start your bot, so you don't have to worry about this.
Anyway, the message above might give you one way to improve things, but it certainly won't be the only improvement you can make.
There are lots of issues/resources on how to scale a Telegram bot; you could refer to them.
Are you using the pm2 process manager?
@irhosseinz You can use it.
Guys, I don't want to be a party pooper, but NTBA doesn't support high load by design. 🤷♂️
Let me explain: NTBA uses EventEmitter as its pipeline engine, and that means back-pressure regulation is impossible.
In the following example, all updates run in parallel. If the update rate is 5,000 updates per second, you need to query the database 10,000 times simultaneously.
It's fine if you can use some sort of connection pool, but the event loop will be bloated anyway.

```js
let BotMenu = (msg) => {
  connection.query('SELECT * FROM users WHERE ID = ? LIMIT 1', [msg.chat.id], (error, results) => {});
  connection.query('UPDATE users SET isNow = "NULL" WHERE ID = ?', [msg.chat.id]);
  return 1;
}
```

There are only a couple of solutions here:
- Rewrite the NTBA processing pipeline with back-pressure support.
- Use another bot API library (I'm a bit biased here, but Telegraf does well with high load).
UPD: One more thing: an NTBA bot with lots of I/O (db/network/files) may process updates in the wrong order.
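Short of rewriting the pipeline, a partial workaround for the no-back-pressure problem is to put a concurrency limiter between the EventEmitter and the handlers: updates queue up and at most N run at once, instead of every event firing its DB queries immediately. A minimal sketch (all names illustrative; libraries like `p-queue` do the same job more robustly):

```javascript
// Minimal concurrency limiter: queue tasks and run at most `limit` at once.
function makeLimiter(limit) {
  const queue = [];
  let active = 0;

  const next = () => {
    while (active < limit && queue.length > 0) {
      const { task, resolve, reject } = queue.shift();
      active++;
      task()
        .then(resolve, reject)
        .finally(() => { active--; next(); });
    }
  };

  // Returns a promise that settles when the queued task finishes.
  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}

// Usage sketch: wrap each update handler.
// const run = makeLimiter(20);
// bot.on('message', (msg) => run(() => handleMessage(msg)));
```

This caps simultaneous DB work, but note it does not fix the other problem mentioned above: without a strictly serial queue (limit of 1 per chat), updates can still complete out of order.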
I believe this library is used in LibreTaxi in production; maybe we could get some feedback from @ro31337 on whether he has encountered these issues. Anyway, that's something that should be investigated.
cc/ @gochomugo
Well, when it comes to LibreTaxi, it's slow. I restart the app every hour. Not sure what it is.
I think the better solution would be to rewrite everything in Go. Unfortunately, Node.js is super hard to debug when it comes to issues like this.
So what should I do in this case?
@pironmind you can switch to Telegraf library 😉
> @pironmind you can switch to Telegraf library
Why switch? How much would it cost me to switch? Any plans for the library dev to support high-load server requests?
EDIT: Speaking of switching libraries, are there any converters I can use so I don't have to start over from scratch?
Yes, expensive: I'd now need to totally rebuild the application to start with the new library.
Well, it's better to switch to Golang, TBH. I use Node.js for LibreTaxi (.org for the website), and it's bad: memory leaks you can't locate, and lots of stuff that could simply be avoided with a synchronous programming approach. Golang is the better option if you're considering rewriting an app for "huge" traffic.
Okay, let's see. I don't know Golang, so which would be better, Go or Rust?
Well, Golang is the fast way to move forward. Rust is the slow way to move forward, and probably the right choice if you're going to host your app on the CPU of a flying Boeing with 300 passengers on board.