
uWebSockets server keeps the allocated memory for the client as long as the client is active #1150

Open
igorsadovskii opened this issue Jan 16, 2025 · 3 comments


igorsadovskii commented Jan 16, 2025

Good evening,

I am facing a memory growth problem (RSS) on a uWebSockets server. In the example below:

In the first phase

  • a uWebSockets server is created and WS clients connect to it one by one (10 clients in total); the server sends a 100 MB buffer to each of them, after which the client stays connected. As a result, RSS grows by roughly 100 MB per client, and that memory is not released for as long as the client remains alive.

In the second phase

  • the clients are closed one by one, and we can see that the process memory is freed in ~100 MB steps, one per closed client.

How can I work around this behavior so that the uWS server frees the memory instead of holding it for the whole lifetime of the client (otherwise memory usage grows too high)?
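
(A minimal sketch of one common mitigation, not from the original report, assuming this uws2 fork exposes the standard uWebSockets.js drain callback and ws.getBufferedAmount(); the chunk size and the send_chunks helper are illustrative names.) Instead of queueing the whole 100 MB as backpressure in a single send(), write the payload in small chunks and let the drain event pace the writes, so the per-socket buffer, and with it the retained memory, stays small:

```js
const uws = require('uws2');

const MB = 1024**2;
const CHUNK = 1*MB; // illustrative chunk size
const payload = Buffer.alloc(100*MB);
const offsets = new Map(); // ws -> next offset to send

const send_chunks = ws=>{
    let off = offsets.get(ws) || 0;
    // keep writing while the socket accepts data without building up
    // more than ~one chunk of backpressure
    while (off<payload.length && ws.getBufferedAmount()<CHUNK)
    {
        ws.send(payload.subarray(off, off+CHUNK), true);
        off += CHUNK;
    }
    offsets.set(ws, off);
};

uws.App({}).ws('/', {
    maxPayloadLength: 200*MB,
    maxBackpressure: 2*CHUNK, // allow at most ~two chunks to queue up
    message: ws=>{
        offsets.set(ws, 0);
        send_chunks(ws);
    },
    drain: ws=>send_chunks(ws), // resume once backpressure drains
    close: ws=>offsets.delete(ws),
}).listen('127.0.0.1', 3000, token=>{
    if (token)
        console.log('listen');
});
```

Note that this delivers many small WS messages instead of one 100 MB message; if a single logical message is required, uWebSockets.js also exposes sendFirstFragment/sendFragment/sendLastFragment for manual fragmentation.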

Node.js precompiled uWebSockets build (require('uws2')): https://www.npmjs.com/package/@luminati-io/uws2?activeTab=readme

Output of the script:

mem_usage {"rss":"0.15 Gb","heapTotal":"0 Gb","heapUsed":"0 Gb","external":"0.09 Gb","arrayBuffers":"0.09 Gb"}
--- 1st phase: sending 100Mb to multiple clients
client received 100 Mb
mem_usage {"rss":"0.44 Gb","heapTotal":"0 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
client received 100 Mb
mem_usage {"rss":"0.6 Gb","heapTotal":"0 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
client received 100 Mb
mem_usage {"rss":"0.69 Gb","heapTotal":"0.01 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
client received 100 Mb
mem_usage {"rss":"0.79 Gb","heapTotal":"0.01 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
client received 100 Mb
mem_usage {"rss":"0.89 Gb","heapTotal":"0.01 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
client received 100 Mb
mem_usage {"rss":"0.98 Gb","heapTotal":"0.01 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
client received 100 Mb
mem_usage {"rss":"1.08 Gb","heapTotal":"0.01 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
client received 100 Mb
mem_usage {"rss":"1.17 Gb","heapTotal":"0.01 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
client received 100 Mb
mem_usage {"rss":"1.27 Gb","heapTotal":"0.01 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
client received 100 Mb
mem_usage {"rss":"1.36 Gb","heapTotal":"0.02 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
--- 2nd phase: closing clients
mem_usage {"rss":"1.27 Gb","heapTotal":"0.02 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
mem_usage {"rss":"1.17 Gb","heapTotal":"0.02 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
mem_usage {"rss":"1.08 Gb","heapTotal":"0.02 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
mem_usage {"rss":"0.98 Gb","heapTotal":"0.02 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
mem_usage {"rss":"0.89 Gb","heapTotal":"0.02 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
mem_usage {"rss":"0.79 Gb","heapTotal":"0.02 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
mem_usage {"rss":"0.7 Gb","heapTotal":"0.02 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
mem_usage {"rss":"0.6 Gb","heapTotal":"0.02 Gb","heapUsed":"0 Gb","external":"0.29 Gb","arrayBuffers":"0.29 Gb"}
mem_usage {"rss":"0.4 Gb","heapTotal":"0 Gb","heapUsed":"0 Gb","external":"0.09 Gb","arrayBuffers":"0.09 Gb"}
mem_usage {"rss":"0.31 Gb","heapTotal":"0 Gb","heapUsed":"0 Gb","external":"0.09 Gb","arrayBuffers":"0.09 Gb"}

Script:

const uws = require('uws2');
const {WebSocket} = require('ws');

const PORT = 3000;
const MB = 1024**2, GB = 1024**3;
const buffer = Buffer.alloc(100*MB); // Buffer.alloc zero-fills; new Buffer() is deprecated

// print process.memoryUsage(), rounded down to hundredths of a Gb
const show_mem_usage = ()=>{
    const mem = process.memoryUsage(), rmem = {};
    Object.keys(mem).forEach(k=>{
        let r = mem[k]/GB;
        r = Math.trunc(r*100)/100;
        rmem[k] = `${r} Gb`;
    });
    console.log('mem_usage '+JSON.stringify(rmem));
};

// promise with an externally accessible resolve()
let wait_for = ()=>{
    let _r;
    let prom = new Promise(r=>{ _r = r; });
    prom.resolve = v=>_r(v);
    return prom;
};

let sleep = ts=>new Promise(r=>setTimeout(r, ts));

const server = async function(){
    const ws_handler = {
        maxBackpressure: 0,
        maxPayloadLength: 200*MB,
        // reply to any incoming message with the 100Mb binary buffer
        message: (ws, message, is_bin)=>{ ws.send(buffer, true); }
    }
    let server = uws.App({}).ws('/', ws_handler);
    let listen = wait_for();
    server.listen('127.0.0.1', PORT, token=>{
        if (token)
        {
            console.log('listen');
            listen.resolve();
        }
    });
    await listen;
};

const client = async function(){
    const ws = new WebSocket(`ws://127.0.0.1:${PORT}`,
        {perMessageDeflate: false});
    let first_income = wait_for();
    ws.on('message', w=>{
        console.log('client received', w.length/MB, 'Mb');
        first_income.resolve();
    });
    let connected = wait_for();
    ws.once('open', ()=>connected.resolve());
    await connected;
    ws.send(1); // any message triggers the server's 100Mb reply
    await first_income;
    return ws;
};

const main = async function(){
    await server();
    let clients = [];
    show_mem_usage();
    console.log(`--- 1st phase: sending ${buffer.length/MB}Mb to multiple clients`);
    for (let i = 0; i<10; i++)
    {
        clients.push(await client());
        show_mem_usage();
    }
    console.log('--- 2nd phase: closing clients');
    for (let client of clients)
    {
        client.close();
        await sleep(1000);
        show_mem_usage();
    }
};

main()
.then(()=>process.exit())
.catch(e=>{
    console.log('Uncaught', e);
    process.exit(1);
});

@uNetworkingAB (Contributor)

Can you rerun with this commit: uNetworking/uWebSockets@42b368e

std::string::erase(0, pendingRemoval) does not guarantee a shrink of capacity, so it should now be fixed

@igorsadovskii (Author)

@uNetworkingAB thank you for the fast answer! I will check on Sunday and get back to you.

@uNetworkingAB (Contributor)

There is definitely a bug (erase does not shrink the buffer), and that is now fixed. However, in practice it cannot cause the behavior you report, because when the buffer is empty it is cleared, which does shrink it. So the real bug is just that the buffer is only freed once the backpressure is FULLY drained, rather than being freed incrementally along the drainage.
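
(A minimal sketch, not from the thread, of how one could observe this from the JS side, assuming the fork exposes the standard uWebSockets.js drain callback.) Logging RSS next to the remaining buffered amount while the socket drains shows whether memory is returned incrementally or only once the backpressure is fully gone:

```js
const uws = require('uws2');

const MB = 1024**2, GB = 1024**3;
const buffer = Buffer.alloc(100*MB);

uws.App({}).ws('/', {
    maxPayloadLength: 200*MB,
    maxBackpressure: 0, // as in the original script
    message: ws=>ws.send(buffer, true),
    // fires repeatedly while queued backpressure is being written out;
    // comparing RSS against the remaining buffered bytes shows when the
    // server actually returns the memory
    drain: ws=>{
        const rss = process.memoryUsage().rss/GB;
        console.log('buffered', ws.getBufferedAmount()/MB, 'Mb,',
            'rss', `${Math.trunc(rss*100)/100} Gb`);
    },
}).listen('127.0.0.1', 3000, token=>{
    if (token)
        console.log('listen');
});
```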
