performance reduction (Node.js vs Browser) #52
In terms of performance, the primary difference between Node.js and the browser is that msgpackr is able to use a native add-on in Node that significantly boosts the performance of extracting/deserializing strings. Unfortunately, the browser environment has pretty poor facilities for fast string decoding; the only real options are TextDecoder and plain-JS decoding. I don't believe there are any real differences in performance between reading from Buffer and Uint8Array themselves (Node has some extra machinery to reuse blocks of memory for Buffer allocation, but Buffer is actually a subclass of Uint8Array, so once you have an instance the interaction is the same).

msgpackr switches between TextDecoder and plain-JS decoding based on the string length, using TextDecoder for strings above 64 characters. There might be slight tweaks that could be made to some of that, but I believe most of it is pretty well tuned and optimized.

To really achieve substantial performance gains in browser decoding, we need a way to bundle all the string data into a single sequential block that can be decoded in one pass. TextDecoder has a high per-call overhead, but is plenty fast in terms of per-character performance, so if it were used to decode the entirety of the string data for a msgpack structure in one pass, I believe it would be very fast. I have been thinking about making some type of custom extension for such a string bundle, and it seems likely that would be the kind of thing that could improve performance in your situation.
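A minimal sketch of the length-threshold strategy described above (this is not msgpackr's actual implementation; the helper name is hypothetical and the ASCII fast path is a simplification for illustration):

```js
const SHORT_STRING_LIMIT = 64; // the threshold mentioned above
const decoder = new TextDecoder();

// Hypothetical helper: decode a UTF-8 string from a byte range.
function decodeUtf8(bytes, start, length) {
  const end = start + length;
  if (length > SHORT_STRING_LIMIT) {
    // TextDecoder has high per-call overhead but good per-character
    // throughput, so it wins for long strings.
    return decoder.decode(bytes.subarray(start, end));
  }
  // For short strings, a plain-JS loop avoids the call overhead entirely.
  let result = '';
  for (let i = start; i < end; i++) {
    const byte = bytes[i];
    if (byte & 0x80) {
      // Multi-byte UTF-8 sequence: fall back to TextDecoder for correctness.
      return decoder.decode(bytes.subarray(start, end));
    }
    result += String.fromCharCode(byte);
  }
  return result;
}

// Example: decodeUtf8(new TextEncoder().encode('hello'), 0, 5) === 'hello'
```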
Thanks for your reply! Here is my test data. Our front-end code uses the socket.io library (over WebSocket) for data transmission. If the message payload is a string, socket.io will try to parse the text message into a JSON object; if the payload is binary, the raw ArrayBuffer is passed to the user code subscribed to socket.io events. Then, for example, if the binary is a MessagePack payload, we use msgpackr's unpack method to turn it into a JavaScript object. If its performance were better than JSON.parse, that would be so cool, because we would not only compress the data but also speed up deserialization! By the way, before calling unpack, we have to wrap the ArrayBuffer in a Uint8Array. Thanks again!
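A sketch of the receive path described in this comment, assuming a socket.io client; the event name, endpoint, and handling logic are hypothetical:

```js
import { io } from 'socket.io-client';
import { unpack } from 'msgpackr';

const socket = io('https://example.com'); // hypothetical endpoint

socket.on('message', (payload) => {
  // Binary frames arrive as an ArrayBuffer in the browser; unpack expects a
  // Buffer/Uint8Array view, so wrap the buffer before decoding.
  if (payload instanceof ArrayBuffer) {
    const data = unpack(new Uint8Array(payload));
    console.log(data);
  }
});
```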
I have added a new option for this; see the referenced commit below.
…for very large data structures to reduce memory entrapment and facilitate streaming in the future, #52
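If the new option referenced here is msgpackr's bundleStrings (an assumption based on the commit description; check the msgpackr README for the authoritative name and semantics), it can be enabled on a Packr instance:

```js
import { Packr } from 'msgpackr';

// bundleStrings groups string data together so it can be decoded in fewer
// TextDecoder calls, matching the single-pass idea discussed above.
const packr = new Packr({ bundleStrings: true });

const packed = packr.pack({ name: 'example', tags: ['a', 'b', 'c'] });
const roundTripped = packr.unpack(packed);
console.log(roundTripped);
```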
I have written a benchmark in our Node.js project comparing msgpackr with the JSON.stringify / JSON.parse methods, and the result looks good and meets our needs. But when I ran the same benchmark in Chrome, the result was inverted.

I guess the reason for the performance reduction is that the byte representation changes from `Buffer` to `Uint8Array`, because the browser environment doesn't have `Buffer`. In the browser, is there still room to optimize unpack's performance? (Otherwise, replacing JSON.parse would introduce additional overhead.)
Node.js 17.2.0 (V8 version: 9.6.180.14)
Chrome 96.0.4664.93 (V8 version: 9.6.180.20)
benchmark code:
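The original snippet was not captured here; below is a minimal sketch of a comparable benchmark (the sample payload and iteration count are arbitrary placeholders):

```js
import { pack, unpack } from 'msgpackr';

// Arbitrary sample payload for illustration.
const sample = {
  id: 1,
  name: 'user',
  items: Array.from({ length: 100 }, (_, i) => ({ i, label: `item-${i}` })),
};
const iterations = 100_000;

const packed = pack(sample);
const json = JSON.stringify(sample);

let start = performance.now();
for (let i = 0; i < iterations; i++) unpack(packed);
console.log('msgpackr unpack:', (performance.now() - start).toFixed(1), 'ms');

start = performance.now();
for (let i = 0; i < iterations; i++) JSON.parse(json);
console.log('JSON.parse:', (performance.now() - start).toFixed(1), 'ms');
```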