
Revert "Revert "performance tooling and improvements (DataDog#858)" (D…
Browse files Browse the repository at this point in the history
…ataDog#898)" (DataDog#936)

This reverts commit c00c8e0.
bengl authored Apr 30, 2020
1 parent 487d9cc commit adaebe6
Showing 24 changed files with 816 additions and 107 deletions.
1 change: 1 addition & 0 deletions .eslintignore
@@ -4,3 +4,4 @@ docs
out
node_modules
versions
acmeair-nodejs
1 change: 1 addition & 0 deletions .gitignore
@@ -113,3 +113,4 @@ prebuilds.*
docs/test.js
!packages/*/test/**/node_modules
dist
acmeair-nodejs
2 changes: 2 additions & 0 deletions LICENSE-3rdparty.csv
@@ -13,6 +13,7 @@ require,lodash.uniq,MIT,Copyright JS Foundation and other contributors
require,methods,MIT,Copyright 2013-2014 TJ Holowaychuk
require,module-details-from-path,MIT,Copyright 2016 Thomas Watson Steen
require,msgpack-lite,MIT,Copyright 2015 Yusuke Kawasaki
require,mnemonist,MIT,Copyright 2016 Guillaume Plique (Yomguithereal)
require,nan,MIT,Copyright 2018 NAN contributors
require,node-gyp-build,MIT,Copyright 2017 Mathias Buus
require,opentracing,MIT,Copyright 2016 Resonance Labs Inc
@@ -27,6 +28,7 @@ require,url-parse,MIT,Copyright 2015 Unshift.io Arnout Kazemier the Contributors
require,whatwg-fetch,MIT,Copyright 2014-2016 GitHub Inc.
dev,@babel/core,MIT,Copyright 2014-present Sebastian McKenzie and other contributors
dev,@babel/preset-env,MIT,Copyright 2014-present Sebastian McKenzie and other contributors
dev,autocannon,MIT,Copyright 2016 Matteo Collina
dev,axios,MIT,Copyright 2014-present Matt Zabriskie
dev,babel-loader,MIT,Copyright 2014-2019 Luís Couto
dev,benchmark,MIT,Copyright 2010-2016 Mathias Bynens Robert Kieffer John-David Dalton
4 changes: 3 additions & 1 deletion benchmark/core.js
@@ -34,6 +34,8 @@ let sampler
const spanStub = require('./stubs/span')
const span = format(spanStub)

const buffer = Buffer.alloc(10 * 1024 * 1024)

suite
.add('DatadogTracer#startSpan', {
onStart () {
@@ -93,7 +95,7 @@ suite
})
.add('encode', {
fn () {
encode([span])
encode(buffer, 0, [span])
}
})
.add('id', {
65 changes: 65 additions & 0 deletions benchmark/e2e/README.md
@@ -0,0 +1,65 @@
# End-to-End Benchmarking

The purpose of this folder is to test the overhead of dd-trace on an
application. The primary focus is on easily measurable metrics like latency,
requests per second (RPS), and throughput.

We're using a sample app called AcmeAir, which the Node.js project uses to
benchmark itself (results are at <https://benchmarking.nodejs.org/>). Load is
produced with [autocannon](https://npm.im/autocannon), which also reports the
results. We test with and without the tracer to measure overhead, using two
separate endpoints that are measured independently: a worst case (a static
landing page) and a more realistic case (an endpoint that makes a DB call).

## Requirements

This test should work with all versions of Node.js supported by dd-trace. In
addition, the sample app uses MongoDB, so you'll need that running and
listening on the default port. If you're set up with the `docker-compose.yml`
in the root of this repo, you should be ready.
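
If you're not using the compose file, a minimal sketch of starting MongoDB by
hand might look like the following (the `mongo` service name and image tag are
assumptions for illustration, not something this repo prescribes):

```
# assumes the repo's docker-compose.yml defines a "mongo" service
docker-compose up -d mongo
# or, without compose, run a throwaway MongoDB on the default port 27017
docker run --rm -d -p 27017:27017 mongo:4
```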

## Usage

To start the test, run `yarn bench:e2e`. This will install AcmeAir if it hasn't
yet been installed, and populate MongoDB if that hasn't already been done.

Next, it will run three tests for 10 seconds each, sequentially, on each of the
two endpoints. The three tests are:

1. Without any tracing (i.e. a control test)
2. With async hooks enabled
3. With tracing enabled

That means 60 seconds of testing. Results will appear on stdout.

You can change the duration of the tests by setting the `DD_BENCH_DURATION`
environment variable to the number of seconds to run. Keep in mind that this
duration applies to each of the 6 runs (the three tests above on two
endpoints), so if you set it to `60`, the full run takes about 6 minutes.
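
For example, to run each test for 60 seconds:

```
DD_BENCH_DURATION=60 yarn bench:e2e
```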

### Profiling, Method 1

To profile the app, the easiest thing to do is set the `DD_BENCH_PROF`
environment variable to a truthy string. This adds `--prof` to the node
processes, so each of the 6 test runs writes a file called
`isolate-0x${SOMEHEX}-${PID}-v8.log`. You can then use `node --prof-process` or
a tool like [pflames](https://npm.im/pflames) to view the profile data.
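
As a sketch, a profiled run followed by post-processing might look like this
(the isolate file names vary, and the output file name is just an example):

```
DD_BENCH_PROF=1 yarn bench:e2e
# then, from the directory where the isolate-*.log files were written,
# post-process one of them into readable text:
node --prof-process "$(ls isolate-0x*-v8.log | head -n 1)" > processed.txt
```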

### Profiling, Method 2

You can run the app manually, using a tool like [0x](https://npm.im/0x) to get
profiling data. To do that, you'll need to run the fake agent (`node
fake-agent.js`) and run the app using `preamble.js` as a pre-require. You'll also
need to set `DD_BENCH_TRACE_ENABLE=1`, which is the switch used to turn on
tracing for the test script (leave it off to get a non-traced baseline).

For example, you might use a shell script like this:

```
node fake-agent.js > /dev/null &
FAKE_AGENT_PID=$!
cd acmeair-nodejs
DD_BENCH_TRACE_ENABLE=1 0x -P 'autocannon http://localhost:$PORT/' -- node -r ../preamble.js app.js
# Ctrl-C when it's done
kill $FAKE_AGENT_PID
```
170 changes: 170 additions & 0 deletions benchmark/e2e/benchmark-run.js
@@ -0,0 +1,170 @@
'use strict'

/* eslint-disable no-console */

const { spawn, fork } = require('child_process')
const { promisify } = require('util')
const { stat } = require('fs')
const { get: _get } = require('http')
const path = require('path')
const mongoService = require('../../packages/dd-trace/test/setup/services/mongo')
const autocannon = require('autocannon')
const { chdir: cd } = process

const preambleArgs = ['--require', '../preamble.js']

function sh (cmd) {
return new Promise((resolve, reject) => {
console.log('>', cmd)
spawn(cmd, [], { stdio: 'inherit', shell: true })
.on('error', reject)
.on('close', resolve)
})
}

function forkProcess (file, options = {}) {
return new Promise((resolve, reject) => {
console.log(`> node ${options.execArgv ? options.execArgv.join(' ') + ' ' : ''}${file}`)
options.stdio = 'pipe'
const subProcess = fork(file, options)
console.log('## PID', subProcess.pid)
subProcess.on('message', message => {
if (message.ready) {
resolve({ subProcess })
}
})
})
}

const statAsync = promisify(stat)
async function exists (filename) {
try {
const stats = await statAsync(filename)
return stats.isDirectory() || stats.isFile()
} catch (e) {
return false
}
}

function get (url) {
return new Promise((resolve, reject) => {
_get(url, res => {
const chunks = []
res.on('data', d => chunks.push(d))
res.on('end', () => {
resolve(Buffer.concat(chunks).toString())
})
})
})
}

async function checkDb () {
console.log('# checking that db is populated')
cd('acmeair-nodejs')
const { subProcess } = await forkProcess('./app.js', {
execArgv: process.execArgv.concat(preambleArgs)
})

const customers = await get('http://localhost:9080/rest/api/config/countCustomers')

if (parseInt(customers) < 10000) {
console.log('# populating db')
await get('http://localhost:9080/rest/api/loader/load?numCustomers=10000')
}

subProcess.kill()
cd(__dirname)
console.log('# db is populated')
}

async function ensureAppIsInstalled () {
cd(__dirname)
if (!(await exists(path.join(__dirname, 'acmeair-nodejs')))) {
await sh('git clone [email protected]:acmeair/acmeair-nodejs.git')
}
cd('acmeair-nodejs')
await sh('yarn')
cd(__dirname)
}

async function testOneScenario (url, duration, prof, additionalEnv = {}) {
const execArgv = preambleArgs.slice()
if (prof) {
execArgv.unshift('--prof')
}
const { subProcess } = await forkProcess('./app.js', {
execArgv: execArgv.concat(process.execArgv),
env: Object.assign({}, process.env, additionalEnv)
})

const results = await autocannon({ url, duration })

subProcess.kill()
return results
}

async function withFakeAgent (fn) {
console.log('# Starting fake agent')
const { subProcess } = await forkProcess('../fake-agent.js')
await fn()
subProcess.kill()
}

async function testBoth (url, duration, prof) {
// TODO We should have ways of invoking the individual tests in isolation
cd(path.join(__dirname, 'acmeair-nodejs'))
const results = {}
await withFakeAgent(async () => {
console.log(' # Running with the tracer ...')
results.withTracer = await testOneScenario(url, duration, prof, { DD_BENCH_TRACE_ENABLE: 1 })
})

console.log('# Running without the tracer (control) ...')
results.withoutTracer = await testOneScenario(url, duration, prof)

console.log('# Running with async_hooks ...')
results.withAsyncHooks = await testOneScenario(url, duration, prof, { DD_BENCH_ASYNC_HOOKS: 1 })

console.log(`>>>>>> RESULTS FOR ${url} RUNNING FOR ${duration} SECONDS`)

logResult(results, 'requests')
logResult(results, 'latency')
logResult(results, 'throughput')

console.log(`<<<<<< RESULTS FOR ${url} RUNNING FOR ${duration} SECONDS`)
cd(__dirname)
}

function pad (str, num) {
return Array(num - String(str).length).fill(' ').join('') + str
}

function logResult (results, type) {
console.log(`\n${type.toUpperCase()}:`)
console.log(` without tracer with async_hooks with tracer`)
for (const name in results.withoutTracer[type]) {
console.log(
pad(name, 7),
`\t${pad(results.withoutTracer[type][name], 16)}`,
`\t${pad(results.withAsyncHooks[type][name], 16)}`,
`\t${pad(results.withTracer[type][name], 16)}`
)
}
}

async function main () {
const duration = parseInt(process.env.DD_BENCH_DURATION || '10')
const prof = !!process.env.DD_BENCH_PROF
await ensureAppIsInstalled()
console.log('# checking that mongo is alive')
await mongoService()
console.log('# it is alive')
await checkDb()
await testBoth('http://localhost:9080/', duration, prof)
await testBoth('http://localhost:9080/rest/api/config/countCustomers', duration, prof)
}

main().catch(e => {
console.error(e)
process.exitCode = 1
})
24 changes: 24 additions & 0 deletions benchmark/e2e/fake-agent.js
@@ -0,0 +1,24 @@
'use strict'
const http = require('http')

http.createServer(async (req, res) => {
res.statusCode = 200
if (await streamLen(req) > 0) {
res.write(JSON.stringify({ rate_by_service: { 'service:,env:': 1 } }))
}
res.end()
}).listen(8126, 'localhost', () => {
if (process.send) { process.send({ ready: true }) }
})

async function streamLen (strm) {
try {
let len = 0
for await (const buf of strm) {
len += buf.length
}
return len
} catch (e) {
return 0
}
}
20 changes: 20 additions & 0 deletions benchmark/e2e/preamble.js
@@ -0,0 +1,20 @@
'use strict'

if (process.env.DD_BENCH_TRACE_ENABLE) {
require('../..').init({})
} else if (process.env.DD_BENCH_ASYNC_HOOKS) {
const asyncHooks = require('async_hooks')
const hook = asyncHooks.createHook({
init () {},
before () {},
after () {},
destroy () {}
})
hook.enable()
}
const { Server } = require('http')
const origEmit = Server.prototype.emit
Server.prototype.emit = function (name) {
if (name === 'listening') { process.send && process.send({ ready: true }) }
return origEmit.apply(this, arguments)
}
7 changes: 0 additions & 7 deletions benchmark/platform/node.js
@@ -9,24 +9,17 @@ platform.use(node)

const suite = benchmark('platform (node)')

const traceStub = require('../stubs/trace')
const spanStub = require('../stubs/span')
const config = new Config()

platform.configure(config)
platform.metrics().start()

suite
.add('now', {
fn () {
platform.now()
}
})
.add('msgpack#prefix', {
fn () {
platform.msgpack.prefix(traceStub)
}
})
.add('metrics#track', {
fn () {
platform.metrics().track(spanStub).finish()
3 changes: 3 additions & 0 deletions package.json
@@ -15,6 +15,7 @@
"prepublishOnly": "node scripts/prepublish.js",
"postpublish": "node scripts/postpublish.js",
"bench": "node benchmark",
"bench:e2e": "cd benchmark/e2e && node benchmark-run.js",
"type:doc": "cd docs && yarn && yarn build",
"type:test": "cd docs && yarn && yarn test",
"lint": "node scripts/check_licenses.js && eslint .",
@@ -61,6 +62,7 @@
"lodash.sortby": "^4.7.0",
"lodash.uniq": "^4.5.0",
"methods": "^1.1.2",
"mnemonist": "^0.32.0",
"module-details-from-path": "^1.0.3",
"msgpack-lite": "^0.1.26",
"nan": "^2.12.1",
@@ -79,6 +81,7 @@
"devDependencies": {
"@babel/core": "^7.6.4",
"@babel/preset-env": "^7.6.3",
"autocannon": "^4.5.2",
"axios": "^0.18.0",
"babel-loader": "^8.0.6",
"benchmark": "^2.1.4",
