Various test prep work #246

Closed
wants to merge 5 commits

Conversation

evelikov
Collaborator

Nothing too spectacular here, maybe apart from the last commit:

Some dedup (props to Lucas for the idea), use 77 for skipped tests, run all tests, use distinct test names... remove need_spawn = false path

While working on some more tests/cleanups, I may or may not have accidentally fork-bombed my system, courtesy of the two code paths in our test suite. That, combined with the buggy (seemingly since day one) output, led me to nuke one of them.

If anyone is strongly in favour of keeping it, I would request a patch to fix the output in return :-P

Too much copy and paste for simple exec tests. Just provide an
EXEC_MODPROBE macro to help.

Hats off to Lucas for the initial idea and porting the depmod test.

Signed-off-by: Emil Velikov <[email protected]>
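As a rough illustration of the dedup idea (the helper name, tool path, and exact expansion below are placeholders, not the actual testsuite API), a macro like this collapses each simple exec test to one line:

```c
/* Minimal sketch only - names and paths are assumptions, not kmod's code. */
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for the testsuite's exec helper. */
static int exec_tool(const char *path, char *const argv[])
{
	execv(path, argv); /* returns only on error */
	perror("execv");
	return 1;
}

/* Hide the argv/exec boilerplate that every simple modprobe test repeated. */
#define EXEC_MODPROBE(...) \
	exec_tool("tools/modprobe", \
		  (char *const[]){ "modprobe", __VA_ARGS__, NULL })

int main(void)
{
	/* A test body reduced to a single line. */
	return EXEC_MODPROBE("--show-depends", "ext4");
}
```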
Using 77 is a de-facto standard for skipped tests that both autotools
and meson support OOTB. Use that instead of EXIT_SUCCESS aka 0.

Semi-recently we removed the only skip = true tests, although with more
tests incoming we're bound to have a few users.

Signed-off-by: Emil Velikov <[email protected]>
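For reference, the convention looks roughly like this from a test binary's point of view (the constant name and the precondition check are illustrative, not the testsuite's code):

```c
#include <stdlib.h>

#define EXIT_SKIP 77 /* exit code automake and meson both report as SKIP */

int main(void)
{
	/* Hypothetical precondition; the environment variable is just an example. */
	if (getenv("KMOD_TESTSUITE_HAVE_MODULES") == NULL)
		return EXIT_SKIP; /* reported as skipped, not passed */

	/* ... actual test body ... */
	return EXIT_SUCCESS;
}
```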
Instead of exiting on the first failing test, execute all tests and
report overall failure as needed.

This gives us a better overview - whether we have a single failure or
multiple instances needing work - and more consistent result numbers.

Signed-off-by: Emil Velikov <[email protected]>
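A minimal sketch of that pattern, using an illustrative test table rather than the testsuite's real structures:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct test {
	const char *name;
	int (*func)(void);
};

static int run_one(const struct test *t)
{
	int err = t->func();

	printf("%s: %s\n", t->name, err == 0 ? "PASS" : "FAIL");
	return err;
}

static int test_ok(void)  { return 0; }
static int test_bad(void) { return 1; }

int main(void)
{
	const struct test tests[] = {
		{ "ok",  test_ok },
		{ "bad", test_bad },
	};
	bool failed = false;

	/* Run every test instead of bailing out on the first failure... */
	for (size_t i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
		if (run_one(&tests[i]) != 0)
			failed = true;

	/* ...and report the overall result once, at the end. */
	return failed ? 1 : 0;
}
```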
Our test suite is a little unique in my experience in that the test can
be either a normal (fork) child or a (re)spawned one. All the other test
frameworks I have used opt for only one of the two.

I'm not entirely sure why we have both since the latter is sufficient for
all use-cases that we have. Perhaps the former was kept as a
micro-optimisation?

Currently I am exploring a way to provide a results summary, and for the
need_spawn = false tests it ends up printed multiple times. At a glance I
couldn't quite find a way to fix it.

In addition, I am also looking at removing/reducing the use of exit()
across the test suite, where the two code flows make that process more
convoluted.

So let's remove one of the code-paths, simplify things and fix the
logging output. If needed we can re-introduce it later on.

NOTE: there's a lot going on here, because clang-format insists on
reformatting a bunch of the DEFINE_TEST() instances :-\

Signed-off-by: Emil Velikov <[email protected]>
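A rough illustration of why the re-spawn path matters (paths and environment handling below are simplified assumptions): LD_PRELOAD is only consulted by the dynamic linker when a binary is exec'ed, so a plain fork'ed child keeps running without the preloaded overrides.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	(void)argc;

	if (getenv("TEST_RESPAWNED") == NULL) {
		/* First pass: fork, set up the override, then re-exec so the
		 * dynamic linker actually honours LD_PRELOAD. */
		pid_t pid = fork();
		int status;

		if (pid < 0)
			return 1;
		if (pid == 0) {
			setenv("LD_PRELOAD", "./override.so", 1); /* illustrative path */
			setenv("TEST_RESPAWNED", "1", 1);
			execv("/proc/self/exe", argv);
			_exit(127);
		}
		waitpid(pid, &status, 0);
		return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
	}

	/* Second pass: the overrides are now in effect; run the test body. */
	printf("running with LD_PRELOAD=%s\n", getenv("LD_PRELOAD"));
	return EXIT_SUCCESS;
}
```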

codecov bot commented Nov 15, 2024

Codecov Report

Attention: Patch coverage is 95.00000% with 1 line in your changes missing coverage. Please review.

Files with missing lines        Patch %    Lines
testsuite/testsuite.c           66.66%     1 Missing ⚠️

Files with missing lines        Coverage Δ
testsuite/test-blacklist.c      53.48% <ø> (ø)
testsuite/test-dependencies.c   75.75% <ø> (ø)
testsuite/test-init.c           55.22% <ø> (ø)
testsuite/test-initstate.c      52.08% <ø> (ø)
testsuite/test-loaded.c         63.88% <ø> (ø)
testsuite/test-modprobe.c       61.81% <100.00%> (ø)
testsuite/test-new-module.c     86.11% <ø> (ø)
testsuite/test-testsuite.c      52.17% <ø> (ø)
testsuite/test-util.c           85.71% <ø> (ø)
testsuite/test-weakdep.c        72.22% <ø> (ø)
... and 1 more


executing the test. If you are not exec'ing an external binary, you need to
pass "need_spawn = true" below, otherwise it will not work (LD_PRELOAD is
only applied when exec'ing a binary). See each config description in
testsuite.h
Contributor

this explains why it was doing that... Basically to avoid a double exec when calling e.g. modprobe or depmod. Maybe we should let spawn_prog() do the right thing rather than reading this info from the test config.
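One possible reading of that suggestion, purely as a sketch (the struct and function below are assumptions about how the harness could decide this itself, not kmod's actual spawn_prog() or test config):

```c
/* Illustrative only - not the real testsuite structures. */
#include <stdbool.h>
#include <stddef.h>

struct test {
	const char *name;
	const char *prog;  /* external binary the test execs, or NULL */
	int (*func)(void); /* in-process test body, or NULL */
};

/* If the test execs an external binary anyway, LD_PRELOAD is applied at
 * that exec, so a preliminary re-exec of the test binary itself would be
 * a redundant double exec. Only in-process tests need the re-spawn. */
static bool needs_respawn(const struct test *t)
{
	return t->prog == NULL;
}
```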

Contributor

but indeed, the way it's coded is confusing and I'm ok with dropping it if it works

lucasdemarchi pushed a commit that referenced this pull request Nov 21, 2024
Too much copy and paste for simple exec tests. Just provide an
EXEC_MODPROBE macro to help.

Hats off to Lucas for the initial idea and porting the depmod test.

Signed-off-by: Emil Velikov <[email protected]>
Link: #246
Signed-off-by: Lucas De Marchi <[email protected]>
lucasdemarchi pushed a commit that referenced this pull request Nov 21, 2024
Using 77 is a de-facto standard for skipped tests that both autotools
and meson support OOTB. Use that instead of EXIT_SUCCESS aka 0.

Semi-recently we removed the only skip = true tests, although with more
tests incoming we're bound to have a few users.

Signed-off-by: Emil Velikov <[email protected]>
Link: #246
Signed-off-by: Lucas De Marchi <[email protected]>
lucasdemarchi pushed a commit that referenced this pull request Nov 21, 2024
Instead of exiting on the first failing test, execute all tests and
report overall failure as needed.

This gives us a better overview - whether we have a single failure or
multiple instances needing work - and more consistent result numbers.

Signed-off-by: Emil Velikov <[email protected]>
Link: #246
Signed-off-by: Lucas De Marchi <[email protected]>
lucasdemarchi pushed a commit that referenced this pull request Nov 21, 2024
Signed-off-by: Emil Velikov <[email protected]>
Link: #246
Signed-off-by: Lucas De Marchi <[email protected]>
lucasdemarchi pushed a commit that referenced this pull request Nov 21, 2024
Our test suite is a little unique in my experience in that the test can
be either a normal (fork) child or a (re)spawned one. All the other test
frameworks I have used opt for only one of the two.

I'm not entirely sure why we have both since the latter is sufficient for
all use-cases that we have. Perhaps the former was kept as a
micro-optimisation?

Currently I am exploring a way to provide a results summary, and for the
need_spawn = false tests it ends up printed multiple times. At a glance I
couldn't quite find a way to fix it.

In addition, I am also looking at removing/reducing the use of exit()
across the test suite, where the two code flows make that process more
convoluted.

So let's remove one of the code-paths, simplify things and fix the
logging output. If needed we can re-introduce it later on.

NOTE: there's a lot going on here, because clang-format insists on
reformatting a bunch of the DEFINE_TEST() instances :-\

Signed-off-by: Emil Velikov <[email protected]>
Link: #246
Signed-off-by: Lucas De Marchi <[email protected]>
@lucasdemarchi
Contributor

Applied, thanks.
