
Use GCC 13 in CUDA 12 conda builds. #6221

Open
@bdice wants to merge 4 commits into branch-25.02

Conversation

@bdice (Contributor) commented Jan 13, 2025

Description

conda-forge is using GCC 13 for CUDA 12 builds. This PR updates CUDA 12 conda builds to use GCC 13, for alignment.

These PRs should be merged in a specific order; see rapidsai/build-planning#129 for details.

@bdice bdice added the non-breaking (Non-breaking change) and improvement (Improvement / enhancement to an existing function) labels Jan 13, 2025

copy-pr-bot bot commented Jan 13, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


@bdice bdice marked this pull request as ready for review January 13, 2025 18:52
@bdice bdice requested review from a team as code owners January 13, 2025 18:52
@bdice bdice added the 5 - DO NOT MERGE (Hold off on merging; see PR for details) label Jan 13, 2025
@bdice bdice self-assigned this Jan 13, 2025
@jameslamb jameslamb removed the request for review from msarahan January 13, 2025 19:58
@jakirkham (Member) commented:

Seeing the following error on CI:

2025-01-13T19:13:00.8954431Z     inlined from 'static cudaError_t cub::CUB_200700_700_750_800_860_900_NS::DeviceReduce::TransformReduce(void*, size_t&, InputIteratorT, OutputIteratorT, NumItemsT, ReductionOpT, TransformOpT, T, cudaStream_t) [with InputIteratorT = int*; OutputIteratorT = int*; ReductionOpT = thrust::plus<int>; TransformOpT = cuda::__4::__detail::__return_type_wrapper<bool, __nv_dl_wrapper_t<__nv_dl_trailing_return_tag<void (ML::HDBSCAN::Common::CondensedHierarchy<int, float>::*)(int*, int*, float*, int*, int), &ML::HDBSCAN::Common::CondensedHierarchy<int, float>::condense, bool, 1> > >; T = int; NumItemsT = int]' at $SRC_DIR/cpp/build/_deps/cccl-src/cub/cub/cmake/../../cub/device/device_reduce.cuh:1000:143:
2025-01-13T19:13:00.8957470Z $SRC_DIR/cpp/build/_deps/cccl-src/thrust/thrust/cmake/../../thrust/system/cuda/detail/core/triple_chevron_launch.h:143:19: error: 'dispatch' may be used uninitialized [-Werror=maybe-uninitialized]
2025-01-13T19:13:00.8958686Z   143 |     NV_IF_TARGET(NV_IS_HOST, (return doit_host(k, args...);), (return doit_device(k, args...);));
2025-01-13T19:13:00.8959168Z       |          ~~~~~~~~~^~~~~~~~~~~~

@@ -420,6 +420,25 @@ if(BUILD_CUML_CPP_LIBRARY)
src/hdbscan/hdbscan.cu
src/hdbscan/condensed_hierarchy.cu
src/hdbscan/prediction_data.cu)

set_property(
@bdice (Contributor, Author) commented:

@divyegala We need a comment about why these are here. Can you add a link to an issue everywhere that this workaround was used?

Suggested change
set_property(
# When using GCC 13, some maybe-uninitialized warnings appear from CCCL and are treated as errors.
# See this CCCL issue: <INSERT LINK HERE>
set_property(
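For illustration only, a sketch of how the truncated set_property( call might be completed so the suppression stays scoped to the affected sources; the file list echoes the hunk above, and the property and flag choices are assumptions rather than this PR's actual code:

    set_property(
      SOURCE src/hdbscan/hdbscan.cu
             src/hdbscan/condensed_hierarchy.cu
             src/hdbscan/prediction_data.cu
      APPEND PROPERTY COMPILE_OPTIONS
      # Hypothetical: disable only this diagnostic, only for these files,
      # so -Werror keeps catching real maybe-uninitialized uses elsewhere.
      "$<$<COMPILE_LANGUAGE:CUDA>:-Xcompiler=-Wno-maybe-uninitialized>")

Scoping the suppression per source file keeps the warning live for the rest of the build, which is also why the review asks for a linked issue documenting the exception.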

Labels
5 - DO NOT MERGE (Hold off on merging; see PR for details), CMake, conda (conda issue), CUDA/C++, improvement (Improvement / enhancement to an existing function), non-breaking (Non-breaking change)