GPU example does not run #959
Comments
This seems to be happening a lot in Lux CI as well (somewhat stochastically), but I haven't been able to pinpoint the source of it.
Could you suggest an alternative implementation to run on GPU? Thanks.
cc @maleadt @ChrisRackauckas do you happen to know the source of this? I remember it came up in CI a while back, but I don't know how we fixed it.
It looked like cache corruption, but we weren't able to reduce it, let alone fix it. IIRC Cody or Gabriel took the most recent look at it.
What are the current recommendations if I plan to use the package? Would reverting to a previous version be viable?
We're having people take a look at this. The bug is elusive, and it is not clear what is causing it, so there is no clear workaround right now, but @gbaraldi is on the case and hopefully we will finally be able to track it down. This example seems to reproduce it more easily than the earlier cases we found, which were very dependent on the machine they ran on. We will update you ASAP on this attempt to reproduce and isolate it.
Great to know! Thank you so much!
Describe the bug 🐞
I tried to run the ODE on GPU example but encountered an error.
Expected behavior
The example code is expected to run without problems.
Minimal Reproducible Example 👇
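The report as filed does not include the example code. The stack trace (predict_neuralode at In[3]:31, loss_neuralode at In[3]:33, and a Chain of x .^ 3, Dense(2 => 50, tanh), Dense(50 => 2)) matches DiffEqFlux's documented "Neural ODE on GPU" example, so the following is a reconstructed sketch of that example; the layer sizes come from the trace, while the data generation, learning rate, and maxiters are assumptions.

```julia
using DiffEqFlux, OrdinaryDiffEq, Lux, LuxCUDA, CUDA, ComponentArrays,
      Optimization, OptimizationOptimisers, SciMLSensitivity, Zygote, Random

CUDA.allowscalar(false)
rng = Random.default_rng()
const gdev = gpu_device()

datasize = 30
tspan = (0.0f0, 1.5f0)
tsteps = range(tspan[1], tspan[2]; length = datasize)

# Generate training data from a known cubic ODE on the CPU, then move it to the GPU.
function trueODEfunc(du, u, p, t)
    true_A = Float32[-0.1 2.0; -2.0 -0.1]
    du .= ((u .^ 3)' * true_A)'
end
u0_cpu = Float32[2.0; 0.0]
prob_trueode = ODEProblem(trueODEfunc, u0_cpu, tspan)
ode_data = Array(solve(prob_trueode, Tsit5(); saveat = tsteps)) |> gdev

# Network shape taken from the stack trace: x .^ 3, then 2 => 50 (tanh), then 50 => 2.
dudt2 = Chain(x -> x .^ 3, Dense(2 => 50, tanh), Dense(50 => 2))
ps, st = Lux.setup(rng, dudt2)
p = ps |> ComponentArray |> gdev   # the line flagged under "Additional context" below
u0 = u0_cpu |> gdev

prob_neuralode = NeuralODE(dudt2, tspan, Tsit5(); saveat = tsteps)

function predict_neuralode(p)
    sol = first(prob_neuralode(u0, p, st))  # ODESolution with CuArray states
    reduce(hcat, sol.u)                     # 2 x datasize CuMatrix
end
loss_neuralode(p) = sum(abs2, ode_data .- predict_neuralode(p))

optf = OptimizationFunction((p, _) -> loss_neuralode(p), Optimization.AutoZygote())
optprob = OptimizationProblem(optf, p)
res = Optimization.solve(optprob, Adam(0.05); maxiters = 300)  # errors as reported below
```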
Error & Stacktrace ⚠️
ERROR: Not implemented
Stacktrace:
[1] error(s::String)
@ Base .\error.jl:35
[2] runtime_module(job::GPUCompiler.CompilerJob)
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\interface.jl:176
[3] build_runtime(job::GPUCompiler.CompilerJob)
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\rtlib.jl:106
[4] (::GPUCompiler.var"#168#170"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}})()
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\rtlib.jl:152
[5] lock(f::GPUCompiler.var"#168#170"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}}, l::ReentrantLock)
@ Base .\lock.jl:232
[6] macro expansion
@ C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\rtlib.jl:130 [inlined]
[7] load_runtime(job::GPUCompiler.CompilerJob)
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\utils.jl:108
[8] macro expansion
@ C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\driver.jl:264 [inlined]
[9] emit_llvm(job::GPUCompiler.CompilerJob; toplevel::Bool, libraries::Bool, optimize::Bool, cleanup::Bool, validate::Bool, only_entry::Bool)
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\utils.jl:108
[10] emit_llvm
@ C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\utils.jl:106 [inlined]
[11] codegen(output::Symbol, job::GPUCompiler.CompilerJob; toplevel::Bool, libraries::Bool, optimize::Bool, cleanup::Bool, validate::Bool, strip::Bool, only_entry::Bool, parent_job::Nothing)
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\driver.jl:100
[12] codegen(output::Symbol, job::GPUCompiler.CompilerJob)
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\driver.jl:82
[13] compile(target::Symbol, job::GPUCompiler.CompilerJob; kwargs::@kwargs{})
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\driver.jl:79
[14] compile
@ C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\driver.jl:74 [inlined]
[15] #1145
@ C:\Users\maw48.julia\packages\CUDA\2kjXI\src\compiler\compilation.jl:250 [inlined]
[16] JuliaContext(f::CUDA.var"#1145#1148"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}}; kwargs::@kwargs{})
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\driver.jl:34
[17] JuliaContext(f::Function)
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\driver.jl:25
[18] compile(job::GPUCompiler.CompilerJob)
@ CUDA C:\Users\maw48.julia\packages\CUDA\2kjXI\src\compiler\compilation.jl:249
[19] actual_compilation(cache::Dict{Any, CuFunction}, src::Core.MethodInstance, world::UInt64, cfg::GPUCompiler.CompilerConfig{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, compiler::typeof(CUDA.compile), linker::typeof(CUDA.link))
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\execution.jl:237
[20] cached_compilation(cache::Dict{Any, CuFunction}, src::Core.MethodInstance, cfg::GPUCompiler.CompilerConfig{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, compiler::Function, linker::Function)
@ GPUCompiler C:\Users\maw48.julia\packages\GPUCompiler\2CW9L\src\execution.jl:151
[21] macro expansion
@ C:\Users\maw48.julia\packages\CUDA\2kjXI\src\compiler\execution.jl:380 [inlined]
[22] macro expansion
@ .\lock.jl:273 [inlined]
[23] cufunction(f::typeof(CUDA.partial_mapreduce_grid), tt::Type{Tuple{typeof(identity), typeof(Base.add_sum), Float32, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, CartesianIndices{1, Tuple{Base.OneTo{Int64}}}, Val{true}, CuDeviceMatrix{Float32, 1}, Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1, CUDA.DeviceMemory}, Tuple{Base.OneTo{Int64}}, typeof(DiffEqBase.sse), Tuple{CuDeviceVector{Float32, 1}}}}}; kwargs::@kwargs{})
@ CUDA C:\Users\maw48.julia\packages\CUDA\2kjXI\src\compiler\execution.jl:375
[24] cufunction
@ C:\Users\maw48.julia\packages\CUDA\2kjXI\src\compiler\execution.jl:372 [inlined]
[25] macro expansion
@ C:\Users\maw48.julia\packages\CUDA\2kjXI\src\compiler\execution.jl:112 [inlined]
[26] mapreducedim!(f::typeof(identity), op::typeof(Base.add_sum), R::CuArray{Float32, 1, CUDA.DeviceMemory}, A::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1, CUDA.DeviceMemory}, Tuple{Base.OneTo{Int64}}, typeof(DiffEqBase.sse), Tuple{CuArray{Float32, 1, CUDA.DeviceMemory}}}; init::Float32)
@ CUDA C:\Users\maw48.julia\packages\CUDA\2kjXI\src\mapreduce.jl:234
[27] mapreducedim!
@ C:\Users\maw48.julia\packages\CUDA\2kjXI\src\mapreduce.jl:169 [inlined]
[28] _mapreduce(f::typeof(DiffEqBase.sse), op::typeof(Base.add_sum), As::CuArray{Float32, 1, CUDA.DeviceMemory}; dims::Colon, init::Float32)
@ GPUArrays C:\Users\maw48.julia\packages\GPUArrays\qt4ax\src\host\mapreduce.jl:67
[29] _mapreduce
@ C:\Users\maw48.julia\packages\GPUArrays\qt4ax\src\host\mapreduce.jl:33 [inlined]
[30] mapreduce
@ C:\Users\maw48.julia\packages\GPUArrays\qt4ax\src\host\mapreduce.jl:28 [inlined]
[31] _sum
@ .\reducedim.jl:987 [inlined]
[32] sum
@ .\reducedim.jl:983 [inlined]
[33] ODE_DEFAULT_NORM
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\ext\DiffEqBaseCUDAExt.jl:7 [inlined]
[34] __init(prob::ODEProblem{CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Float32, Float32}, false, ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#17"{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::WrappedFunction{var"#1#2"}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, LinearAlgebra.UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @kwargs{}, SciMLBase.StandardODEProblem}, alg::Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, timeseries_init::Tuple{}, ts_init::Tuple{}, ks_init::Tuple{}, recompile::Type{Val{true}}; saveat::Tuple{}, tstops::Tuple{}, d_discontinuities::Tuple{}, save_idxs::Nothing, save_everystep::Bool, save_on::Bool, save_start::Bool, save_end::Bool, callback::Nothing, dense::Bool, calck::Bool, dt::Float32, dtmin::Float32, dtmax::Float32, force_dtmin::Bool, adaptive::Bool, gamma::Rational{Int64}, abstol::Nothing, reltol::Nothing, qmin::Rational{Int64}, qmax::Int64, qsteady_min::Int64, qsteady_max::Int64, beta1::Nothing, beta2::Nothing, qoldinit::Rational{Int64}, controller::Nothing, fullnormalize::Bool, failfactor::Int64, maxiters::Int64, internalnorm::typeof(DiffEqBase.ODE_DEFAULT_NORM), internalopnorm::typeof(LinearAlgebra.opnorm), isoutofdomain::typeof(DiffEqBase.ODE_DEFAULT_ISOUTOFDOMAIN), unstable_check::typeof(DiffEqBase.ODE_DEFAULT_UNSTABLE_CHECK), verbose::Bool, timeseries_errors::Bool, dense_errors::Bool, advance_to_tstop::Bool, stop_at_next_tstop::Bool, initialize_save::Bool, progress::Bool, progress_steps::Int64, progress_name::String, progress_message::typeof(DiffEqBase.ODE_DEFAULT_PROG_MESSAGE), progress_id::Symbol, userdata::Nothing, allow_extrapolation::Bool, initialize_integrator::Bool, alias::ODEAliasSpecifier, initializealg::OrdinaryDiffEqCore.DefaultInit, kwargs::@kwargs{save_noise::Bool})
@ OrdinaryDiffEqCore C:\Users\maw48.julia\packages\OrdinaryDiffEqCore\3Talm\src\solve.jl:383
[35] __init (repeats 5 times)
@ C:\Users\maw48.julia\packages\OrdinaryDiffEqCore\3Talm\src\solve.jl:11 [inlined]
[36] #__solve#62
@ C:\Users\maw48.julia\packages\OrdinaryDiffEqCore\3Talm\src\solve.jl:6 [inlined]
[37] __solve
@ C:\Users\maw48.julia\packages\OrdinaryDiffEqCore\3Talm\src\solve.jl:1 [inlined]
[38] solve_call(_prob::ODEProblem{CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Float32, Float32}, false, ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#17"{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::WrappedFunction{var"#1#2"}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, LinearAlgebra.UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @kwargs{}, SciMLBase.StandardODEProblem}, args::Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}; merge_callbacks::Bool, kwargshandle::Nothing, kwargs::@kwargs{save_noise::Bool, save_start::Bool, save_end::Bool})
@ DiffEqBase C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:634
[39] solve_call
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:591 [inlined]
[40] #solve_up#53
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1122 [inlined]
[41] solve_up
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1101 [inlined]
[42] #solve#51
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1038 [inlined]
[43] _concrete_solve_adjoint(::ODEProblem{CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Float32, Float32}, false, ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#17"{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::WrappedFunction{var"#1#2"}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, LinearAlgebra.UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @kwargs{}, SciMLBase.StandardODEProblem}, ::Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, ::InterpolatingAdjoint{0, true, Val{:central}, ZygoteVJP}, ::CuArray{Float32, 1, CUDA.DeviceMemory}, ::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ::SciMLBase.ChainRulesOriginator; save_start::Bool, save_end::Bool, saveat::StepRangeLen{Float32, Float64, Float64, Int64}, save_idxs::Nothing, kwargs::@kwargs{})
@ SciMLSensitivity C:\Users\maw48.julia\packages\SciMLSensitivity\RQ8Av\src\concrete_solve.jl:424
[44] _concrete_solve_adjoint
@ C:\Users\maw48.julia\packages\SciMLSensitivity\RQ8Av\src\concrete_solve.jl:361 [inlined]
[45] #_solve_adjoint#75
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1585 [inlined]
[46] _solve_adjoint
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1558 [inlined]
[47] #rrule#4
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\ext\DiffEqBaseChainRulesCoreExt.jl:26 [inlined]
[48] rrule
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\ext\DiffEqBaseChainRulesCoreExt.jl:22 [inlined]
[49] rrule
@ C:\Users\maw48.julia\packages\ChainRulesCore\U6wNx\src\rules.jl:144 [inlined]
[50] chain_rrule_kw
@ C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\chainrules.jl:236 [inlined]
[51] macro expansion
@ C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0 [inlined]
[52] _pullback
@ C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:91 [inlined]
[53] _apply
@ .\boot.jl:946 [inlined]
[54] adjoint
@ C:\Users\maw48.julia\packages\Zygote\TWpme\src\lib\lib.jl:202 [inlined]
[55] _pullback
@ C:\Users\maw48.julia\packages\ZygoteRules\M4xmc\src\adjoint.jl:67 [inlined]
[56] #solve#51
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1038 [inlined]
[57] _pullback(::Zygote.Context{false}, ::DiffEqBase.var"##solve#51", ::InterpolatingAdjoint{0, true, Val{:central}, ZygoteVJP}, ::Nothing, ::Nothing, ::Val{true}, ::@kwargs{saveat::StepRangeLen{Float32, Float64, Float64, Int64}}, ::typeof(solve), ::ODEProblem{CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Float32, Float32}, false, ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#17"{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::WrappedFunction{var"#1#2"}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, LinearAlgebra.UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @kwargs{}, SciMLBase.StandardODEProblem}, ::Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False})
@ Zygote C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0
[58] _apply
@ .\boot.jl:946 [inlined]
[59] adjoint
@ C:\Users\maw48.julia\packages\Zygote\TWpme\src\lib\lib.jl:202 [inlined]
[60] _pullback
@ C:\Users\maw48.julia\packages\ZygoteRules\M4xmc\src\adjoint.jl:67 [inlined]
[61] solve
@ C:\Users\maw48.julia\packages\DiffEqBase\R2Vjs\src\solve.jl:1028 [inlined]
[62] _pullback(::Zygote.Context{false}, ::typeof(Core.kwcall), ::@NamedTuple{sensealg::InterpolatingAdjoint{0, true, Val{:central}, ZygoteVJP}, saveat::StepRangeLen{Float32, Float64, Float64, Int64}}, ::typeof(solve), ::ODEProblem{CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Float32, Float32}, false, ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ODEFunction{false, SciMLBase.FullSpecialize, DiffEqFlux.var"#dudt#17"{StatefulLuxLayer{Static.True, Chain{@NamedTuple{layer_1::WrappedFunction{var"#1#2"}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Nothing, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}}, LinearAlgebra.UniformScaling{Bool}, Nothing, typeof(DiffEqFlux.basic_tgrad), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing, Nothing, Nothing}, @kwargs{}, SciMLBase.StandardODEProblem}, ::Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False})
@ Zygote C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0
[63] _apply(::Function, ::Vararg{Any})
@ Core .\boot.jl:946
[64] adjoint
@ C:\Users\maw48.julia\packages\Zygote\TWpme\src\lib\lib.jl:202 [inlined]
[65] _pullback
@ C:\Users\maw48.julia\packages\ZygoteRules\M4xmc\src\adjoint.jl:67 [inlined]
[66] NeuralODE
@ C:\Users\maw48.julia\packages\DiffEqFlux\lXF4l\src\neural_de.jl:54 [inlined]
[67] _pullback(::Zygote.Context{false}, ::NeuralODE{Chain{@NamedTuple{layer_1::WrappedFunction{var"#1#2"}, layer_2::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Tuple{Float32, Float32}, Tuple{Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}}, @kwargs{saveat::StepRangeLen{Float32, Float64, Float64, Int64}}}, ::CuArray{Float32, 1, CUDA.DeviceMemory}, ::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}})
@ Zygote C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0
[68] predict_neuralode
@ .\In[3]:31 [inlined]
[69] _pullback(ctx::Zygote.Context{false}, f::typeof(predict_neuralode), args::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}})
@ Zygote C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0
[70] loss_neuralode
@ .\In[3]:33 [inlined]
[71] _pullback(ctx::Zygote.Context{false}, f::typeof(loss_neuralode), args::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}})
@ Zygote C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0
[72] #6
@ .\In[3]:58 [inlined]
[73] _pullback(::Zygote.Context{false}, ::var"#6#7", ::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ::SciMLBase.NullParameters)
@ Zygote C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0
[74] pullback(::Function, ::Zygote.Context{false}, ::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ::Vararg{Any})
@ Zygote C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface.jl:90
[75] pullback(::Function, ::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ::SciMLBase.NullParameters)
@ Zygote C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface.jl:88
[76] withgradient(::Function, ::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, ::Vararg{Any})
@ Zygote C:\Users\maw48.julia\packages\Zygote\TWpme\src\compiler\interface.jl:205
[77] value_and_gradient
@ C:\Users\maw48.julia\packages\DifferentiationInterface\6QHLL\ext\DifferentiationInterfaceZygoteExt\DifferentiationInterfaceZygoteExt.jl:97 [inlined]
[78] value_and_gradient!(f::Function, grad::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, prep::DifferentiationInterface.NoGradientPrep, backend::AutoZygote, x::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, contexts::DifferentiationInterface.Constant{SciMLBase.NullParameters})
@ DifferentiationInterfaceZygoteExt C:\Users\maw48.julia\packages\DifferentiationInterface\6QHLL\ext\DifferentiationInterfaceZygoteExt\DifferentiationInterfaceZygoteExt.jl:119
[79] (::OptimizationZygoteExt.var"#fg!#16"{SciMLBase.NullParameters, OptimizationFunction{true, AutoZygote, var"#6#7", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoZygote})(res::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, θ::ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}})
@ OptimizationZygoteExt C:\Users\maw48.julia\packages\OptimizationBase\gvXsf\ext\OptimizationZygoteExt.jl:53
[80] macro expansion
@ C:\Users\maw48.julia\packages\OptimizationOptimisers\i6VZS\src\OptimizationOptimisers.jl:101 [inlined]
[81] macro expansion
@ C:\Users\maw48.julia\packages\Optimization\cfp9i\src\utils.jl:32 [inlined]
[82] __solve(cache::OptimizationCache{OptimizationFunction{true, AutoZygote, var"#6#7", OptimizationZygoteExt.var"#grad#14"{SciMLBase.NullParameters, OptimizationFunction{true, AutoZygote, var"#6#7", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoZygote}, OptimizationZygoteExt.var"#fg!#16"{SciMLBase.NullParameters, OptimizationFunction{true, AutoZygote, var"#6#7", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoZygote}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, OptimizationBase.ReInitCache{ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, SciMLBase.NullParameters}, Nothing, Nothing, Nothing, Nothing, Nothing, Adam, Bool, var"#3#5", Nothing})
@ OptimizationOptimisers C:\Users\maw48.julia\packages\OptimizationOptimisers\i6VZS\src\OptimizationOptimisers.jl:83
[83] solve!(cache::OptimizationCache{OptimizationFunction{true, AutoZygote, var"#6#7", OptimizationZygoteExt.var"#grad#14"{SciMLBase.NullParameters, OptimizationFunction{true, AutoZygote, var"#6#7", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoZygote}, OptimizationZygoteExt.var"#fg!#16"{SciMLBase.NullParameters, OptimizationFunction{true, AutoZygote, var"#6#7", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, AutoZygote}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, OptimizationBase.ReInitCache{ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, SciMLBase.NullParameters}, Nothing, Nothing, Nothing, Nothing, Nothing, Adam, Bool, var"#3#5", Nothing})
@ SciMLBase C:\Users\maw48.julia\packages\SciMLBase\XzPx0\src\solve.jl:186
[84] solve(::OptimizationProblem{true, OptimizationFunction{true, AutoZygote, var"#6#7", Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED_NO_TIME), Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}, ComponentVector{Float32, CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:150, Axis(weight = ViewAxis(1:100, ShapedAxis((50, 2))), bias = 101:150)), layer_3 = ViewAxis(151:252, Axis(weight = ViewAxis(1:100, ShapedAxis((2, 50))), bias = 101:102)))}}}, SciMLBase.NullParameters, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, @kwargs{}}, ::Adam; kwargs::@kwargs{callback::var"#3#5", maxiters::Int64})
@ SciMLBase C:\Users\maw48.julia\packages\SciMLBase\XzPx0\src\solve.jl:94
Environment (please complete the following information):
using Pkg; Pkg.status()
using Pkg; Pkg.status(; mode = PKGMODE_MANIFEST)
versioninfo()
Additional context
The error is due to this code: p |> ComponentArray |> gdev
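For what it's worth, the trace bottoms out in a GPU mapreduce over DiffEqBase.sse, invoked by the ODE_DEFAULT_NORM method that the DiffEqBaseCUDAExt extension defines for CuArray. If that is the true failure point, the following much smaller sketch (an untested assumption, not a confirmed reduction) might hit the same compile path without the full Neural ODE:

```julia
using CUDA, DiffEqBase

u = CUDA.rand(Float32, 252)            # same length as the flattened parameter vector above
DiffEqBase.ODE_DEFAULT_NORM(u, 0.0f0)  # per the trace, this reduces with sum(DiffEqBase.sse, u)
```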