
Commit

Merge pull request #463 from JuliaParallel/jps/stream2
Add streaming API
jpsamaroo authored Dec 9, 2024
2 parents 4c51d84 + 3656030 commit 62f8307
Showing 30 changed files with 1,838 additions and 102 deletions.
2 changes: 2 additions & 0 deletions .buildkite/pipeline.yml
@@ -6,6 +6,7 @@
    os: linux
    arch: x86_64
  command: "julia --project -e 'using Pkg; Pkg.develop(;path=\"lib/TimespanLogging\")'"

.bench: &bench
  if: build.message =~ /\[run benchmarks\]/
  agents:
@@ -14,6 +15,7 @@
    os: linux
    arch: x86_64
    num_cpus: 16

steps:
  - label: Julia 1.9
    timeout_in_minutes: 90
1 change: 1 addition & 0 deletions docs/make.jl
@@ -22,6 +22,7 @@ makedocs(;
"Task Spawning" => "task-spawning.md",
"Data Management" => "data-management.md",
"Distributed Arrays" => "darray.md",
"Streaming Tasks" => "streaming.md",
"Scopes" => "scopes.md",
"Processors" => "processors.md",
"Task Queues" => "task-queues.md",
35 changes: 35 additions & 0 deletions docs/src/index.md
@@ -394,3 +394,38 @@ Dagger.@spawn copyto!(C, X)

In contrast to the previous example, the tasks here are executed without argument annotations. As a result, the `copyto!` task may execute before the `sort!` task, leading to unexpected results in the output array `C`.

## Quickstart: Streaming

Dagger.jl provides a streaming API that allows you to process data in a streaming fashion, where data is processed as it becomes available, rather than waiting for the entire dataset to be loaded into memory.

For more details, see [Streaming](@ref).

### Syntax

The `Dagger.spawn_streaming()` function creates a streaming region, in which
tasks run continuously and process data as it becomes available:

```julia
# Open a file to write to on this worker
f = Dagger.@mutable open("output.txt", "w")
t = Dagger.spawn_streaming() do
    # Generate random numbers continuously
    val = Dagger.@spawn rand()
    # Write each random number to a file
    Dagger.@spawn (f, val) -> begin
        if val < 0.01
            # Finish streaming when the random number is less than 0.01
            Dagger.finish_stream()
        end
        println(f, val)
    end
end
# Wait for all values to be generated and written
wait(t)
```

The above example demonstrates a streaming region that generates random numbers
continuously and writes each random number to a file. The streaming region is
terminated when a random number less than 0.01 is generated, which is done by
calling `Dagger.finish_stream()` (this terminates the current task, and will
also terminate all streaming tasks launched by `spawn_streaming`).
105 changes: 105 additions & 0 deletions docs/src/streaming.md
@@ -0,0 +1,105 @@
# Streaming

Dagger tasks have a limited lifetime - they are created, execute, finish, and
are eventually destroyed when they're no longer needed. Thus, if one wants
to run the same kind of computations over and over, one might re-create a
similar set of tasks for each unit of data that needs processing.

This might be fine for computations which take a long time to run (thus
dwarfing the cost of task creation, which is quite small), or when working with
a limited set of data, but this approach is not great for doing lots of small
computations on a large (or endless) amount of data. For example, processing
image frames from a webcam, reacting to messages from a message bus, reading
samples from a software radio, etc. All of these tasks are better suited to a
"streaming" model of data processing, where data is simply piped into a
continuously-running task (or DAG of tasks) forever, or until the data runs
out.

Thankfully, if you have a problem which is best modeled as a streaming system
of tasks, Dagger has you covered! Building on its support for
[Task Queues](@ref), Dagger provides a means to convert an entire DAG of
tasks into a streaming DAG, where data flows into and out of each task
asynchronously, using the `spawn_streaming` function:

```julia
Dagger.spawn_streaming() do # enters a streaming region
    vals = Dagger.@spawn rand()
    print_vals = Dagger.@spawn println(vals)
end # exits the streaming region, and starts the DAG running
```

In the above example, `vals` is a Dagger task which has been transformed to run
in a streaming manner - instead of just calling `rand()` once and returning its
result, it will re-run `rand()` endlessly, continuously producing new random
values. In typical Dagger style, `print_vals` is a Dagger task which depends on
`vals`, but in streaming form - it will continuously `println` the random
values produced from `vals`. Both tasks will run forever, and will run
efficiently, only doing the work necessary to generate, transfer, and consume
values.

As the comments point out, `spawn_streaming` creates a streaming region, during
which `vals` and `print_vals` are created and configured. Both tasks are halted
until `spawn_streaming` returns, allowing large DAGs to be built all at once,
without any task losing a single value. If desired, streaming regions can be
connected, although some values might be lost while tasks are being connected:

```julia
vals = Dagger.spawn_streaming() do
    Dagger.@spawn rand()
end

# Some values might be generated by `vals` but thrown away
# before `print_vals` is fully set up and connected to it

print_vals = Dagger.spawn_streaming() do
    Dagger.@spawn println(vals)
end
```
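
Longer pipelines are built the same way. Here is a purely illustrative sketch (it assumes the same API as above; the intermediate arithmetic stage is our own addition, not from the original example):

```julia
Dagger.spawn_streaming() do
    # Stage 1: continuously generate random values
    xs = Dagger.@spawn rand()
    # Stage 2: transform each value as it arrives
    ys = Dagger.@spawn xs + 1
    # Stage 3: print each transformed value
    Dagger.@spawn println(ys)
end
```

Each stage consumes values from the previous stage as they arrive, so values flow through the whole pipeline asynchronously.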

More complicated streaming DAGs can be constructed just as easily. For
example, we can generate multiple streams of random numbers, write each stream
to its own file, and print the combined results:

```julia
Dagger.spawn_streaming() do
    all_vals = [Dagger.spawn(rand) for i in 1:4]
    all_vals_written = map(1:4) do i
        Dagger.spawn(all_vals[i]) do val
            open("results_$i.txt"; write=true, create=true, append=true) do io
                println(io, repr(val))
            end
            return val
        end
    end
    Dagger.spawn(all_vals_written...) do all_vals_written...
        vals_sum = sum(all_vals_written)
        println(vals_sum)
    end
end
```

If you want to stop the streaming DAG and tear it all down, you can call
`Dagger.cancel!` on any task in the streaming DAG (such as
`Dagger.cancel!(all_vals[1])`) to terminate all streaming tasks.
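
For instance, here is a minimal sketch (assuming the same API as above) that keeps a handle to the streaming DAG so it can be torn down from outside the region:

```julia
# Keep a handle to one task in the streaming DAG
vals = Dagger.spawn_streaming() do
    Dagger.@spawn rand()
end
# ... later, once the stream is no longer needed ...
Dagger.cancel!(vals)  # terminates all tasks in the streaming DAG
```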

Alternatively, tasks can stop themselves from the inside with
`finish_stream`, optionally returning a value that can be `fetch`'d. Let's
do this when our randomly-drawn number falls within some arbitrary range:

```julia
vals = Dagger.spawn_streaming() do
    Dagger.spawn() do
        x = rand()
        if x < 0.001
            # That's good enough, let's be done
            return Dagger.finish_stream("Finished!")
        end
        return x
    end
end
fetch(vals)
```

In this example, the call to `fetch` will hang (while random numbers continue
to be drawn), until a drawn number is less than 0.001; at that point, `fetch`
will return with `"Finished!"`, and the task `vals` will have terminated.
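
Under the same assumptions, the fetched value is simply whatever was passed to `finish_stream`:

```julia
result = fetch(vals)  # blocks until some draw falls below 0.001
@assert result == "Finished!"
```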
24 changes: 22 additions & 2 deletions src/Dagger.jl
@@ -21,6 +21,7 @@ if !isdefined(Base, :ScopedValues)
else
    import Base.ScopedValues: ScopedValue, with
end
import TaskLocalValues: TaskLocalValue

if !isdefined(Base, :get_extension)
    import Requires: @require
@@ -55,16 +56,16 @@ include("processor.jl")
include("threadproc.jl")
include("context.jl")
include("utils/processors.jl")
include("dtask.jl")
include("cancellation.jl")
include("task-tls.jl")
include("scopes.jl")
include("utils/scopes.jl")
include("dtask.jl")
include("queue.jl")
include("thunk.jl")
include("submission.jl")
include("chunks.jl")
include("memory-spaces.jl")
include("cancellation.jl")

# Task scheduling
include("compute.jl")
@@ -76,6 +77,11 @@ include("sch/Sch.jl"); using .Sch
# Data dependency task queue
include("datadeps.jl")

# Streaming
include("stream.jl")
include("stream-buffers.jl")
include("stream-transfer.jl")

# Array computations
include("array/darray.jl")
include("array/alloc.jl")
@@ -169,6 +175,20 @@ function __init__()
            ThreadProc(myid(), tid)
        end
    end

    # Set up @dagdebug categories, if specified
    try
        if haskey(ENV, "JULIA_DAGGER_DEBUG")
            empty!(DAGDEBUG_CATEGORIES)
            for category in split(ENV["JULIA_DAGGER_DEBUG"], ",")
                if category != ""
                    push!(DAGDEBUG_CATEGORIES, Symbol(category))
                end
            end
        end
    catch err
        @warn "Error parsing JULIA_DAGGER_DEBUG" exception=err
    end
end

end # module
2 changes: 0 additions & 2 deletions src/array/indexing.jl
@@ -1,5 +1,3 @@
import TaskLocalValues: TaskLocalValue

### getindex

struct GetIndex{T,N} <: ArrayOp{T,N}
77 changes: 64 additions & 13 deletions src/cancellation.jl
@@ -1,11 +1,61 @@
# DTask-level cancellation

mutable struct CancelToken
    @atomic cancelled::Bool
    @atomic graceful::Bool
    event::Base.Event
end
CancelToken() = CancelToken(false, false, Base.Event())
function cancel!(token::CancelToken; graceful::Bool=true)
    if !graceful
        @atomic token.graceful = false
    end
    @atomic token.cancelled = true
    notify(token.event)
    return
end
function is_cancelled(token::CancelToken; must_force::Bool=false)
    if token.cancelled[]
        if must_force && token.graceful[]
            # If we're only responding to forced cancellation, ignore graceful cancellations
            return false
        end
        return true
    end
    return false
end
Base.wait(token::CancelToken) = wait(token.event)
# TODO: Enable this for safety
#Serialization.serialize(io::AbstractSerializer, ::CancelToken) =
# throw(ConcurrencyViolationError("Cannot serialize a CancelToken"))

const DTASK_CANCEL_TOKEN = TaskLocalValue{Union{CancelToken,Nothing}}(()->nothing)

function clone_cancel_token_remote(orig_token::CancelToken, wid::Integer)
    remote_token = remotecall_fetch(wid) do
        return poolset(CancelToken())
    end
    errormonitor_tracked("remote cancel_token communicator", Threads.@spawn begin
        wait(orig_token)
        @dagdebug nothing :cancel "Cancelling remote token on worker $wid"
        MemPool.access_ref(remote_token) do remote_token
            cancel!(remote_token)
        end
    end)
end

# Global-level cancellation

"""
    cancel!(task::DTask; force::Bool=false, halt_sch::Bool=false)
    cancel!(task::DTask; force::Bool=false, graceful::Bool=true, halt_sch::Bool=false)
Cancels `task` at any point in its lifecycle, causing the scheduler to abandon
it. If `force` is `true`, the task will be interrupted with an
`InterruptException` (not recommended, this is unsafe). If `halt_sch` is
`true`, the scheduler will be halted after the task is cancelled (it will
restart automatically upon the next `@spawn`/`spawn` call).
it.
# Keyword arguments
- `force`: If `true`, the task will be interrupted with an `InterruptException` (not recommended, this is unsafe).
- `graceful`: If `true`, the task will be allowed to finish its current execution before being cancelled; otherwise, it will be cancelled as soon as possible.
- `halt_sch`: If `true`, the scheduler will be halted after the task is cancelled (it will restart automatically upon the next `@spawn`/`spawn` call).
As an example, the following code will cancel task `t` before it finishes
executing:
@@ -21,24 +71,24 @@ tasks which are waiting to run. Using `cancel!` is generally a much safer
alternative to Ctrl+C, as it cooperates with the scheduler and runtime and
avoids unintended side effects.
"""
function cancel!(task::DTask; force::Bool=false, halt_sch::Bool=false)
function cancel!(task::DTask; force::Bool=false, graceful::Bool=true, halt_sch::Bool=false)
    tid = lock(Dagger.Sch.EAGER_ID_MAP) do id_map
        id_map[task.uid]
    end
    cancel!(tid; force, halt_sch)
    cancel!(tid; force, graceful, halt_sch)
end
function cancel!(tid::Union{Int,Nothing}=nothing;
                 force::Bool=false, halt_sch::Bool=false)
                 force::Bool=false, graceful::Bool=true, halt_sch::Bool=false)
    remotecall_fetch(1, tid, force, halt_sch) do tid, force, halt_sch
        state = Sch.EAGER_STATE[]

        # Check that the scheduler isn't stopping or has already stopped
        if !isnothing(state) && !state.halt.set
            @lock state.lock _cancel!(state, tid, force, halt_sch)
            @lock state.lock _cancel!(state, tid, force, graceful, halt_sch)
        end
    end
end
function _cancel!(state, tid, force, halt_sch)
function _cancel!(state, tid, force, graceful, halt_sch)
    @assert islocked(state.lock)

    # Get the scheduler uid
@@ -48,7 +98,7 @@ function _cancel!(state, tid, force, halt_sch)
    for task in state.ready
        tid !== nothing && task.id != tid && continue
        @dagdebug tid :cancel "Cancelling ready task"
        state.cache[task] = InterruptException()
        state.cache[task] = DTaskFailedException(task, task, InterruptException())
        state.errored[task] = true
        Sch.set_failed!(state, task)
    end
@@ -58,7 +108,7 @@ function _cancel!(state, tid, force, halt_sch)
    for task in keys(state.waiting)
        tid !== nothing && task.id != tid && continue
        @dagdebug tid :cancel "Cancelling waiting task"
        state.cache[task] = InterruptException()
        state.cache[task] = DTaskFailedException(task, task, InterruptException())
        state.errored[task] = true
        Sch.set_failed!(state, task)
    end
@@ -80,11 +130,11 @@ function _cancel!(state, tid, force, halt_sch)
            Tf === typeof(Sch.eager_thunk) && continue
            istaskdone(task) && continue
            any_cancelled = true
            @dagdebug tid :cancel "Cancelling running task ($Tf)"
            if force
                @dagdebug tid :cancel "Interrupting running task ($Tf)"
                Threads.@spawn Base.throwto(task, InterruptException())
            else
                @dagdebug tid :cancel "Cancelling running task ($Tf)"
                # Tell the processor to just drop this task
                task_occupancy = task_spec[4]
                time_util = task_spec[2]
@@ -93,6 +143,7 @@ function _cancel!(state, tid, force, halt_sch)
                push!(istate.cancelled, tid)
                to_proc = istate.proc
                put!(istate.return_queue, (myid(), to_proc, tid, (InterruptException(), nothing)))
                cancel!(istate.cancel_tokens[tid]; graceful)
            end
        end
    end
6 changes: 0 additions & 6 deletions src/compute.jl
@@ -36,12 +36,6 @@ end
Base.@deprecate gather(ctx, x) collect(ctx, x)
Base.@deprecate gather(x) collect(x)

cleanup() = cleanup(Context(global_context()))
function cleanup(ctx::Context)
    Sch.cleanup(ctx)
    nothing
end

function get_type(s::String)
    local T
    for t in split(s, ".")
