tokio/runtime/task/mod.rs
//! The task module.
//!
//! The task module contains the code that manages spawned tasks and provides a
//! safe API for the rest of the runtime to use. Each task in a runtime is
//! stored in an `OwnedTasks` or `LocalOwnedTasks` object.
//!
//! # Task reference types
//!
//! A task is usually referenced by multiple handles, and there are several
//! types of handles.
//!
//! * `OwnedTask` - tasks stored in an `OwnedTasks` or `LocalOwnedTasks` are of this
//!   reference type.
//!
//! * `JoinHandle` - each task has a `JoinHandle` that allows access to the output
//!   of the task.
//!
//! * `Waker` - every waker for a task has this reference type. There can be any
//!   number of waker references.
//!
//! * `Notified` - tracks whether the task is notified.
//!
//! * `Unowned` - this task reference type is used for tasks not stored in any
//!   runtime. Mainly used for blocking tasks, but also in tests.
//!
//! The task uses a reference count to keep track of how many active references
//! exist. The `Unowned` reference type takes up two ref-counts. All other
//! reference types take up a single ref-count.
//!
//! Besides the waker type, each task has at most one of each reference type.
//!
//! # State
//!
//! The task stores its state in an atomic `usize` with various bitfields for the
//! necessary information. The state has the following bitfields:
//!
//! * `RUNNING` - Tracks whether the task is currently being polled or cancelled.
//!   This bit functions as a lock around the task.
//!
//! * `COMPLETE` - Is one once the future has fully completed and has been
//!   dropped. Never unset once set. Never set together with `RUNNING`.
//!
//! * `NOTIFIED` - Tracks whether a `Notified` object currently exists.
//!
//! * `CANCELLED` - Is set to one for tasks that should be cancelled as soon as
//!   possible. May take any value for completed tasks.
//!
//! * `JOIN_INTEREST` - Is set to one if there exists a `JoinHandle`.
//!
//! * `JOIN_WAKER` - Acts as an access control bit for the join handle waker. The
//!   protocol for its usage is described below.
//!
//! The rest of the bits are used for the ref-count.
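//!
//! As an illustrative sketch of such a layout (the actual constants live in
//! `state.rs`; the exact bit positions shown here are assumptions for
//! exposition only):
//!
//! ```ignore
//! const RUNNING: usize       = 0b00_0001;
//! const COMPLETE: usize      = 0b00_0010;
//! const NOTIFIED: usize      = 0b00_0100;
//! const CANCELLED: usize     = 0b00_1000;
//! const JOIN_INTEREST: usize = 0b01_0000;
//! const JOIN_WAKER: usize    = 0b10_0000;
//!
//! // Everything above the flag bits stores the ref-count, so a single
//! // atomic operation can update flags and ref-count together.
//! const REF_COUNT_SHIFT: usize = 6;
//! const REF_ONE: usize = 1 << REF_COUNT_SHIFT;
//! ```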
//!
//! # Fields in the task
//!
//! The task has various fields. This section describes how and when it is safe
//! to access a field.
//!
//! * The state field is accessed with atomic instructions.
//!
//! * The `OwnedTask` reference has exclusive access to the `owned` field.
//!
//! * The `Notified` reference has exclusive access to the `queue_next` field.
//!
//! * The `owner_id` field can be set as part of construction of the task, but
//!   is otherwise immutable and anyone can access the field immutably without
//!   synchronization.
//!
//! * If COMPLETE is one, then the `JoinHandle` has exclusive access to the
//!   stage field. If COMPLETE is zero, then the RUNNING bitfield functions as
//!   a lock for the stage field, and it can be accessed only by the thread
//!   that set RUNNING to one.
//!
//! * The waker field may be concurrently accessed by different threads: in one
//!   thread the runtime may complete a task and *read* the waker field to
//!   invoke the waker, and in another thread the task's `JoinHandle` may be
//!   polled, and if the task hasn't yet completed, the `JoinHandle` may *write*
//!   a waker to the waker field. The `JOIN_WAKER` bit ensures safe access by
//!   multiple threads to the waker field using the following rules:
//!
//!   1. `JOIN_WAKER` is initialized to zero.
//!
//!   2. If `JOIN_WAKER` is zero, then the `JoinHandle` has exclusive (mutable)
//!      access to the waker field.
//!
//!   3. If `JOIN_WAKER` is one, then the `JoinHandle` has shared (read-only)
//!      access to the waker field.
//!
//!   4. If `JOIN_WAKER` is one and COMPLETE is one, then the runtime has shared
//!      (read-only) access to the waker field.
//!
//!   5. If the `JoinHandle` needs to write to the waker field, then the
//!      `JoinHandle` needs to (i) successfully set `JOIN_WAKER` to zero if it is
//!      not already zero to gain exclusive access to the waker field per rule
//!      2, (ii) write a waker, and (iii) successfully set `JOIN_WAKER` to one.
//!      If the `JoinHandle` unsets `JOIN_WAKER` in the process of being dropped
//!      to clear the waker field, only steps (i) and (ii) are relevant.
//!
//!   6. The `JoinHandle` can change `JOIN_WAKER` only if COMPLETE is zero (i.e.
//!      the task hasn't yet completed). The runtime can change `JOIN_WAKER` only
//!      if COMPLETE is one.
//!
//!   7. If `JOIN_INTEREST` is zero and COMPLETE is one, then the runtime has
//!      exclusive (mutable) access to the waker field. This might happen if the
//!      `JoinHandle` gets dropped right after the task completes and the runtime
//!      sets the `COMPLETE` bit. In this case the runtime needs the mutable access
//!      to the waker field to drop it.
//!
//! Rule 6 implies that the steps (i) or (iii) of rule 5 may fail due to a
//! race. If step (i) fails, then the attempt to write a waker is aborted. If
//! step (iii) fails because COMPLETE is set to one by another thread after
//! step (i), then the waker field is cleared. Once COMPLETE is one (i.e. the
//! task has completed), the `JoinHandle` will not modify `JOIN_WAKER`. After
//! the runtime sets COMPLETE to one, it invokes the waker if there is one, so
//! when a task completes, the `JOIN_WAKER` bit indicates to the runtime
//! whether it should invoke the waker. After the runtime is done using the
//! waker during task completion, it unsets the `JOIN_WAKER` bit to give the
//! `JoinHandle` exclusive access again so that it is able to drop the waker
//! at a later point.
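//!
//! As a sketch, the rule 5 write path looks roughly like this (the method
//! names below are illustrative assumptions, not the actual API of
//! `state.rs`):
//!
//! ```ignore
//! // (i) Clear JOIN_WAKER to gain exclusive access per rule 2. This
//! // fails if COMPLETE is already one (rule 6), aborting the write.
//! if state.unset_waker().is_ok() {
//!     // (ii) Exclusive access held; store the new waker.
//!     unsafe { header.set_join_waker(waker.clone()) };
//!     // (iii) Publish it by setting JOIN_WAKER back to one. This fails
//!     // if COMPLETE became one in the meantime, in which case the
//!     // waker field must be cleared again before giving up.
//!     if state.set_join_waker().is_err() {
//!         unsafe { header.clear_join_waker() };
//!     }
//! }
//! ```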
//!
//! All other fields are immutable and can be accessed immutably without
//! synchronization by anyone.
//!
//! # Safety
//!
//! This section goes through various situations and explains why the API is
//! safe in that situation.
//!
//! ## Polling or dropping the future
//!
//! Any mutable access to the future happens after obtaining a lock by modifying
//! the RUNNING field, so exclusive access is ensured.
//!
//! When the task completes, exclusive access to the output is transferred to
//! the `JoinHandle`. If the `JoinHandle` is already dropped when the transition to
//! complete happens, the thread performing that transition retains exclusive
//! access to the output and should immediately drop it.
//!
//! ## Non-Send futures
//!
//! If a future is not Send, then it is bound to a `LocalOwnedTasks`. The future
//! will only ever be polled or dropped given a `LocalNotified` or inside a call
//! to `LocalOwnedTasks::shutdown_all`. In either case, it is guaranteed that the
//! future is on the right thread.
//!
//! If the task is never removed from the `LocalOwnedTasks`, then it is leaked, so
//! there is no risk that the task is dropped on some other thread when the last
//! ref-count drops.
//!
//! ## Non-Send output
//!
//! When a task completes, the output is placed in the stage of the task. Then,
//! a transition that sets COMPLETE to true is performed, and the value of
//! `JOIN_INTEREST` when this transition happens is read.
//!
//! If `JOIN_INTEREST` is zero when the transition to COMPLETE happens, then the
//! output is immediately dropped.
//!
//! If `JOIN_INTEREST` is one when the transition to COMPLETE happens, then the
//! `JoinHandle` is responsible for cleaning up the output. If the output is not
//! Send, then this happens:
//!
//! 1. The output is created on the thread that the future was polled on. Since
//!    only non-Send futures can have non-Send output, the future was polled on
//!    the thread that the future was spawned from.
//! 2. Since `JoinHandle<Output>` is not Send if Output is not Send, the
//!    `JoinHandle` is also on the thread that the future was spawned from.
//! 3. Thus, the `JoinHandle` will not move the output across threads when it
//!    takes or drops the output.
//!
//! ## Recursive poll/shutdown
//!
//! Calling poll from inside a shutdown call or vice-versa is not prevented by
//! the API exposed by the task module, so this has to be safe. In either case,
//! the lock in the RUNNING bitfield makes the inner call return immediately. If
//! the inner call is a `shutdown` call, then the CANCELLED bit is set, and the
//! poll call will notice it when the poll finishes, and the task is cancelled
//! at that point.

mod core;
use self::core::Cell;
use self::core::Header;

mod error;
pub use self::error::JoinError;

mod harness;
use self::harness::Harness;

mod id;
pub use id::{id, try_id, Id};

#[cfg(feature = "rt")]
mod abort;
mod join;

#[cfg(feature = "rt")]
pub use self::abort::AbortHandle;

pub use self::join::JoinHandle;

mod list;
pub(crate) use self::list::{LocalOwnedTasks, OwnedTasks};

mod raw;
pub(crate) use self::raw::RawTask;

mod state;
use self::state::State;

mod waker;

pub(crate) use self::spawn_location::SpawnLocation;

cfg_taskdump! {
    pub(crate) mod trace;
}

use crate::future::Future;
use crate::util::linked_list;
use crate::util::sharded_list;

use crate::runtime::TaskCallback;
use std::marker::PhantomData;
use std::panic::Location;
use std::ptr::NonNull;
use std::{fmt, mem};

/// An owned handle to the task, tracked by ref count.
#[repr(transparent)]
pub(crate) struct Task<S: 'static> {
    raw: RawTask,
    _p: PhantomData<S>,
}

unsafe impl<S> Send for Task<S> {}
unsafe impl<S> Sync for Task<S> {}

/// A task was notified.
#[repr(transparent)]
pub(crate) struct Notified<S: 'static>(Task<S>);

impl<S> Notified<S> {
    #[cfg(all(tokio_unstable, feature = "rt-multi-thread"))]
    #[inline]
    pub(crate) fn task_meta<'meta>(&self) -> crate::runtime::TaskMeta<'meta> {
        self.0.task_meta()
    }
}

// safety: This type cannot be used to touch the task without first verifying
// that the value is on a thread where it is safe to poll the task.
unsafe impl<S: Schedule> Send for Notified<S> {}
unsafe impl<S: Schedule> Sync for Notified<S> {}

/// A non-Send variant of Notified with the invariant that it is on a thread
/// where it is safe to poll it.
#[repr(transparent)]
pub(crate) struct LocalNotified<S: 'static> {
    task: Task<S>,
    _not_send: PhantomData<*const ()>,
}

impl<S> LocalNotified<S> {
    #[cfg(tokio_unstable)]
    #[inline]
    pub(crate) fn task_meta<'meta>(&self) -> crate::runtime::TaskMeta<'meta> {
        self.task.task_meta()
    }
}

/// A task that is not owned by any `OwnedTasks`. Used for blocking tasks.
/// This type holds two ref-counts.
pub(crate) struct UnownedTask<S: 'static> {
    raw: RawTask,
    _p: PhantomData<S>,
}

// safety: This type can only be created given a Send task.
unsafe impl<S> Send for UnownedTask<S> {}
unsafe impl<S> Sync for UnownedTask<S> {}

/// Task result sent back.
pub(crate) type Result<T> = std::result::Result<T, JoinError>;

/// Hooks for scheduling tasks which are needed in the task harness.
#[derive(Clone)]
pub(crate) struct TaskHarnessScheduleHooks {
    pub(crate) task_terminate_callback: Option<TaskCallback>,
}

pub(crate) trait Schedule: Sync + Sized + 'static {
    /// The task has completed work and is ready to be released. The scheduler
    /// should release it immediately and return it. The task module will batch
    /// the ref-dec with setting other options.
    ///
    /// If the scheduler has already released the task, then None is returned.
    fn release(&self, task: &Task<Self>) -> Option<Task<Self>>;

    /// Schedule the task
    fn schedule(&self, task: Notified<Self>);

    fn hooks(&self) -> TaskHarnessScheduleHooks;

    /// Schedule the task to run in the near future, yielding the thread to
    /// other tasks.
    fn yield_now(&self, task: Notified<Self>) {
        self.schedule(task);
    }

    /// Polling the task resulted in a panic. Should the runtime shutdown?
    fn unhandled_panic(&self) {
        // By default, do nothing. This maintains the 1.0 behavior.
    }
}
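
// As an illustrative sketch, a scheduler might implement this trait roughly
// as follows. `MyScheduler` and its fields are hypothetical; the real
// implementations live in the runtime's scheduler modules.
//
// ```ignore
// impl Schedule for Arc<MyScheduler> {
//     fn release(&self, task: &Task<Self>) -> Option<Task<Self>> {
//         // Remove the task from this scheduler's OwnedTasks list.
//         self.owned.remove(task)
//     }
//
//     fn schedule(&self, task: Notified<Self>) {
//         // Push the notified task onto the run queue and wake a worker.
//         self.run_queue.push(task);
//         self.unpark_worker();
//     }
//
//     fn hooks(&self) -> TaskHarnessScheduleHooks {
//         TaskHarnessScheduleHooks {
//             task_terminate_callback: None,
//         }
//     }
// }
// ```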

cfg_rt! {
    /// This is the constructor for a new task. Three references to the task are
    /// created. The first task reference is usually put into an `OwnedTasks`
    /// immediately. The Notified is sent to the scheduler as an ordinary
    /// notification.
    fn new_task<T, S>(
        task: T,
        scheduler: S,
        id: Id,
        spawned_at: SpawnLocation,
    ) -> (Task<S>, Notified<S>, JoinHandle<T::Output>)
    where
        S: Schedule,
        T: Future + 'static,
        T::Output: 'static,
    {
        let raw = RawTask::new::<T, S>(task, scheduler, id, spawned_at);
        let task = Task {
            raw,
            _p: PhantomData,
        };
        let notified = Notified(Task {
            raw,
            _p: PhantomData,
        });
        let join = JoinHandle::new(raw);

        (task, notified, join)
    }

    /// Creates a new task with an associated join handle. This method is used
    /// only when the task is not going to be stored in an `OwnedTasks` list.
    ///
    /// Currently only blocking tasks use this method.
    pub(crate) fn unowned<T, S>(
        task: T,
        scheduler: S,
        id: Id,
        spawned_at: SpawnLocation,
    ) -> (UnownedTask<S>, JoinHandle<T::Output>)
    where
        S: Schedule,
        T: Send + Future + 'static,
        T::Output: Send + 'static,
    {
        let (task, notified, join) = new_task(task, scheduler, id, spawned_at);

        // This transfers the ref-count of task and notified into an UnownedTask.
        // This is valid because an UnownedTask holds two ref-counts.
        let unowned = UnownedTask {
            raw: task.raw,
            _p: PhantomData,
        };
        std::mem::forget(task);
        std::mem::forget(notified);

        (unowned, join)
    }
}

impl<S: 'static> Task<S> {
    unsafe fn new(raw: RawTask) -> Task<S> {
        Task {
            raw,
            _p: PhantomData,
        }
    }

    /// # Safety
    ///
    /// `ptr` must be a valid pointer to a [`Header`].
    unsafe fn from_raw(ptr: NonNull<Header>) -> Task<S> {
        unsafe { Task::new(RawTask::from_raw(ptr)) }
    }

    #[cfg(all(
        tokio_unstable,
        feature = "taskdump",
        feature = "rt",
        target_os = "linux",
        any(target_arch = "aarch64", target_arch = "x86", target_arch = "x86_64")
    ))]
    pub(super) fn as_raw(&self) -> RawTask {
        self.raw
    }

    fn header(&self) -> &Header {
        self.raw.header()
    }

    fn header_ptr(&self) -> NonNull<Header> {
        self.raw.header_ptr()
    }

    /// Returns a [task ID] that uniquely identifies this task relative to other
    /// currently spawned tasks.
    ///
    /// [task ID]: crate::task::Id
    #[cfg(tokio_unstable)]
    pub(crate) fn id(&self) -> crate::task::Id {
        // Safety: The header pointer is valid.
        unsafe { Header::get_id(self.raw.header_ptr()) }
    }

    #[cfg(tokio_unstable)]
    pub(crate) fn spawned_at(&self) -> &'static Location<'static> {
        // Safety: The header pointer is valid.
        unsafe { Header::get_spawn_location(self.raw.header_ptr()) }
    }

    // Explicit `'task` and `'meta` lifetimes are necessary here, as otherwise,
    // the compiler infers the lifetimes to be the same, and considers the task
    // to be borrowed for the lifetime of the returned `TaskMeta`.
    #[cfg(tokio_unstable)]
    pub(crate) fn task_meta<'meta>(&self) -> crate::runtime::TaskMeta<'meta> {
        crate::runtime::TaskMeta {
            id: self.id(),
            spawned_at: self.spawned_at().into(),
            _phantom: PhantomData,
        }
    }

    cfg_taskdump! {
        /// Notify the task for task dumping.
        ///
        /// Returns `None` if the task has already been notified.
        pub(super) fn notify_for_tracing(&self) -> Option<Notified<S>> {
            if self.as_raw().state().transition_to_notified_for_tracing() {
                // SAFETY: `transition_to_notified_for_tracing` increments the
                // refcount.
                Some(unsafe { Notified(Task::new(self.raw)) })
            } else {
                None
            }
        }
    }
}

impl<S: 'static> Notified<S> {
    fn header(&self) -> &Header {
        self.0.header()
    }

    #[cfg(tokio_unstable)]
    #[allow(dead_code)]
    pub(crate) fn task_id(&self) -> crate::task::Id {
        self.0.id()
    }
}

impl<S: 'static> Notified<S> {
    /// # Safety
    ///
    /// [`RawTask::ptr`] must be a valid pointer to a [`Header`].
    pub(crate) unsafe fn from_raw(ptr: RawTask) -> Notified<S> {
        Notified(unsafe { Task::new(ptr) })
    }
}

impl<S: 'static> Notified<S> {
    pub(crate) fn into_raw(self) -> RawTask {
        let raw = self.0.raw;
        mem::forget(self);
        raw
    }
}

impl<S: Schedule> Task<S> {
    /// Preemptively cancels the task as part of the shutdown process.
    pub(crate) fn shutdown(self) {
        let raw = self.raw;
        mem::forget(self);
        raw.shutdown();
    }
}

impl<S: Schedule> LocalNotified<S> {
    /// Runs the task.
    pub(crate) fn run(self) {
        let raw = self.task.raw;
        mem::forget(self);
        raw.poll();
    }
}

impl<S: Schedule> UnownedTask<S> {
    // Used in test of the inject queue.
    #[cfg(test)]
    #[cfg_attr(target_family = "wasm", allow(dead_code))]
    pub(super) fn into_notified(self) -> Notified<S> {
        Notified(self.into_task())
    }

    fn into_task(self) -> Task<S> {
        // Convert into a task.
        let task = Task {
            raw: self.raw,
            _p: PhantomData,
        };
        mem::forget(self);

        // Drop a ref-count since an UnownedTask holds two.
        task.header().state.ref_dec();

        task
    }

    pub(crate) fn run(self) {
        let raw = self.raw;
        mem::forget(self);

        // Transfer one ref-count to a Task object.
        let task = Task::<S> {
            raw,
            _p: PhantomData,
        };

        // Use the other ref-count to poll the task.
        raw.poll();
        // Decrement our extra ref-count.
        drop(task);
    }

    pub(crate) fn shutdown(self) {
        self.into_task().shutdown();
    }
}

impl<S: 'static> Drop for Task<S> {
    fn drop(&mut self) {
        // Decrement the ref count.
        if self.header().state.ref_dec() {
            // Deallocate if this is the final ref count.
            self.raw.dealloc();
        }
    }
}

impl<S: 'static> Drop for UnownedTask<S> {
    fn drop(&mut self) {
        // Decrement the ref count.
        if self.raw.header().state.ref_dec_twice() {
            // Deallocate if this is the final ref count.
            self.raw.dealloc();
        }
    }
}

impl<S> fmt::Debug for Task<S> {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(fmt, "Task({:p})", self.header())
    }
}

impl<S> fmt::Debug for Notified<S> {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(fmt, "task::Notified({:p})", self.0.header())
    }
}

/// # Safety
///
/// Tasks are pinned.
unsafe impl<S> linked_list::Link for Task<S> {
    type Handle = Task<S>;
    type Target = Header;

    fn as_raw(handle: &Task<S>) -> NonNull<Header> {
        handle.raw.header_ptr()
    }

    unsafe fn from_raw(ptr: NonNull<Header>) -> Task<S> {
        unsafe { Task::from_raw(ptr) }
    }

    unsafe fn pointers(target: NonNull<Header>) -> NonNull<linked_list::Pointers<Header>> {
        unsafe { self::core::Trailer::addr_of_owned(Header::get_trailer(target)) }
    }
}

/// # Safety
///
/// The id of a task is never changed after creation of the task, so the return value of
/// `get_shard_id` will not change. (The cast may throw away the upper 32 bits of the task id, but
/// the shard id still won't change from call to call.)
unsafe impl<S> sharded_list::ShardedListItem for Task<S> {
    unsafe fn get_shard_id(target: NonNull<Self::Target>) -> usize {
        // SAFETY: The caller guarantees that `target` points at a valid task.
        let task_id = unsafe { Header::get_id(target) };
        task_id.0.get() as usize
    }
}

/// Wrapper around [`std::panic::Location`] that's conditionally compiled out
/// when `tokio_unstable` is not enabled.
#[cfg(tokio_unstable)]
mod spawn_location {
    use std::panic::Location;

    #[derive(Copy, Clone)]
    pub(crate) struct SpawnLocation(pub &'static Location<'static>);

    impl From<&'static Location<'static>> for SpawnLocation {
        fn from(location: &'static Location<'static>) -> Self {
            Self(location)
        }
    }
}

#[cfg(not(tokio_unstable))]
mod spawn_location {
    use std::panic::Location;

    #[derive(Copy, Clone)]
    pub(crate) struct SpawnLocation();

    impl From<&'static Location<'static>> for SpawnLocation {
        fn from(_: &'static Location<'static>) -> Self {
            Self()
        }
    }

    #[cfg(test)]
    #[test]
    fn spawn_location_is_zero_sized() {
        assert_eq!(std::mem::size_of::<SpawnLocation>(), 0);
    }
}

impl SpawnLocation {
    #[track_caller]
    #[inline]
    pub(crate) fn capture() -> Self {
        Self::from(Location::caller())
    }
}