Scheduler Activations
i. Not – too expensive for a thread
i. Lag in TOCS publishing – takes 2 years
i. Allow work on multiple processors independently
ii. Coordinate for synchronization
i. user threads: purely user-level code (see the sketch after this list) for:
1. context switching
2. synchronization
3. scheduling
ii. kernel threads: same features implemented in the kernel (e.g. Mach threads, Windows NT threads)
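For concreteness, a minimal sketch (not from the paper) of what "purely user-level" switching looks like, using POSIX ucontext: two threads are set up once, then switch between each other entirely in user space, with no kernel call per switch.

    /* Minimal sketch of user-level context switching, the core of a
     * user-level thread package. Once set up, switching between the two
     * "threads" never enters the kernel. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, t1_ctx, t2_ctx;

    static void thread1(void) {
        for (int i = 0; i < 3; i++) {
            printf("thread1: step %d\n", i);
            swapcontext(&t1_ctx, &t2_ctx);   /* user-level "context switch" */
        }
        swapcontext(&t1_ctx, &main_ctx);     /* terminate: hand back to main */
    }

    static void thread2(void) {
        for (int i = 0; i < 3; i++) {
            printf("thread2: step %d\n", i);
            swapcontext(&t2_ctx, &t1_ctx);
        }
    }

    int main(void) {
        static char stack1[64 * 1024], stack2[64 * 1024];

        /* "Spawn" thread1: give it a stack and an entry point. */
        getcontext(&t1_ctx);
        t1_ctx.uc_stack.ss_sp = stack1;
        t1_ctx.uc_stack.ss_size = sizeof stack1;
        t1_ctx.uc_link = &main_ctx;          /* where to go if it returns */
        makecontext(&t1_ctx, thread1, 0);

        /* "Spawn" thread2 the same way. */
        getcontext(&t2_ctx);
        t2_ctx.uc_stack.ss_sp = stack2;
        t2_ctx.uc_stack.ss_size = sizeof stack2;
        t2_ctx.uc_link = &main_ctx;
        makecontext(&t2_ctx, thread2, 0);

        swapcontext(&main_ctx, &t1_ctx);     /* schedule thread1 first */
        puts("back in main");
        return 0;
    }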
i. Kernel must completely control scheduling of processors
1. Can't let user code have control
2. Can let user code give advice
3. Can't rely on advice for correctness; otherwise user code could commandeer a processor
ii. Kernel multiplexes processors between processes; this can't be done within a process
i. spawn a thread
ii. synchronize (e.g. locking)
iii. terminate a thread
iv. schedule a new thread
i. User-level threads are fast; no need to enter the kernel
1. Expensive to enter kernel
2. Kernel approach must be general & work for all applications
ii. But: Kernel events pre-empt user threads
1. High-priority user threads may be blocked on I/O, or preempted for time slicing (see the sketch after this list)
2. Kernel schedules its kernel threads with a separate mechanism; may run the wrong thread or too many threads
a. e.g. wake up low-priority thread instead of high-priority
3. Correctness: kernel may schedule the wrong threads, ones that are blocked waiting on a user thread that is itself blocked in the kernel; there may not be enough kernel threads left to run a new user thread and make progress.
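A small runnable sketch of the blocking problem, assuming two user-level threads multiplexed on a single kernel thread (again built with POSIX ucontext; sleep() stands in for a blocking system call such as read()): while A is blocked in the kernel, B is runnable but gets no processor.

    /* Sketch: why pure user-level threads lose to blocking system calls.
     * Two user-level "threads" share one kernel thread; when A makes a
     * blocking call, the kernel sees only one blocked thread and B cannot
     * run even though it is runnable. */
    #include <stdio.h>
    #include <unistd.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, a_ctx, b_ctx;

    static void thread_a(void) {
        printf("A: about to block in the kernel\n");
        sleep(2);                          /* the whole kernel thread blocks here */
        printf("A: back after 2s\n");
        swapcontext(&a_ctx, &b_ctx);       /* only now does B get to run */
    }

    static void thread_b(void) {
        printf("B: was runnable the whole time, but only runs now\n");
    }

    int main(void) {
        static char sa[64 * 1024], sb[64 * 1024];

        getcontext(&a_ctx);
        a_ctx.uc_stack.ss_sp = sa; a_ctx.uc_stack.ss_size = sizeof sa;
        a_ctx.uc_link = &main_ctx;
        makecontext(&a_ctx, thread_a, 0);

        getcontext(&b_ctx);
        b_ctx.uc_stack.ss_sp = sb; b_ctx.uc_stack.ss_size = sizeof sb;
        b_ctx.uc_link = &main_ctx;
        makecontext(&b_ctx, thread_b, 0);

        swapcontext(&main_ctx, &a_ctx);    /* run A; B waits 2s it did not need to */
        return 0;
    }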
i. When threads runnable
ii. When processors not needed
i. When thread pre-empted
ii. When thread resumed
iii. When processor removed
iv. When processor returned
i. Expose scheduling information across the user/kernel boundary
i. Kernel controls which kernel threads run, how many run
ii. User controls which user threads run
iii. Kernel notifies user of any scheduling events (e.g. taking away a kernel thread, adding a kernel thread, blocking & unblocking)
iv. User notifies kernel when it needs fewer or more processors (interface sketched below)
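A hypothetical sketch of the user-to-kernel half of this interface. The downcall names sa_request_processors and sa_relinquish_processor are invented for illustration and stubbed out with printf so the sketch runs standalone; in a real system they would be kernel calls.

    /* Hypothetical sketch: the user-level runtime tells the kernel when it
     * could use more processors or no longer needs the ones it holds. */
    #include <stdio.h>

    static int runnable_threads = 5;   /* length of the user-level ready list */
    static int processors_held  = 2;   /* processors the kernel has granted us */

    static void sa_request_processors(int n)  { printf("downcall: want %d more processors\n", n); }
    static void sa_relinquish_processor(void) { printf("downcall: giving one processor back\n"); }

    /* Called by the user-level scheduler whenever the ready list changes size. */
    static void rebalance(void) {
        if (runnable_threads > processors_held)
            sa_request_processors(runnable_threads - processors_held);
        else if (runnable_threads < processors_held)
            sa_relinquish_processor();
        /* Nothing is said about which user thread runs where; the kernel
         * never needs to hear about user-level context switches. */
    }

    int main(void) {
        rebalance();                   /* 5 runnable, 2 processors: ask for 3 more */
        runnable_threads = 1;
        rebalance();                   /* idle capacity: give one back */
        return 0;
    }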
i. Scheduler activations run to completion; they are never re-scheduled
ii. System upcalls into user-level scheduler on all interesting events
1. Thread blocked
2. Thread wakes up
3. Thread pre-empted
4. Processor available
5. Processor removed
iii. The upcall may preempt an existing user thread in order to run; the notification then reports that two user threads are runnable (the one affected by the event plus the one just preempted)
iv. KEY POINT:
1. No need to get permission to pre-empt; just notify on another thread afterward
2. Detail: on a page fault, the notification may be delayed, since the upcall may fault again at the same point
3. Detail: if no kernel thread for the process is currently running, the notification may wait until the next time one is available
v. Application notifies kernel of available parallelism
1. Fewer contexts needed
2. More contexts could be used
3. NOTE: no need to notify of other events, such as which thread is running now or context switches
vi. SHOW EXAMPLE (upcall trace sketched below)
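One possible worked example, written as a self-contained simulation rather than the paper's real interface: the upcall event names, the tiny ready list, and the main() that plays the kernel by delivering a short event sequence are all invented for illustration.

    /* Hypothetical sketch of the upcall flow: each scheduler activation
     * arrives with an event; the user-level scheduler updates its ready list
     * and picks the next user thread to run on that activation. */
    #include <stdio.h>

    enum upcall_event {
        SA_THREAD_BLOCKED,     /* a user thread blocked in the kernel (e.g. I/O) */
        SA_THREAD_UNBLOCKED,   /* that thread's I/O completed                    */
        SA_THREAD_PREEMPTED,   /* a processor was taken away from this process   */
        SA_PROCESSOR_ADDED     /* the kernel granted this process a processor    */
    };

    static int ready[8], nready;                /* user-level ready list of thread ids */
    static void make_ready(int tid) { ready[nready++] = tid; }

    static void run_next(void) {
        if (nready > 0)
            printf("user scheduler: running thread %d on this activation\n", ready[--nready]);
        else
            printf("user scheduler: nothing runnable, returning processor\n");
    }

    /* The upcall: runs to completion on the new activation, never rescheduled. */
    static void upcall(enum upcall_event ev, int tid) {
        switch (ev) {
        case SA_THREAD_BLOCKED:   printf("upcall: thread %d blocked\n", tid);   break;
        case SA_THREAD_UNBLOCKED: printf("upcall: thread %d unblocked\n", tid); make_ready(tid); break;
        case SA_THREAD_PREEMPTED: printf("upcall: thread %d preempted\n", tid); make_ready(tid); break;
        case SA_PROCESSOR_ADDED:  printf("upcall: processor added\n");          break;
        }
        run_next();                             /* pick the next user thread */
    }

    int main(void) {                            /* simulated kernel event sequence */
        make_ready(2);                          /* thread 2 is waiting to run */
        upcall(SA_THREAD_BLOCKED, 1);           /* thread 1 blocks: run thread 2 */
        upcall(SA_THREAD_UNBLOCKED, 1);         /* I/O done: thread 1 runnable again */
        upcall(SA_PROCESSOR_ADDED, -1);         /* a new processor arrives */
        return 0;
    }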
vii. Critical sections
1. Problem: may preempt a thread holding the ready-list lock. When the kernel sends up an activation, the scheduler can't get the lock to schedule anything: DEADLOCK
2. Preemption control: kernel lets the user decide which threads should not be preempted (sketch after this sub-list)
a. May require pinning memory to avoid page faults
b. Yields control to user level from kernel
c. To avoid deadlock, must be a guarantee – not just a wish
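A minimal sketch of the preemption-control idea, with the flags and the "kernel" simulated in user space and all names invented: the don't-preempt flag is honored by deferring the preemption rather than dropping it, and the deferred preemption is taken as soon as the lock is released.

    /* Hypothetical sketch of preemption control around the ready-list lock. */
    #include <stdio.h>
    #include <stdbool.h>

    static volatile bool dont_preempt;     /* set while holding the ready-list lock */
    static volatile bool preempt_pending;  /* kernel wanted the processor back */

    static void kernel_wants_processor(void) {   /* stand-in for a real preemption */
        if (dont_preempt) {
            preempt_pending = true;              /* defer: a guarantee, not a wish */
            printf("kernel: deferring preemption, lock holder is running\n");
        } else {
            printf("kernel: preempting now\n");
        }
    }

    static void lock_ready_list(void)   { dont_preempt = true; }
    static void unlock_ready_list(void) {
        dont_preempt = false;
        if (preempt_pending) {
            preempt_pending = false;
            printf("user: lock released, yielding processor to kernel\n");
        }
    }

    int main(void) {
        lock_ready_list();
        kernel_wants_processor();   /* arrives while the lock is held: deferred */
        unlock_ready_list();        /* preemption happens here instead */
        return 0;
    }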
3. Recovery:
a. When a thread holding a lock is preempted, the upcall can continue it instead of running the scheduler; on release of the lock, control goes back to the upcall and into the scheduler (see the sketch below)
b. Mechanism:
i. Goal: zero overhead for common case
ii. Solution: mark critical sections in assembly
iii. Copy code to new place
iv. New code returns control to scheduler on releasing a lock instead of continuing
v. Problem: locks acquired in one indirect function call, released in another
vi. Key idea: use knowledge of source code for fast common case behavior – zero overhead.
vii. Used in Linux for trap handling: certain places are marked as "safe" for traps, with a fixup routine stored to recover, e.g. copy_from_user
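A rough sketch of the recovery alternative, simplified into ordinary C functions with invented names. The real mechanism copies the assembly of the critical section and continues the preempted thread at its saved PC; this sketch collapses that into two function copies, where the copy used when continuing a preempted lock holder hands control to the scheduler at the release point, and the normal copy adds zero overhead.

    /* Hypothetical user-space simulation of the recovery-copy idea. */
    #include <stdio.h>
    #include <stdbool.h>

    static void back_to_scheduler(void) {
        printf("lock released: control returns to the user-level scheduler\n");
    }

    static void critical_section_normal(void) {   /* common case: zero overhead */
        printf("acquire ready-list lock; update ready list; release lock\n");
    }

    static void critical_section_recovery(void) { /* copy used only after preemption */
        printf("acquire ready-list lock; update ready list; release lock\n");
        back_to_scheduler();                       /* extra step at the release point */
    }

    /* Upcall handler: if the preempted thread held the lock, continue it
     * through the recovery copy so the scheduler never spins on a held lock. */
    static void handle_preemption(bool held_ready_list_lock) {
        if (held_ready_list_lock)
            critical_section_recovery();
        else
            back_to_scheduler();
    }

    int main(void) {
        critical_section_normal();    /* ordinary execution, nothing extra */
        handle_preemption(true);      /* thread was preempted while holding the lock */
        return 0;
    }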
viii. Performance:
1. QUESTION: What do you want to show?
a. A: no cost for CPU-bound operations
b. A: Better than both for blocking operations
2. As fast as FastThreads (user-level threads) when CPU-bound
3. A fixed amount better than kernel threads when I/O-bound; no unnecessary blocking
ix. QUESTION: where did we see this before?
1. SPIN, Exokernel: allow the application to choose what thread to run next when a CPU is given to the process
i. Took reservation idea for critical sections, nothing else