Table of Contents
Processor Scheduling
Basic Concept
Typical scheduling algorithms
First-come first-served scheduling algorithm (FCFS)
Shortest job (process) first scheduling algorithm (shortest job first)
Priority-scheduling algorithm
Highest response ratio next scheduling algorithm (HRRN)
Time slice round-robin scheduling algorithm (RR)
Multilevel feedback queue scheduling algorithm
Process scheduling algorithms in real-time systems
Earliest Deadline First algorithm (EDF)
Least Laxity First algorithm (LLF)

Detailed introduction to processor scheduling

Jul 18, 2017 am 09:35 AM
operating system

Processor Scheduling

Tags (space separated): Process Scheduling Scheduling Algorithm Operating System


Basic Concept

Definition: The operating system manages the limited resources of the system. When multiple processes (or requests issued by multiple processes) want to use these resources, the system must, because the resources are limited, select processes (requests) according to certain principles and let them occupy the resources. This selection is called scheduling.

The purpose of scheduling is to control the number of resource users and to decide which users are permitted to occupy resources. There are three levels of processor scheduling:

  • Advanced scheduling: job scheduling, the scheduling object is the job.

  • Intermediate scheduling: memory scheduling (essentially the memory swapping function)

  • Low-level scheduling: process scheduling, the scheduling object is a process or kernel-level thread

Turnaround time \(T_i\): the time interval from when a job or process is submitted to the system until it is completed.
It usually includes:

  • The time the job waits for job scheduling in the backup queue on external storage.
  • The time the process waits for process scheduling in the ready queue.
  • The time the process executes on the CPU.
  • The time the process waits for I/O operations to complete.

Average turnaround time:

\[T = \frac{\sum_{i=1}^n T_i}{n}\]

Weighted turnaround time: the ratio of a job's turnaround time \(T_i\) to the time \(T_s\) the system spends serving it:

\[W_i = \frac{T_i}{T_s}\]

Average weighted turnaround time:

\[W = \frac{\sum_{i=1}^n \frac{T_i}{T_s}}{n}\]
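As a minimal sketch of these formulas (the arrival, service, and completion times below are hypothetical, chosen only for the example), the turnaround and weighted turnaround times can be computed directly:

```python
# Turnaround time Ti = completion - arrival; weighted turnaround Wi = Ti / Ts.
# Each job is (arrival time, service time Ts, completion time).
jobs = [(0, 3, 3), (1, 5, 8), (2, 2, 10)]

T = [done - arrive for arrive, _, done in jobs]          # turnaround times
W = [t / ts for t, (_, ts, _) in zip(T, jobs)]           # weighted turnaround
avg_T = sum(T) / len(T)   # average turnaround time
avg_W = sum(W) / len(W)   # average weighted turnaround time
```

Here T is [3, 7, 8], so the average turnaround time is 6, and the weighted values are 1, 1.4, and 4.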

Scheduling timing, switching, and procedure

The process scheduling and switching program is an operating system kernel program. When an event that requests scheduling occurs, the process scheduler may be run, and when a new ready process is scheduled, a switch between processes is performed. In theory these three steps should execute in sequence, but in practice, if a factor that triggers process scheduling arises while an operating system kernel program is running, scheduling and switching may not be possible immediately.
In modern operating systems, process scheduling and switching cannot be performed in the following situations:

During interrupt handling: the interrupt handling procedure is complicated, and switching processes in the middle of it is difficult to implement. Moreover, interrupt handling is part of the system's own work; logically it does not belong to any particular process and should not be deprived of the processor.

When the process is in a critical section of the operating system kernel: after entering the critical section, the process needs exclusive access to shared data, so in principle it must lock the data to keep other parallel programs out. It should not switch to another process before unlocking, so that the shared data is released as quickly as possible.
During other atomic operations that require interrupts to be completely masked: for example locking, unlocking, saving and restoring the interrupted context, and similar atomic operations. During an atomic operation even interrupts must be masked, and process scheduling and switching should not be performed.

If a condition that triggers scheduling arises during any of the above processes and scheduling and switching cannot be performed immediately, the system's request-scheduling flag is set, and the scheduling and switching are carried out only after the process above has completed.

The situations in which process scheduling and switching should be performed are:

When a scheduling condition occurs and the current process cannot continue to run, scheduling and switching can be performed immediately. If the operating system performs process scheduling only in this situation, it implements non-preemptive scheduling.

When interrupt handling or trap handling completes, before returning to the user-mode execution site of the interrupted process, if the request-scheduling flag is set, process scheduling and switching can be performed immediately. If the operating system runs the scheduler in this situation, it implements preemptive scheduling.
Process switching usually happens immediately after scheduling completes. It requires saving the context of the original process at the switch point and restoring the context of the newly scheduled process. On a context switch, the operating system kernel pushes the original process's context onto that process's kernel stack to save it and updates the stack pointer. After the kernel loads the new process's context from the new process's kernel stack, updates the pointer to the currently running process's address space, resets the PC register, and completes other related work, it begins running the new process.

Scheduling methods

Non-preemptive method

  • Once the processor is assigned to a process, that process keeps running; the processor is never taken away from the currently running process because of a clock interrupt or any other reason. The processor is not allocated to another process until the current process completes or blocks on an event.

Preemptive method

  • The system allows the scheduler, based on certain principles, to suspend an executing process and reallocate its processor to another process.

    Certain principles should be followed when preempting:

  • The priority principle
  • The short-process-first principle
  • The time-slice principle

Typical scheduling algorithms

First-come first-served scheduling algorithm (FCFS):

The FCFS scheduling algorithm can be used for both job scheduling and process scheduling. The system schedules jobs in the order in which they arrive (the job that has waited longest in the ready queue is selected first).
Disadvantage: the urgency of jobs is not considered; they can only run in order of arrival.
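FCFS can be sketched as a simple queue simulation (the job names and times below are hypothetical; each job is served to completion in arrival order):

```python
from collections import deque

def fcfs(jobs):
    """Non-preemptive FCFS: jobs is a list of (name, arrival, burst),
    sorted by arrival time. Returns {name: completion_time}."""
    queue = deque(jobs)
    clock, done = 0, {}
    while queue:
        name, arrival, burst = queue.popleft()
        clock = max(clock, arrival) + burst   # wait for arrival if CPU idle
        done[name] = clock
    return done
```

For example, jobs arriving at times 0, 1, and 2 with bursts 4, 2, and 1 complete at times 4, 6, and 7: the short last job must wait behind the longer earlier ones, which is exactly the stated disadvantage.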

Shortest job (process) first scheduling algorithm (shortest job first):

This algorithm favors short jobs, because most jobs in typical systems are short. When scheduling, the system selects the job with the shortest estimated running time and loads it into memory.

  1. SJF (non-preemptive): the algorithm derives the priority from the length of the job (the time the job needs to run); the shorter the job, the higher its priority.

  2. SPF (preemptive): same as above, but when a job with a higher priority arrives, it can preempt the processor and execute first.

Disadvantages:

  • The running time of each job must be estimated in advance

  • It is unfavorable to long jobs

  • It does not support human-computer interaction

  • It does not consider the urgency of jobs
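The non-preemptive variant can be sketched as follows (a minimal illustration with hypothetical jobs; among the jobs that have already arrived, the one with the shortest burst runs next):

```python
def sjf(jobs):
    """Non-preemptive shortest-job-first.
    jobs: list of (name, arrival, burst). Returns names in completion order."""
    pending = sorted(jobs, key=lambda j: j[1])   # order by arrival time
    clock, order = 0, []
    while pending:
        # Jobs already arrived; if none, jump to the next arrival.
        ready = [j for j in pending if j[1] <= clock] or [pending[0]]
        job = min(ready, key=lambda j: j[2])     # shortest burst wins
        pending.remove(job)
        name, arrival, burst = job
        clock = max(clock, arrival) + burst
        order.append(name)
    return order
```

With jobs A (arrival 0, burst 7), B (arrival 1, burst 4), and C (arrival 2, burst 1), the completion order is A, C, B: once A finishes, the shorter C overtakes the earlier-arriving B.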

Priority-scheduling algorithm:

The PSA algorithm assigns each job an external priority according to its urgency, and schedules jobs according to that priority.

Highest response ratio next scheduling algorithm (HRRN):

The HRRN algorithm takes into account both the waiting time and the running time of a job. It introduces a dynamic priority for each job:

Priority = (waiting time + required service time) / required service time

which can be abbreviated as:

R = response time / required service time

Features:

  1. If jobs have waited for the same time, the shorter the required service time, the higher the priority; similar to the SJF algorithm, this favors short jobs.

  2. If jobs require the same service time, the longer a job has waited, the higher its priority; similar to the FCFS algorithm, this favors long jobs.

  3. A long job (one requiring a long service time) also obtains a high priority once it has waited long enough, so it will not wait forever.
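The selection rule above can be sketched directly (job names and times below are hypothetical):

```python
def next_job(waiting_jobs, clock):
    """HRRN: pick the waiting job with the highest response ratio.
    waiting_jobs: list of (name, arrival, service)."""
    def ratio(job):
        _, arrival, service = job
        wait = clock - arrival
        return (wait + service) / service   # R = (waiting + service) / service
    return max(waiting_jobs, key=ratio)
```

At time 10, a short job (arrival 6, service 2, R = 3.0) beats a long one (arrival 0, service 10, R = 2.0); but by time 30 the long job's ratio has grown to 4.0 and it beats a newly arrived short job, illustrating feature 3.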

Time slice round-robin scheduling algorithm (RR)

Principle: the system places all ready processes in a ready queue and sets a timer to generate an interrupt at fixed intervals. The interrupt activates the process scheduler, which completes the scheduling and allocates the CPU to the new head-of-queue process (or a newly arrived urgent process).

Process switching occurs in two cases:

  1. If the running process completes before its time slice is used up, the scheduler is activated immediately, the finished process is removed from the execution queue, the head process of the ready queue is run, and a new time slice is started.

  2. When a time slice runs out, the timer interrupt handler is activated. If the process has not finished, the scheduler sends it to the tail of the ready queue, schedules the head process of the ready queue to run, and starts a new time slice.

Note: if the time slice is too small, frequent process scheduling and context switching will increase system overhead; if it is too long, the RR algorithm degenerates into the FCFS algorithm and cannot meet the needs of short jobs and interactive users. The time slice should be chosen slightly larger than the time required for a typical interaction, so that most interactive processes complete within one time slice.
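Both switching cases can be seen in a small simulation (a minimal sketch with hypothetical bursts; all jobs are assumed to be in the ready queue at time 0):

```python
from collections import deque

def round_robin(jobs, quantum):
    """RR with a fixed time slice. jobs: list of (name, burst).
    Returns {name: completion_time}."""
    queue = deque(jobs)
    clock, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining == run:
            done[name] = clock                      # case 1: finished early
        else:
            queue.append((name, remaining - run))   # case 2: back to tail
    return done
```

With bursts 3, 5, 2 and a quantum of 2, the short job C finishes first at time 6, then A at 7, then B at 10.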

Multilevel feedback queue scheduling algorithm:

  1. Multiple ready queues are set up, each with a different priority. The higher the priority of a queue, the smaller its time slice; the queue with the highest priority is scheduled first.

  2. FCFS is used between queues: only after all processes in a higher-priority queue have completed is the next queue scheduled.

  3. Processes within a queue are scheduled by the round-robin algorithm. After a new process enters memory, it first enters the queue with the highest priority.

  4. When a new process arrives while a lower-priority queue is running, then after the current time slice ends, the CPU is immediately allocated to the newly arrived job (preemptive).
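Rules 1 to 3 can be sketched as follows (a simplified illustration with hypothetical jobs; all jobs arrive at time 0, the time slice doubles at each lower level, and the preemption of rule 4 is omitted for brevity):

```python
from collections import deque

def mlfq(jobs, base_quantum=1, levels=3):
    """Multilevel feedback queue. jobs: list of (name, burst).
    Returns {name: completion_time}."""
    queues = [deque() for _ in range(levels)]
    for job in jobs:
        queues[0].append(job)                 # new jobs enter the top queue
    clock, done = 0, {}
    for level, q in enumerate(queues):
        quantum = base_quantum * (2 ** level) # lower level, longer slice
        while q:
            name, remaining = q.popleft()
            run = min(quantum, remaining)
            clock += run
            if remaining == run:
                done[name] = clock
            elif level + 1 < levels:
                queues[level + 1].append((name, remaining - run))  # demote
            else:
                q.append((name, remaining - run))  # lowest level: plain RR
    return done
```

With jobs A (burst 3) and B (burst 1) and a base quantum of 1, B finishes in the top queue at time 2, while A is demoted and finishes at time 4.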

Process scheduling algorithms in real-time systems

A real-time system is one that can respond to requests from external events in a timely manner and process those requests on time. Because of this property, real-time systems are widely used in industrial (and notably weapons) control, multimedia, and other systems.
Real-time systems have deadlines. A hard real-time task (HRT) must start before its start deadline and must finish before its completion deadline. A soft real-time task is similar, but the deadlines are not strict.
Two kinds of algorithms are worth noting in real-time systems: the Earliest Deadline First algorithm and the Least Laxity First algorithm. Both can be used with preemptive or non-preemptive scheduling, but the latter is usually used for preemptive scheduling.
When scheduling m periodic real-time tasks, where each task has processing time \(C_i\) and period \(P_i\), a single processor must satisfy \(\sum_{i=1}^m \frac{C_i}{P_i} \le 1\); with multiple processors, the condition is \(\sum_{i=1}^m \frac{C_i}{P_i} \le N\), where N is the number of processors.
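This utilization condition is a one-line check (the task set below is hypothetical):

```python
def schedulable(tasks, n_processors=1):
    """Necessary utilization condition for m periodic real-time tasks:
    sum(Ci / Pi) must not exceed the number of processors N.
    tasks: list of (C_i processing time, P_i period)."""
    utilization = sum(c / p for c, p in tasks)
    return utilization <= n_processors
```

For example, tasks with (C, P) of (1, 4), (2, 8), and (1, 2) have total utilization exactly 1.0 and satisfy the single-processor condition, while (1, 2) and (2, 3) together exceed one processor but fit on two.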

Earliest Deadline First algorithm (EDF)

This algorithm determines the priority of each task according to its deadline: the earlier the deadline, the higher the priority. It comes in two variants:

  1. Non-preemptive

  2. Preemptive
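The preemptive variant can be sketched as a unit-time simulation (a minimal illustration; the task names, arrival times, and deadlines below are hypothetical):

```python
def edf_trace(tasks, horizon):
    """Preemptive EDF: at each time unit, run the ready task with the
    earliest absolute deadline. tasks: list of mutable
    [name, arrival, remaining, deadline]. Returns the task name run at
    each time step (None = idle)."""
    trace = []
    for t in range(horizon):
        ready = [task for task in tasks if task[1] <= t and task[2] > 0]
        if not ready:
            trace.append(None)
            continue
        task = min(ready, key=lambda task: task[3])  # earliest deadline
        task[2] -= 1                                 # run one time unit
        trace.append(task[0])
    return trace
```

With task A (arrival 0, 2 units of work, deadline 10) and task B (arrival 1, 1 unit, deadline 3), B preempts A at time 1 because its deadline is earlier, then A resumes.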

Least Laxity First algorithm (LLF)

In this algorithm, tasks are assigned priorities according to their urgency (laxity): the more urgent the task, the higher its priority.

Laxity of a task = the time by which it must be completed - its remaining running time - the current time

The system arranges the tasks in a ready queue ordered by laxity; tasks with low laxity are placed at the front of the queue and are executed first by the scheduler.
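The laxity formula and the selection rule can be sketched as follows (the task data is hypothetical):

```python
def laxity(task, clock):
    """Laxity = deadline - remaining running time - current time.
    task: (name, deadline, remaining)."""
    _, deadline, remaining = task
    return deadline - remaining - clock

def pick_llf(tasks, clock):
    """LLF: the ready task with the least laxity runs next."""
    return min(tasks, key=lambda task: laxity(task, clock))[0]
```

At time 0, task A (deadline 10, 3 units remaining) has laxity 7, while task B (deadline 7, 5 units remaining) has laxity 2, so B is scheduled first.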
