diff --git a/Documentation/scheduler/sched-BFS.txt b/Documentation/scheduler/sched-BFS.txt
deleted file mode 100644
index 5115858ac98..00000000000
--- a/Documentation/scheduler/sched-BFS.txt
+++ /dev/null
@@ -1,326 +0,0 @@
-BFS - The Brain Fuck Scheduler by Con Kolivas.
-
-Goals.
-
-The goal of the Brain Fuck Scheduler, referred to as BFS from here on, is to
-completely do away with the complex designs of the past for the cpu process
-scheduler and instead implement one that is very simple in basic design.
-The main focus of BFS is to achieve excellent desktop interactivity and
-responsiveness without heuristics and tuning knobs that are difficult to
-understand, impossible to model and predict the effect of, and that, when
-tuned for one workload, cause massive detriment to another.
-
-
-Design summary.
-
-BFS is best described as a single runqueue, O(n) lookup, earliest effective
-virtual deadline first design, loosely based on EEVDF (earliest eligible
-virtual deadline first) and my previous Staircase Deadline scheduler. Each
-component is described below, to explain its significance and the reasoning
-behind it. When the first stable version was released, the codebase was
-approximately 9000 fewer lines of code than the existing mainline linux
-kernel scheduler (in 2.6.31). This does not even take into account the
-removal of documentation and the cgroups code that is not used.
-
-Design reasoning.
-
-The single runqueue refers to the queued but not running processes for the
-entire system, regardless of the number of CPUs. The reason for going back to
-a single runqueue design is that once multiple runqueues are introduced,
-per-CPU or otherwise, there will be complex interactions, as each runqueue is
-responsible for the scheduling latency and fairness of tasks only on its own
-runqueue. To achieve fairness and low latency across multiple CPUs, any
-throughput advantage of keeping tasks CPU-local brings disadvantages
-elsewhere: a very complex balancing system is required to achieve, at best,
-some semblance of fairness across CPUs, and it can only maintain relatively
-low latency for tasks bound to the same CPUs, not across them. To increase
-said fairness and latency across CPUs, the advantage of local runqueue
-locking, which makes for better scalability, is lost due to having to grab
-multiple locks.
-
-A significant feature of BFS is that all accounting is done purely based on
-CPU used, and nowhere is sleep time used in any way to determine entitlement
-or interactivity. Interactivity "estimators" that use some kind of sleep/run
-algorithm are doomed to fail to detect all interactive tasks, and to falsely
-tag tasks that aren't interactive as being so. The reason for this is that it
-is close to impossible to determine, when a task is sleeping, whether it is
-doing so voluntarily, as in a userspace application waiting for input in the
-form of a mouse click or otherwise, or involuntarily, because it is waiting
-for another thread, process, I/O, kernel activity or whatever. Thus, such an
-estimator will introduce corner cases, and more heuristics will be required
-to cope with those corner cases, introducing more corner cases and failed
-interactivity detection, and so on. Interactivity in BFS is instead built
-into the design by virtue of the fact that tasks that are waking up have not
-used up their quota of CPU time, and have earlier effective deadlines,
-thereby making it very likely they will preempt any CPU bound task of
-equivalent nice level.
-See below for more information on the virtual deadline mechanism. Even if
-they do not preempt a running task, because the rr interval guarantees a
-bounded upper limit on how long a task will wait, it will be scheduled within
-a timeframe that will not cause visible interface jitter.
-
-
-Design details.
-
-Task insertion.
-
-BFS inserts tasks into each relevant queue as an O(1) insertion into a doubly
-linked list. On insertion, *every* running queue is checked to see if the
-newly queued task can run on any idle queue, or preempt the lowest running
-task on the system. This is how the cross-CPU scheduling of BFS achieves
-significantly lower latency per extra CPU the system has. In this case the
-lookup is, in the worst case scenario, O(n) where n is the number of CPUs on
-the system.
-
-Data protection.
-
-BFS has one single lock protecting the process local data of every task in
-the global queue. Thus every insertion, removal and modification of task data
-in the global runqueue needs to grab the global lock. However, once a task is
-taken by a CPU, the CPU has its own local data copy of the running process'
-accounting information which only that CPU accesses and modifies (such as
-during a timer tick), thus allowing the accounting data to be updated
-lockless. Once a CPU has taken a task to run, it removes it from the global
-queue. Thus the global queue only ever holds, at most,
-
-	(number of tasks requesting cpu time) - (number of logical CPUs) + 1
-
-tasks. For example, with 12 tasks requesting cpu time on 4 logical CPUs, at
-most 12 - 4 + 1 = 9 tasks are ever on the global queue. This value is
-relevant for the time taken to look up tasks during scheduling. It will be
-larger if many tasks have a CPU affinity set in their policy limiting which
-CPUs they're allowed to run on, and those tasks outnumber the CPUs they may
-use. The +1 is because when rescheduling a task, the CPU's currently running
-task is put back on the queue. Lookup will be described after the virtual
-deadline mechanism is explained.
-
-Virtual deadline.
-
-The key to achieving low latency, scheduling fairness, and "nice level"
-distribution in BFS is entirely in the virtual deadline mechanism. The one
-tunable in BFS is the rr_interval, or "round robin interval". This is the
-maximum time two SCHED_OTHER (or SCHED_NORMAL, the common scheduling policy)
-tasks of the same nice level will be running for or, looking at it the other
-way around, the longest duration two tasks of the same nice level will be
-delayed for. When a task requests cpu time, it is given a quota (time_slice)
-equal to the rr_interval and a virtual deadline. The virtual deadline is
-offset from the current time in jiffies by this equation:
-
-	jiffies + (prio_ratio * rr_interval)
-
-The prio_ratio is determined as a ratio compared to the baseline of nice -20
-and increases by 10% per nice level. The deadline is a virtual one only in
-that no guarantee is placed that a task will actually be scheduled by this
-time, but it is used to compare which task should go next. There are three
-components to how a task is next chosen. First is time_slice expiration. If a
-task runs out of its time_slice, it is descheduled, the time_slice is
-refilled, and the deadline is reset using the formula above. Second is sleep,
-where a task is no longer requesting CPU for whatever reason. The time_slice
-and deadline are _not_ adjusted in this case and are just carried over for
-when the task is next scheduled. Third is preemption, and that is when a
-newly waking task is deemed higher priority than a currently running task on
-any cpu by virtue of the fact that it has an earlier virtual deadline than
-the currently running task. The earlier deadline is the key to which task is
-next chosen for the first and second cases. Once a task is descheduled, it is
-put back on the queue, and an O(n) lookup of all queued-but-not-running tasks
-is done to determine which has the earliest deadline, and that task is chosen
-to receive CPU next.
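To make the arithmetic above concrete, here is a minimal userspace sketch of
the deadline calculation. It only mirrors the documented formula; the names
and the baseline ratio of 100 are illustrative assumptions, not the kernel's
actual values, and the jiffy/millisecond units are hand-waved as in the
formula itself.

	#include <stdio.h>

	#define PRIO_RANGE 40			/* nice -20 .. 19 */

	static int prio_ratios[PRIO_RANGE];
	static const int rr_interval = 6;	/* default tunable, in ms */

	/* Baseline ratio at nice -20 (value assumed), +10% per nice level. */
	static void init_prio_ratios(void)
	{
		prio_ratios[0] = 100;
		for (int i = 1; i < PRIO_RANGE; i++)
			prio_ratios[i] = prio_ratios[i - 1] * 11 / 10;
	}

	/* deadline = jiffies + (prio_ratio * rr_interval) */
	static unsigned long deadline(unsigned long jiffies_now, int nice)
	{
		return jiffies_now + prio_ratios[nice + 20] * rr_interval;
	}

	int main(void)
	{
		init_prio_ratios();
		/* Lower nice gives an earlier deadline, so it is chosen first. */
		printf("nice -20: %lu\n", deadline(1000, -20));	/* 1600 */
		printf("nice   0: %lu\n", deadline(1000, 0));
		printf("nice  19: %lu\n", deadline(1000, 19));
		return 0;
	}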
-The CPU proportion of tasks at different nice levels works out to be
-approximately the square of their prio_ratio difference:
-
-	(prio_ratio difference)^2
-
-The reason it is squared is that a task's deadline does not change while it
-is running unless it runs out of time_slice. Thus, even if the time actually
-passes the deadline of another task that is queued, that task will not get
-CPU time unless the currently running task deschedules, and the time "base"
-(jiffies) is constantly moving.
-
-Task lookup.
-
-BFS has 103 priority queues. 100 of these are dedicated to the static
-priority of realtime tasks, and the remaining 3 are, in order of best to
-worst priority, SCHED_ISO (isochronous), SCHED_NORMAL, and SCHED_IDLEPRIO
-(idle priority scheduling). When a task of one of these priorities is queued,
-a bitmap of queued priorities is set, showing which of these priorities has
-tasks waiting for CPU time. When a CPU is made to reschedule, the lookup for
-the next task to get CPU time is performed in the following way:
-
-First the bitmap is checked to see what static priority tasks are queued. If
-any realtime priorities are found, the corresponding queue is checked and the
-first task listed there is taken (provided CPU affinity is suitable) and
-lookup is complete. If the priority corresponds to a SCHED_ISO task, such
-tasks are also taken in FIFO order (as they behave like SCHED_RR). If the
-priority corresponds to either SCHED_NORMAL or SCHED_IDLEPRIO, then the
-lookup becomes O(n). At this stage, every task in the runlist that
-corresponds to that priority is checked to see which has the earliest set
-deadline, and (provided it has suitable CPU affinity) it is taken off the
-runqueue and given the CPU. If a task has an expired deadline, it is taken
-and the rest of the lookup aborted (as expired tasks are chosen in FIFO
-order).
-
-Thus, the lookup is O(n) in the worst case only, where n is as described
-earlier, as tasks may be chosen before the whole task list is looked over.
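The two-stage lookup just described can be sketched in ordinary C. The
structures below are illustrative stand-ins: the real code uses kernel linked
lists, bit-level bitmap operations, and CPU affinity checks that are omitted
here for brevity.

	#include <stdio.h>

	#define PRIO_LIMIT 103	/* 100 RT queues + ISO + NORMAL + IDLEPRIO */

	struct task {
		const char *comm;
		unsigned long deadline;
		struct task *next;
	};

	static struct task *queue[PRIO_LIMIT];
	static unsigned char bitmap[PRIO_LIMIT];	/* nonzero: tasks queued */

	/* Stage 1: find the best (lowest) non-empty priority. */
	static int first_queued_prio(void)
	{
		for (int p = 0; p < PRIO_LIMIT; p++)
			if (bitmap[p])
				return p;
		return -1;
	}

	/* Stage 2: RT (0..99) and ISO (100) queues are FIFO, so the list
	 * head is taken; NORMAL (101) and IDLEPRIO (102) get an O(n)
	 * earliest-deadline scan, aborting early on an expired deadline. */
	static struct task *pick_next(unsigned long now)
	{
		int p = first_queued_prio();
		struct task *best, *t;

		if (p < 0)
			return NULL;		/* nothing queued: go idle */
		if (p < 101)
			return queue[p];	/* RT or ISO: FIFO order */
		best = queue[p];
		for (t = queue[p]; t; t = t->next) {
			if (t->deadline <= now)
				return t;	/* expired: take it, stop */
			if (t->deadline < best->deadline)
				best = t;
		}
		return best;
	}

	int main(void)
	{
		struct task b = { "b", 1300, NULL }, a = { "a", 1200, &b };

		queue[101] = &a;		/* two SCHED_NORMAL tasks */
		bitmap[101] = 1;
		printf("next: %s\n", pick_next(1000)->comm);	/* "a" */
		return 0;
	}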
-
-
-Scalability.
-
-The major limitation of BFS will be that of scalability, as the separate
-runqueue designs will have less lock contention as the number of CPUs rises.
-However, they do not scale linearly even with separate runqueues, as multiple
-runqueues will need to be locked concurrently on such designs to be able to
-achieve fair CPU balancing, to try and achieve some sort of nice-level
-fairness across CPUs, and to achieve low enough latency for tasks on a busy
-CPU when other CPUs would be more suited. BFS has the advantage that it
-requires no balancing algorithm whatsoever, as balancing occurs by proxy
-simply because all CPUs draw off the global runqueue, in priority and
-deadline order. Despite the fact that scalability is _not_ the prime concern
-of BFS, it both shows very good scalability to smaller numbers of CPUs and is
-likely the more scalable design at those numbers of CPUs.
-
-It also has some very low overhead scalability features built into the
-design, added where their overhead was deemed so marginal that they were
-worth having. The first is the local copy of the running process' data to the
-CPU it's running on, to allow that data to be updated lockless where
-possible. Then there is deference paid to the last CPU a task was running on,
-by trying that CPU first when looking for an idle CPU to use the next time
-it's scheduled. Finally there is the notion of "sticky" tasks that are
-flagged when they are involuntarily descheduled, meaning they still want
-further CPU time. This sticky flag is used to bias heavily against those
-tasks being scheduled on a different CPU unless that CPU would be otherwise
-idle. When a cpu frequency governor is used that scales with CPU load, such
-as ondemand, sticky tasks are not scheduled on a different CPU at all,
-preferring instead to go idle. This means the CPU they were bound to is more
-likely to increase its speed while the other CPU will go idle, thus speeding
-up total task execution time and likely decreasing power usage. This is the
-only scenario where BFS will allow a CPU to go idle in preference to
-scheduling a task on the earliest available spare CPU.
-
-The real cost of migrating a task from one CPU to another is entirely
-dependent on the cache footprint of the task, how cache intensive the task
-is, how long it's been running on that CPU to take up the bulk of its cache,
-how big the CPU cache is, how fast and how layered the CPU cache is, how fast
-a context switch is... and so on. In other words, it's close to random in the
-real world, where we run more than just one sole workload. The only thing we
-can be sure of is that it's not free. So BFS uses the principle that an idle
-CPU is a wasted CPU, and that utilising idle CPUs is more important than
-cache locality; cache locality only plays a part after that.
-
-Early benchmarking of BFS suggested scalability dropped off at the 16 CPU
-mark. However, this benchmarking was performed on an earlier design that was
-far less scalable than the current one, so it's hard to know how scalable it
-is in terms of both CPUs (due to the global runqueue) and heavily loaded
-machines (due to O(n) lookup) at this stage. Note that in terms of
-scalability, the number of _logical_ CPUs matters, not the number of
-_physical_ CPUs. Thus, a dual (2X) quad core (4X) hyperthreaded (2X) machine
-is effectively 16X. Newer benchmark results are very promising indeed,
-without needing to tweak any knobs, features or options. Benchmark
-contributions are most welcome.
-
-
-Features
-
-As the initial prime target audience for BFS was the average desktop user,
-it was designed to not need tweaking, tuning or features set in order to
-obtain benefit from it. Thus the number of knobs and features has been kept
-to an absolute minimum, and extra user input should not be required for the
-vast majority of cases. There are precisely 2 tunables and 2 extra scheduling
-policies: the rr_interval and iso_cpu tunables, and the SCHED_ISO and
-SCHED_IDLEPRIO policies. In addition to this, BFS also uses sub-tick
-accounting. What BFS does _not_ now feature is support for CGROUPS. The
-average user should neither need to know what these are, nor should they need
-to be using them to have good desktop behaviour.
-
-rr_interval
-
-There is only one "scheduler" tunable, the round robin interval. This can be
-accessed in
-
-	/proc/sys/kernel/rr_interval
-
-The value is in milliseconds, and the default value is set to 6ms. Valid
-values are from 1 to 1000. Decreasing the value will decrease latencies at
-the cost of decreasing throughput, while increasing it will improve
-throughput, but at the cost of worsening latencies. The accuracy of the rr
-interval is limited by the HZ resolution of the kernel configuration. Thus,
-the worst case latencies are usually slightly higher than this actual value.
-BFS uses "dithering" to try and minimise the effect the HZ limitation has.
-The default value of 6 is not an arbitrary one. It is based on the fact that
-humans can detect jitter at approximately 7ms, so aiming for much lower
-latencies is pointless under most circumstances. It is worth noting this fact
-when comparing the latency performance of BFS to other schedulers. Worst case
-latencies being higher than 7ms are far worse than average latencies not
-being in the microsecond range. Experimentation has shown that increasing the
-rr interval up to 300 can improve throughput, but beyond that, scheduling
-noise from elsewhere prevents further demonstrable throughput gains.
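For illustration, the tunable reads like any other procfs file. A minimal C
sketch follows; it is only meaningful on a kernel actually built with BFS,
and on anything else the open simply fails.

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/kernel/rr_interval", "r");
		int ms;

		if (!f) {
			perror("rr_interval (BFS kernels only)");
			return 1;
		}
		if (fscanf(f, "%d", &ms) == 1)
			printf("round robin interval: %d ms\n", ms);
		fclose(f);
		return 0;
	}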
-
-Isochronous scheduling.
-
-Isochronous scheduling is a unique scheduling policy designed to provide
-near-real-time performance to unprivileged (ie non-root) users without the
-ability to starve the machine indefinitely. Isochronous tasks (the name means
-"same time") are set using, for example, the schedtool application like so:
-
-	schedtool -I -e amarok
-
-This will start the audio application "amarok" as SCHED_ISO. How SCHED_ISO
-works is that it has a priority level between true realtime tasks and
-SCHED_NORMAL, which allows such tasks to preempt all normal tasks, in a
-SCHED_RR fashion (ie, if multiple SCHED_ISO tasks are running, they purely
-round robin at the rr_interval rate). However, if ISO tasks run for more than
-a tunable finite amount of time, they are then demoted back to SCHED_NORMAL
-scheduling. This finite amount of time is a percentage of the _total CPU_
-available across the machine, configurable as a percentage in the following
-"resource handling" tunable (as opposed to a scheduler tunable):
-
-	/proc/sys/kernel/iso_cpu
-
-and is set to 70% by default. It is calculated over a rolling 5 second
-average. Because it is the total CPU available, it means that on a multi CPU
-machine, it is possible to have an ISO task running with realtime scheduling
-indefinitely on just one CPU, as the other CPUs will be available. Setting
-this to 100 is the equivalent of giving all users SCHED_RR access, and
-setting it to 0 removes the ability to run any pseudo-realtime tasks.
-
-A feature of BFS is that it detects when an application tries to obtain a
-realtime policy (SCHED_RR or SCHED_FIFO) and the caller does not have the
-appropriate privileges to use those policies. When it detects this, it will
-give the task SCHED_ISO policy instead. Thus it is transparent to the user.
-Because some applications constantly set their policy as well as their nice
-level, there is potential for them to undo the override, specified by the
-user on the command line, of setting the policy to SCHED_ISO. To counter
-this, once a task has been set to SCHED_ISO policy, it needs superuser
-privileges to set it back to SCHED_NORMAL. This ensures the task remains ISO,
-and all child processes and threads will also inherit the ISO policy.
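schedtool is simply a convenient wrapper; a task can also request the policy
itself via sched_setscheduler(). Below is an illustrative sketch. glibc
headers define no SCHED_ISO constant, so the value 4 is taken from the BFS
sched.h hunk removed later in this diff, and the call will fail with EINVAL
on a non-BFS kernel.

	#include <sched.h>
	#include <stdio.h>

	#ifndef SCHED_ISO
	#define SCHED_ISO 4	/* BFS-only policy, per the sched.h hunk below */
	#endif

	int main(void)
	{
		struct sched_param sp = { .sched_priority = 0 };

		/* pid 0 means the calling process. */
		if (sched_setscheduler(0, SCHED_ISO, &sp) == -1) {
			perror("sched_setscheduler(SCHED_ISO)");
			return 1;
		}
		printf("now SCHED_ISO, subject to the iso_cpu cap\n");
		return 0;
	}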
-
-Idleprio scheduling.
-
-Idleprio scheduling is a scheduling policy designed to give out CPU to a task
-_only_ when the CPU would be otherwise idle. The idea behind this is to allow
-ultra low priority tasks to be run in the background that have virtually no
-effect on the foreground tasks. This is ideally suited to distributed
-computing clients (like setiathome, folding, mprime etc) but can also be used
-to start a video encode or the like without any slowdown of other tasks. To
-prevent tasks under this policy from grabbing shared resources and holding
-them indefinitely, if BFS detects a state where the task is waiting on I/O,
-the machine is about to suspend to ram, and so on, it will transiently
-schedule them as SCHED_NORMAL. As per the Isochronous task management, once a
-task has been scheduled as IDLEPRIO, it cannot be put back to SCHED_NORMAL
-without superuser privileges. Tasks can be set to start as SCHED_IDLEPRIO
-with the schedtool command like so:
-
-	schedtool -D -e ./mprime
-
-Subtick accounting.
-
-It is surprisingly difficult to get accurate CPU accounting, and in many
-cases, the accounting is done by simply determining what is happening at the
-precise moment a timer tick fires off. This becomes increasingly inaccurate
-as the timer tick frequency (HZ) is lowered. It is possible to create an
-application which uses almost 100% CPU, yet by being descheduled at the right
-time, records zero CPU usage. While the main problem with this is that there
-are possible security implications, it also makes it difficult to determine
-how much CPU a task really does use. BFS tries to use the sub-tick accounting
-from the TSC clock, where possible, to determine real CPU usage. This is not
-entirely reliable, but it is far more likely to produce accurate CPU usage
-data than the existing designs, and it will not show tasks as consuming no
-CPU usage when they actually are. Thus, the amount of CPU reported as being
-used by BFS will more accurately represent how much CPU the task itself is
-using (as is shown, for example, by the 'time' application), so the reported
-values may be quite different from those of other schedulers. Values reported
-as the 'load' are more prone to problems with this design, but per process
-values are closer to real usage. When comparing the throughput of BFS to
-other designs, it is important to compare the actual completed work in terms
-of total wall clock time taken and total work done, rather than the reported
-"cpu usage".
-
-
-Con Kolivas Tue, 5 Apr 2011
diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt
index 97cd8a3de12..322a00bb99d 100644
--- a/Documentation/sysctl/kernel.txt
+++ b/Documentation/sysctl/kernel.txt
@@ -27,7 +27,6 @@ show up in /proc/sys/kernel:
 - domainname
 - hostname
 - hotplug
-- iso_cpu
 - java-appletviewer [ binfmt_java, obsolete ]
 - java-interpreter [ binfmt_java, obsolete ]
 - kstack_depth_to_print [ X86 only ]
@@ -50,7 +49,6 @@ show up in /proc/sys/kernel:
 - randomize_va_space
 - real-root-dev ==> Documentation/initrd.txt
 - reboot-cmd [ SPARC only ]
-- rr_interval
 - rtsig-max
 - rtsig-nr
 - sem
@@ -173,16 +171,6 @@ Default value is "/sbin/hotplug".

 ==============================================================

-iso_cpu: (BFS CPU scheduler only).
-
-This sets the percentage cpu that the unprivileged SCHED_ISO tasks can
-run effectively at realtime priority, averaged over a rolling five
-seconds over the -whole- system, meaning all cpus.
-
-Set to 70 (percent) by default.
-
-==============================================================
-
 l2cr: (PPC only)

 This flag controls the L2 cache of G3 processor boards. If
@@ -345,20 +333,6 @@ rebooting.
 ???
============================================================== -rr_interval: (BFS CPU scheduler only) - -This is the smallest duration that any cpu process scheduling unit -will run for. Increasing this value can increase throughput of cpu -bound tasks substantially but at the expense of increased latencies -overall. Conversely decreasing it will decrease average and maximum -latencies but at the expense of throughput. This value is in -milliseconds and the default value chosen depends on the number of -cpus available at scheduler initialisation with a minimum of 6. - -Valid values are from 1-5000. - -============================================================== - rtsig-max & rtsig-nr: The file rtsig-max can be used to tune the maximum number diff --git a/arch/arm/configs/mx51_efikamx_defconfig b/arch/arm/configs/mx51_efikamx_defconfig index 282c2cd0045..b27842c7a82 100644 --- a/arch/arm/configs/mx51_efikamx_defconfig +++ b/arch/arm/configs/mx51_efikamx_defconfig @@ -1,7 +1,7 @@ # # Automatically generated make config: don't edit # Linux kernel version: 2.6.31.14.27 -# Mon Nov 19 11:59:35 2012 +# Mon Dec 10 20:11:59 2012 # CONFIG_ARM=y CONFIG_HAVE_PWM=y @@ -31,7 +31,6 @@ CONFIG_CONSTRUCTORS=y # # General setup # -CONFIG_SCHED_BFS=y CONFIG_EXPERIMENTAL=y CONFIG_BROKEN_ON_SMP=y CONFIG_INIT_ENV_ARG_LIMIT=32 @@ -59,6 +58,7 @@ CONFIG_RCU_FANOUT=32 CONFIG_IKCONFIG=y CONFIG_IKCONFIG_PROC=y CONFIG_LOG_BUF_SHIFT=16 +# CONFIG_GROUP_SCHED is not set CONFIG_CGROUPS=y # CONFIG_CGROUP_DEBUG is not set CONFIG_CGROUP_NS=y @@ -66,6 +66,7 @@ CONFIG_CGROUP_FREEZER=y # CONFIG_CGROUP_DEVICE is not set CONFIG_CPUSETS=y # CONFIG_PROC_PID_CPUSET is not set +CONFIG_CGROUP_CPUACCT=y CONFIG_RESOURCE_COUNTERS=y # CONFIG_CGROUP_MEM_RES_CTLR is not set # CONFIG_SYSFS_DEPRECATED_V2 is not set @@ -280,7 +281,7 @@ CONFIG_VMSPLIT_2G=y # CONFIG_VMSPLIT_1G is not set CONFIG_PAGE_OFFSET=0x80000000 # CONFIG_PREEMPT is not set -CONFIG_HZ=256 +CONFIG_HZ=100 CONFIG_AEABI=y # CONFIG_OABI_COMPAT is not set # CONFIG_ARCH_SPARSEMEM_DEFAULT is not set diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c index 0aa00247376..f085369301b 100644 --- a/arch/powerpc/platforms/cell/spufs/sched.c +++ b/arch/powerpc/platforms/cell/spufs/sched.c @@ -61,6 +61,11 @@ static struct task_struct *spusched_task; static struct timer_list spusched_timer; static struct timer_list spuloadavg_timer; +/* + * Priority of a normal, non-rt, non-niced'd process (aka nice level 0). + */ +#define NORMAL_PRIO 120 + /* * Frequency of the spu scheduler tick. By default we do one SPU scheduler * tick for every 10 CPU scheduler ticks. 
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c index bf3701d757c..3b651ca4309 100644 --- a/drivers/cpufreq/cpufreq_conservative.c +++ b/drivers/cpufreq/cpufreq_conservative.c @@ -444,10 +444,8 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info) freq_target = 5; this_dbs_info->requested_freq += freq_target; - if (this_dbs_info->requested_freq >= policy->max) { + if (this_dbs_info->requested_freq > policy->max) this_dbs_info->requested_freq = policy->max; - cpu_nonscaling(policy->cpu); - } __cpufreq_driver_target(policy, this_dbs_info->requested_freq, CPUFREQ_RELATION_H); @@ -472,7 +470,6 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info) if (policy->cur == policy->min) return; - cpu_scaling(policy->cpu); __cpufreq_driver_target(policy, this_dbs_info->requested_freq, CPUFREQ_RELATION_H); return; @@ -588,7 +585,6 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, dbs_timer_init(this_dbs_info); - cpu_scaling(cpu); break; case CPUFREQ_GOV_STOP: @@ -610,7 +606,6 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, mutex_unlock(&dbs_mutex); - cpu_nonscaling(cpu); break; case CPUFREQ_GOV_LIMITS: diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c index fba1e859f5e..7ff1044b745 100644 --- a/drivers/cpufreq/cpufreq_ondemand.c +++ b/drivers/cpufreq/cpufreq_ondemand.c @@ -470,7 +470,6 @@ static void dbs_check_cpu(struct cpu_dbs_info_s *this_dbs_info) if (freq_next < policy->min) freq_next = policy->min; - cpu_scaling(policy->cpu); if (!dbs_tuners_ins.powersave_bias) { __cpufreq_driver_target(policy, freq_next, CPUFREQ_RELATION_L); @@ -594,7 +593,6 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, mutex_unlock(&dbs_mutex); dbs_timer_init(this_dbs_info); - cpu_scaling(cpu); break; case CPUFREQ_GOV_STOP: @@ -606,7 +604,6 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy, dbs_enable--; mutex_unlock(&dbs_mutex); - cpu_nonscaling(cpu); break; case CPUFREQ_GOV_LIMITS: diff --git a/drivers/cpufreq/cpufreq_userspace.c b/drivers/cpufreq/cpufreq_userspace.c index 15153d8281f..66d2d1d6c80 100644 --- a/drivers/cpufreq/cpufreq_userspace.c +++ b/drivers/cpufreq/cpufreq_userspace.c @@ -23,7 +23,6 @@ #include #include #include -#include /** * A few values needed by the userspace governor @@ -98,10 +97,6 @@ static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq) * cpufreq_governor_userspace (lock userspace_mutex) */ ret = __cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L); - if (freq == cpu_max_freq) - cpu_nonscaling(policy->cpu); - else - cpu_scaling(policy->cpu); err: mutex_unlock(&userspace_mutex); @@ -147,7 +142,6 @@ static int cpufreq_governor_userspace(struct cpufreq_policy *policy, per_cpu(cpu_cur_freq, cpu)); mutex_unlock(&userspace_mutex); - cpu_scaling(cpu); break; case CPUFREQ_GOV_STOP: mutex_lock(&userspace_mutex); @@ -164,7 +158,6 @@ static int cpufreq_governor_userspace(struct cpufreq_policy *policy, per_cpu(cpu_set_freq, cpu) = 0; dprintk("managing cpu %u stopped\n", cpu); mutex_unlock(&userspace_mutex); - cpu_nonscaling(cpu); break; case CPUFREQ_GOV_LIMITS: mutex_lock(&userspace_mutex); diff --git a/fs/proc/base.c b/fs/proc/base.c index 056fcad1358..baf53d92081 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -366,7 +366,7 @@ static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns, static int proc_pid_schedstat(struct task_struct *task, char *buffer) { return sprintf(buffer, "%llu %llu %lu\n", - 
(unsigned long long)tsk_seruntime(task), + (unsigned long long)task->se.sum_exec_runtime, (unsigned long long)task->sched_info.run_delay, task->sched_info.pcount); } diff --git a/include/linux/init_task.h b/include/linux/init_task.h index 755dc5a8077..7fc01b13be4 100644 --- a/include/linux/init_task.h +++ b/include/linux/init_task.h @@ -109,68 +109,6 @@ extern struct cred init_cred; * INIT_TASK is used to set up the first task table, touch at * your own risk!. Base=0, limit=0x1fffff (=2MB) */ -#ifdef CONFIG_SCHED_BFS -#define INIT_TASK(tsk) \ -{ \ - .state = 0, \ - .stack = &init_thread_info, \ - .usage = ATOMIC_INIT(2), \ - .flags = PF_KTHREAD, \ - .lock_depth = -1, \ - .prio = NORMAL_PRIO, \ - .static_prio = MAX_PRIO-20, \ - .normal_prio = NORMAL_PRIO, \ - .deadline = 0, \ - .policy = SCHED_NORMAL, \ - .cpus_allowed = CPU_MASK_ALL, \ - .mm = NULL, \ - .active_mm = &init_mm, \ - .run_list = LIST_HEAD_INIT(tsk.run_list), \ - .time_slice = HZ, \ - .tasks = LIST_HEAD_INIT(tsk.tasks), \ - .pushable_tasks = PLIST_NODE_INIT(tsk.pushable_tasks, MAX_PRIO), \ - .ptraced = LIST_HEAD_INIT(tsk.ptraced), \ - .ptrace_entry = LIST_HEAD_INIT(tsk.ptrace_entry), \ - .real_parent = &tsk, \ - .parent = &tsk, \ - .children = LIST_HEAD_INIT(tsk.children), \ - .sibling = LIST_HEAD_INIT(tsk.sibling), \ - .group_leader = &tsk, \ - .real_cred = &init_cred, \ - .cred = &init_cred, \ - .cred_guard_mutex = \ - __MUTEX_INITIALIZER(tsk.cred_guard_mutex), \ - .comm = "swapper", \ - .thread = INIT_THREAD, \ - .fs = &init_fs, \ - .files = &init_files, \ - .signal = &init_signals, \ - .sighand = &init_sighand, \ - .nsproxy = &init_nsproxy, \ - .pending = { \ - .list = LIST_HEAD_INIT(tsk.pending.list), \ - .signal = {{0}}}, \ - .blocked = {{0}}, \ - .alloc_lock = __SPIN_LOCK_UNLOCKED(tsk.alloc_lock), \ - .journal_info = NULL, \ - .cpu_timers = INIT_CPU_TIMERS(tsk.cpu_timers), \ - .fs_excl = ATOMIC_INIT(0), \ - .pi_lock = __SPIN_LOCK_UNLOCKED(tsk.pi_lock), \ - .timer_slack_ns = 50000, /* 50 usec default slack */ \ - .pids = { \ - [PIDTYPE_PID] = INIT_PID_LINK(PIDTYPE_PID), \ - [PIDTYPE_PGID] = INIT_PID_LINK(PIDTYPE_PGID), \ - [PIDTYPE_SID] = INIT_PID_LINK(PIDTYPE_SID), \ - }, \ - .dirties = INIT_PROP_LOCAL_SINGLE(dirties), \ - INIT_IDS \ - INIT_PERF_COUNTERS(tsk) \ - INIT_TRACE_IRQFLAGS \ - INIT_LOCKDEP \ - INIT_FTRACE_GRAPH \ - INIT_TRACE_RECURSION \ -} -#else /* CONFIG_SCHED_BFS */ #define INIT_TASK(tsk) \ { \ .state = 0, \ @@ -230,14 +168,13 @@ extern struct cred init_cred; }, \ .dirties = INIT_PROP_LOCAL_SINGLE(dirties), \ INIT_IDS \ - INIT_PERF_EVENTS(tsk) \ + INIT_PERF_COUNTERS(tsk) \ INIT_TRACE_IRQFLAGS \ INIT_LOCKDEP \ INIT_FTRACE_GRAPH \ INIT_TRACE_RECURSION \ - INIT_TASK_RCU_PREEMPT(tsk) \ } -#endif /* CONFIG_SCHED_BFS */ + #define INIT_CPU_TIMERS(cpu_timers) \ { \ diff --git a/include/linux/ioprio.h b/include/linux/ioprio.h index 72324720a97..76dad480884 100644 --- a/include/linux/ioprio.h +++ b/include/linux/ioprio.h @@ -64,8 +64,6 @@ static inline int task_ioprio_class(struct io_context *ioc) static inline int task_nice_ioprio(struct task_struct *task) { - if (iso_task(task)) - return 0; return (task_nice(task) + 20) / 5; } diff --git a/include/linux/jiffies.h b/include/linux/jiffies.h index efa6ae28a42..1a9cf78bfce 100644 --- a/include/linux/jiffies.h +++ b/include/linux/jiffies.h @@ -164,7 +164,7 @@ static inline u64 get_jiffies_64(void) * Have the 32 bit jiffies value wrap 5 minutes after boot * so jiffies wrap bugs show up earlier. 
*/ -#define INITIAL_JIFFIES ((unsigned long)(unsigned int) (-10*HZ)) +#define INITIAL_JIFFIES ((unsigned long)(unsigned int) (-300*HZ)) /* * Change timeval to jiffies, trying to avoid the diff --git a/include/linux/sched.h b/include/linux/sched.h index 8590ae61f50..57f7e84692d 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -36,16 +36,8 @@ #define SCHED_FIFO 1 #define SCHED_RR 2 #define SCHED_BATCH 3 -/* SCHED_ISO: Implemented on BFS only */ +/* SCHED_ISO: reserved but not implemented yet */ #define SCHED_IDLE 5 -#ifdef CONFIG_SCHED_BFS -#define SCHED_ISO 4 -#define SCHED_IDLEPRIO SCHED_IDLE - -#define SCHED_MAX (SCHED_IDLEPRIO) -#define SCHED_RANGE(policy) ((policy) <= SCHED_MAX) -#endif - /* Can be ORed in to make sure the process is reverted back to SCHED_NORMAL on fork */ #define SCHED_RESET_ON_FORK 0x40000000 @@ -148,7 +140,7 @@ extern int nr_processes(void); extern unsigned long nr_running(void); extern unsigned long nr_uninterruptible(void); extern unsigned long nr_iowait(void); -extern void calc_global_load(void); +extern void calc_global_load(unsigned long ticks); extern u64 cpu_nr_migrations(int cpu); extern unsigned long get_parent_ip(unsigned long addr); @@ -264,6 +256,9 @@ extern asmlinkage void schedule_tail(struct task_struct *prev); extern void init_idle(struct task_struct *idle, int cpu); extern void init_idle_bootup_task(struct task_struct *idle); +extern int runqueue_is_locked(void); +extern void task_rq_unlock_wait(struct task_struct *p); + extern cpumask_var_t nohz_cpu_mask; #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ) extern int select_nohz_load_balancer(int cpu); @@ -1028,6 +1023,148 @@ struct uts_namespace; struct rq; struct sched_domain; +struct sched_class { + const struct sched_class *next; + + void (*enqueue_task) (struct rq *rq, struct task_struct *p, int wakeup); + void (*dequeue_task) (struct rq *rq, struct task_struct *p, int sleep); + void (*yield_task) (struct rq *rq); + + void (*check_preempt_curr) (struct rq *rq, struct task_struct *p, int sync); + + struct task_struct * (*pick_next_task) (struct rq *rq); + void (*put_prev_task) (struct rq *rq, struct task_struct *p); + +#ifdef CONFIG_SMP + int (*select_task_rq)(struct task_struct *p, int sync); + + unsigned long (*load_balance) (struct rq *this_rq, int this_cpu, + struct rq *busiest, unsigned long max_load_move, + struct sched_domain *sd, enum cpu_idle_type idle, + int *all_pinned, int *this_best_prio); + + int (*move_one_task) (struct rq *this_rq, int this_cpu, + struct rq *busiest, struct sched_domain *sd, + enum cpu_idle_type idle); + void (*pre_schedule) (struct rq *this_rq, struct task_struct *task); + int (*needs_post_schedule) (struct rq *this_rq); + void (*post_schedule) (struct rq *this_rq); + void (*task_wake_up) (struct rq *this_rq, struct task_struct *task); + + void (*set_cpus_allowed)(struct task_struct *p, + const struct cpumask *newmask); + + void (*rq_online)(struct rq *rq); + void (*rq_offline)(struct rq *rq); +#endif + + void (*set_curr_task) (struct rq *rq); + void (*task_tick) (struct rq *rq, struct task_struct *p, int queued); + void (*task_new) (struct rq *rq, struct task_struct *p); + + void (*switched_from) (struct rq *this_rq, struct task_struct *task, + int running); + void (*switched_to) (struct rq *this_rq, struct task_struct *task, + int running); + void (*prio_changed) (struct rq *this_rq, struct task_struct *task, + int oldprio, int running); + +#ifdef CONFIG_FAIR_GROUP_SCHED + void (*moved_group) (struct task_struct *p); +#endif +}; + +struct 
load_weight { + unsigned long weight, inv_weight; +}; + +/* + * CFS stats for a schedulable entity (task, task-group etc) + * + * Current field usage histogram: + * + * 4 se->block_start + * 4 se->run_node + * 4 se->sleep_start + * 6 se->load.weight + */ +struct sched_entity { + struct load_weight load; /* for load-balancing */ + struct rb_node run_node; + struct list_head group_node; + unsigned int on_rq; + + u64 exec_start; + u64 sum_exec_runtime; + u64 vruntime; + u64 prev_sum_exec_runtime; + + u64 last_wakeup; + u64 avg_overlap; + + u64 nr_migrations; + + u64 start_runtime; + u64 avg_wakeup; + +#ifdef CONFIG_SCHEDSTATS + u64 wait_start; + u64 wait_max; + u64 wait_count; + u64 wait_sum; + + u64 sleep_start; + u64 sleep_max; + s64 sum_sleep_runtime; + + u64 block_start; + u64 block_max; + u64 exec_max; + u64 slice_max; + + u64 nr_migrations_cold; + u64 nr_failed_migrations_affine; + u64 nr_failed_migrations_running; + u64 nr_failed_migrations_hot; + u64 nr_forced_migrations; + u64 nr_forced2_migrations; + + u64 nr_wakeups; + u64 nr_wakeups_sync; + u64 nr_wakeups_migrate; + u64 nr_wakeups_local; + u64 nr_wakeups_remote; + u64 nr_wakeups_affine; + u64 nr_wakeups_affine_attempts; + u64 nr_wakeups_passive; + u64 nr_wakeups_idle; +#endif + +#ifdef CONFIG_FAIR_GROUP_SCHED + struct sched_entity *parent; + /* rq on which this entity is (to be) queued: */ + struct cfs_rq *cfs_rq; + /* rq "owned" by this entity/group: */ + struct cfs_rq *my_q; +#endif +}; + +struct sched_rt_entity { + struct list_head run_list; + unsigned long timeout; + unsigned int time_slice; + int nr_cpus_allowed; + + struct sched_rt_entity *back; +#ifdef CONFIG_RT_GROUP_SCHED + struct sched_rt_entity *parent; + /* rq on which this entity is (to be) queued: */ + struct rt_rq *rt_rq; + /* rq "owned" by this entity/group: */ + struct rt_rq *my_q; +#endif +}; + struct task_struct { volatile long state; /* -1 unrunnable, 0 runnable, >0 stopped */ void *stack; @@ -1037,33 +1174,17 @@ struct task_struct { int lock_depth; /* BKL lock depth */ -#ifndef CONFIG_SCHED_BFS #ifdef CONFIG_SMP #ifdef __ARCH_WANT_UNLOCKED_CTXSW int oncpu; #endif -#endif -#else /* CONFIG_SCHED_BFS */ - int oncpu; #endif int prio, static_prio, normal_prio; unsigned int rt_priority; -#ifdef CONFIG_SCHED_BFS - int time_slice; - u64 deadline; - struct list_head run_list; - u64 last_ran; - u64 sched_time; /* sched_clock time spent running */ -#ifdef CONFIG_SMP - int sticky; /* Soft affined flag */ -#endif - unsigned long rt_timeout; -#else /* CONFIG_SCHED_BFS */ const struct sched_class *sched_class; struct sched_entity se; struct sched_rt_entity rt; -#endif #ifdef CONFIG_PREEMPT_NOTIFIERS /* list of struct preempt_notifier: */ @@ -1158,9 +1279,6 @@ struct task_struct { int __user *clear_child_tid; /* CLONE_CHILD_CLEARTID */ cputime_t utime, stime, utimescaled, stimescaled; -#ifdef CONFIG_SCHED_BFS - unsigned long utime_pc, stime_pc; -#endif cputime_t gtime; cputime_t prev_utime, prev_stime; unsigned long nvcsw, nivcsw; /* context switch counts */ @@ -1370,66 +1488,6 @@ struct task_struct { #endif /* CONFIG_TRACING */ }; -#ifdef CONFIG_SCHED_BFS -extern int grunqueue_is_locked(void); -extern void grq_unlock_wait(void); -extern void cpu_scaling(int cpu); -extern void cpu_nonscaling(int cpu); -#define tsk_seruntime(t) ((t)->sched_time) -#define tsk_rttimeout(t) ((t)->rt_timeout) -#define task_rq_unlock_wait(tsk) grq_unlock_wait() - -static inline void set_oom_timeslice(struct task_struct *p) -{ - p->time_slice = HZ; -} - -static inline void tsk_cpus_current(struct 
task_struct *p) -{ -} - -#define runqueue_is_locked(cpu) grunqueue_is_locked() - -static inline void print_scheduler_version(void) -{ - printk(KERN_INFO"BFS CPU scheduler v0.376 by Con Kolivas.\n"); -} - -static inline int iso_task(struct task_struct *p) -{ - return (p->policy == SCHED_ISO); -} -#else -extern int runqueue_is_locked(int cpu); -extern void task_rq_unlock_wait(struct task_struct *p); -#define tsk_seruntime(t) ((t)->se.sum_exec_runtime) -#define tsk_rttimeout(t) ((t)->rt.timeout) - -static inline void sched_exit(struct task_struct *p) -{ -} - -static inline void set_oom_timeslice(struct task_struct *p) -{ - p->rt.time_slice = HZ; -} - -static inline void tsk_cpus_current(struct task_struct *p) -{ - p->rt.nr_cpus_allowed = current->rt.nr_cpus_allowed; -} - -static inline void print_scheduler_version(void) -{ - printk(KERN_INFO"CFS CPU scheduler.\n"); -} - -static inline int iso_task(struct task_struct *p) -{ - return 0; -} -#endif - /* Future-safe accessor for struct task_struct's cpus_allowed. */ #define tsk_cpumask(tsk) (&(tsk)->cpus_allowed) @@ -1448,19 +1506,9 @@ static inline int iso_task(struct task_struct *p) #define MAX_USER_RT_PRIO 100 #define MAX_RT_PRIO MAX_USER_RT_PRIO -#define DEFAULT_PRIO (MAX_RT_PRIO + 20) -#ifdef CONFIG_SCHED_BFS -#define PRIO_RANGE (40) -#define MAX_PRIO (MAX_RT_PRIO + PRIO_RANGE) -#define ISO_PRIO (MAX_RT_PRIO) -#define NORMAL_PRIO (MAX_RT_PRIO + 1) -#define IDLE_PRIO (MAX_RT_PRIO + 2) -#define PRIO_LIMIT ((IDLE_PRIO) + 1) -#else /* CONFIG_SCHED_BFS */ #define MAX_PRIO (MAX_RT_PRIO + 40) -#define NORMAL_PRIO DEFAULT_PRIO -#endif /* CONFIG_SCHED_BFS */ +#define DEFAULT_PRIO (MAX_RT_PRIO + 20) static inline int rt_prio(int prio) { @@ -1743,7 +1791,7 @@ task_sched_runtime(struct task_struct *task); extern unsigned long long thread_group_sched_runtime(struct task_struct *task); /* sched_exec is called by processes performing an exec */ -#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_BFS) +#ifdef CONFIG_SMP extern void sched_exec(void); #else #define sched_exec() {} @@ -1897,9 +1945,6 @@ extern void wake_up_new_task(struct task_struct *tsk, static inline void kick_process(struct task_struct *tsk) { } #endif extern void sched_fork(struct task_struct *p, int clone_flags); -#ifdef CONFIG_SCHED_BFS -extern void sched_exit(struct task_struct *p); -#endif extern void sched_dead(struct task_struct *p); extern void proc_caches_init(void); diff --git a/init/Kconfig b/init/Kconfig index d3d52e70123..3f7e60995c8 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -23,19 +23,6 @@ config CONSTRUCTORS menu "General setup" -config SCHED_BFS - bool "BFS cpu scheduler" - ---help--- - The Brain Fuck CPU Scheduler for excellent interactivity and - responsiveness on the desktop and solid scalability on normal - hardware. Not recommended for 4096 CPUs. - - Currently incompatible with the Group CPU scheduler, and RCU TORTURE - TEST so these options are disabled. - - Say Y here. 
- default y - config EXPERIMENTAL bool "Prompt for development and/or incomplete code/drivers" ---help--- @@ -456,7 +443,7 @@ config HAVE_UNSTABLE_SCHED_CLOCK config GROUP_SCHED bool "Group CPU scheduler" - depends on EXPERIMENTAL && !SCHED_BFS + depends on EXPERIMENTAL default n help This feature lets CPU scheduler recognize task groups and control CPU @@ -572,7 +559,7 @@ config PROC_PID_CPUSET config CGROUP_CPUACCT bool "Simple CPU accounting cgroup subsystem" - depends on CGROUPS && !SCHED_BFS + depends on CGROUPS help Provides a simple Resource Controller for monitoring the total CPU consumed by the tasks in a cgroup. diff --git a/init/main.c b/init/main.c index 501c5f65931..1ebd6e8f521 100644 --- a/init/main.c +++ b/init/main.c @@ -840,8 +840,6 @@ static noinline int init_post(void) system_state = SYSTEM_RUNNING; numa_default_policy(); - print_scheduler_version(); - if (sys_open((const char __user *) "/dev/console", O_RDWR, 0) < 0) printk(KERN_WARNING "Warning: unable to open an initial console.\n"); diff --git a/kernel/Makefile b/kernel/Makefile index 7594300f155..2093a691f1c 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -2,7 +2,7 @@ # Makefile for the linux kernel. # -obj-y = sched_bfs.o fork.o exec_domain.o panic.o printk.o \ +obj-y = sched.o fork.o exec_domain.o panic.o printk.o \ cpu.o exit.o itimer.o time.o softirq.o resource.o \ sysctl.o capability.o ptrace.o timer.o user.o \ signal.o sys.o kmod.o workqueue.o pid.o \ @@ -107,7 +107,7 @@ ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y) # me. I suspect most platforms don't need this, but until we know that for sure # I turn this off for IA-64 only. Andreas Schwab says it's also needed on m68k # to get a correct value for the wait-channel (WCHAN in ps). --davidm -CFLAGS_sched_bfs.o := $(PROFILING) -fno-omit-frame-pointer +CFLAGS_sched.o := $(PROFILING) -fno-omit-frame-pointer endif $(obj)/configs.o: $(obj)/config_data.h diff --git a/kernel/delayacct.c b/kernel/delayacct.c index cbdc400ce8b..abb6e17505e 100644 --- a/kernel/delayacct.c +++ b/kernel/delayacct.c @@ -127,7 +127,7 @@ int __delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk) */ t1 = tsk->sched_info.pcount; t2 = tsk->sched_info.run_delay; - t3 = tsk_seruntime(tsk); + t3 = tsk->se.sum_exec_runtime; d->cpu_count += t1; diff --git a/kernel/exit.c b/kernel/exit.c index 4bf015908a4..b8606f0f22e 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -120,7 +120,7 @@ static void __exit_signal(struct task_struct *tsk) sig->inblock += task_io_get_inblock(tsk); sig->oublock += task_io_get_oublock(tsk); task_io_accounting_add(&sig->ioac, &tsk->ioac); - sig->sum_sched_runtime += tsk_seruntime(tsk); + sig->sum_sched_runtime += tsk->se.sum_exec_runtime; sig = NULL; /* Marker for below. */ } diff --git a/kernel/fork.c b/kernel/fork.c index 3dd4e16007e..4b36858c0f4 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1199,7 +1199,7 @@ static struct task_struct *copy_process(unsigned long clone_flags, * parent's CPU). This avoids alot of nasty races. 
*/ p->cpus_allowed = current->cpus_allowed; - tsk_cpus_current(p); + p->rt.nr_cpus_allowed = current->rt.nr_cpus_allowed; if (unlikely(!cpu_isset(task_cpu(p), p->cpus_allowed) || !cpu_online(task_cpu(p)))) set_task_cpu(p, smp_processor_id()); diff --git a/kernel/kthread.c b/kernel/kthread.c index 67038f50dbc..eb8751aa041 100644 --- a/kernel/kthread.c +++ b/kernel/kthread.c @@ -16,7 +16,7 @@ #include #include -#define KTHREAD_NICE_LEVEL (0) +#define KTHREAD_NICE_LEVEL (-5) static DEFINE_SPINLOCK(kthread_create_lock); static LIST_HEAD(kthread_create_list); @@ -170,6 +170,7 @@ void kthread_bind(struct task_struct *k, unsigned int cpu) } set_task_cpu(k, cpu); k->cpus_allowed = cpumask_of_cpu(cpu); + k->rt.nr_cpus_allowed = 1; k->flags |= PF_THREAD_BOUND; } EXPORT_SYMBOL(kthread_bind); diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c index c9d160ecf49..e33a21cb940 100644 --- a/kernel/posix-cpu-timers.c +++ b/kernel/posix-cpu-timers.c @@ -249,7 +249,7 @@ void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times) do { times->utime = cputime_add(times->utime, t->utime); times->stime = cputime_add(times->stime, t->stime); - times->sum_exec_runtime += tsk_seruntime(t); + times->sum_exec_runtime += t->se.sum_exec_runtime; t = next_thread(t); } while (t != tsk); @@ -516,7 +516,7 @@ static void cleanup_timers(struct list_head *head, void posix_cpu_timers_exit(struct task_struct *tsk) { cleanup_timers(tsk->cpu_timers, - tsk->utime, tsk->stime, tsk_seruntime(tsk)); + tsk->utime, tsk->stime, tsk->se.sum_exec_runtime); } void posix_cpu_timers_exit_group(struct task_struct *tsk) @@ -526,7 +526,7 @@ void posix_cpu_timers_exit_group(struct task_struct *tsk) cleanup_timers(tsk->signal->cpu_timers, cputime_add(tsk->utime, sig->utime), cputime_add(tsk->stime, sig->stime), - tsk_seruntime(tsk) + sig->sum_sched_runtime); + tsk->se.sum_exec_runtime + sig->sum_sched_runtime); } static void clear_dead_task(struct k_itimer *timer, union cpu_time_count now) @@ -1017,7 +1017,7 @@ static void check_thread_timers(struct task_struct *tsk, struct cpu_timer_list *t = list_first_entry(timers, struct cpu_timer_list, entry); - if (!--maxfire || tsk_seruntime(tsk) < t->expires.sched) { + if (!--maxfire || tsk->se.sum_exec_runtime < t->expires.sched) { tsk->cputime_expires.sched_exp = t->expires.sched; break; } @@ -1033,7 +1033,7 @@ static void check_thread_timers(struct task_struct *tsk, unsigned long *soft = &sig->rlim[RLIMIT_RTTIME].rlim_cur; if (hard != RLIM_INFINITY && - tsk_rttimeout(tsk) > DIV_ROUND_UP(hard, USEC_PER_SEC/HZ)) { + tsk->rt.timeout > DIV_ROUND_UP(hard, USEC_PER_SEC/HZ)) { /* * At the hard limit, we just die. * No need to calculate anything else now. @@ -1041,7 +1041,7 @@ static void check_thread_timers(struct task_struct *tsk, __group_send_sig_info(SIGKILL, SEND_SIG_PRIV, tsk); return; } - if (tsk_rttimeout(tsk) > DIV_ROUND_UP(*soft, USEC_PER_SEC/HZ)) { + if (tsk->rt.timeout > DIV_ROUND_UP(*soft, USEC_PER_SEC/HZ)) { /* * At the soft limit, send a SIGXCPU every second. 
*/ @@ -1357,7 +1357,7 @@ static inline int fastpath_timer_check(struct task_struct *tsk) struct task_cputime task_sample = { .utime = tsk->utime, .stime = tsk->stime, - .sum_exec_runtime = tsk_seruntime(tsk) + .sum_exec_runtime = tsk->se.sum_exec_runtime }; if (task_cputime_expired(&task_sample, &tsk->cputime_expires)) diff --git a/kernel/sched.c b/kernel/sched.c index c1b728f1937..4c1852a8846 100644 --- a/kernel/sched.c +++ b/kernel/sched.c @@ -1,6 +1,3 @@ -#ifdef CONFIG_SCHED_BFS -#include "sched_bfs.c" -#else /* * kernel/sched.c * @@ -10801,4 +10798,3 @@ struct cgroup_subsys cpuacct_subsys = { .subsys_id = cpuacct_subsys_id, }; #endif /* CONFIG_CGROUP_CPUACCT */ -#endif /* CONFIG_SCHED_BFS */ diff --git a/kernel/sched_bfs.c b/kernel/sched_bfs.c deleted file mode 100644 index 31d1ac8a0e0..00000000000 --- a/kernel/sched_bfs.c +++ /dev/null @@ -1,6737 +0,0 @@ -/* - * kernel/sched_bfs.c, was sched.c - * - * Kernel scheduler and related syscalls - * - * Copyright (C) 1991-2002 Linus Torvalds - * - * 1996-12-23 Modified by Dave Grothe to fix bugs in semaphores and - * make semaphores SMP safe - * 1998-11-19 Implemented schedule_timeout() and related stuff - * by Andrea Arcangeli - * 2002-01-04 New ultra-scalable O(1) scheduler by Ingo Molnar: - * hybrid priority-list and round-robin design with - * an array-switch method of distributing timeslices - * and per-CPU runqueues. Cleanups and useful suggestions - * by Davide Libenzi, preemptible kernel bits by Robert Love. - * 2003-09-03 Interactivity tuning by Con Kolivas. - * 2004-04-02 Scheduler domains code by Nick Piggin - * 2007-04-15 Work begun on replacing all interactivity tuning with a - * fair scheduling design by Con Kolivas. - * 2007-05-05 Load balancing (smp-nice) and other improvements - * by Peter Williams - * 2007-05-06 Interactivity improvements to CFS by Mike Galbraith - * 2007-07-01 Group scheduling enhancements by Srivatsa Vaddagiri - * 2007-11-29 RT balancing improvements by Steven Rostedt, Gregory Haskins, - * Thomas Gleixner, Mike Kravetz - * now Brainfuck deadline scheduling policy by Con Kolivas deletes - * a whole lot of those previous things. - */ - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include -#include - -#define CREATE_TRACE_POINTS -#include - -#define rt_prio(prio) unlikely((prio) < MAX_RT_PRIO) -#define rt_task(p) rt_prio((p)->prio) -#define rt_queue(rq) rt_prio((rq)->rq_prio) -#define batch_task(p) (unlikely((p)->policy == SCHED_BATCH)) -#define is_rt_policy(policy) ((policy) == SCHED_FIFO || \ - (policy) == SCHED_RR) -#define has_rt_policy(p) unlikely(is_rt_policy((p)->policy)) -#define idleprio_task(p) unlikely((p)->policy == SCHED_IDLEPRIO) -#define iso_task(p) unlikely((p)->policy == SCHED_ISO) -#define iso_queue(rq) unlikely((rq)->rq_policy == SCHED_ISO) -#define ISO_PERIOD ((5 * HZ * grq.noc) + 1) - -/* - * Convert user-nice values [ -20 ... 0 ... 19 ] - * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ], - * and back. 
- */ -#define NICE_TO_PRIO(nice) (MAX_RT_PRIO + (nice) + 20) -#define PRIO_TO_NICE(prio) ((prio) - MAX_RT_PRIO - 20) -#define TASK_NICE(p) PRIO_TO_NICE((p)->static_prio) - -/* - * 'User priority' is the nice value converted to something we - * can work with better when scaling various scheduler parameters, - * it's a [ 0 ... 39 ] range. - */ -#define USER_PRIO(p) ((p)-MAX_RT_PRIO) -#define TASK_USER_PRIO(p) USER_PRIO((p)->static_prio) -#define MAX_USER_PRIO (USER_PRIO(MAX_PRIO)) -#define SCHED_PRIO(p) ((p)+MAX_RT_PRIO) - -/* - * Some helpers for converting to/from various scales. Use shifts to get - * approximate multiples of ten for less overhead. - */ -#define JIFFIES_TO_NS(TIME) ((TIME) * (1000000000 / HZ)) -#define JIFFY_NS (1000000000 / HZ) -#define HALF_JIFFY_NS (1000000000 / HZ / 2) -#define HALF_JIFFY_US (1000000 / HZ / 2) -#define MS_TO_NS(TIME) ((TIME) << 20) -#define MS_TO_US(TIME) ((TIME) << 10) -#define US_TO_NS(TIME) ((TIME) >> 10) -#define NS_TO_MS(TIME) ((TIME) >> 20) -#define NS_TO_US(TIME) ((TIME) >> 10) - -#define RESCHED_US (100) /* Reschedule if less than this many μs left */ - -#ifdef CONFIG_SMP -/* - * Divide a load by a sched group cpu_power : (load / sg->__cpu_power) - * Since cpu_power is a 'constant', we can use a reciprocal divide. - */ -static inline u32 sg_div_cpu_power(const struct sched_group *sg, u32 load) -{ - return reciprocal_divide(load, sg->reciprocal_cpu_power); -} - -/* - * Each time a sched group cpu_power is changed, - * we must compute its reciprocal value - */ -static inline void sg_inc_cpu_power(struct sched_group *sg, u32 val) -{ - sg->__cpu_power += val; - sg->reciprocal_cpu_power = reciprocal_value(sg->__cpu_power); -} -#endif - -/* - * This is the time all tasks within the same priority round robin. - * Value is in ms and set to a minimum of 6ms. Scales with number of cpus. - * Tunable via /proc interface. - */ -int rr_interval __read_mostly = 6; - -/* - * sched_iso_cpu - sysctl which determines the cpu percentage SCHED_ISO tasks - * are allowed to run five seconds as real time tasks. This is the total over - * all online cpus. - */ -int sched_iso_cpu __read_mostly = 70; - -/* - * The relative length of deadline for each priority(nice) level. - */ -static int prio_ratios[PRIO_RANGE] __read_mostly; - -/* - * The quota handed out to tasks of all priority levels when refilling their - * time_slice. - */ -static inline unsigned long timeslice(void) -{ - return MS_TO_US(rr_interval); -} - -/* - * The global runqueue data that all CPUs work off. Data is protected either - * by the global grq lock, or the discrete lock that precedes the data in this - * struct. - */ -struct global_rq { - spinlock_t lock; - unsigned long nr_running; - unsigned long nr_uninterruptible; - unsigned long long nr_switches; - struct list_head queue[PRIO_LIMIT]; - DECLARE_BITMAP(prio_bitmap, PRIO_LIMIT + 1); -#ifdef CONFIG_SMP - unsigned long qnr; /* queued not running */ - cpumask_t cpu_idle_map; - int idle_cpus; -#endif - int noc; /* num_online_cpus stored and updated when it changes */ - u64 niffies; /* Nanosecond jiffies */ - unsigned long last_jiffy; /* Last jiffy we updated niffies */ - - spinlock_t iso_lock; - int iso_ticks; - int iso_refractory; -}; - -/* There can be only one */ -static struct global_rq grq; - -/* - * This is the main, per-CPU runqueue data structure. - * This data should only be modified by the local cpu. 
- */ -struct rq { -#ifdef CONFIG_SMP -#ifdef CONFIG_NO_HZ - unsigned char in_nohz_recently; -#endif -#endif - - struct task_struct *curr, *idle; - struct mm_struct *prev_mm; - - /* Stored data about rq->curr to work outside grq lock */ - u64 rq_deadline; - unsigned int rq_policy; - int rq_time_slice; - u64 rq_last_ran; - int rq_prio; - int rq_running; /* There is a task running */ - - /* Accurate timekeeping data */ - u64 timekeep_clock; - unsigned long user_pc, nice_pc, irq_pc, softirq_pc, system_pc, - iowait_pc, idle_pc; - atomic_t nr_iowait; - -#ifdef CONFIG_SMP - int cpu; /* cpu of this runqueue */ - int online; - int scaling; /* This CPU is managed by a scaling CPU freq governor */ - struct task_struct *sticky_task; - - struct root_domain *rd; - struct sched_domain *sd; - unsigned long *cpu_locality; /* CPU relative cache distance */ -#ifdef CONFIG_SCHED_SMT - int (*siblings_idle)(unsigned long cpu); - /* See if all smt siblings are idle */ - cpumask_t smt_siblings; -#endif -#ifdef CONFIG_SCHED_MC - int (*cache_idle)(unsigned long cpu); - /* See if all cache siblings are idle */ - cpumask_t cache_siblings; -#endif - u64 last_niffy; /* Last time this RQ updated grq.niffies */ -#endif - u64 clock, old_clock, last_tick; - int dither; - -#ifdef CONFIG_SCHEDSTATS - - /* latency stats */ - struct sched_info rq_sched_info; - unsigned long long rq_cpu_time; - /* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */ - - /* sys_sched_yield() stats */ - unsigned int yld_count; - - /* schedule() stats */ - unsigned int sched_switch; - unsigned int sched_count; - unsigned int sched_goidle; - - /* try_to_wake_up() stats */ - unsigned int ttwu_count; - unsigned int ttwu_local; - - /* BKL stats */ - unsigned int bkl_count; -#endif -}; - -static DEFINE_PER_CPU(struct rq, runqueues) ____cacheline_aligned_in_smp; -static DEFINE_MUTEX(sched_hotcpu_mutex); - -#ifdef CONFIG_SMP - -/* - * We add the notion of a root-domain which will be used to define per-domain - * variables. Each exclusive cpuset essentially defines an island domain by - * fully partitioning the member cpus from any other cpuset. Whenever a new - * exclusive cpuset is created, we also create and attach a new root-domain - * object. - * - */ -struct root_domain { - atomic_t refcount; - cpumask_var_t span; - cpumask_var_t online; - - /* - * The "RT overload" flag: it gets set if a CPU has more than - * one runnable RT task. - */ - cpumask_var_t rto_mask; - atomic_t rto_count; -#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT) - /* - * Preferred wake up cpu nominated by sched_mc balance that will be - * used when most cpus are idle in the system indicating overall very - * low system utilisation. Triggered at POWERSAVINGS_BALANCE_WAKEUP(2) - */ - unsigned int sched_mc_preferred_wakeup_cpu; -#endif -}; - -/* - * By default the system creates a single root-domain with all cpus as - * members (mimicking the global state we have today). - */ -static struct root_domain def_root_domain; -#endif - -/* - * The domain tree (rq->sd) is protected by RCU's quiescent state transition. - * See detach_destroy_domains: synchronize_sched for details. - * - * The domain tree of any CPU may only be accessed from within - * preempt-disabled sections. - */ -#define for_each_domain(cpu, __sd) \ - for (__sd = rcu_dereference(cpu_rq(cpu)->sd); __sd; __sd = __sd->parent) - -static inline void update_rq_clock(struct rq *rq); - -/* - * Sanity check should sched_clock return bogus values. 
We make sure it does - * not appear to go backwards, and use jiffies to determine the maximum it - * could possibly have increased. At least 1us will have always passed so we - * use that when we don't trust the difference. - */ -static inline void niffy_diff(s64 *niff_diff, int jiff_diff) -{ - unsigned long max_diff; - - /* Round up to the nearest tick for maximum */ - max_diff = JIFFIES_TO_NS(jiff_diff + 1); - - if (unlikely(*niff_diff < 1 || *niff_diff > max_diff)) - *niff_diff = US_TO_NS(1); -} - -#ifdef CONFIG_SMP -#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu))) -#define this_rq() (&__get_cpu_var(runqueues)) -#define task_rq(p) cpu_rq(task_cpu(p)) -#define cpu_curr(cpu) (cpu_rq(cpu)->curr) -static inline int cpu_of(struct rq *rq) -{ - return rq->cpu; -} - -/* - * Niffies are a globally increasing nanosecond counter. Whenever a runqueue - * clock is updated with the grq.lock held, it is an opportunity to update the - * niffies value. Any CPU can update it by adding how much its clock has - * increased since it last updated niffies, minus any added niffies by other - * CPUs. - */ -static inline void update_clocks(struct rq *rq) -{ - s64 ndiff; - long jdiff; - - update_rq_clock(rq); - ndiff = rq->clock - rq->old_clock; - /* old_clock is only updated when we are updating niffies */ - rq->old_clock = rq->clock; - ndiff -= grq.niffies - rq->last_niffy; - jdiff = jiffies - grq.last_jiffy; - niffy_diff(&ndiff, jdiff); - grq.last_jiffy += jdiff; - grq.niffies += ndiff; - rq->last_niffy = grq.niffies; -} -#else /* CONFIG_SMP */ -static struct rq *uprq; -#define cpu_rq(cpu) (uprq) -#define this_rq() (uprq) -#define task_rq(p) (uprq) -#define cpu_curr(cpu) ((uprq)->curr) -static inline int cpu_of(struct rq *rq) -{ - return 0; -} - -static inline void update_clocks(struct rq *rq) -{ - s64 ndiff; - long jdiff; - - update_rq_clock(rq); - ndiff = rq->clock - rq->old_clock; - rq->old_clock = rq->clock; - jdiff = jiffies - grq.last_jiffy; - niffy_diff(&ndiff, jdiff); - grq.last_jiffy += jdiff; - grq.niffies += ndiff; -} -#endif - -#include "sched_stats.h" - -#ifndef prepare_arch_switch -# define prepare_arch_switch(next) do { } while (0) -#endif -#ifndef finish_arch_switch -# define finish_arch_switch(prev) do { } while (0) -#endif - -/* - * All common locking functions performed on grq.lock. rq->clock is local to - * the CPU accessing it so it can be modified just with interrupts disabled - * when we're not updating niffies. - * Looking up task_rq must be done under grq.lock to be safe. 
- */ -static inline void update_rq_clock(struct rq *rq) -{ - rq->clock = sched_clock_cpu(cpu_of(rq)); -} - -static inline int task_running(struct task_struct *p) -{ - return p->oncpu; -} - -static inline void grq_lock(void) - __acquires(grq.lock) -{ - spin_lock(&grq.lock); -} - -static inline void grq_unlock(void) - __releases(grq.lock) -{ - spin_unlock(&grq.lock); -} - -static inline void grq_lock_irq(void) - __acquires(grq.lock) -{ - spin_lock_irq(&grq.lock); -} - -static inline void time_lock_grq(struct rq *rq) - __acquires(grq.lock) -{ - grq_lock(); - update_clocks(rq); -} - -static inline void grq_unlock_irq(void) - __releases(grq.lock) -{ - spin_unlock_irq(&grq.lock); -} - -static inline void grq_lock_irqsave(unsigned long *flags) - __acquires(grq.lock) -{ - spin_lock_irqsave(&grq.lock, *flags); -} - -static inline void grq_unlock_irqrestore(unsigned long *flags) - __releases(grq.lock) -{ - spin_unlock_irqrestore(&grq.lock, *flags); -} - -static inline struct rq -*task_grq_lock(struct task_struct *p, unsigned long *flags) - __acquires(grq.lock) -{ - grq_lock_irqsave(flags); - return task_rq(p); -} - -static inline struct rq -*time_task_grq_lock(struct task_struct *p, unsigned long *flags) - __acquires(grq.lock) -{ - struct rq *rq = task_grq_lock(p, flags); - update_clocks(rq); - return rq; -} - -static inline struct rq *task_grq_lock_irq(struct task_struct *p) - __acquires(grq.lock) -{ - grq_lock_irq(); - return task_rq(p); -} - -static inline void time_task_grq_lock_irq(struct task_struct *p) - __acquires(grq.lock) -{ - struct rq *rq = task_grq_lock_irq(p); - update_clocks(rq); -} - -static inline void task_grq_unlock_irq(void) - __releases(grq.lock) -{ - grq_unlock_irq(); -} - -static inline void task_grq_unlock(unsigned long *flags) - __releases(grq.lock) -{ - grq_unlock_irqrestore(flags); -} - -/** - * grunqueue_is_locked - * - * Returns true if the global runqueue is locked. - * This interface allows printk to be called with the runqueue lock - * held and know whether or not it is OK to wake up the klogd. 
- */ -inline int grunqueue_is_locked(void) -{ - return spin_is_locked(&grq.lock); -} - -inline void grq_unlock_wait(void) - __releases(grq.lock) -{ - smp_mb(); /* spin-unlock-wait is not a full memory barrier */ - spin_unlock_wait(&grq.lock); -} - -static inline void time_grq_lock(struct rq *rq, unsigned long *flags) - __acquires(grq.lock) -{ - local_irq_save(*flags); - time_lock_grq(rq); -} - -static inline struct rq *__task_grq_lock(struct task_struct *p) - __acquires(grq.lock) -{ - grq_lock(); - return task_rq(p); -} - -static inline void __task_grq_unlock(void) - __releases(grq.lock) -{ - grq_unlock(); -} - -#ifndef __ARCH_WANT_UNLOCKED_CTXSW -static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next) -{ -} - -static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev) -{ -#ifdef CONFIG_DEBUG_SPINLOCK - /* this is a valid case when another task releases the spinlock */ - grq.lock.owner = current; -#endif - /* - * If we are tracking spinlock dependencies then we have to - * fix up the runqueue lock - which gets 'carried over' from - * prev into current: - */ - spin_acquire(&grq.lock.dep_map, 0, 0, _THIS_IP_); - - grq_unlock_irq(); -} - -#else /* __ARCH_WANT_UNLOCKED_CTXSW */ - -static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next) -{ -#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW - grq_unlock_irq(); -#else - grq_unlock(); -#endif -} - -static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev) -{ - smp_wmb(); -#ifndef __ARCH_WANT_INTERRUPTS_ON_CTXSW - local_irq_enable(); -#endif -} -#endif /* __ARCH_WANT_UNLOCKED_CTXSW */ - -static inline int deadline_before(u64 deadline, u64 time) -{ - return (deadline < time); -} - -static inline int deadline_after(u64 deadline, u64 time) -{ - return (deadline > time); -} - -/* - * A task that is queued but not running will be on the grq run list. - * A task that is not running or queued will not be on the grq run list. - * A task that is currently running will have ->oncpu set but not on the - * grq run list. - */ -static inline int task_queued(struct task_struct *p) -{ - return (!list_empty(&p->run_list)); -} - -/* - * Removing from the global runqueue. Enter with grq locked. - */ -static void dequeue_task(struct task_struct *p) -{ - list_del_init(&p->run_list); - if (list_empty(grq.queue + p->prio)) - __clear_bit(p->prio, grq.prio_bitmap); -} - -/* - * To determine if it's safe for a task of SCHED_IDLEPRIO to actually run as - * an idle task, we ensure none of the following conditions are met. - */ -static int idleprio_suitable(struct task_struct *p) -{ - return (!freezing(p) && !signal_pending(p) && - !(task_contributes_to_load(p)) && !(p->flags & (PF_EXITING))); -} - -/* - * To determine if a task of SCHED_ISO can run in pseudo-realtime, we check - * that the iso_refractory flag is not set. - */ -static int isoprio_suitable(void) -{ - return !grq.iso_refractory; -} - -/* - * Adding to the global runqueue. Enter with grq locked. 
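- *
- * The underlying structure is an array of lists plus a priority bitmap,
- * roughly (an illustrative sketch, not a quote of the declarations):
- *
- *  struct list_head queue[PRIO_LIMIT];  -- one list per priority
- *  DECLARE_BITMAP(prio_bitmap, PRIO_LIMIT + 1);
- *
- * so enqueueing is a list_add_tail() plus a __set_bit(), and the lookup
- * side can skip empty priority levels with find_next_bit(), as
- * earliest_deadline_task() does later in this file.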
- */
-static void enqueue_task(struct task_struct *p)
-{
- if (!rt_task(p)) {
- /* Check it hasn't gotten rt from PI */
- if ((idleprio_task(p) && idleprio_suitable(p)) ||
- (iso_task(p) && isoprio_suitable()))
- p->prio = p->normal_prio;
- else
- p->prio = NORMAL_PRIO;
- }
- __set_bit(p->prio, grq.prio_bitmap);
- list_add_tail(&p->run_list, grq.queue + p->prio);
- sched_info_queued(p);
-}
-
-/* Only the idle task does this, as a real time task */
-static inline void enqueue_task_head(struct task_struct *p)
-{
- __set_bit(p->prio, grq.prio_bitmap);
- list_add(&p->run_list, grq.queue + p->prio);
- sched_info_queued(p);
-}
-
-static inline void requeue_task(struct task_struct *p)
-{
- sched_info_queued(p);
-}
-
-/*
- * Returns the length of a task's deadline relative to the shortest possible
- * deadline, which is that of nice -20.
- */
-static inline int task_prio_ratio(struct task_struct *p)
-{
- return prio_ratios[TASK_USER_PRIO(p)];
-}
-
-/*
- * task_timeslice - all tasks of all priorities get the exact same timeslice
- * length. CPU distribution is handled by giving different deadlines to
- * tasks of different priorities. Use 128 as the base value for fast shifts.
- */
-static inline int task_timeslice(struct task_struct *p)
-{
- return (rr_interval * task_prio_ratio(p) / 128);
-}
-
-#ifdef CONFIG_SMP
-/*
- * qnr is the "queued but not running" count which is the total number of
- * tasks on the global runqueue list waiting for cpu time but not actually
- * currently running on a cpu.
- */
-static inline void inc_qnr(void)
-{
- grq.qnr++;
-}
-
-static inline void dec_qnr(void)
-{
- grq.qnr--;
-}
-
-static inline int queued_notrunning(void)
-{
- return grq.qnr;
-}
-
-/*
- * The cpu_idle_map stores a bitmap of all the CPUs currently idle to
- * allow easy lookup of whether any suitable idle CPUs are available.
- * It's cheaper to maintain a binary yes/no in the idle_cpus variable than
- * to do a full bitmask check when we are busy.
- */
-static inline void set_cpuidle_map(unsigned long cpu)
-{
- cpu_set(cpu, grq.cpu_idle_map);
- grq.idle_cpus = 1;
-}
-
-static inline void clear_cpuidle_map(unsigned long cpu)
-{
- cpu_clear(cpu, grq.cpu_idle_map);
- if (cpus_empty(grq.cpu_idle_map))
- grq.idle_cpus = 0;
-}
-
-static int suitable_idle_cpus(struct task_struct *p)
-{
- if (!grq.idle_cpus)
- return 0;
- return (cpus_intersects(p->cpus_allowed, grq.cpu_idle_map));
-}
-
-static void resched_task(struct task_struct *p);
-
-#define CPUIDLE_DIFF_THREAD (1)
-#define CPUIDLE_DIFF_CORE (2)
-#define CPUIDLE_CACHE_BUSY (4)
-#define CPUIDLE_DIFF_CPU (8)
-#define CPUIDLE_THREAD_BUSY (16)
-#define CPUIDLE_DIFF_NODE (32)
-
-/*
- * The best idle CPU is chosen according to the CPUIDLE ranking above where the
- * lowest value would give the most suitable CPU to schedule p onto next. The
- * order works out to be the following:
- *
- * Same core, idle or busy cache, idle threads
- * Other core, same cache, idle or busy cache, idle threads.
- * Same node, other CPU, idle cache, idle threads.
- * Same node, other CPU, busy cache, idle threads.
- * Same core, busy threads.
- * Other core, same cache, busy threads.
- * Same node, other CPU, busy threads.
- * Other node, other CPU, idle cache, idle threads.
- * Other node, other CPU, busy cache, idle threads.
- * Other node, other CPU, busy threads.
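- *
- * A worked example of the flag arithmetic (illustrative only): an idle
- * SMT sibling of the task's last CPU ranks CPUIDLE_DIFF_THREAD = 1, an
- * idle core sharing its cache ranks CPUIDLE_DIFF_CORE = 2, and a fully
- * idle CPU on another node ranks CPUIDLE_DIFF_NODE = 32, so the nearest
- * candidate wins. A ranking of 0, the task's own last CPU being idle all
- * round, cannot be beaten and ends the scan early.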
- */ -static void resched_best_mask(unsigned long best_cpu, struct rq *rq, cpumask_t *tmpmask) -{ - unsigned long cpu_tmp, best_ranking; - - best_ranking = ~0UL; - - for_each_cpu_mask(cpu_tmp, *tmpmask) { - unsigned long ranking; - struct rq *tmp_rq; - - ranking = 0; - tmp_rq = cpu_rq(cpu_tmp); - -#ifdef CONFIG_NUMA - if (rq->cpu_locality[cpu_tmp] > 3) - ranking |= CPUIDLE_DIFF_NODE; - else -#endif - if (rq->cpu_locality[cpu_tmp] > 2) - ranking |= CPUIDLE_DIFF_CPU; -#ifdef CONFIG_SCHED_MC - if (rq->cpu_locality[cpu_tmp] == 2) - ranking |= CPUIDLE_DIFF_CORE; - if (!(tmp_rq->cache_idle(cpu_tmp))) - ranking |= CPUIDLE_CACHE_BUSY; -#endif -#ifdef CONFIG_SCHED_SMT - if (rq->cpu_locality[cpu_tmp] == 1) - ranking |= CPUIDLE_DIFF_THREAD; - if (!(tmp_rq->siblings_idle(cpu_tmp))) - ranking |= CPUIDLE_THREAD_BUSY; -#endif - if (ranking < best_ranking) { - best_cpu = cpu_tmp; - if (ranking == 0) - break; - best_ranking = ranking; - } - } - - resched_task(cpu_rq(best_cpu)->curr); -} - -static void resched_best_idle(struct task_struct *p) -{ - cpumask_t tmpmask; - - cpus_and(tmpmask, p->cpus_allowed, grq.cpu_idle_map); - resched_best_mask(task_cpu(p), task_rq(p), &tmpmask); -} - -static inline void resched_suitable_idle(struct task_struct *p) -{ - if (suitable_idle_cpus(p)) - resched_best_idle(p); -} -/* - * Flags to tell us whether this CPU is running a CPU frequency governor that - * has slowed its speed or not. No locking required as the very rare wrongly - * read value would be harmless. - */ -void cpu_scaling(int cpu) -{ - cpu_rq(cpu)->scaling = 1; -} - -void cpu_nonscaling(int cpu) -{ - cpu_rq(cpu)->scaling = 0; -} - -static inline int scaling_rq(struct rq *rq) -{ - return rq->scaling; -} -#else /* CONFIG_SMP */ -static inline void inc_qnr(void) -{ -} - -static inline void dec_qnr(void) -{ -} - -static inline int queued_notrunning(void) -{ - return grq.nr_running; -} - -static inline void set_cpuidle_map(unsigned long cpu) -{ -} - -static inline void clear_cpuidle_map(unsigned long cpu) -{ -} - -static inline int suitable_idle_cpus(struct task_struct *p) -{ - return uprq->curr == uprq->idle; -} - -static inline void resched_suitable_idle(struct task_struct *p) -{ -} - -void cpu_scaling(int __unused) -{ -} - -void cpu_nonscaling(int __unused) -{ -} - -/* - * Although CPUs can scale in UP, there is nowhere else for tasks to go so this - * always returns 0. - */ -static inline int scaling_rq(struct rq *rq) -{ - return 0; -} -#endif /* CONFIG_SMP */ -EXPORT_SYMBOL_GPL(cpu_scaling); -EXPORT_SYMBOL_GPL(cpu_nonscaling); - -/* - * activate_idle_task - move idle task to the _front_ of runqueue. - */ -static inline void activate_idle_task(struct task_struct *p) -{ - enqueue_task_head(p); - grq.nr_running++; - inc_qnr(); -} - -static inline int normal_prio(struct task_struct *p) -{ - if (has_rt_policy(p)) - return MAX_RT_PRIO - 1 - p->rt_priority; - if (idleprio_task(p)) - return IDLE_PRIO; - if (iso_task(p)) - return ISO_PRIO; - return NORMAL_PRIO; -} - -/* - * Calculate the current priority, i.e. the priority - * taken into account by the scheduler. This value might - * be boosted by RT tasks as it will be RT if the task got - * RT-boosted. If not then it returns p->normal_prio. - */ -static int effective_prio(struct task_struct *p) -{ - p->normal_prio = normal_prio(p); - /* - * If we are RT tasks or we were boosted to RT priority, - * keep the priority unchanged. 
Otherwise, update priority
- * to the normal priority:
- */
- if (!rt_prio(p->prio))
- return p->normal_prio;
- return p->prio;
-}
-
-/*
- * activate_task - move a task to the runqueue. Enter with grq locked.
- */
-static void activate_task(struct task_struct *p, struct rq *rq)
-{
- update_clocks(rq);
-
- /*
- * Sleep time is in units of nanosecs, so shift by 20 to get a
- * milliseconds-range estimation of the amount of time that the task
- * spent sleeping:
- */
- if (unlikely(prof_on == SLEEP_PROFILING)) {
- if (p->state == TASK_UNINTERRUPTIBLE)
- profile_hits(SLEEP_PROFILING, (void *)get_wchan(p),
- (rq->clock - p->last_ran) >> 20);
- }
-
- p->prio = effective_prio(p);
- if (task_contributes_to_load(p))
- grq.nr_uninterruptible--;
- enqueue_task(p);
- grq.nr_running++;
- inc_qnr();
-}
-
-/*
- * deactivate_task - If it's running, it's not on the grq and we can just
- * decrement the nr_running. Enter with grq locked.
- */
-static inline void deactivate_task(struct task_struct *p)
-{
- if (task_contributes_to_load(p))
- grq.nr_uninterruptible++;
- grq.nr_running--;
-}
-
-#ifdef CONFIG_SMP
-void set_task_cpu(struct task_struct *p, unsigned int cpu)
-{
- trace_sched_migrate_task(p, cpu);
- perf_swcounter_event(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 1, NULL, 0);
- /*
- * After ->cpu is set up to a new value, task_grq_lock(p, ...) can be
- * successfully executed on another CPU. We must ensure that updates of
- * per-task data have been completed by this moment.
- */
- smp_wmb();
- task_thread_info(p)->cpu = cpu;
-}
-
-static inline void clear_sticky(struct task_struct *p)
-{
- p->sticky = 0;
-}
-
-static inline int task_sticky(struct task_struct *p)
-{
- return p->sticky;
-}
-
-/* Reschedule the best idle CPU that is not this one. */
-static void
-resched_closest_idle(struct rq *rq, unsigned long cpu, struct task_struct *p)
-{
- cpumask_t tmpmask;
-
- cpus_and(tmpmask, p->cpus_allowed, grq.cpu_idle_map);
- cpu_clear(cpu, tmpmask);
- if (cpus_empty(tmpmask))
- return;
- resched_best_mask(cpu, rq, &tmpmask);
-}
-
-/*
- * We set the sticky flag on a task that is descheduled involuntarily meaning
- * it is awaiting further CPU time. If the last sticky task is still sticky
- * but unlucky enough to not be the next task scheduled, we unstick it and try
- * to find it an idle CPU. Realtime tasks do not stick to minimise their
- * latency at all times.
- */
-static inline void
-swap_sticky(struct rq *rq, unsigned long cpu, struct task_struct *p)
-{
- if (rq->sticky_task) {
- if (rq->sticky_task == p) {
- p->sticky = 1;
- return;
- }
- if (rq->sticky_task->sticky) {
- rq->sticky_task->sticky = 0;
- resched_closest_idle(rq, cpu, rq->sticky_task);
- }
- }
- if (!rt_task(p)) {
- p->sticky = 1;
- rq->sticky_task = p;
- } else {
- resched_closest_idle(rq, cpu, p);
- rq->sticky_task = NULL;
- }
-}
-
-static inline void unstick_task(struct rq *rq, struct task_struct *p)
-{
- rq->sticky_task = NULL;
- clear_sticky(p);
-}
-#else
-static inline void clear_sticky(struct task_struct *p)
-{
-}
-
-static inline int task_sticky(struct task_struct *p)
-{
- return 0;
-}
-
-static inline void
-swap_sticky(struct rq *rq, unsigned long cpu, struct task_struct *p)
-{
-}
-
-static inline void unstick_task(struct rq *rq, struct task_struct *p)
-{
-}
-#endif
-
-/*
- * Move a task off the global queue and take it to a cpu where it will
- * become the running task.
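- *
- * take_task() below pairs with return_task(): taking a task off the grq
- * does a dec_qnr(), and a descheduling task that is still runnable goes
- * back with inc_qnr() plus enqueue_task(), so grq.qnr always counts only
- * tasks that are queued but not running on any CPU. The pairing as used
- * in schedule() later in this file looks like (sketch only):
- *
- *  next = earliest_deadline_task(rq, idle);  -- take_task() inside
- *  ...
- *  return_task(prev, deactivate);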
- */ -static inline void take_task(struct rq *rq, struct task_struct *p) -{ - set_task_cpu(p, cpu_of(rq)); - dequeue_task(p); - clear_sticky(p); - dec_qnr(); -} - -/* - * Returns a descheduling task to the grq runqueue unless it is being - * deactivated. - */ -static inline void return_task(struct task_struct *p, int deactivate) -{ - if (deactivate) - deactivate_task(p); - else { - inc_qnr(); - enqueue_task(p); - } -} - -/* - * resched_task - mark a task 'to be rescheduled now'. - * - * On UP this means the setting of the need_resched flag, on SMP it - * might also involve a cross-CPU call to trigger the scheduler on - * the target CPU. - */ -#ifdef CONFIG_SMP - -#ifndef tsk_is_polling -#define tsk_is_polling(t) test_tsk_thread_flag(t, TIF_POLLING_NRFLAG) -#endif - -static void resched_task(struct task_struct *p) -{ - int cpu; - - assert_spin_locked(&grq.lock); - - if (unlikely(test_tsk_thread_flag(p, TIF_NEED_RESCHED))) - return; - - set_tsk_thread_flag(p, TIF_NEED_RESCHED); - - cpu = task_cpu(p); - if (cpu == smp_processor_id()) - return; - - /* NEED_RESCHED must be visible before we test polling */ - smp_mb(); - if (!tsk_is_polling(p)) - smp_send_reschedule(cpu); -} - -#else -static inline void resched_task(struct task_struct *p) -{ - assert_spin_locked(&grq.lock); - set_tsk_need_resched(p); -} -#endif - -/** - * task_curr - is this task currently executing on a CPU? - * @p: the task in question. - */ -inline int task_curr(const struct task_struct *p) -{ - return cpu_curr(task_cpu(p)) == p; -} - -#ifdef CONFIG_SMP -struct migration_req { - struct list_head list; - - struct task_struct *task; - int dest_cpu; - - struct completion done; -}; - -/* - * wait_task_context_switch - wait for a thread to complete at least one - * context switch. - * - * @p must not be current. - */ -void wait_task_context_switch(struct task_struct *p) -{ - unsigned long nvcsw, nivcsw, flags; - int running; - struct rq *rq; - - nvcsw = p->nvcsw; - nivcsw = p->nivcsw; - for (;;) { - /* - * The runqueue is assigned before the actual context - * switch. We need to take the runqueue lock. - * - * We could check initially without the lock but it is - * very likely that we need to take the lock in every - * iteration. - */ - rq = task_grq_lock(p, &flags); - running = task_running(p); - task_grq_unlock(&flags); - - if (likely(!running)) - break; - /* - * The switch count is incremented before the actual - * context switch. We thus wait for two switches to be - * sure at least one completed. - */ - if ((p->nvcsw - nvcsw) > 1) - break; - if ((p->nivcsw - nivcsw) > 1) - break; - - cpu_relax(); - } -} - -/* - * wait_task_inactive - wait for a thread to unschedule. - * - * If @match_state is nonzero, it's the @p->state value just checked and - * not expected to change. If it changes, i.e. @p might have woken up, - * then return zero. When we succeed in waiting for @p to be off its CPU, - * we return a positive number (its total switch count). If a second call - * a short while later returns the same number, the caller can be sure that - * @p has remained unscheduled the whole time. - * - * The caller must ensure that the task *will* unschedule sometime soon, - * else this function might spin for a *long* time. This function can't - * be called with interrupts off, or it may introduce deadlock with - * smp_call_function() if an IPI is sent by the same process we are - * waiting to become inactive. 
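- *
- * Typical usage is a double check (an illustrative sketch, not from the
- * original patch):
- *
- *  unsigned long ncsw = wait_task_inactive(p, TASK_TRACED);
- *  ... do something slow ...
- *  if (ncsw && wait_task_inactive(p, TASK_TRACED) == ncsw)
- *      -- p stayed unscheduled between the two calls.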
- */
-unsigned long wait_task_inactive(struct task_struct *p, long match_state)
-{
- unsigned long flags;
- int running, on_rq;
- unsigned long ncsw;
- struct rq *rq;
-
- for (;;) {
- /*
- * We do the initial early heuristics without holding
- * any task-queue locks at all. We'll only try to get
- * the runqueue lock when things look like they will
- * work out! In the unlikely event rq is dereferenced
- * since we're lockless, grab it again.
- */
-#ifdef CONFIG_SMP
-retry_rq:
- rq = task_rq(p);
- if (unlikely(!rq))
- goto retry_rq;
-#else /* CONFIG_SMP */
- rq = task_rq(p);
-#endif
- /*
- * If the task is actively running on another CPU
- * still, just relax and busy-wait without holding
- * any locks.
- *
- * NOTE! Since we don't hold any locks, it's not
- * even sure that "rq" stays as the right runqueue!
- * But we don't care, since this will return false
- * if the runqueue has changed and p is actually now
- * running somewhere else!
- */
- while (task_running(p) && p == rq->curr) {
- if (match_state && unlikely(p->state != match_state))
- return 0;
- cpu_relax();
- }
-
- /*
- * Ok, time to look more closely! We need the grq
- * lock now, to be *sure*. If we're wrong, we'll
- * just go back and repeat.
- */
- rq = task_grq_lock(p, &flags);
- trace_sched_wait_task(rq, p);
- running = task_running(p);
- on_rq = task_queued(p);
- ncsw = 0;
- if (!match_state || p->state == match_state)
- ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
- task_grq_unlock(&flags);
-
- /*
- * If it changed from the expected state, bail out now.
- */
- if (unlikely(!ncsw))
- break;
-
- /*
- * Was it really running after all now that we
- * checked with the proper locks actually held?
- *
- * Oops. Go back and try again..
- */
- if (unlikely(running)) {
- cpu_relax();
- continue;
- }
-
- /*
- * It's not enough that it's not actively running,
- * it must be off the runqueue _entirely_, and not
- * preempted!
- *
- * So if it was still runnable (but just not actively
- * running right now), it's preempted, and we should
- * yield - it could be a while.
- */
- if (unlikely(on_rq)) {
- schedule_timeout_uninterruptible(1);
- continue;
- }
-
- /*
- * Ahh, all good. It wasn't running, and it wasn't
- * runnable, which means that it will never become
- * running in the future either. We're all done!
- */
- break;
- }
-
- return ncsw;
-}
-
-/***
- * kick_process - kick a running thread to enter/exit the kernel
- * @p: the to-be-kicked thread
- *
- * Cause a process which is running on another CPU to enter
- * kernel-mode, without any delay. (to get signals handled.)
- *
- * NOTE: this function doesn't have to take the runqueue lock,
- * because all it wants to ensure is that the remote task enters
- * the kernel. If the IPI races and the task has been migrated
- * to another CPU then no harm is done and the purpose has been
- * achieved as well.
- */
-void kick_process(struct task_struct *p)
-{
- int cpu;
-
- preempt_disable();
- cpu = task_cpu(p);
- if ((cpu != smp_processor_id()) && task_curr(p))
- smp_send_reschedule(cpu);
- preempt_enable();
-}
-EXPORT_SYMBOL_GPL(kick_process);
-#endif
-
-#define rq_idle(rq) ((rq)->rq_prio == PRIO_LIMIT)
-
-/*
- * RT tasks preempt purely on priority. SCHED_NORMAL tasks preempt on the
- * basis of earlier deadlines. SCHED_IDLEPRIO tasks don't preempt anything
- * else or one another; they cooperatively multitask. An idle rq scores as
- * prio PRIO_LIMIT so it is always preempted.
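- *
- * Worked example (illustrative): a woken SCHED_ISO task, whose ISO_PRIO is
- * numerically below NORMAL_PRIO, preempts any CPU running SCHED_NORMAL
- * outright in can_preempt() below, while two SCHED_NORMAL tasks of equal
- * prio fall through to the deadline test and the earlier deadline wins.
- * try_preempt() pairs with this by picking the softest target first: the
- * rq with the numerically highest rq_prio, ties broken by the latest
- * rq_deadline.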
- */ -static inline int -can_preempt(struct task_struct *p, int prio, u64 deadline, - unsigned int policy) -{ - /* Better static priority RT task or better policy preemption */ - if (p->prio < prio) - return 1; - if (p->prio > prio) - return 0; - /* SCHED_NORMAL, BATCH and ISO will preempt based on deadline */ - if (!deadline_before(p->deadline, deadline)) - return 0; - return 1; -} -#ifdef CONFIG_SMP -#ifdef CONFIG_HOTPLUG_CPU -/* - * Check to see if there is a task that is affined only to offline CPUs but - * still wants runtime. This happens to kernel threads during suspend/halt and - * disabling of CPUs. - */ -static inline int online_cpus(struct task_struct *p) -{ - return (likely(cpus_intersects(cpu_online_map, p->cpus_allowed))); -} -#else /* CONFIG_HOTPLUG_CPU */ -/* All available CPUs are always online without hotplug. */ -static inline int online_cpus(struct task_struct *p) -{ - return 1; -} -#endif - - -/* - * Check to see if p can run on cpu, and if not, whether there are any online - * CPUs it can run on instead. - */ -static inline int needs_other_cpu(struct task_struct *p, int cpu) -{ - if (unlikely(!cpu_isset(cpu, p->cpus_allowed))) - return 1; - return 0; -} - -/* - * latest_deadline and highest_prio_rq are initialised only to silence the - * compiler. When all else is equal, still prefer this_rq. - */ -static void try_preempt(struct task_struct *p, struct rq *this_rq) -{ - struct rq *highest_prio_rq = this_rq; - u64 latest_deadline; - unsigned long cpu; - int highest_prio; - cpumask_t tmp; - - /* - * We clear the sticky flag here because for a task to have called - * try_preempt with the sticky flag enabled means some complicated - * re-scheduling has occurred and we should ignore the sticky flag. - */ - clear_sticky(p); - - if (suitable_idle_cpus(p)) { - resched_best_idle(p); - return; - } - - /* IDLEPRIO tasks never preempt anything */ - if (p->policy == SCHED_IDLEPRIO) - return; - - if (likely(online_cpus(p))) - cpus_and(tmp, cpu_online_map, p->cpus_allowed); - else - return; - - latest_deadline = 0; - highest_prio = -1; - - for_each_cpu_mask(cpu, tmp) { - struct rq *rq; - int rq_prio; - - rq = cpu_rq(cpu); - rq_prio = rq->rq_prio; - if (rq_prio < highest_prio) - continue; - - if (rq_prio > highest_prio || (rq_prio == highest_prio && - deadline_after(rq->rq_deadline, latest_deadline))) { - latest_deadline = rq->rq_deadline; - highest_prio = rq_prio; - highest_prio_rq = rq; - } - } - - if (!can_preempt(p, highest_prio, highest_prio_rq->rq_deadline, - highest_prio_rq->rq_policy)) - return; - - resched_task(highest_prio_rq->curr); -} -#else /* CONFIG_SMP */ -static inline int needs_other_cpu(struct task_struct *p, int cpu) -{ - return 0; -} - -static void try_preempt(struct task_struct *p, struct rq *this_rq) -{ - if (p->policy == SCHED_IDLEPRIO) - return; - if (can_preempt(p, uprq->rq_prio, uprq->rq_deadline, - uprq->rq_policy)) - resched_task(uprq->curr); -} -#endif /* CONFIG_SMP */ - -/** - * task_oncpu_function_call - call a function on the cpu on which a task runs - * @p: the task to evaluate - * @func: the function to be called - * @info: the function call argument - * - * Calls the function @func when the task is currently running. 
This might
- * be on the current CPU, which just calls the function directly.
- */
-void task_oncpu_function_call(struct task_struct *p,
- void (*func) (void *info), void *info)
-{
- int cpu;
-
- preempt_disable();
- cpu = task_cpu(p);
- if (task_curr(p))
- smp_call_function_single(cpu, func, info, 1);
- preempt_enable();
-}
-
-/***
- * try_to_wake_up - wake up a thread
- * @p: the to-be-woken-up thread
- * @state: the mask of task states that can be woken
- * @sync: do a synchronous wakeup?
- *
- * Put it on the run-queue if it's not already there. The "current"
- * thread is always on the run-queue (except when the actual
- * re-schedule is in progress), and as such you're allowed to do
- * the simpler "current->state = TASK_RUNNING" to mark yourself
- * runnable without the overhead of this.
- *
- * returns failure only if the task is already active.
- */
-static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync)
-{
- unsigned long flags;
- int success = 0;
- struct rq *rq;
-
- get_cpu();
-
- /* This barrier is undocumented, probably for p->state? Damn. */
- smp_wmb();
-
- /*
- * No need to do time_lock_grq as we only need to update the rq clock
- * if we activate the task
- */
- rq = task_grq_lock(p, &flags);
-
- /* state is a volatile long, why? I don't know */
- if (!((unsigned int)p->state & state))
- goto out_unlock;
-
- if (task_queued(p) || task_running(p))
- goto out_running;
-
- activate_task(p, rq);
- /*
- * Sync wakeups (i.e. those types of wakeups where the waker
- * has indicated that it will leave the CPU in short order)
- * don't trigger a preemption if there are no idle cpus,
- * instead waiting for current to deschedule.
- */
- if (!sync || suitable_idle_cpus(p))
- try_preempt(p, rq);
- success = 1;
-
-out_running:
- trace_sched_wakeup(rq, p, success);
- p->state = TASK_RUNNING;
-out_unlock:
- task_grq_unlock(&flags);
- put_cpu();
-
- return success;
-}
-
-/**
- * wake_up_process - Wake up a specific process
- * @p: The process to be woken up.
- *
- * Attempt to wake up the nominated process and move it to the set of runnable
- * processes. Returns 1 if the process was woken up, 0 if it was already
- * running.
- *
- * It may be assumed that this function implies a write memory barrier before
- * changing the task state if and only if any tasks are woken up.
- */
-int wake_up_process(struct task_struct *p)
-{
- return try_to_wake_up(p, TASK_ALL, 0);
-}
-EXPORT_SYMBOL(wake_up_process);
-
-int wake_up_state(struct task_struct *p, unsigned int state)
-{
- return try_to_wake_up(p, state, 0);
-}
-
-static void time_slice_expired(struct task_struct *p);
-
-/*
- * Perform scheduler related setup for a newly forked process p.
- * p is forked by current.
- */
-void sched_fork(struct task_struct *p, int clone_flags)
-{
- struct task_struct *curr;
- int cpu = get_cpu();
- struct rq *rq;
-
-#ifdef CONFIG_PREEMPT_NOTIFIERS
- INIT_HLIST_HEAD(&p->preempt_notifiers);
-#endif
- /*
- * We mark the process as running here, but have not actually
- * inserted it onto the runqueue yet. This guarantees that
- * nobody will actually run it, and a signal or other external
- * event cannot wake it up and insert it on the runqueue either.
- */
- p->state = TASK_RUNNING;
- set_task_cpu(p, cpu);
-
- /* Should be reset in fork.c but done here for ease of bfs patching */
- p->sched_time = p->stime_pc = p->utime_pc = 0;
-
- curr = current;
- /*
- * Make sure we do not leak PI boosting priority to the child:
- */
- p->prio = curr->normal_prio;
-
- INIT_LIST_HEAD(&p->run_list);
-#if defined(CONFIG_SCHEDSTATS) || defined(CONFIG_TASK_DELAY_ACCT)
- if (unlikely(sched_info_on()))
- memset(&p->sched_info, 0, sizeof(p->sched_info));
-#endif
-
- p->oncpu = 0;
- clear_sticky(p);
-
-#ifdef CONFIG_PREEMPT
- /* Want to start with kernel preemption disabled. */
- task_thread_info(p)->preempt_count = 1;
-#endif
- if (unlikely(p->policy == SCHED_FIFO))
- goto out;
- /*
- * Share the timeslice between parent and child, thus the
- * total amount of pending timeslices in the system doesn't change,
- * resulting in more scheduling fairness. If it's negative, it won't
- * matter since that's the same as being 0. current's time_slice is
- * actually in rq_time_slice when it's running, as is its last_ran
- * value. rq->rq_deadline is only modified within schedule() so it
- * is always equal to current->deadline.
- */
- rq = task_grq_lock_irq(curr);
- if (likely(rq->rq_time_slice >= RESCHED_US * 2)) {
- rq->rq_time_slice /= 2;
- p->time_slice = rq->rq_time_slice;
- } else {
- /*
- * Forking task has run out of timeslice. Reschedule it and
- * start its child with a new time slice and deadline. The
- * child will end up running first because its deadline will
- * be slightly earlier.
- */
- rq->rq_time_slice = 0;
- set_tsk_need_resched(curr);
- time_slice_expired(p);
- }
- p->last_ran = rq->rq_last_ran;
- task_grq_unlock_irq();
-out:
- put_cpu();
-}
-
-/*
- * wake_up_new_task - wake up a newly created task for the first time.
- *
- * This function will do some initial scheduler statistics housekeeping
- * that must be done for every newly created context, then puts the task
- * on the runqueue and wakes it.
- */
-void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
-{
- struct task_struct *parent;
- unsigned long flags;
- struct rq *rq;
-
- rq = task_grq_lock(p, &flags);
- p->state = TASK_RUNNING;
- parent = p->parent;
- /* Unnecessary but small chance that the parent changed CPU */
- set_task_cpu(p, task_cpu(parent));
- activate_task(p, rq);
- trace_sched_wakeup_new(rq, p, 1);
- if (!(clone_flags & CLONE_VM) && rq->curr == parent &&
- !suitable_idle_cpus(p)) {
- /*
- * The VM isn't cloned, so we're in a good position to
- * do child-runs-first in anticipation of an exec. This
- * usually avoids a lot of COW overhead.
- */
- resched_task(parent);
- } else
- try_preempt(p, rq);
- task_grq_unlock(&flags);
-}
-
-/* Nothing to do here */
-void sched_exit(struct task_struct *p)
-{
-}
-
-#ifdef CONFIG_PREEMPT_NOTIFIERS
-
-/**
- * preempt_notifier_register - tell me when current is being preempted & rescheduled
- * @notifier: notifier struct to register
- */
-void preempt_notifier_register(struct preempt_notifier *notifier)
-{
- hlist_add_head(&notifier->link, &current->preempt_notifiers);
-}
-EXPORT_SYMBOL_GPL(preempt_notifier_register);
-
-/**
- * preempt_notifier_unregister - no longer interested in preemption notifications
- * @notifier: notifier struct to unregister
- *
- * This is safe to call from within a preemption notifier.
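- *
- * Usage sketch (illustrative; my_ops and the callbacks are made-up names):
- *
- *  static struct preempt_ops my_ops = {
- *      .sched_in  = my_sched_in,
- *      .sched_out = my_sched_out,
- *  };
- *  struct preempt_notifier pn;
- *
- *  preempt_notifier_init(&pn, &my_ops);
- *  preempt_notifier_register(&pn);  -- current gets callbacks from now on
- *  ...
- *  preempt_notifier_unregister(&pn);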
- */
-void preempt_notifier_unregister(struct preempt_notifier *notifier)
-{
- hlist_del(&notifier->link);
-}
-EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
-
-static void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-{
- struct preempt_notifier *notifier;
- struct hlist_node *node;
-
- hlist_for_each_entry(notifier, node, &curr->preempt_notifiers, link)
- notifier->ops->sched_in(notifier, raw_smp_processor_id());
-}
-
-static void
-fire_sched_out_preempt_notifiers(struct task_struct *curr,
- struct task_struct *next)
-{
- struct preempt_notifier *notifier;
- struct hlist_node *node;
-
- hlist_for_each_entry(notifier, node, &curr->preempt_notifiers, link)
- notifier->ops->sched_out(notifier, next);
-}
-
-#else /* !CONFIG_PREEMPT_NOTIFIERS */
-
-static void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-{
-}
-
-static void
-fire_sched_out_preempt_notifiers(struct task_struct *curr,
- struct task_struct *next)
-{
-}
-
-#endif /* CONFIG_PREEMPT_NOTIFIERS */
-
-/**
- * prepare_task_switch - prepare to switch tasks
- * @rq: the runqueue preparing to switch
- * @next: the task we are going to switch to.
- *
- * This is called with the rq lock held and interrupts off. It must
- * be paired with a subsequent finish_task_switch after the context
- * switch.
- *
- * prepare_task_switch sets up locking and calls architecture specific
- * hooks.
- */
-static inline void
-prepare_task_switch(struct rq *rq, struct task_struct *prev,
- struct task_struct *next)
-{
- fire_sched_out_preempt_notifiers(prev, next);
- prepare_lock_switch(rq, next);
- prepare_arch_switch(next);
-}
-
-/**
- * finish_task_switch - clean up after a task-switch
- * @rq: runqueue associated with task-switch
- * @prev: the thread we just switched away from.
- *
- * finish_task_switch must be called after the context switch, paired
- * with a prepare_task_switch call before the context switch.
- * finish_task_switch will reconcile locking set up by prepare_task_switch,
- * and do any other architecture-specific cleanup actions.
- *
- * Note that we may have delayed dropping an mm in context_switch(). If
- * so, we finish that here outside of the runqueue lock. (Doing it
- * with the lock held can cause deadlocks; see schedule() for
- * details.)
- */
-static inline void finish_task_switch(struct rq *rq, struct task_struct *prev)
- __releases(grq.lock)
-{
- struct mm_struct *mm = rq->prev_mm;
- long prev_state;
-
- rq->prev_mm = NULL;
-
- /*
- * A task struct has one reference for the use as "current".
- * If a task dies, then it sets TASK_DEAD in tsk->state and calls
- * schedule one last time. The schedule call will never return, and
- * the scheduled task must drop that reference.
- * The test for TASK_DEAD must occur while the runqueue locks are
- * still held, otherwise prev could be scheduled on another cpu, die
- * there before we look at prev->state, and then the reference would
- * be dropped twice.
- * Manfred Spraul
- */
- prev_state = prev->state;
- finish_arch_switch(prev);
- perf_counter_task_sched_in(current, cpu_of(rq));
- finish_lock_switch(rq, prev);
-
- fire_sched_in_preempt_notifiers(current);
- if (mm)
- mmdrop(mm);
- if (unlikely(prev_state == TASK_DEAD)) {
- /*
- * Remove function-return probe instances associated with this
- * task and put them back on the free list.
- */
- kprobe_flush_task(prev);
- put_task_struct(prev);
- }
-}
-
-/**
- * schedule_tail - first thing a freshly forked thread must call.
- * @prev: the thread we just switched away from.
- */ -asmlinkage void schedule_tail(struct task_struct *prev) - __releases(grq.lock) -{ - struct rq *rq = this_rq(); - - finish_task_switch(rq, prev); -#ifdef __ARCH_WANT_UNLOCKED_CTXSW - /* In this case, finish_task_switch does not reenable preemption */ - preempt_enable(); -#endif - if (current->set_child_tid) - put_user(current->pid, current->set_child_tid); -} - -/* - * context_switch - switch to the new MM and the new - * thread's register state. - */ -static inline void -context_switch(struct rq *rq, struct task_struct *prev, - struct task_struct *next) -{ - struct mm_struct *mm, *oldmm; - - prepare_task_switch(rq, prev, next); - trace_sched_switch(rq, prev, next); - mm = next->mm; - oldmm = prev->active_mm; - /* - * For paravirt, this is coupled with an exit in switch_to to - * combine the page table reload and the switch backend into - * one hypercall. - */ - arch_start_context_switch(prev); - - if (!mm) { - next->active_mm = oldmm; - atomic_inc(&oldmm->mm_count); - enter_lazy_tlb(oldmm, next); - } else - switch_mm(oldmm, mm, next); - - if (!prev->mm) { - prev->active_mm = NULL; - rq->prev_mm = oldmm; - } - /* - * Since the runqueue lock will be released by the next - * task (which is an invalid locking op but in the case - * of the scheduler it's an obvious special-case), so we - * do an early lockdep release here: - */ -#ifndef __ARCH_WANT_UNLOCKED_CTXSW - spin_release(&grq.lock.dep_map, 1, _THIS_IP_); -#endif - - /* Here we just switch the register state and the stack. */ - switch_to(prev, next, prev); - - barrier(); - /* - * this_rq must be evaluated again because prev may have moved - * CPUs since it called schedule(), thus the 'rq' on its stack - * frame will be invalid. - */ - finish_task_switch(this_rq(), prev); -} - -/* - * nr_running, nr_uninterruptible and nr_context_switches: - * - * externally visible scheduler statistics: current number of runnable - * threads, current number of uninterruptible-sleeping threads, total - * number of context switches performed since bootup. All are measured - * without grabbing the grq lock but the occasional inaccurate result - * doesn't matter so long as it's positive. - */ -unsigned long nr_running(void) -{ - long nr = grq.nr_running; - - if (unlikely(nr < 0)) - nr = 0; - return (unsigned long)nr; -} - -unsigned long nr_uninterruptible(void) -{ - long nu = grq.nr_uninterruptible; - - if (unlikely(nu < 0)) - nu = 0; - return nu; -} - -unsigned long long nr_context_switches(void) -{ - long long ns = grq.nr_switches; - - /* This is of course impossible */ - if (unlikely(ns < 0)) - ns = 1; - return (long long)ns; -} - -unsigned long nr_iowait(void) -{ - unsigned long i, sum = 0; - - for_each_possible_cpu(i) - sum += atomic_read(&cpu_rq(i)->nr_iowait); - - return sum; -} - -unsigned long nr_active(void) -{ - return nr_running() + nr_uninterruptible(); -} - -/* Variables and functions for calc_load */ -static unsigned long calc_load_update; -unsigned long avenrun[3]; -EXPORT_SYMBOL(avenrun); - -/** - * get_avenrun - get the load average array - * @loads: pointer to dest load array - * @offset: offset to add - * @shift: shift count to shift the result left - * - * These values are estimates at best, so no need for locking. 
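- *
- * The avenrun values are fixed point with FSHIFT fractional bits, so with
- * FSHIFT = 11 a stored value of 2048 means a load average of exactly 1.00
- * (a worked illustration, not part of the original patch). /proc/loadavg
- * reads them as:
- *
- *  get_avenrun(avnrun, FIXED_1/200, 0);
- *
- * where the FIXED_1/200 offset rounds the later truncation to two decimal
- * places to nearest.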
- */ -void get_avenrun(unsigned long *loads, unsigned long offset, int shift) -{ - loads[0] = (avenrun[0] + offset) << shift; - loads[1] = (avenrun[1] + offset) << shift; - loads[2] = (avenrun[2] + offset) << shift; -} - -static unsigned long -calc_load(unsigned long load, unsigned long exp, unsigned long active) -{ - load *= exp; - load += active * (FIXED_1 - exp); - return load >> FSHIFT; -} - -/* - * calc_load - update the avenrun load estimates every LOAD_FREQ seconds. - */ -void calc_global_load(void) -{ - long active; - - if (time_before(jiffies, calc_load_update)) - return; - active = nr_active() * FIXED_1; - - avenrun[0] = calc_load(avenrun[0], EXP_1, active); - avenrun[1] = calc_load(avenrun[1], EXP_5, active); - avenrun[2] = calc_load(avenrun[2], EXP_15, active); - - calc_load_update = jiffies + LOAD_FREQ; -} - -DEFINE_PER_CPU(struct kernel_stat, kstat); - -EXPORT_PER_CPU_SYMBOL(kstat); - -/* - * On each tick, see what percentage of that tick was attributed to each - * component and add the percentage to the _pc values. Once a _pc value has - * accumulated one tick's worth, account for that. This means the total - * percentage of load components will always be 100 per tick. - */ -static void pc_idle_time(struct rq *rq, unsigned long pc) -{ - struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat; - cputime64_t tmp = cputime_to_cputime64(jiffies_to_cputime(1)); - - if (atomic_read(&rq->nr_iowait) > 0) { - rq->iowait_pc += pc; - if (rq->iowait_pc >= 100) { - rq->iowait_pc %= 100; - cpustat->iowait = cputime64_add(cpustat->iowait, tmp); - } - } else { - rq->idle_pc += pc; - if (rq->idle_pc >= 100) { - rq->idle_pc %= 100; - cpustat->idle = cputime64_add(cpustat->idle, tmp); - } - } -} - -static void -pc_system_time(struct rq *rq, struct task_struct *p, int hardirq_offset, - unsigned long pc, unsigned long ns) -{ - struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat; - cputime_t one_jiffy = jiffies_to_cputime(1); - cputime_t one_jiffy_scaled = cputime_to_scaled(one_jiffy); - cputime64_t tmp = cputime_to_cputime64(one_jiffy); - - p->stime_pc += pc; - if (p->stime_pc >= 100) { - p->stime_pc -= 100; - p->stime = cputime_add(p->stime, one_jiffy); - p->stimescaled = cputime_add(p->stimescaled, one_jiffy_scaled); - account_group_system_time(p, one_jiffy); - acct_update_integrals(p); - } - p->sched_time += ns; - - if (hardirq_count() - hardirq_offset) { - rq->irq_pc += pc; - if (rq->irq_pc >= 100) { - rq->irq_pc %= 100; - cpustat->irq = cputime64_add(cpustat->irq, tmp); - } - } else if (softirq_count()) { - rq->softirq_pc += pc; - if (rq->softirq_pc >= 100) { - rq->softirq_pc %= 100; - cpustat->softirq = cputime64_add(cpustat->softirq, tmp); - } - } else { - rq->system_pc += pc; - if (rq->system_pc >= 100) { - rq->system_pc %= 100; - cpustat->system = cputime64_add(cpustat->system, tmp); - } - } -} - -static void pc_user_time(struct rq *rq, struct task_struct *p, - unsigned long pc, unsigned long ns) -{ - struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat; - cputime_t one_jiffy = jiffies_to_cputime(1); - cputime_t one_jiffy_scaled = cputime_to_scaled(one_jiffy); - cputime64_t tmp = cputime_to_cputime64(one_jiffy); - - p->utime_pc += pc; - if (p->utime_pc >= 100) { - p->utime_pc -= 100; - p->utime = cputime_add(p->utime, one_jiffy); - p->utimescaled = cputime_add(p->utimescaled, one_jiffy_scaled); - account_group_user_time(p, one_jiffy); - acct_update_integrals(p); - } - p->sched_time += ns; - - if (TASK_NICE(p) > 0 || idleprio_task(p)) { - rq->nice_pc += pc; - if (rq->nice_pc >= 
100) {
- rq->nice_pc %= 100;
- cpustat->nice = cputime64_add(cpustat->nice, tmp);
- }
- } else {
- rq->user_pc += pc;
- if (rq->user_pc >= 100) {
- rq->user_pc %= 100;
- cpustat->user = cputime64_add(cpustat->user, tmp);
- }
- }
-}
-
-/* Convert nanoseconds to percentage of one tick. */
-#define NS_TO_PC(NS) (NS * 100 / JIFFY_NS)
-
-/*
- * This is called on clock ticks and on context switches.
- * Bank in p->sched_time the ns elapsed since the last tick or switch.
- * CPU scheduler quota accounting is also performed here in microseconds.
- */
-static void
-update_cpu_clock(struct rq *rq, struct task_struct *p, int tick)
-{
- long account_ns = rq->clock - rq->timekeep_clock;
- struct task_struct *idle = rq->idle;
- unsigned long account_pc;
-
- if (unlikely(account_ns < 0))
- account_ns = 0;
-
- account_pc = NS_TO_PC(account_ns);
-
- if (tick) {
- int user_tick = user_mode(get_irq_regs());
-
- /* Accurate tick timekeeping */
- if (user_tick)
- pc_user_time(rq, p, account_pc, account_ns);
- else if (p != idle || (irq_count() != HARDIRQ_OFFSET))
- pc_system_time(rq, p, HARDIRQ_OFFSET,
- account_pc, account_ns);
- else
- pc_idle_time(rq, account_pc);
- } else {
- /* Accurate subtick timekeeping */
- if (p == idle)
- pc_idle_time(rq, account_pc);
- else
- pc_user_time(rq, p, account_pc, account_ns);
- }
-
- /* time_slice accounting is done in usecs to avoid overflow on 32bit */
- if (rq->rq_policy != SCHED_FIFO && p != idle) {
- s64 time_diff = rq->clock - rq->rq_last_ran;
-
- niffy_diff(&time_diff, 1);
- rq->rq_time_slice -= NS_TO_US(time_diff);
- }
- rq->rq_last_ran = rq->timekeep_clock = rq->clock;
-}
-
-/*
- * Return any ns on the sched_clock that have not yet been accounted in
- * @p in case that task is currently running.
- *
- * Called with task_grq_lock() held.
- */
-static u64 do_task_delta_exec(struct task_struct *p, struct rq *rq)
-{
- u64 ns = 0;
-
- if (p == rq->curr) {
- update_clocks(rq);
- ns = rq->clock - rq->rq_last_ran;
- if (unlikely((s64)ns < 0))
- ns = 0;
- }
-
- return ns;
-}
-
-unsigned long long task_delta_exec(struct task_struct *p)
-{
- unsigned long flags;
- struct rq *rq;
- u64 ns;
-
- rq = task_grq_lock(p, &flags);
- ns = do_task_delta_exec(p, rq);
- task_grq_unlock(&flags);
-
- return ns;
-}
-
-/*
- * Return accounted runtime for the task.
- * In case the task is currently running, return the runtime plus current's
- * pending runtime that has not been accounted yet.
- */
-unsigned long long task_sched_runtime(struct task_struct *p)
-{
- unsigned long flags;
- struct rq *rq;
- u64 ns;
-
- rq = task_grq_lock(p, &flags);
- ns = p->sched_time + do_task_delta_exec(p, rq);
- task_grq_unlock(&flags);
-
- return ns;
-}
-
-/*
- * Return sum_exec_runtime for the thread group.
- * In case the task is currently running, return the sum plus current's
- * pending runtime that has not been accounted yet.
- *
- * Note that the thread group might have other running tasks as well,
- * so the return value does not include other pending runtime that other
- * running tasks might have.
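- *
- * Roughly speaking (an illustrative note, not from the original patch),
- * the posix CPU clocks are the main consumers: CLOCK_THREAD_CPUTIME_ID
- * wants task_sched_runtime() of a single thread, while
- * CLOCK_PROCESS_CPUTIME_ID wants this group-wide sum.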
- */ -unsigned long long thread_group_sched_runtime(struct task_struct *p) -{ - struct task_cputime totals; - unsigned long flags; - struct rq *rq; - u64 ns; - - rq = task_grq_lock(p, &flags); - thread_group_cputime(p, &totals); - ns = totals.sum_exec_runtime + do_task_delta_exec(p, rq); - task_grq_unlock(&flags); - - return ns; -} - -/* Compatibility crap for removal */ -void account_user_time(struct task_struct *p, cputime_t cputime, - cputime_t cputime_scaled) -{ -} - -void account_idle_time(cputime_t cputime) -{ -} - -/* - * Account guest cpu time to a process. - * @p: the process that the cpu time gets accounted to - * @cputime: the cpu time spent in virtual machine since the last update - * @cputime_scaled: cputime scaled by cpu frequency - */ -static void account_guest_time(struct task_struct *p, cputime_t cputime, - cputime_t cputime_scaled) -{ - cputime64_t tmp; - struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat; - - tmp = cputime_to_cputime64(cputime); - - /* Add guest time to process. */ - p->utime = cputime_add(p->utime, cputime); - p->utimescaled = cputime_add(p->utimescaled, cputime_scaled); - account_group_user_time(p, cputime); - p->gtime = cputime_add(p->gtime, cputime); - - /* Add guest time to cpustat. */ - cpustat->user = cputime64_add(cpustat->user, tmp); - cpustat->guest = cputime64_add(cpustat->guest, tmp); -} - -/* - * Account system cpu time to a process. - * @p: the process that the cpu time gets accounted to - * @hardirq_offset: the offset to subtract from hardirq_count() - * @cputime: the cpu time spent in kernel space since the last update - * @cputime_scaled: cputime scaled by cpu frequency - * This is for guest only now. - */ -void account_system_time(struct task_struct *p, int hardirq_offset, - cputime_t cputime, cputime_t cputime_scaled) -{ - - if ((p->flags & PF_VCPU) && (irq_count() - hardirq_offset == 0)) - account_guest_time(p, cputime, cputime_scaled); -} - -/* - * Account for involuntary wait time. - * @steal: the cpu time spent in involuntary wait - */ -void account_steal_time(cputime_t cputime) -{ - struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat; - cputime64_t cputime64 = cputime_to_cputime64(cputime); - - cpustat->steal = cputime64_add(cpustat->steal, cputime64); -} - -/* - * Account for idle time. - * @cputime: the cpu time spent in idle wait - */ -static void account_idle_times(cputime_t cputime) -{ - struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat; - cputime64_t cputime64 = cputime_to_cputime64(cputime); - struct rq *rq = this_rq(); - - if (atomic_read(&rq->nr_iowait) > 0) - cpustat->iowait = cputime64_add(cpustat->iowait, cputime64); - else - cpustat->idle = cputime64_add(cpustat->idle, cputime64); -} - -#ifndef CONFIG_VIRT_CPU_ACCOUNTING - -void account_process_tick(struct task_struct *p, int user_tick) -{ -} - -/* - * Account multiple ticks of steal time. - * @p: the process from which the cpu time has been stolen - * @ticks: number of stolen ticks - */ -void account_steal_ticks(unsigned long ticks) -{ - account_steal_time(jiffies_to_cputime(ticks)); -} - -/* - * Account multiple ticks of idle time. 
- * @ticks: number of ticks spent idle
- */
-void account_idle_ticks(unsigned long ticks)
-{
- account_idle_times(jiffies_to_cputime(ticks));
-}
-#endif
-
-static inline void grq_iso_lock(void)
- __acquires(grq.iso_lock)
-{
- spin_lock(&grq.iso_lock);
-}
-
-static inline void grq_iso_unlock(void)
- __releases(grq.iso_lock)
-{
- spin_unlock(&grq.iso_lock);
-}
-
-/*
- * Functions to test for when SCHED_ISO tasks have used their allocated
- * quota as real time scheduling and convert them back to SCHED_NORMAL.
- * Where possible, the data is tested lockless, to avoid grabbing iso_lock
- * because the occasional inaccurate result won't matter. However the
- * tick data is only ever modified under lock. iso_refractory is simply
- * set to 0 or 1 so it's not worth grabbing the lock yet again for that.
- */
-static void set_iso_refractory(void)
-{
- grq.iso_refractory = 1;
-}
-
-static void clear_iso_refractory(void)
-{
- grq.iso_refractory = 0;
-}
-
-/*
- * Test if SCHED_ISO tasks have run longer than their allotted period as RT
- * tasks and set the refractory flag if necessary. There is 10% hysteresis
- * for unsetting the flag. 115/128 is ~90/100 as a fast shift instead of a
- * slow division.
- */
-static unsigned int test_ret_isorefractory(struct rq *rq)
-{
- if (likely(!grq.iso_refractory)) {
- if (grq.iso_ticks > ISO_PERIOD * sched_iso_cpu)
- set_iso_refractory();
- } else {
- if (grq.iso_ticks < ISO_PERIOD * (sched_iso_cpu * 115 / 128))
- clear_iso_refractory();
- }
- return grq.iso_refractory;
-}
-
-static void iso_tick(void)
-{
- grq_iso_lock();
- grq.iso_ticks += 100;
- grq_iso_unlock();
-}
-
-/* No SCHED_ISO task was running so decrease grq.iso_ticks */
-static inline void no_iso_tick(void)
-{
- if (grq.iso_ticks) {
- grq_iso_lock();
- grq.iso_ticks -= grq.iso_ticks / ISO_PERIOD + 1;
- if (unlikely(grq.iso_refractory && grq.iso_ticks <
- ISO_PERIOD * (sched_iso_cpu * 115 / 128)))
- clear_iso_refractory();
- grq_iso_unlock();
- }
-}
-
-static int rq_running_iso(struct rq *rq)
-{
- return rq->rq_prio == ISO_PRIO;
-}
-
-/* This manages tasks that have run out of timeslice during a scheduler_tick */
-static void task_running_tick(struct rq *rq)
-{
- struct task_struct *p;
-
- /*
- * If a SCHED_ISO task is running we increment the iso_ticks. In
- * order to prevent SCHED_ISO tasks from causing starvation in the
- * presence of true RT tasks we account those as iso_ticks as well.
- */
- if ((rt_queue(rq) || (iso_queue(rq) && !grq.iso_refractory))) {
- if (grq.iso_ticks <= (ISO_PERIOD * 100) - 100)
- iso_tick();
- } else
- no_iso_tick();
-
- if (iso_queue(rq)) {
- if (unlikely(test_ret_isorefractory(rq))) {
- if (rq_running_iso(rq)) {
- /*
- * SCHED_ISO task is running as RT and limit
- * has been hit. Force it to reschedule as
- * SCHED_NORMAL by zeroing its time_slice
- */
- rq->rq_time_slice = 0;
- }
- }
- }
-
- /* SCHED_FIFO tasks never run out of timeslice. */
- if (rq->rq_policy == SCHED_FIFO)
- return;
- /*
- * Tasks that were scheduled in the first half of a tick are not
- * allowed to run into the 2nd half of the next tick if they will
- * run out of time slice in the interim. Otherwise, if they have
- * less than RESCHED_US μs of time slice left they will be rescheduled.
- */
- if (rq->dither) {
- if (rq->rq_time_slice > HALF_JIFFY_US)
- return;
- else
- rq->rq_time_slice = 0;
- } else if (rq->rq_time_slice >= RESCHED_US)
- return;
-
- /* p->time_slice < RESCHED_US. 
We only modify task_struct under grq lock */ - p = rq->curr; - requeue_task(p); - grq_lock(); - set_tsk_need_resched(p); - grq_unlock(); -} - -void wake_up_idle_cpu(int cpu); - -/* - * This function gets called by the timer code, with HZ frequency. - * We call it with interrupts disabled. The data modified is all - * local to struct rq so we don't need to grab grq lock. - */ -void scheduler_tick(void) -{ - int cpu = smp_processor_id(); - struct rq *rq = cpu_rq(cpu); - - sched_clock_tick(); - /* grq lock not grabbed, so only update rq clock */ - update_rq_clock(rq); - update_cpu_clock(rq, rq->curr, 1); - if (!rq_idle(rq)) - task_running_tick(rq); - else - no_iso_tick(); - rq->last_tick = rq->clock; - perf_counter_task_tick(rq->curr, cpu); -} - -notrace unsigned long get_parent_ip(unsigned long addr) -{ - if (in_lock_functions(addr)) { - addr = CALLER_ADDR2; - if (in_lock_functions(addr)) - addr = CALLER_ADDR3; - } - return addr; -} - -#if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \ - defined(CONFIG_PREEMPT_TRACER)) -void __kprobes add_preempt_count(int val) -{ -#ifdef CONFIG_DEBUG_PREEMPT - /* - * Underflow? - */ - if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0))) - return; -#endif - preempt_count() += val; -#ifdef CONFIG_DEBUG_PREEMPT - /* - * Spinlock count overflowing soon? - */ - DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >= - PREEMPT_MASK - 10); -#endif - if (preempt_count() == val) - trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1)); -} -EXPORT_SYMBOL(add_preempt_count); - -void __kprobes sub_preempt_count(int val) -{ -#ifdef CONFIG_DEBUG_PREEMPT - /* - * Underflow? - */ - if (DEBUG_LOCKS_WARN_ON(val > preempt_count())) - return; - /* - * Is the spinlock portion underflowing? - */ - if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) && - !(preempt_count() & PREEMPT_MASK))) - return; -#endif - - if (preempt_count() == val) - trace_preempt_on(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1)); - preempt_count() -= val; -} -EXPORT_SYMBOL(sub_preempt_count); -#endif - -/* - * Deadline is "now" in niffies + (offset by priority). Setting the deadline - * is the key to everything. It distributes cpu fairly amongst tasks of the - * same nice value, it proportions cpu according to nice level, it means the - * task that last woke up the longest ago has the earliest deadline, thus - * ensuring that interactive tasks get low latency on wake up. The CPU - * proportion works out to the square of the virtual deadline difference, so - * this equation will give nice 19 3% CPU compared to nice 0. - */ -static inline u64 prio_deadline_diff(int user_prio) -{ - return (prio_ratios[user_prio] * rr_interval * (MS_TO_NS(1) / 128)); -} - -static inline u64 task_deadline_diff(struct task_struct *p) -{ - return prio_deadline_diff(TASK_USER_PRIO(p)); -} - -static inline u64 static_deadline_diff(int static_prio) -{ - return prio_deadline_diff(USER_PRIO(static_prio)); -} - -static inline int longest_deadline_diff(void) -{ - return prio_deadline_diff(39); -} - -static inline int ms_longest_deadline_diff(void) -{ - return NS_TO_MS(longest_deadline_diff()); -} - -/* - * The time_slice is only refilled when it is empty and that is when we set a - * new deadline. - */ -static void time_slice_expired(struct task_struct *p) -{ - p->time_slice = timeslice(); - p->deadline = grq.niffies + task_deadline_diff(p); -} - -/* - * Timeslices below RESCHED_US are considered as good as expired as there's no - * point rescheduling when there's so little time left. 
SCHED_BATCH tasks
- * have been flagged as not latency sensitive and are likely to be fully
- * CPU bound, so every time they're rescheduled they have their time_slice
- * refilled, but get a new later deadline to have little effect on
- * SCHED_NORMAL tasks.
- */
-static inline void check_deadline(struct task_struct *p)
-{
- if (p->time_slice < RESCHED_US || batch_task(p))
- time_slice_expired(p);
-}
-
-/*
- * O(n) lookup of all tasks in the global runqueue. The real brainfuck
- * of lock contention and O(n). It's not truly O(n) in all tasks: only the
- * queued but not running tasks are scanned, and even then it is O(n) in
- * the queued tasks only in the worst case, since the right task is usually
- * found before all of them have been examined.
- * Tasks are selected in this order:
- * Real time tasks are selected purely by their static priority and in the
- * order they were queued, so the lowest value idx, and the first queued task
- * of that priority value is chosen.
- * If no real time tasks are found, the SCHED_ISO priority is checked, and
- * all SCHED_ISO tasks have the same priority value, so they're selected by
- * the earliest deadline value.
- * If no SCHED_ISO tasks are found, SCHED_NORMAL tasks are selected by the
- * earliest deadline.
- * Finally if no SCHED_NORMAL tasks are found, SCHED_IDLEPRIO tasks are
- * selected by the earliest deadline.
- */
-static inline struct
-task_struct *earliest_deadline_task(struct rq *rq, struct task_struct *idle)
-{
- u64 dl, earliest_deadline = 0; /* Initialise to silence compiler */
- struct task_struct *p, *edt = idle;
- unsigned int cpu = cpu_of(rq);
- struct list_head *queue;
- int idx = 0;
-
-retry:
- idx = find_next_bit(grq.prio_bitmap, PRIO_LIMIT, idx);
- if (idx >= PRIO_LIMIT)
- goto out;
- queue = grq.queue + idx;
- list_for_each_entry(p, queue, run_list) {
- /* Make sure cpu affinity is ok */
- if (needs_other_cpu(p, cpu))
- continue;
- if (idx < MAX_RT_PRIO) {
- /* We found an rt task */
- edt = p;
- goto out_take;
- }
-
- /*
- * Soft affinity happens here by not scheduling a task with
- * its sticky flag set that ran on a different CPU last when
- * the CPU is scaling, or by greatly biasing against its
- * deadline when not.
- */
- if (task_rq(p) != rq && task_sticky(p)) {
- if (scaling_rq(rq))
- continue;
- else
- dl = p->deadline + longest_deadline_diff();
- } else
- dl = p->deadline;
-
- /*
- * No rt tasks. Find the earliest deadline task. Now we're in
- * O(n) territory. This is what we silenced the compiler for:
- * edt will always start as idle.
- */
- if (edt == idle ||
- deadline_before(dl, earliest_deadline)) {
- earliest_deadline = dl;
- edt = p;
- }
- }
- if (edt == idle) {
- if (++idx < PRIO_LIMIT)
- goto retry;
- goto out;
- }
-out_take:
- take_task(rq, edt);
-out:
- return edt;
-}
-
-/*
- * Print scheduling while atomic bug:
- */
-static noinline void __schedule_bug(struct task_struct *prev)
-{
- struct pt_regs *regs = get_irq_regs();
-
- printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
- prev->comm, prev->pid, preempt_count());
-
- debug_show_held_locks(prev);
- print_modules();
- if (irqs_disabled())
- print_irqtrace_events(prev);
-
- if (regs)
- show_regs(regs);
- else
- dump_stack();
-}
-
-/*
- * Various schedule()-time debugging checks and statistics:
- */
-static inline void schedule_debug(struct task_struct *prev)
-{
- /*
- * Test if we are atomic. Since do_exit() needs to call into
- * schedule() atomically, we ignore that path for now.
- * Otherwise, whine if we are scheduling when we should not be.
- */ - if (unlikely(in_atomic_preempt_off() && !prev->exit_state)) - __schedule_bug(prev); - - profile_hit(SCHED_PROFILING, __builtin_return_address(0)); - - schedstat_inc(this_rq(), sched_count); -#ifdef CONFIG_SCHEDSTATS - if (unlikely(prev->lock_depth >= 0)) { - schedstat_inc(this_rq(), bkl_count); - schedstat_inc(prev, sched_info.bkl_count); - } -#endif -} - -/* - * The currently running task's information is all stored in rq local data - * which is only modified by the local CPU, thereby allowing the data to be - * changed without grabbing the grq lock. - */ -static inline void set_rq_task(struct rq *rq, struct task_struct *p) -{ - rq->rq_time_slice = p->time_slice; - rq->rq_deadline = p->deadline; - rq->rq_last_ran = p->last_ran; - rq->rq_policy = p->policy; - rq->rq_prio = p->prio; - if (p != rq->idle) - rq->rq_running = 1; - else - rq->rq_running = 0; -} - -static void reset_rq_task(struct rq *rq, struct task_struct *p) -{ - rq->rq_policy = p->policy; - rq->rq_prio = p->prio; -} - -/* - * schedule() is the main scheduler function. - */ -asmlinkage void __sched schedule(void) -{ - struct task_struct *prev, *next, *idle; - unsigned long *switch_count; - int deactivate, cpu; - struct rq *rq; - -need_resched: - preempt_disable(); - - cpu = smp_processor_id(); - rq = cpu_rq(cpu); - idle = rq->idle; - rcu_qsctr_inc(cpu); - prev = rq->curr; - switch_count = &prev->nivcsw; - - release_kernel_lock(prev); -need_resched_nonpreemptible: - - deactivate = 0; - schedule_debug(prev); - - grq_lock_irq(); - update_clocks(rq); - update_cpu_clock(rq, prev, 0); - if (rq->clock - rq->last_tick > HALF_JIFFY_NS) - rq->dither = 0; - else - rq->dither = 1; - - clear_tsk_need_resched(prev); - - if (prev->state && !(preempt_count() & PREEMPT_ACTIVE)) { - if (unlikely(signal_pending_state(prev->state, prev))) - prev->state = TASK_RUNNING; - else - deactivate = 1; - switch_count = &prev->nvcsw; - } - - if (prev != idle) { - /* Update all the information stored on struct rq */ - prev->time_slice = rq->rq_time_slice; - prev->deadline = rq->rq_deadline; - check_deadline(prev); - prev->last_ran = rq->clock; - - /* Task changed affinity off this CPU */ - if (needs_other_cpu(prev, cpu)) - resched_suitable_idle(prev); - else if (!deactivate) { - if (!queued_notrunning()) { - /* - * We now know prev is the only thing that is - * awaiting CPU so we can bypass rechecking for - * the earliest deadline task and just run it - * again. - */ - grq_unlock_irq(); - goto rerun_prev_unlocked; - } else - swap_sticky(rq, cpu, prev); - } - return_task(prev, deactivate); - } - - if (unlikely(!queued_notrunning())) { - /* - * This CPU is now truly idle as opposed to when idle is - * scheduled as a high priority task in its own right. - */ - next = idle; - schedstat_inc(rq, sched_goidle); - set_cpuidle_map(cpu); - } else { - next = earliest_deadline_task(rq, idle); - if (likely(next->prio != PRIO_LIMIT)) { - prefetch(next); - prefetch_stack(next); - clear_cpuidle_map(cpu); - } else - set_cpuidle_map(cpu); - } - - if (likely(prev != next)) { - /* - * Don't stick tasks when a real time task is going to run as - * they may literally get stuck. 
- */ - if (rt_task(next)) - unstick_task(rq, prev); - sched_info_switch(prev, next); - perf_counter_task_sched_out(prev, next, cpu); - - set_rq_task(rq, next); - grq.nr_switches++; - prev->oncpu = 0; - next->oncpu = 1; - rq->curr = next; - ++*switch_count; - - context_switch(rq, prev, next); /* unlocks the grq */ - /* - * the context switch might have flipped the stack from under - * us, hence refresh the local variables. - */ - cpu = smp_processor_id(); - rq = cpu_rq(cpu); - idle = rq->idle; - } else - grq_unlock_irq(); - -rerun_prev_unlocked: - if (unlikely(reacquire_kernel_lock(current) < 0)) - goto need_resched_nonpreemptible; - preempt_enable_no_resched(); - if (need_resched()) - goto need_resched; -} -EXPORT_SYMBOL(schedule); - -#ifdef CONFIG_SMP -int mutex_spin_on_owner(struct mutex *lock, struct thread_info *owner) -{ - unsigned int cpu; - struct rq *rq; - -#ifdef CONFIG_DEBUG_PAGEALLOC - /* - * Need to access the cpu field knowing that - * DEBUG_PAGEALLOC could have unmapped it if - * the mutex owner just released it and exited. - */ - if (probe_kernel_address(&owner->cpu, cpu)) - goto out; -#else - cpu = owner->cpu; -#endif - - /* - * Even if the access succeeded (likely case), - * the cpu field may no longer be valid. - */ - if (cpu >= nr_cpumask_bits) - goto out; - - /* - * We need to validate that we can do a - * get_cpu() and that we have the percpu area. - */ - if (!cpu_online(cpu)) - goto out; - - rq = cpu_rq(cpu); - - for (;;) { - /* - * Owner changed, break to re-assess state. - */ - if (lock->owner != owner) - break; - - /* - * Is that owner really running on that cpu? - */ - if (task_thread_info(rq->curr) != owner || need_resched()) - return 0; - - cpu_relax(); - } -out: - return 1; -} -#endif - -#ifdef CONFIG_PREEMPT -/* - * this is the entry point to schedule() from in-kernel preemption - * off of preempt_enable. Kernel preemptions off return from interrupt - * occur there and call schedule directly. - */ -asmlinkage void __sched preempt_schedule(void) -{ - struct thread_info *ti = current_thread_info(); - - /* - * If there is a non-zero preempt_count or interrupts are disabled, - * we do not want to preempt the current task. Just return.. - */ - if (likely(ti->preempt_count || irqs_disabled())) - return; - - do { - add_preempt_count(PREEMPT_ACTIVE); - schedule(); - sub_preempt_count(PREEMPT_ACTIVE); - - /* - * Check again in case we missed a preemption opportunity - * between schedule and now. - */ - barrier(); - } while (need_resched()); -} -EXPORT_SYMBOL(preempt_schedule); - -/* - * this is the entry point to schedule() from kernel preemption - * off of irq context. - * Note, that this is called and return with irqs disabled. This will - * protect us against recursive calling from irq. - */ -asmlinkage void __sched preempt_schedule_irq(void) -{ - struct thread_info *ti = current_thread_info(); - - /* Catch callers which need to be fixed */ - BUG_ON(ti->preempt_count || !irqs_disabled()); - - do { - add_preempt_count(PREEMPT_ACTIVE); - local_irq_enable(); - schedule(); - local_irq_disable(); - sub_preempt_count(PREEMPT_ACTIVE); - - /* - * Check again in case we missed a preemption opportunity - * between schedule and now. - */ - barrier(); - } while (need_resched()); -} - -#endif /* CONFIG_PREEMPT */ - -int default_wake_function(wait_queue_t *curr, unsigned mode, int sync, - void *key) -{ - return try_to_wake_up(curr->private, mode, sync); -} -EXPORT_SYMBOL(default_wake_function); - -/* - * The core wakeup function. 
Non-exclusive wakeups (nr_exclusive == 0) just - * wake everything up. If it's an exclusive wakeup (nr_exclusive == small +ve - * number) then we wake all the non-exclusive tasks and one exclusive task. - * - * There are circumstances in which we can try to wake a task which has already - * started to run but is not in state TASK_RUNNING. try_to_wake_up() returns - * zero in this (rare) case, and we handle it by continuing to scan the queue. - */ -static void __wake_up_common(wait_queue_head_t *q, unsigned int mode, - int nr_exclusive, int sync, void *key) -{ - struct list_head *tmp, *next; - - list_for_each_safe(tmp, next, &q->task_list) { - wait_queue_t *curr = list_entry(tmp, wait_queue_t, task_list); - unsigned int flags = curr->flags; - - if (curr->func(curr, mode, sync, key) && - (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive) - break; - } -} - -/** - * __wake_up - wake up threads blocked on a waitqueue. - * @q: the waitqueue - * @mode: which threads - * @nr_exclusive: how many wake-one or wake-many threads to wake up - * @key: is directly passed to the wakeup function - * - * It may be assumed that this function implies a write memory barrier before - * changing the task state if and only if any tasks are woken up. - */ -void __wake_up(wait_queue_head_t *q, unsigned int mode, - int nr_exclusive, void *key) -{ - unsigned long flags; - - spin_lock_irqsave(&q->lock, flags); - __wake_up_common(q, mode, nr_exclusive, 0, key); - spin_unlock_irqrestore(&q->lock, flags); -} -EXPORT_SYMBOL(__wake_up); - -/* - * Same as __wake_up but called with the spinlock in wait_queue_head_t held. - */ -void __wake_up_locked(wait_queue_head_t *q, unsigned int mode) -{ - __wake_up_common(q, mode, 1, 0, NULL); -} - -void __wake_up_locked_key(wait_queue_head_t *q, unsigned int mode, void *key) -{ - __wake_up_common(q, mode, 1, 0, key); -} - -/** - * __wake_up_sync_key - wake up threads blocked on a waitqueue. - * @q: the waitqueue - * @mode: which threads - * @nr_exclusive: how many wake-one or wake-many threads to wake up - * @key: opaque value to be passed to wakeup targets - * - * The sync wakeup differs that the waker knows that it will schedule - * away soon, so while the target thread will be woken up, it will not - * be migrated to another CPU - ie. the two threads are 'synchronised' - * with each other. This can prevent needless bouncing between CPUs. - * - * On UP it can prevent extra preemption. - * - * It may be assumed that this function implies a write memory barrier before - * changing the task state if and only if any tasks are woken up. - */ -void __wake_up_sync_key(wait_queue_head_t *q, unsigned int mode, - int nr_exclusive, void *key) -{ - unsigned long flags; - int sync = 1; - - if (unlikely(!q)) - return; - - if (unlikely(!nr_exclusive)) - sync = 0; - - spin_lock_irqsave(&q->lock, flags); - __wake_up_common(q, mode, nr_exclusive, sync, key); - spin_unlock_irqrestore(&q->lock, flags); -} -EXPORT_SYMBOL_GPL(__wake_up_sync_key); - -/** - * __wake_up_sync - wake up threads blocked on a waitqueue. - * @q: the waitqueue - * @mode: which threads - * @nr_exclusive: how many wake-one or wake-many threads to wake up - * - * The sync wakeup differs that the waker knows that it will schedule - * away soon, so while the target thread will be woken up, it will not - * be migrated to another CPU - ie. the two threads are 'synchronised' - * with each other. This can prevent needless bouncing between CPUs. - * - * On UP it can prevent extra preemption. 
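What makes this "wake one" rather than "wake all" is purely the flag: exclusive waiters register with WQ_FLAG_EXCLUSIVE (and are queued at the tail), so the loop above stops once it has woken nr_exclusive of them, while ordinary waiters are always woken. A kernel-style sketch of both sides using the stock wait-queue helpers; request_pending() is an assumed helper, not part of this file:

    static DECLARE_WAIT_QUEUE_HEAD(req_wq);

    /* Consumer: queue exclusively so wake_up() picks only one of us. */
    static void wait_for_request(void)
    {
            DEFINE_WAIT(wait);

            /* WQ_FLAG_EXCLUSIVE is set by prepare_to_wait_exclusive() */
            prepare_to_wait_exclusive(&req_wq, &wait, TASK_UNINTERRUPTIBLE);
            if (!request_pending())
                    schedule();
            finish_wait(&req_wq, &wait);
    }

    /* Producer: wake exactly one exclusive waiter per request. */
    static void post_request(void)
    {
            wake_up(&req_wq);       /* expands to nr_exclusive == 1 */
    }

wake_up_all() would instead pass nr_exclusive == 0 and wake every waiter, exclusive or not.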
- */ -void __wake_up_sync(wait_queue_head_t *q, unsigned int mode, int nr_exclusive) -{ - unsigned long flags; - int sync = 1; - - if (unlikely(!q)) - return; - - if (unlikely(!nr_exclusive)) - sync = 0; - - spin_lock_irqsave(&q->lock, flags); - __wake_up_common(q, mode, nr_exclusive, sync, NULL); - spin_unlock_irqrestore(&q->lock, flags); -} -EXPORT_SYMBOL_GPL(__wake_up_sync); /* For internal use only */ - -/** - * complete: - signals a single thread waiting on this completion - * @x: holds the state of this particular completion - * - * This will wake up a single thread waiting on this completion. Threads will be - * awakened in the same order in which they were queued. - * - * See also complete_all(), wait_for_completion() and related routines. - * - * It may be assumed that this function implies a write memory barrier before - * changing the task state if and only if any tasks are woken up. - */ -void complete(struct completion *x) -{ - unsigned long flags; - - spin_lock_irqsave(&x->wait.lock, flags); - x->done++; - __wake_up_common(&x->wait, TASK_NORMAL, 1, 0, NULL); - spin_unlock_irqrestore(&x->wait.lock, flags); -} -EXPORT_SYMBOL(complete); - -/** - * complete_all: - signals all threads waiting on this completion - * @x: holds the state of this particular completion - * - * This will wake up all threads waiting on this particular completion event. - * - * It may be assumed that this function implies a write memory barrier before - * changing the task state if and only if any tasks are woken up. - */ -void complete_all(struct completion *x) -{ - unsigned long flags; - - spin_lock_irqsave(&x->wait.lock, flags); - x->done += UINT_MAX/2; - __wake_up_common(&x->wait, TASK_NORMAL, 0, 0, NULL); - spin_unlock_irqrestore(&x->wait.lock, flags); -} -EXPORT_SYMBOL(complete_all); - -static inline long __sched -do_wait_for_common(struct completion *x, long timeout, int state) -{ - if (!x->done) { - DECLARE_WAITQUEUE(wait, current); - - wait.flags |= WQ_FLAG_EXCLUSIVE; - __add_wait_queue_tail(&x->wait, &wait); - do { - if (signal_pending_state(state, current)) { - timeout = -ERESTARTSYS; - break; - } - __set_current_state(state); - spin_unlock_irq(&x->wait.lock); - timeout = schedule_timeout(timeout); - spin_lock_irq(&x->wait.lock); - } while (!x->done && timeout); - __remove_wait_queue(&x->wait, &wait); - if (!x->done) - return timeout; - } - x->done--; - return timeout ?: 1; -} - -static long __sched -wait_for_common(struct completion *x, long timeout, int state) -{ - might_sleep(); - - spin_lock_irq(&x->wait.lock); - timeout = do_wait_for_common(x, timeout, state); - spin_unlock_irq(&x->wait.lock); - return timeout; -} - -/** - * wait_for_completion: - waits for completion of a task - * @x: holds the state of this particular completion - * - * This waits to be signaled for completion of a specific task. It is NOT - * interruptible and there is no timeout. - * - * See also similar routines (i.e. wait_for_completion_timeout()) with timeout - * and interrupt capability. Also see complete(). - */ -void __sched wait_for_completion(struct completion *x) -{ - wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_UNINTERRUPTIBLE); -} -EXPORT_SYMBOL(wait_for_completion); - -/** - * wait_for_completion_timeout: - waits for completion of a task (w/timeout) - * @x: holds the state of this particular completion - * @timeout: timeout value in jiffies - * - * This waits for either a completion of a specific task to be signaled or for a - * specified timeout to expire. The timeout is in jiffies. 
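The done counter above is what makes completions immune to lost wakeups: each complete() banks one count that a later wait consumes, so the waiter does not sleep at all if the signaller got there first. A kernel-style sketch of the canonical usage (do_setup() is an assumed helper):

    static DECLARE_COMPLETION(setup_done);

    /* Signalling side, e.g. from a kthread or interrupt handler. */
    static void worker(void)
    {
            do_setup();
            complete(&setup_done);          /* x->done++, wake one waiter */
    }

    /* Waiting side. */
    static int consumer(void)
    {
            /* Returns 0 if the full timeout elapsed with no completion. */
            if (!wait_for_completion_timeout(&setup_done, HZ))
                    return -ETIMEDOUT;
            return 0;
    }

complete_all() instead adds UINT_MAX/2 to the counter, so every current and future waiter sails through until the completion is reinitialised.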
It is not - * interruptible. - */ -unsigned long __sched -wait_for_completion_timeout(struct completion *x, unsigned long timeout) -{ - return wait_for_common(x, timeout, TASK_UNINTERRUPTIBLE); -} -EXPORT_SYMBOL(wait_for_completion_timeout); - -/** - * wait_for_completion_interruptible: - waits for completion of a task (w/intr) - * @x: holds the state of this particular completion - * - * This waits for completion of a specific task to be signaled. It is - * interruptible. - */ -int __sched wait_for_completion_interruptible(struct completion *x) -{ - long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_INTERRUPTIBLE); - if (t == -ERESTARTSYS) - return t; - return 0; -} -EXPORT_SYMBOL(wait_for_completion_interruptible); - -/** - * wait_for_completion_interruptible_timeout: - waits for completion (w/(to,intr)) - * @x: holds the state of this particular completion - * @timeout: timeout value in jiffies - * - * This waits for either a completion of a specific task to be signaled or for a - * specified timeout to expire. It is interruptible. The timeout is in jiffies. - */ -unsigned long __sched -wait_for_completion_interruptible_timeout(struct completion *x, - unsigned long timeout) -{ - return wait_for_common(x, timeout, TASK_INTERRUPTIBLE); -} -EXPORT_SYMBOL(wait_for_completion_interruptible_timeout); - -/** - * wait_for_completion_killable: - waits for completion of a task (killable) - * @x: holds the state of this particular completion - * - * This waits to be signaled for completion of a specific task. It can be - * interrupted by a kill signal. - */ -int __sched wait_for_completion_killable(struct completion *x) -{ - long t = wait_for_common(x, MAX_SCHEDULE_TIMEOUT, TASK_KILLABLE); - if (t == -ERESTARTSYS) - return t; - return 0; -} -EXPORT_SYMBOL(wait_for_completion_killable); - -/** - * try_wait_for_completion - try to decrement a completion without blocking - * @x: completion structure - * - * Returns: 0 if a decrement cannot be done without blocking - * 1 if a decrement succeeded. - * - * If a completion is being used as a counting completion, - * attempt to decrement the counter without blocking. This - * enables us to avoid waiting if the resource the completion - * is protecting is not available. - */ -bool try_wait_for_completion(struct completion *x) -{ - int ret = 1; - - spin_lock_irq(&x->wait.lock); - if (!x->done) - ret = 0; - else - x->done--; - spin_unlock_irq(&x->wait.lock); - return ret; -} -EXPORT_SYMBOL(try_wait_for_completion); - -/** - * completion_done - Test to see if a completion has any waiters - * @x: completion structure - * - * Returns: 0 if there are waiters (wait_for_completion() in progress) - * 1 if there are no waiters. 
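Because the counter is just that, a counter, try_wait_for_completion() can also serve as a non-blocking fast path that consumes a banked count when one is available. A small kernel-style sketch (get_slot() and its policy are invented for the example):

    static DECLARE_COMPLETION(slot_ready);

    static int get_slot(int may_block)
    {
            if (try_wait_for_completion(&slot_ready))
                    return 0;               /* consumed a banked count */
            if (!may_block)
                    return -EAGAIN;
            wait_for_completion(&slot_ready);
            return 0;
    }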
- * - */ -bool completion_done(struct completion *x) -{ - int ret = 1; - - spin_lock_irq(&x->wait.lock); - if (!x->done) - ret = 0; - spin_unlock_irq(&x->wait.lock); - return ret; -} -EXPORT_SYMBOL(completion_done); - -static long __sched -sleep_on_common(wait_queue_head_t *q, int state, long timeout) -{ - unsigned long flags; - wait_queue_t wait; - - init_waitqueue_entry(&wait, current); - - __set_current_state(state); - - spin_lock_irqsave(&q->lock, flags); - __add_wait_queue(q, &wait); - spin_unlock(&q->lock); - timeout = schedule_timeout(timeout); - spin_lock_irq(&q->lock); - __remove_wait_queue(q, &wait); - spin_unlock_irqrestore(&q->lock, flags); - - return timeout; -} - -void __sched interruptible_sleep_on(wait_queue_head_t *q) -{ - sleep_on_common(q, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); -} -EXPORT_SYMBOL(interruptible_sleep_on); - -long __sched -interruptible_sleep_on_timeout(wait_queue_head_t *q, long timeout) -{ - return sleep_on_common(q, TASK_INTERRUPTIBLE, timeout); -} -EXPORT_SYMBOL(interruptible_sleep_on_timeout); - -void __sched sleep_on(wait_queue_head_t *q) -{ - sleep_on_common(q, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); -} -EXPORT_SYMBOL(sleep_on); - -long __sched sleep_on_timeout(wait_queue_head_t *q, long timeout) -{ - return sleep_on_common(q, TASK_UNINTERRUPTIBLE, timeout); -} -EXPORT_SYMBOL(sleep_on_timeout); - -#ifdef CONFIG_RT_MUTEXES - -/* - * rt_mutex_setprio - set the current priority of a task - * @p: task - * @prio: prio value (kernel-internal form) - * - * This function changes the 'effective' priority of a task. It does - * not touch ->normal_prio like __setscheduler(). - * - * Used by the rt_mutex code to implement priority inheritance logic. - */ -void rt_mutex_setprio(struct task_struct *p, int prio) -{ - unsigned long flags; - int queued, oldprio; - struct rq *rq; - - BUG_ON(prio < 0 || prio > MAX_PRIO); - - rq = task_grq_lock(p, &flags); - - oldprio = p->prio; - queued = task_queued(p); - if (queued) - dequeue_task(p); - p->prio = prio; - if (task_running(p) && prio > oldprio) - resched_task(p); - if (queued) { - enqueue_task(p); - try_preempt(p, rq); - } - - task_grq_unlock(&flags); -} - -#endif - -/* - * Adjust the deadline for when the priority is to change, before it's - * changed. - */ -static inline void adjust_deadline(struct task_struct *p, int new_prio) -{ - p->deadline += static_deadline_diff(new_prio) - task_deadline_diff(p); -} - -void set_user_nice(struct task_struct *p, long nice) -{ - int queued, new_static, old_static; - unsigned long flags; - struct rq *rq; - - if (TASK_NICE(p) == nice || nice < -20 || nice > 19) - return; - new_static = NICE_TO_PRIO(nice); - /* - * We have to be careful, if called from sys_setpriority(), - * the task might be in the middle of scheduling on another CPU. 
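Note that adjust_deadline() above shifts the pending deadline by the difference between the old and new offsets instead of issuing a fresh one, so whatever time the task has already spent waiting keeps counting. With the approximate offsets worked out earlier (6ms at nice -20, 39ms at nice 0, 239ms at nice +19, assuming the default rr_interval of 6ms):

    nice 0 -> nice +19:  deadline += 239ms - 39ms = +200ms
    nice 0 -> nice -20:  deadline +=   6ms - 39ms =  -33ms

Renicing a waiting task to +19 therefore pushes its deadline about 200ms further out, while renicing it to -20 pulls the deadline about 33ms earlier, possibly making it immediately eligible to preempt.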
- */ - rq = time_task_grq_lock(p, &flags); - /* - * The RT priorities are set via sched_setscheduler(), but we still - * allow the 'normal' nice value to be set - but as expected - * it wont have any effect on scheduling until the task is - * not SCHED_NORMAL/SCHED_BATCH: - */ - if (has_rt_policy(p)) { - p->static_prio = new_static; - goto out_unlock; - } - queued = task_queued(p); - if (queued) - dequeue_task(p); - - adjust_deadline(p, new_static); - old_static = p->static_prio; - p->static_prio = new_static; - p->prio = effective_prio(p); - - if (queued) { - enqueue_task(p); - if (new_static < old_static) - try_preempt(p, rq); - } else if (task_running(p)) { - reset_rq_task(rq, p); - if (old_static < new_static) - resched_task(p); - } -out_unlock: - task_grq_unlock(&flags); -} -EXPORT_SYMBOL(set_user_nice); - -/* - * can_nice - check if a task can reduce its nice value - * @p: task - * @nice: nice value - */ -int can_nice(const struct task_struct *p, const int nice) -{ - /* convert nice value [19,-20] to rlimit style value [1,40] */ - int nice_rlim = 20 - nice; - - return (nice_rlim <= p->signal->rlim[RLIMIT_NICE].rlim_cur || - capable(CAP_SYS_NICE)); -} - -#ifdef __ARCH_WANT_SYS_NICE - -/* - * sys_nice - change the priority of the current process. - * @increment: priority increment - * - * sys_setpriority is a more generic, but much slower function that - * does similar things. - */ -SYSCALL_DEFINE1(nice, int, increment) -{ - long nice, retval; - - /* - * Setpriority might change our priority at the same moment. - * We don't have to worry. Conceptually one call occurs first - * and we have a single winner. - */ - if (increment < -40) - increment = -40; - if (increment > 40) - increment = 40; - - nice = TASK_NICE(current) + increment; - if (nice < -20) - nice = -20; - if (nice > 19) - nice = 19; - - if (increment < 0 && !can_nice(current, nice)) - return -EPERM; - - retval = security_task_setnice(current, nice); - if (retval) - return retval; - - set_user_nice(current, nice); - return 0; -} - -#endif - -/** - * task_prio - return the priority value of a given task. - * @p: the task in question. - * - * This is the priority value as seen by users in /proc. - * RT tasks are offset by -100. Normal tasks are centered around 1, value goes - * from 0 (SCHED_ISO) up to 82 (nice +19 SCHED_IDLEPRIO). - */ -int task_prio(const struct task_struct *p) -{ - int delta, prio = p->prio - MAX_RT_PRIO; - - /* rt tasks and iso tasks */ - if (prio <= 0) - goto out; - - /* Convert to ms to avoid overflows */ - delta = NS_TO_MS(p->deadline - grq.niffies); - delta = delta * 40 / ms_longest_deadline_diff(); - if (delta > 0 && delta <= 80) - prio += delta; - if (idleprio_task(p)) - prio += 40; -out: - return prio; -} - -/** - * task_nice - return the nice value of a given task. - * @p: the task in question. - */ -int task_nice(const struct task_struct *p) -{ - return TASK_NICE(p); -} -EXPORT_SYMBOL_GPL(task_nice); - -/** - * idle_cpu - is a given cpu idle currently? - * @cpu: the processor in question. - */ -int idle_cpu(int cpu) -{ - return cpu_curr(cpu) == cpu_rq(cpu)->idle; -} - -/** - * idle_task - return the idle task for a given cpu. - * @cpu: the processor in question. - */ -struct task_struct *idle_task(int cpu) -{ - return cpu_rq(cpu)->idle; -} - -/** - * find_process_by_pid - find a process with a matching PID value. - * @pid: the pid in question. - */ -static inline struct task_struct *find_process_by_pid(pid_t pid) -{ - return pid ? 
find_task_by_vpid(pid) : current; -} - -/* Actually do priority change: must hold grq lock. */ -static void -__setscheduler(struct task_struct *p, struct rq *rq, int policy, int prio) -{ - int oldrtprio, oldprio; - - BUG_ON(task_queued(p)); - - p->policy = policy; - oldrtprio = p->rt_priority; - p->rt_priority = prio; - p->normal_prio = normal_prio(p); - oldprio = p->prio; - /* we are holding p->pi_lock already */ - p->prio = rt_mutex_getprio(p); - if (task_running(p)) { - reset_rq_task(rq, p); - /* Resched only if we might now be preempted */ - if (p->prio > oldprio || p->rt_priority > oldrtprio) - resched_task(p); - } -} - -/* - * check the target process has a UID that matches the current process's - */ -static bool check_same_owner(struct task_struct *p) -{ - const struct cred *cred = current_cred(), *pcred; - bool match; - - rcu_read_lock(); - pcred = __task_cred(p); - match = (cred->euid == pcred->euid || - cred->euid == pcred->uid); - rcu_read_unlock(); - return match; -} - -static int __sched_setscheduler(struct task_struct *p, int policy, - struct sched_param *param, bool user) -{ - struct sched_param zero_param = { .sched_priority = 0 }; - int queued, retval, oldpolicy = -1; - unsigned long flags, rlim_rtprio = 0; - struct rq *rq; - - /* may grab non-irq protected spin_locks */ - BUG_ON(in_interrupt()); - - if (is_rt_policy(policy) && !capable(CAP_SYS_NICE)) { - unsigned long lflags; - - if (!lock_task_sighand(p, &lflags)) - return -ESRCH; - rlim_rtprio = p->signal->rlim[RLIMIT_RTPRIO].rlim_cur; - unlock_task_sighand(p, &lflags); - if (rlim_rtprio) - goto recheck; - /* - * If the caller requested an RT policy without having the - * necessary rights, we downgrade the policy to SCHED_ISO. - * We also set the parameter to zero to pass the checks. - */ - policy = SCHED_ISO; - param = &zero_param; - } -recheck: - /* double check policy once rq lock held */ - if (policy < 0) - policy = oldpolicy = p->policy; - else if (!SCHED_RANGE(policy)) - return -EINVAL; - /* - * Valid priorities for SCHED_FIFO and SCHED_RR are - * 1..MAX_USER_RT_PRIO-1, valid priority for SCHED_NORMAL and - * SCHED_BATCH is 0. 
- */ - if (param->sched_priority < 0 || - (p->mm && param->sched_priority > MAX_USER_RT_PRIO - 1) || - (!p->mm && param->sched_priority > MAX_RT_PRIO - 1)) - return -EINVAL; - if (is_rt_policy(policy) != (param->sched_priority != 0)) - return -EINVAL; - - /* - * Allow unprivileged RT tasks to decrease priority: - */ - if (user && !capable(CAP_SYS_NICE)) { - if (is_rt_policy(policy)) { - /* can't set/change the rt policy */ - if (policy != p->policy && !rlim_rtprio) - return -EPERM; - - /* can't increase priority */ - if (param->sched_priority > p->rt_priority && - param->sched_priority > rlim_rtprio) - return -EPERM; - } else { - switch (p->policy) { - /* - * Can only downgrade policies but not back to - * SCHED_NORMAL - */ - case SCHED_ISO: - if (policy == SCHED_ISO) - goto out; - if (policy == SCHED_NORMAL) - return -EPERM; - break; - case SCHED_BATCH: - if (policy == SCHED_BATCH) - goto out; - if (policy != SCHED_IDLEPRIO) - return -EPERM; - break; - case SCHED_IDLEPRIO: - if (policy == SCHED_IDLEPRIO) - goto out; - return -EPERM; - default: - break; - } - } - - /* can't change other user's priorities */ - if (!check_same_owner(p)) - return -EPERM; - } - - retval = security_task_setscheduler(p, policy, param); - if (retval) - return retval; - /* - * make sure no PI-waiters arrive (or leave) while we are - * changing the priority of the task: - */ - spin_lock_irqsave(&p->pi_lock, flags); - /* - * To be able to change p->policy safely, the apropriate - * runqueue lock must be held. - */ - rq = __task_grq_lock(p); - /* recheck policy now with rq lock held */ - if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) { - __task_grq_unlock(); - spin_unlock_irqrestore(&p->pi_lock, flags); - policy = oldpolicy = -1; - goto recheck; - } - update_clocks(rq); - queued = task_queued(p); - if (queued) - dequeue_task(p); - __setscheduler(p, rq, policy, param->sched_priority); - if (queued) { - enqueue_task(p); - try_preempt(p, rq); - } - __task_grq_unlock(); - spin_unlock_irqrestore(&p->pi_lock, flags); - - rt_mutex_adjust_pi(p); -out: - return 0; -} - -/** - * sched_setscheduler - change the scheduling policy and/or RT priority of a thread. - * @p: the task in question. - * @policy: new policy. - * @param: structure containing the new RT priority. - * - * NOTE that the task may be already dead. - */ -int sched_setscheduler(struct task_struct *p, int policy, - struct sched_param *param) -{ - return __sched_setscheduler(p, policy, param, true); -} - -EXPORT_SYMBOL_GPL(sched_setscheduler); - -/** - * sched_setscheduler_nocheck - change the scheduling policy and/or RT priority of a thread from kernelspace. - * @p: the task in question. - * @policy: new policy. - * @param: structure containing the new RT priority. - * - * Just like sched_setscheduler, only don't bother checking if the - * current context has permission. For example, this is needed in - * stop_machine(): we create temporary high priority worker threads, - * but our caller might not have that capability. 
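This is the path that gives BFS its unprivileged soft real time behaviour: a caller asking for SCHED_FIFO or SCHED_RR without CAP_SYS_NICE and without an RLIMIT_RTPRIO allowance is quietly downgraded to SCHED_ISO, and among the normal classes an unprivileged caller may only ever move down (ISO to BATCH or IDLEPRIO, BATCH to IDLEPRIO), never back up. From userspace that looks like the sketch below; the policy number is BFS specific and absent from glibc headers, so SCHED_ISO == 4 is an assumption here:

    #include <sched.h>
    #include <stdio.h>

    #ifndef SCHED_ISO
    #define SCHED_ISO 4     /* BFS policy number, assumed */
    #endif

    int main(void)
    {
            /* ISO is not an RT policy, so sched_priority must be 0;
             * a non-zero value here fails with EINVAL (see above). */
            struct sched_param sp = { .sched_priority = 0 };

            if (sched_setscheduler(0, SCHED_ISO, &sp) == -1)
                    perror("sched_setscheduler");
            return 0;
    }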
- */ -int sched_setscheduler_nocheck(struct task_struct *p, int policy, - struct sched_param *param) -{ - return __sched_setscheduler(p, policy, param, false); -} - -static int -do_sched_setscheduler(pid_t pid, int policy, struct sched_param __user *param) -{ - struct sched_param lparam; - struct task_struct *p; - int retval; - - if (!param || pid < 0) - return -EINVAL; - if (copy_from_user(&lparam, param, sizeof(struct sched_param))) - return -EFAULT; - - rcu_read_lock(); - retval = -ESRCH; - p = find_process_by_pid(pid); - if (p != NULL) - retval = sched_setscheduler(p, policy, &lparam); - rcu_read_unlock(); - - return retval; -} - -/** - * sys_sched_setscheduler - set/change the scheduler policy and RT priority - * @pid: the pid in question. - * @policy: new policy. - * @param: structure containing the new RT priority. - */ -asmlinkage long sys_sched_setscheduler(pid_t pid, int policy, - struct sched_param __user *param) -{ - /* negative values for policy are not valid */ - if (policy < 0) - return -EINVAL; - - return do_sched_setscheduler(pid, policy, param); -} - -/** - * sys_sched_setparam - set/change the RT priority of a thread - * @pid: the pid in question. - * @param: structure containing the new RT priority. - */ -SYSCALL_DEFINE2(sched_setparam, pid_t, pid, struct sched_param __user *, param) -{ - return do_sched_setscheduler(pid, -1, param); -} - -/** - * sys_sched_getscheduler - get the policy (scheduling class) of a thread - * @pid: the pid in question. - */ -SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid) -{ - struct task_struct *p; - int retval = -EINVAL; - - if (pid < 0) - goto out_nounlock; - - retval = -ESRCH; - read_lock(&tasklist_lock); - p = find_process_by_pid(pid); - if (p) { - retval = security_task_getscheduler(p); - if (!retval) - retval = p->policy; - } - read_unlock(&tasklist_lock); - -out_nounlock: - return retval; -} - -/** - * sys_sched_getparam - get the RT priority of a thread - * @pid: the pid in question. - * @param: structure containing the RT priority. - */ -SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param) -{ - struct sched_param lp; - struct task_struct *p; - int retval = -EINVAL; - - if (!param || pid < 0) - goto out_nounlock; - - read_lock(&tasklist_lock); - p = find_process_by_pid(pid); - retval = -ESRCH; - if (!p) - goto out_unlock; - - retval = security_task_getscheduler(p); - if (retval) - goto out_unlock; - - lp.sched_priority = p->rt_priority; - read_unlock(&tasklist_lock); - - /* - * This one might sleep, we cannot do it with a spinlock held ... - */ - retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0; - -out_nounlock: - return retval; - -out_unlock: - read_unlock(&tasklist_lock); - return retval; -} - -long sched_setaffinity(pid_t pid, const struct cpumask *in_mask) -{ - cpumask_var_t cpus_allowed, new_mask; - struct task_struct *p; - int retval; - - get_online_cpus(); - read_lock(&tasklist_lock); - - p = find_process_by_pid(pid); - if (!p) { - read_unlock(&tasklist_lock); - put_online_cpus(); - return -ESRCH; - } - - /* - * It is not safe to call set_cpus_allowed with the - * tasklist_lock held. We will bump the task_struct's - * usage count and then drop tasklist_lock.
- */ - get_task_struct(p); - read_unlock(&tasklist_lock); - - if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) { - retval = -ENOMEM; - goto out_put_task; - } - if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) { - retval = -ENOMEM; - goto out_free_cpus_allowed; - } - retval = -EPERM; - if (!check_same_owner(p) && !capable(CAP_SYS_NICE)) - goto out_unlock; - - retval = security_task_setscheduler(p, 0, NULL); - if (retval) - goto out_unlock; - - cpuset_cpus_allowed(p, cpus_allowed); - cpumask_and(new_mask, in_mask, cpus_allowed); -again: - retval = set_cpus_allowed_ptr(p, new_mask); - - if (!retval) { - cpuset_cpus_allowed(p, cpus_allowed); - if (!cpumask_subset(new_mask, cpus_allowed)) { - /* - * We must have raced with a concurrent cpuset - * update. Just reset the cpus_allowed to the - * cpuset's cpus_allowed - */ - cpumask_copy(new_mask, cpus_allowed); - goto again; - } - } -out_unlock: - free_cpumask_var(new_mask); -out_free_cpus_allowed: - free_cpumask_var(cpus_allowed); -out_put_task: - put_task_struct(p); - put_online_cpus(); - return retval; -} - -static int get_user_cpu_mask(unsigned long __user *user_mask_ptr, unsigned len, - cpumask_t *new_mask) -{ - if (len < sizeof(cpumask_t)) { - memset(new_mask, 0, sizeof(cpumask_t)); - } else if (len > sizeof(cpumask_t)) { - len = sizeof(cpumask_t); - } - return copy_from_user(new_mask, user_mask_ptr, len) ? -EFAULT : 0; -} - - -/** - * sys_sched_setaffinity - set the cpu affinity of a process - * @pid: pid of the process - * @len: length in bytes of the bitmask pointed to by user_mask_ptr - * @user_mask_ptr: user-space pointer to the new cpu mask - */ -SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len, - unsigned long __user *, user_mask_ptr) -{ - cpumask_var_t new_mask; - int retval; - - if (!alloc_cpumask_var(&new_mask, GFP_KERNEL)) - return -ENOMEM; - - retval = get_user_cpu_mask(user_mask_ptr, len, new_mask); - if (retval == 0) - retval = sched_setaffinity(pid, new_mask); - free_cpumask_var(new_mask); - return retval; -} - -long sched_getaffinity(pid_t pid, cpumask_t *mask) -{ - struct task_struct *p; - int retval; - - mutex_lock(&sched_hotcpu_mutex); - read_lock(&tasklist_lock); - - retval = -ESRCH; - p = find_process_by_pid(pid); - if (!p) - goto out_unlock; - - retval = security_task_getscheduler(p); - if (retval) - goto out_unlock; - - cpus_and(*mask, p->cpus_allowed, cpu_online_map); - -out_unlock: - read_unlock(&tasklist_lock); - mutex_unlock(&sched_hotcpu_mutex); - if (retval) - return retval; - - return 0; -} - -/** - * sys_sched_getaffinity - get the cpu affinity of a process - * @pid: pid of the process - * @len: length in bytes of the bitmask pointed to by user_mask_ptr - * @user_mask_ptr: user-space pointer to hold the current cpu mask - */ -SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len, - unsigned long __user *, user_mask_ptr) -{ - int ret; - cpumask_var_t mask; - - if (len < cpumask_size()) - return -EINVAL; - - if (!alloc_cpumask_var(&mask, GFP_KERNEL)) - return -ENOMEM; - - ret = sched_getaffinity(pid, mask); - if (ret == 0) { - if (copy_to_user(user_mask_ptr, mask, cpumask_size())) - ret = -EFAULT; - else - ret = cpumask_size(); - } - free_cpumask_var(mask); - - return ret; -} - -/** - * sys_sched_yield - yield the current processor to other threads. - * - * This function yields the current CPU to other tasks. It does this by - * scheduling away the current task. If it still has the earliest deadline - * it will be scheduled again as the next task. 
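The affinity pair above is easiest to see from the calling side. A userspace sketch using the glibc wrappers (pinning to CPU 0 is an arbitrary choice for the example):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(0, &set);       /* run only on CPU 0 */

            if (sched_setaffinity(0, sizeof(set), &set) == -1)
                    perror("sched_setaffinity");

            if (sched_getaffinity(0, sizeof(set), &set) == 0)
                    printf("pinned to cpu0: %d\n", CPU_ISSET(0, &set));
            return 0;
    }

Note the asymmetric size handling above: the set side zero-fills or truncates a mismatched buffer, while the get side rejects anything smaller than the kernel's cpumask with -EINVAL.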
- */ -SYSCALL_DEFINE0(sched_yield) -{ - struct task_struct *p; - struct rq *rq; - - p = current; - rq = task_grq_lock_irq(p); - schedstat_inc(rq, yld_count); - requeue_task(p); - - /* - * Since we are going to call schedule() anyway, there's - * no need to preempt or enable interrupts: - */ - __release(grq.lock); - spin_release(&grq.lock.dep_map, 1, _THIS_IP_); - _raw_spin_unlock(&grq.lock); - preempt_enable_no_resched(); - - schedule(); - - return 0; -} - -static inline int should_resched(void) -{ - return need_resched() && !(preempt_count() & PREEMPT_ACTIVE); -} - -static void __cond_resched(void) -{ - /* NOT a real fix but will make voluntary preempt work. A silly thing. */ - if (unlikely(system_state != SYSTEM_RUNNING)) - return; -#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP - __might_sleep(__FILE__, __LINE__); -#endif - /* - * The BKS might be reacquired before we have dropped - * PREEMPT_ACTIVE, which could trigger a second - * cond_resched() call. - */ - do { - add_preempt_count(PREEMPT_ACTIVE); - schedule(); - sub_preempt_count(PREEMPT_ACTIVE); - } while (need_resched()); -} - -int __sched _cond_resched(void) -{ - if (should_resched()) { - __cond_resched(); - return 1; - } - return 0; -} -EXPORT_SYMBOL(_cond_resched); - -/* - * cond_resched_lock() - if a reschedule is pending, drop the given lock, - * call schedule, and on return reacquire the lock. - * - * This works OK both with and without CONFIG_PREEMPT. We do strange low-level - * operations here to prevent schedule() from being called twice (once via - * spin_unlock(), once by hand). - */ -int cond_resched_lock(spinlock_t *lock) -{ - int resched = should_resched(); - int ret = 0; - - if (spin_needbreak(lock) || resched) { - spin_unlock(lock); - if (resched) - __cond_resched(); - else - cpu_relax(); - ret = 1; - spin_lock(lock); - } - return ret; -} -EXPORT_SYMBOL(cond_resched_lock); - -int __sched cond_resched_softirq(void) -{ - BUG_ON(!in_softirq()); - - if (should_resched()) { - local_bh_enable(); - __cond_resched(); - local_bh_disable(); - return 1; - } - return 0; -} -EXPORT_SYMBOL(cond_resched_softirq); - -/** - * yield - yield the current processor to other threads. - * - * This is a shortcut for kernel-space yielding - it marks the - * thread runnable and calls sys_sched_yield(). - */ -void __sched yield(void) -{ - set_current_state(TASK_RUNNING); - sys_sched_yield(); -} -EXPORT_SYMBOL(yield); - -/* - * This task is about to go to sleep on IO. Increment rq->nr_iowait so - * that process accounting knows that this is a task in IO wait state. - * - * But don't do that if it is a deliberate, throttling IO wait (this task - * has set its backing_dev_info: the queue against which it should throttle) - */ -void __sched io_schedule(void) -{ - struct rq *rq = &__raw_get_cpu_var(runqueues); - - delayacct_blkio_start(); - atomic_inc(&rq->nr_iowait); - schedule(); - atomic_dec(&rq->nr_iowait); - delayacct_blkio_end(); -} -EXPORT_SYMBOL(io_schedule); - -long __sched io_schedule_timeout(long timeout) -{ - struct rq *rq = &__raw_get_cpu_var(runqueues); - long ret; - - delayacct_blkio_start(); - atomic_inc(&rq->nr_iowait); - ret = schedule_timeout(timeout); - atomic_dec(&rq->nr_iowait); - delayacct_blkio_end(); - return ret; -} - -/** - * sys_sched_get_priority_max - return maximum RT priority. - * @policy: scheduling class. - * - * this syscall returns the maximum rt_priority that can be used - * by a given scheduling class.
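cond_resched_lock() above is the standard tool for long lock-held walks: it drops the lock only when a reschedule is pending or the lock is contended, and its non-zero return warns the caller that the lock was released and state must be revalidated. The usual shape, as a kernel-style sketch with an invented lock and helper:

    static DEFINE_SPINLOCK(big_lock);

    static void walk_all(struct list_head *head)
    {
            struct list_head *pos;

            spin_lock(&big_lock);
            list_for_each(pos, head) {
                    process_one(pos);       /* assumed helper */
                    if (cond_resched_lock(&big_lock)) {
                            /* Lock was dropped and retaken; the list may
                             * have changed, so restart from the head. */
                            pos = head;
                    }
            }
            spin_unlock(&big_lock);
    }

Resetting pos to head restarts the walk, because the loop's increment then advances to the first entry.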
- */ -SYSCALL_DEFINE1(sched_get_priority_max, int, policy) -{ - int ret = -EINVAL; - - switch (policy) { - case SCHED_FIFO: - case SCHED_RR: - ret = MAX_USER_RT_PRIO-1; - break; - case SCHED_NORMAL: - case SCHED_BATCH: - case SCHED_ISO: - case SCHED_IDLEPRIO: - ret = 0; - break; - } - return ret; -} - -/** - * sys_sched_get_priority_min - return minimum RT priority. - * @policy: scheduling class. - * - * this syscall returns the minimum rt_priority that can be used - * by a given scheduling class. - */ -SYSCALL_DEFINE1(sched_get_priority_min, int, policy) -{ - int ret = -EINVAL; - - switch (policy) { - case SCHED_FIFO: - case SCHED_RR: - ret = 1; - break; - case SCHED_NORMAL: - case SCHED_BATCH: - case SCHED_ISO: - case SCHED_IDLEPRIO: - ret = 0; - break; - } - return ret; -} - -/** - * sys_sched_rr_get_interval - return the default timeslice of a process. - * @pid: pid of the process. - * @interval: userspace pointer to the timeslice value. - * - * this syscall writes the default timeslice value of a given process - * into the user-space timespec buffer. A value of '0' means infinity. - */ -SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid, - struct timespec __user *, interval) -{ - struct task_struct *p; - int retval = -EINVAL; - struct timespec t; - - if (pid < 0) - goto out_nounlock; - - retval = -ESRCH; - read_lock(&tasklist_lock); - p = find_process_by_pid(pid); - if (!p) - goto out_unlock; - - retval = security_task_getscheduler(p); - if (retval) - goto out_unlock; - - t = ns_to_timespec(p->policy == SCHED_FIFO ? 0 : - MS_TO_NS(task_timeslice(p))); - read_unlock(&tasklist_lock); - retval = copy_to_user(interval, &t, sizeof(t)) ? -EFAULT : 0; -out_nounlock: - return retval; -out_unlock: - read_unlock(&tasklist_lock); - return retval; -} - -static const char stat_nam[] = TASK_STATE_TO_CHAR_STR; - -void sched_show_task(struct task_struct *p) -{ - unsigned long free = 0; - unsigned state; - - state = p->state ? __ffs(p->state) + 1 : 0; - printk(KERN_INFO "%-13.13s %c", p->comm, - state < sizeof(stat_nam) - 1 ? stat_nam[state] : '?'); -#if BITS_PER_LONG == 32 - if (state == TASK_RUNNING) - printk(KERN_CONT " running "); - else - printk(KERN_CONT " %08lx ", thread_saved_pc(p)); -#else - if (state == TASK_RUNNING) - printk(KERN_CONT " running task "); - else - printk(KERN_CONT " %016lx ", thread_saved_pc(p)); -#endif -#ifdef CONFIG_DEBUG_STACK_USAGE - free = stack_not_used(p); -#endif - printk(KERN_CONT "%5lu %5d %6d 0x%08lx\n", free, - task_pid_nr(p), task_pid_nr(p->real_parent), - (unsigned long)task_thread_info(p)->flags); - - show_stack(p, NULL); -} - -void show_state_filter(unsigned long state_filter) -{ - struct task_struct *g, *p; - -#if BITS_PER_LONG == 32 - printk(KERN_INFO - " task PC stack pid father\n"); -#else - printk(KERN_INFO - " task PC stack pid father\n"); -#endif - read_lock(&tasklist_lock); - do_each_thread(g, p) { - /* - * reset the NMI-timeout, listing all files on a slow - * console might take alot of time: - */ - touch_nmi_watchdog(); - if (!state_filter || (p->state & state_filter)) - sched_show_task(p); - } while_each_thread(g, p); - - touch_all_softlockup_watchdogs(); - - read_unlock(&tasklist_lock); - /* - * Only show locks if all tasks are dumped: - */ - if (state_filter == -1) - debug_show_all_locks(); -} - -/** - * init_idle - set up an idle thread for a given CPU - * @idle: task in question - * @cpu: cpu the idle task belongs to - * - * NOTE: this function does not set the idle thread's NEED_RESCHED - * flag, to make booting more robust. 
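From userspace, these two syscalls are the portable way to discover the ranges above; under BFS every non-RT policy reports a 0..0 range. A minimal query:

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            printf("SCHED_FIFO : %d..%d\n",
                   sched_get_priority_min(SCHED_FIFO),
                   sched_get_priority_max(SCHED_FIFO));
            printf("SCHED_OTHER: %d..%d\n",
                   sched_get_priority_min(SCHED_OTHER),
                   sched_get_priority_max(SCHED_OTHER));
            return 0;
    }

With MAX_USER_RT_PRIO at its usual 100, the first line prints 1..99.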
- */ -void init_idle(struct task_struct *idle, int cpu) -{ - struct rq *rq = cpu_rq(cpu); - unsigned long flags; - - time_grq_lock(rq, &flags); - idle->last_ran = rq->clock; - idle->state = TASK_RUNNING; - /* Setting prio to illegal value shouldn't matter when never queued */ - idle->prio = PRIO_LIMIT; - set_rq_task(rq, idle); - idle->cpus_allowed = cpumask_of_cpu(cpu); - /* Silence PROVE_RCU */ - rcu_read_lock(); - set_task_cpu(idle, cpu); - rcu_read_unlock(); - rq->curr = rq->idle = idle; - idle->oncpu = 1; - grq_unlock_irqrestore(&flags); - - /* Set the preempt count _outside_ the spinlocks! */ -#if defined(CONFIG_PREEMPT) && !defined(CONFIG_PREEMPT_BKL) - task_thread_info(idle)->preempt_count = (idle->lock_depth >= 0); -#else - task_thread_info(idle)->preempt_count = 0; -#endif - ftrace_graph_init_task(idle); -} - -/* - * In a system that switches off the HZ timer nohz_cpu_mask - * indicates which cpus entered this state. This is used - * in the rcu update to wait only for active cpus. For system - * which do not switch off the HZ timer nohz_cpu_mask should - * always be CPU_BITS_NONE. - */ -cpumask_var_t nohz_cpu_mask; - -#ifdef CONFIG_SMP -#ifdef CONFIG_NO_HZ -static struct { - atomic_t load_balancer; - cpumask_var_t cpu_mask; - cpumask_var_t ilb_grp_nohz_mask; -} nohz ____cacheline_aligned = { - .load_balancer = ATOMIC_INIT(-1), -}; - -int get_nohz_load_balancer(void) -{ - return atomic_read(&nohz.load_balancer); -} - -/* - * This routine will try to nominate the ilb (idle load balancing) - * owner among the cpus whose ticks are stopped. ilb owner will do the idle - * load balancing on behalf of all those cpus. If all the cpus in the system - * go into this tickless mode, then there will be no ilb owner (as there is - * no need for one) and all the cpus will sleep till the next wakeup event - * arrives... - * - * For the ilb owner, tick is not stopped. And this tick will be used - * for idle load balancing. ilb owner will still be part of - * nohz.cpu_mask.. - * - * While stopping the tick, this cpu will become the ilb owner if there - * is no other owner. And will be the owner till that cpu becomes busy - * or if all cpus in the system stop their ticks at which point - * there is no need for ilb owner. - * - * When the ilb owner becomes busy, it nominates another owner, during the - * next busy scheduler_tick() - */ -int select_nohz_load_balancer(int stop_tick) -{ - int cpu = smp_processor_id(); - - if (stop_tick) { - cpu_rq(cpu)->in_nohz_recently = 1; - - if (!cpu_active(cpu)) { - if (atomic_read(&nohz.load_balancer) != cpu) - return 0; - - /* - * If we are going offline and still the leader, - * give up! 
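The ilb nomination described above is a small lock-free election: the first CPU to swing load_balancer from -1 to its own id owns idle balancing, and owners resign with another compare-and-swap so a stale owner can never be cleared by accident. The same shape in portable C11 atomics, as a model of the idea rather than the kernel code:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int load_balancer = ATOMIC_VAR_INIT(-1);

    /* Returns 1 if this cpu is (or just became) the ilb owner. */
    static int try_become_ilb(int cpu)
    {
            int expected = -1;

            if (atomic_compare_exchange_strong(&load_balancer,
                                               &expected, cpu))
                    return 1;
            return atomic_load(&load_balancer) == cpu;
    }

    static void resign_ilb(int cpu)
    {
            int expected = cpu;

            /* Only clears the slot if we still own it. */
            atomic_compare_exchange_strong(&load_balancer, &expected, -1);
    }

    int main(void)
    {
            printf("cpu1: %d\n", try_become_ilb(1));  /* 1: elected */
            printf("cpu2: %d\n", try_become_ilb(2));  /* 0: lost    */
            resign_ilb(1);
            printf("cpu2: %d\n", try_become_ilb(2));  /* 1: elected */
            return 0;
    }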
- */ - if (atomic_cmpxchg(&nohz.load_balancer, cpu, -1) != cpu) - BUG(); - - return 0; - } - - cpumask_set_cpu(cpu, nohz.cpu_mask); - - /* time for ilb owner also to sleep */ - if (cpumask_weight(nohz.cpu_mask) == num_online_cpus()) { - if (atomic_read(&nohz.load_balancer) == cpu) - atomic_set(&nohz.load_balancer, -1); - return 0; - } - - if (atomic_read(&nohz.load_balancer) == -1) { - /* make me the ilb owner */ - if (atomic_cmpxchg(&nohz.load_balancer, -1, cpu) == -1) - return 1; - } else if (atomic_read(&nohz.load_balancer) == cpu) - return 1; - } else { - if (!cpumask_test_cpu(cpu, nohz.cpu_mask)) - return 0; - - cpumask_clear_cpu(cpu, nohz.cpu_mask); - - if (atomic_read(&nohz.load_balancer) == cpu) - if (atomic_cmpxchg(&nohz.load_balancer, cpu, -1) != cpu) - BUG(); - } - return 0; -} - -/* - * When add_timer_on() enqueues a timer into the timer wheel of an - * idle CPU then this timer might expire before the next timer event - * which is scheduled to wake up that CPU. In case of a completely - * idle system the next event might even be infinite time into the - * future. wake_up_idle_cpu() ensures that the CPU is woken up and - * leaves the inner idle loop so the newly added timer is taken into - * account when the CPU goes back to idle and evaluates the timer - * wheel for the next timer event. - */ -void wake_up_idle_cpu(int cpu) -{ - struct task_struct *idle; - struct rq *rq; - - if (cpu == smp_processor_id()) - return; - - rq = cpu_rq(cpu); - idle = rq->idle; - - /* - * This is safe, as this function is called with the timer - * wheel base lock of (cpu) held. When the CPU is on the way - * to idle and has not yet set rq->curr to idle then it will - * be serialised on the timer wheel base lock and take the new - * timer into account automatically. - */ - if (unlikely(rq->curr != idle)) - return; - - /* - * We can set TIF_RESCHED on the idle task of the other CPU - * lockless. The worst case is that the other CPU runs the - * idle task through an additional NOOP schedule() - */ - set_tsk_need_resched(idle); - - /* NEED_RESCHED must be visible before we test polling */ - smp_mb(); - if (!tsk_is_polling(idle)) - smp_send_reschedule(cpu); -} - -#endif /* CONFIG_NO_HZ */ - -/* - * Change a given task's CPU affinity. Migrate the thread to a - * proper CPU and schedule it away if the CPU it's executing on - * is removed from the allowed bitmask. - * - * NOTE: the caller must have a valid reference to the task, the - * task must not exit() & deallocate itself prematurely. The - * call is not atomic; no spinlocks may be held. - */ -int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask) -{ - unsigned long flags; - int running_wrong = 0; - int queued = 0; - struct rq *rq; - int ret = 0; - - rq = task_grq_lock(p, &flags); - if (!cpumask_intersects(new_mask, cpu_online_mask)) { - ret = -EINVAL; - goto out; - } - - if (unlikely((p->flags & PF_THREAD_BOUND) && p != current && - !cpumask_equal(&p->cpus_allowed, new_mask))) { - ret = -EINVAL; - goto out; - } - - queued = task_queued(p); - - cpumask_copy(&p->cpus_allowed, new_mask); - - /* Can the task run on the task's current CPU? If so, we're done */ - if (cpumask_test_cpu(task_cpu(p), new_mask)) - goto out; - - if (task_running(p)) { - /* Task is running on the wrong cpu now, reschedule it. 
*/ - if (rq == this_rq()) { - set_tsk_need_resched(p); - running_wrong = 1; - } else - resched_task(p); - } else - set_task_cpu(p, cpumask_any_and(cpu_online_mask, new_mask)); - -out: - if (queued) - try_preempt(p, rq); - task_grq_unlock(&flags); - - if (running_wrong) - _cond_resched(); - - return ret; -} -EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr); - -#ifdef CONFIG_HOTPLUG_CPU -/* - * Reschedule a task if it's on a dead CPU. - */ -void move_task_off_dead_cpu(int dead_cpu, struct task_struct *p) -{ - unsigned long flags; - struct rq *rq, *dead_rq; - - dead_rq = cpu_rq(dead_cpu); - rq = task_grq_lock(p, &flags); - if (rq == dead_rq && task_running(p)) - resched_task(p); - task_grq_unlock(&flags); - -} - -/* Run through task list and find tasks affined to just the dead cpu, then - * allocate a new affinity */ -static void break_sole_affinity(int src_cpu) -{ - struct task_struct *p, *t; - - do_each_thread(t, p) { - if (!online_cpus(p)) { - cpumask_copy(&p->cpus_allowed, cpu_possible_mask); - /* - * Don't tell them about moving exiting tasks or - * kernel threads (both mm NULL), since they never - * leave kernel. - */ - if (p->mm && printk_ratelimit()) { - printk(KERN_INFO "process %d (%s) no " - "longer affine to cpu %d\n", - task_pid_nr(p), p->comm, src_cpu); - } - } - clear_sticky(p); - } while_each_thread(t, p); -} - -/* - * Schedules idle task to be the next runnable task on current CPU. - * It does so by boosting its priority to highest possible. - * Used by CPU offline code. - */ -void sched_idle_next(void) -{ - int this_cpu = smp_processor_id(); - struct rq *rq = cpu_rq(this_cpu); - struct task_struct *idle = rq->idle; - unsigned long flags; - - /* cpu has to be offline */ - BUG_ON(cpu_online(this_cpu)); - - /* - * Strictly not necessary since rest of the CPUs are stopped by now - * and interrupts disabled on the current cpu. - */ - grq_lock_irqsave(&flags); - break_sole_affinity(this_cpu); - - __setscheduler(idle, rq, SCHED_FIFO, MAX_RT_PRIO - 1); - - activate_idle_task(idle); - set_tsk_need_resched(rq->curr); - - grq_unlock_irqrestore(&flags); -} - -/* - * Ensures that the idle task is using init_mm right before its cpu goes - * offline. - */ -void idle_task_exit(void) -{ - struct mm_struct *mm = current->active_mm; - - BUG_ON(cpu_online(smp_processor_id())); - - if (mm != &init_mm) - switch_mm(mm, &init_mm, current); - mmdrop(mm); -} - -#endif /* CONFIG_HOTPLUG_CPU */ - -#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL) - -static struct ctl_table sd_ctl_dir[] = { - { - .procname = "sched_domain", - .mode = 0555, - }, - {0, }, -}; - -static struct ctl_table sd_ctl_root[] = { - { - .ctl_name = CTL_KERN, - .procname = "kernel", - .mode = 0555, - .child = sd_ctl_dir, - }, - {0, }, -}; - -static struct ctl_table *sd_alloc_ctl_entry(int n) -{ - struct ctl_table *entry = - kcalloc(n, sizeof(struct ctl_table), GFP_KERNEL); - - return entry; -} - -static void sd_free_ctl_entry(struct ctl_table **tablep) -{ - struct ctl_table *entry; - - /* - * In the intermediate directories, both the child directory and - * procname are dynamically allocated and could fail but the mode - * will always be set. In the lowest directory the names are - * static strings and all have proc handlers. 
- */ - for (entry = *tablep; entry->mode; entry++) { - if (entry->child) - sd_free_ctl_entry(&entry->child); - if (entry->proc_handler == NULL) - kfree(entry->procname); - } - - kfree(*tablep); - *tablep = NULL; -} - -static void -set_table_entry(struct ctl_table *entry, - const char *procname, void *data, int maxlen, - mode_t mode, proc_handler *proc_handler) -{ - entry->procname = procname; - entry->data = data; - entry->maxlen = maxlen; - entry->mode = mode; - entry->proc_handler = proc_handler; -} - -static struct ctl_table * -sd_alloc_ctl_domain_table(struct sched_domain *sd) -{ - struct ctl_table *table = sd_alloc_ctl_entry(13); - - if (table == NULL) - return NULL; - - set_table_entry(&table[0], "min_interval", &sd->min_interval, - sizeof(long), 0644, proc_doulongvec_minmax); - set_table_entry(&table[1], "max_interval", &sd->max_interval, - sizeof(long), 0644, proc_doulongvec_minmax); - set_table_entry(&table[2], "busy_idx", &sd->busy_idx, - sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[3], "idle_idx", &sd->idle_idx, - sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[4], "newidle_idx", &sd->newidle_idx, - sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[5], "wake_idx", &sd->wake_idx, - sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[6], "forkexec_idx", &sd->forkexec_idx, - sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[7], "busy_factor", &sd->busy_factor, - sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[8], "imbalance_pct", &sd->imbalance_pct, - sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[9], "cache_nice_tries", - &sd->cache_nice_tries, - sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[10], "flags", &sd->flags, - sizeof(int), 0644, proc_dointvec_minmax); - set_table_entry(&table[11], "name", sd->name, - CORENAME_MAX_SIZE, 0444, proc_dostring); - /* &table[12] is terminator */ - - return table; -} - -static ctl_table *sd_alloc_ctl_cpu_table(int cpu) -{ - struct ctl_table *entry, *table; - struct sched_domain *sd; - int domain_num = 0, i; - char buf[32]; - - for_each_domain(cpu, sd) - domain_num++; - entry = table = sd_alloc_ctl_entry(domain_num + 1); - if (table == NULL) - return NULL; - - i = 0; - for_each_domain(cpu, sd) { - snprintf(buf, 32, "domain%d", i); - entry->procname = kstrdup(buf, GFP_KERNEL); - entry->mode = 0555; - entry->child = sd_alloc_ctl_domain_table(sd); - entry++; - i++; - } - return table; -} - -static struct ctl_table_header *sd_sysctl_header; -static void register_sched_domain_sysctl(void) -{ - int i, cpu_num = num_online_cpus(); - struct ctl_table *entry = sd_alloc_ctl_entry(cpu_num + 1); - char buf[32]; - - WARN_ON(sd_ctl_dir[0].child); - sd_ctl_dir[0].child = entry; - - if (entry == NULL) - return; - - for_each_online_cpu(i) { - snprintf(buf, 32, "cpu%d", i); - entry->procname = kstrdup(buf, GFP_KERNEL); - entry->mode = 0555; - entry->child = sd_alloc_ctl_cpu_table(i); - entry++; - } - - WARN_ON(sd_sysctl_header); - sd_sysctl_header = register_sysctl_table(sd_ctl_root); -} - -/* may be called multiple times per register */ -static void unregister_sched_domain_sysctl(void) -{ - if (sd_sysctl_header) - unregister_sysctl_table(sd_sysctl_header); - sd_sysctl_header = NULL; - if (sd_ctl_dir[0].child) - sd_free_ctl_entry(&sd_ctl_dir[0].child); -} -#else -static void register_sched_domain_sysctl(void) -{ -} -static void unregister_sched_domain_sysctl(void) -{ -} -#endif - -static void set_rq_online(struct rq 
*rq) -{ - if (!rq->online) { - cpumask_set_cpu(cpu_of(rq), rq->rd->online); - rq->online = 1; - } -} - -static void set_rq_offline(struct rq *rq) -{ - if (rq->online) { - cpumask_clear_cpu(cpu_of(rq), rq->rd->online); - rq->online = 0; - } -} - -/* - * migration_call - callback that gets triggered when a CPU is added. - */ -static int __cpuinit -migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu) -{ - struct task_struct *idle; - int cpu = (long)hcpu; - unsigned long flags; - struct rq *rq = cpu_rq(cpu); - - switch (action) { - - case CPU_UP_PREPARE: - case CPU_UP_PREPARE_FROZEN: - break; - - case CPU_ONLINE: - case CPU_ONLINE_FROZEN: - /* Update our root-domain */ - grq_lock_irqsave(&flags); - if (rq->rd) { - BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span)); - - set_rq_online(rq); - } - grq.noc = num_online_cpus(); - grq_unlock_irqrestore(&flags); - break; - -#ifdef CONFIG_HOTPLUG_CPU - case CPU_UP_CANCELED: - case CPU_UP_CANCELED_FROZEN: - break; - - case CPU_DEAD: - case CPU_DEAD_FROZEN: - idle = rq->idle; - /* Idle task back to normal (off runqueue, low prio) */ - grq_lock_irq(); - return_task(idle, 1); - idle->static_prio = MAX_PRIO; - __setscheduler(idle, rq, SCHED_NORMAL, 0); - idle->prio = PRIO_LIMIT; - set_rq_task(rq, idle); - update_clocks(rq); - grq_unlock_irq(); - break; - - case CPU_DYING: - case CPU_DYING_FROZEN: - /* Update our root-domain */ - grq_lock_irqsave(&flags); - if (rq->rd) { - BUG_ON(!cpumask_test_cpu(cpu, rq->rd->span)); - set_rq_offline(rq); - } - grq.noc = num_online_cpus(); - grq_unlock_irqrestore(&flags); - break; -#endif - } - return NOTIFY_OK; -} - -/* - * Register at high priority so that task migration (migrate_all_tasks) - * happens before everything else. This has to be lower priority than - * the notifier in the perf_counter subsystem, though. - */ -static struct notifier_block __cpuinitdata migration_notifier = { - .notifier_call = migration_call, - .priority = 10 -}; - -int __init migration_init(void) -{ - void *cpu = (void *)(long)smp_processor_id(); - int err; - - /* Start one for the boot CPU: */ - err = migration_call(&migration_notifier, CPU_UP_PREPARE, cpu); - BUG_ON(err == NOTIFY_BAD); - migration_call(&migration_notifier, CPU_ONLINE, cpu); - register_cpu_notifier(&migration_notifier); - - return 0; -} -early_initcall(migration_init); -#endif - -/* - * sched_domains_mutex serialises calls to arch_init_sched_domains, - * detach_destroy_domains and partition_sched_domains. 
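Other subsystems hook CPU hotplug through the same notifier chain; the .priority = 10 above is only there to order this callback ahead of ordinary (priority 0) notifiers. A hypothetical module-side sketch of the same era's API, with invented names:

    #include <linux/cpu.h>
    #include <linux/notifier.h>
    #include <linux/kernel.h>

    static int my_cpu_callback(struct notifier_block *nfb,
                               unsigned long action, void *hcpu)
    {
            long cpu = (long)hcpu;

            if (action == CPU_ONLINE)
                    pr_info("cpu %ld came online\n", cpu);
            return NOTIFY_OK;
    }

    static struct notifier_block my_cpu_notifier = {
            .notifier_call = my_cpu_callback,
            .priority = 0,          /* runs after the migration notifier */
    };

    /* somewhere in init code: register_cpu_notifier(&my_cpu_notifier); */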
- */ -static DEFINE_MUTEX(sched_domains_mutex); - -#ifdef CONFIG_SMP - -#ifdef CONFIG_SCHED_DEBUG - -static int sched_domain_debug_one(struct sched_domain *sd, int cpu, int level, - struct cpumask *groupmask) -{ - struct sched_group *group = sd->groups; - char str[256]; - - cpulist_scnprintf(str, sizeof(str), sched_domain_span(sd)); - cpumask_clear(groupmask); - - printk(KERN_DEBUG "%*s domain %d: ", level, "", level); - - if (!(sd->flags & SD_LOAD_BALANCE)) { - printk("does not load-balance\n"); - if (sd->parent) - printk(KERN_ERR "ERROR: !SD_LOAD_BALANCE domain" - " has parent"); - return -1; - } - - printk(KERN_CONT "span %s level %s\n", str, sd->name); - - if (!cpumask_test_cpu(cpu, sched_domain_span(sd))) { - printk(KERN_ERR "ERROR: domain->span does not contain " - "CPU%d\n", cpu); - } - if (!cpumask_test_cpu(cpu, sched_group_cpus(group))) { - printk(KERN_ERR "ERROR: domain->groups does not contain" - " CPU%d\n", cpu); - } - - printk(KERN_DEBUG "%*s groups:", level + 1, ""); - do { - if (!group) { - printk("\n"); - printk(KERN_ERR "ERROR: group is NULL\n"); - break; - } - - if (!group->__cpu_power) { - printk(KERN_CONT "\n"); - printk(KERN_ERR "ERROR: domain->cpu_power not " - "set\n"); - break; - } - - if (!cpumask_weight(sched_group_cpus(group))) { - printk(KERN_CONT "\n"); - printk(KERN_ERR "ERROR: empty group\n"); - break; - } - - if (cpumask_intersects(groupmask, sched_group_cpus(group))) { - printk(KERN_CONT "\n"); - printk(KERN_ERR "ERROR: repeated CPUs\n"); - break; - } - - cpumask_or(groupmask, groupmask, sched_group_cpus(group)); - - cpulist_scnprintf(str, sizeof(str), sched_group_cpus(group)); - - printk(KERN_CONT " %s", str); - if (group->__cpu_power != SCHED_LOAD_SCALE) { - printk(KERN_CONT " (__cpu_power = %d)", - group->__cpu_power); - } - - group = group->next; - } while (group != sd->groups); - printk(KERN_CONT "\n"); - - if (!cpumask_equal(sched_domain_span(sd), groupmask)) - printk(KERN_ERR "ERROR: groups don't span domain->span\n"); - - if (sd->parent && - !cpumask_subset(groupmask, sched_domain_span(sd->parent))) - printk(KERN_ERR "ERROR: parent span is not a superset " - "of domain->span\n"); - return 0; -} - -static void sched_domain_debug(struct sched_domain *sd, int cpu) -{ - cpumask_var_t groupmask; - int level = 0; - - if (!sd) { - printk(KERN_DEBUG "CPU%d attaching NULL sched-domain.\n", cpu); - return; - } - - printk(KERN_DEBUG "CPU%d attaching sched-domain:\n", cpu); - - if (!alloc_cpumask_var(&groupmask, GFP_KERNEL)) { - printk(KERN_DEBUG "Cannot load-balance (out of memory)\n"); - return; - } - - for (;;) { - if (sched_domain_debug_one(sd, cpu, level, groupmask)) - break; - level++; - sd = sd->parent; - if (!sd) - break; - } - free_cpumask_var(groupmask); -} -#else /* !CONFIG_SCHED_DEBUG */ -# define sched_domain_debug(sd, cpu) do { } while (0) -#endif /* CONFIG_SCHED_DEBUG */ - -static int sd_degenerate(struct sched_domain *sd) -{ - if (cpumask_weight(sched_domain_span(sd)) == 1) - return 1; - - /* Following flags need at least 2 groups */ - if (sd->flags & (SD_LOAD_BALANCE | - SD_BALANCE_NEWIDLE | - SD_BALANCE_FORK | - SD_BALANCE_EXEC | - SD_SHARE_CPUPOWER | - SD_SHARE_PKG_RESOURCES)) { - if (sd->groups != sd->groups->next) - return 0; - } - - /* Following flags don't use groups */ - if (sd->flags & (SD_WAKE_IDLE | - SD_WAKE_AFFINE | - SD_WAKE_BALANCE)) - return 0; - - return 1; -} - -static int -sd_parent_degenerate(struct sched_domain *sd, struct sched_domain *parent) -{ - unsigned long cflags = sd->flags, pflags = parent->flags; - - if 
(sd_degenerate(parent)) - return 1; - - if (!cpumask_equal(sched_domain_span(sd), sched_domain_span(parent))) - return 0; - - /* Does parent contain flags not in child? */ - /* WAKE_BALANCE is a subset of WAKE_AFFINE */ - if (cflags & SD_WAKE_AFFINE) - pflags &= ~SD_WAKE_BALANCE; - /* Flags needing groups don't count if only 1 group in parent */ - if (parent->groups == parent->groups->next) { - pflags &= ~(SD_LOAD_BALANCE | - SD_BALANCE_NEWIDLE | - SD_BALANCE_FORK | - SD_BALANCE_EXEC | - SD_SHARE_CPUPOWER | - SD_SHARE_PKG_RESOURCES); - if (nr_node_ids == 1) - pflags &= ~SD_SERIALIZE; - } - if (~cflags & pflags) - return 0; - - return 1; -} - -static void free_rootdomain(struct root_domain *rd) -{ - free_cpumask_var(rd->rto_mask); - free_cpumask_var(rd->online); - free_cpumask_var(rd->span); - kfree(rd); -} - -static void rq_attach_root(struct rq *rq, struct root_domain *rd) -{ - struct root_domain *old_rd = NULL; - unsigned long flags; - - grq_lock_irqsave(&flags); - - if (rq->rd) { - old_rd = rq->rd; - - if (cpumask_test_cpu(cpu_of(rq), old_rd->online)) - set_rq_offline(rq); - - cpumask_clear_cpu(cpu_of(rq), old_rd->span); - - /* - * If we don't want to free the old_rd yet then - * set old_rd to NULL to skip the freeing later - * in this function: - */ - if (!atomic_dec_and_test(&old_rd->refcount)) - old_rd = NULL; - } - - atomic_inc(&rd->refcount); - rq->rd = rd; - - cpumask_set_cpu(cpu_of(rq), rd->span); - if (cpumask_test_cpu(cpu_of(rq), cpu_online_mask)) - set_rq_online(rq); - - grq_unlock_irqrestore(&flags); - - if (old_rd) - free_rootdomain(old_rd); -} - -static int init_rootdomain(struct root_domain *rd, bool bootmem) -{ - gfp_t gfp = GFP_KERNEL; - - memset(rd, 0, sizeof(*rd)); - - if (bootmem) - gfp = GFP_NOWAIT; - - if (!alloc_cpumask_var(&rd->span, gfp)) - goto out; - if (!alloc_cpumask_var(&rd->online, gfp)) - goto free_span; - if (!alloc_cpumask_var(&rd->rto_mask, gfp)) - goto free_online; - - return 0; - -free_online: - free_cpumask_var(rd->online); -free_span: - free_cpumask_var(rd->span); -out: - return -ENOMEM; -} - -static void init_defrootdomain(void) -{ - init_rootdomain(&def_root_domain, true); - - atomic_set(&def_root_domain.refcount, 1); -} - -static struct root_domain *alloc_rootdomain(void) -{ - struct root_domain *rd; - - rd = kmalloc(sizeof(*rd), GFP_KERNEL); - if (!rd) - return NULL; - - if (init_rootdomain(rd, false) != 0) { - kfree(rd); - return NULL; - } - - return rd; -} - -/* - * Attach the domain 'sd' to 'cpu' as its base domain. Callers must - * hold the hotplug lock. - */ -static void -cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu) -{ - struct rq *rq = cpu_rq(cpu); - struct sched_domain *tmp; - - /* Remove the sched domains which do not contribute to scheduling.
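The loop that follows collapses exactly those domains, and the decision in sd_parent_degenerate() above boils down to a flag-subset check: a parent that spans the same CPUs and offers no flag the child lacks is redundant. A small illustration of the ~cflags & pflags idiom, with made-up flag combinations:

    unsigned long cflags = SD_LOAD_BALANCE | SD_BALANCE_EXEC;   /* child */
    unsigned long pflags = SD_LOAD_BALANCE;                     /* parent */

    /*
     * ~cflags & pflags is non-zero only if the parent sets a flag
     * the child does not. Here it is 0, so the parent adds nothing
     * and can be collapsed out of the hierarchy.
     */
    if (~cflags & pflags)
            return 0;       /* parent still matters: keep it */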
*/ - for (tmp = sd; tmp; ) { - struct sched_domain *parent = tmp->parent; - if (!parent) - break; - - if (sd_parent_degenerate(tmp, parent)) { - tmp->parent = parent->parent; - if (parent->parent) - parent->parent->child = tmp; - } else - tmp = tmp->parent; - } - - if (sd && sd_degenerate(sd)) { - sd = sd->parent; - if (sd) - sd->child = NULL; - } - - sched_domain_debug(sd, cpu); - - rq_attach_root(rq, rd); - rcu_assign_pointer(rq->sd, sd); -} - -/* cpus with isolated domains */ -static cpumask_var_t cpu_isolated_map; - -/* Setup the mask of cpus configured for isolated domains */ -static int __init isolated_cpu_setup(char *str) -{ - cpulist_parse(str, cpu_isolated_map); - return 1; -} - -__setup("isolcpus=", isolated_cpu_setup); - -/* - * init_sched_build_groups takes the cpumask we wish to span, and a pointer - * to a function which identifies what group(along with sched group) a CPU - * belongs to. The return value of group_fn must be a >= 0 and < nr_cpu_ids - * (due to the fact that we keep track of groups covered with a struct cpumask). - * - * init_sched_build_groups will build a circular linked list of the groups - * covered by the given span, and will set each group's ->cpumask correctly, - * and ->cpu_power to 0. - */ -static void -init_sched_build_groups(const struct cpumask *span, - const struct cpumask *cpu_map, - int (*group_fn)(int cpu, const struct cpumask *cpu_map, - struct sched_group **sg, - struct cpumask *tmpmask), - struct cpumask *covered, struct cpumask *tmpmask) -{ - struct sched_group *first = NULL, *last = NULL; - int i; - - cpumask_clear(covered); - - for_each_cpu(i, span) { - struct sched_group *sg; - int group = group_fn(i, cpu_map, &sg, tmpmask); - int j; - - if (cpumask_test_cpu(i, covered)) - continue; - - cpumask_clear(sched_group_cpus(sg)); - sg->__cpu_power = 0; - - for_each_cpu(j, span) { - if (group_fn(j, cpu_map, NULL, tmpmask) != group) - continue; - - cpumask_set_cpu(j, covered); - cpumask_set_cpu(j, sched_group_cpus(sg)); - } - if (!first) - first = sg; - if (last) - last->next = sg; - last = sg; - } - last->next = first; -} - -#define SD_NODES_PER_DOMAIN 16 - -#ifdef CONFIG_NUMA - -/** - * find_next_best_node - find the next node to include in a sched_domain - * @node: node whose sched_domain we're building - * @used_nodes: nodes already in the sched_domain - * - * Find the next node to include in a given scheduling domain. Simply - * finds the closest node not already in the @used_nodes map. - * - * Should use nodemask_t. - */ -static int find_next_best_node(int node, nodemask_t *used_nodes) -{ - int i, n, val, min_val, best_node = 0; - - min_val = INT_MAX; - - for (i = 0; i < nr_node_ids; i++) { - /* Start at @node */ - n = (node + i) % nr_node_ids; - - if (!nr_cpus_node(n)) - continue; - - /* Skip already used nodes */ - if (node_isset(n, *used_nodes)) - continue; - - /* Simple min distance search */ - val = node_distance(node, n); - - if (val < min_val) { - min_val = val; - best_node = n; - } - } - - node_set(best_node, *used_nodes); - return best_node; -} - -/** - * sched_domain_node_span - get a cpumask for a node's sched_domain - * @node: node whose cpumask we're constructing - * @span: resulting cpumask - * - * Given a node, construct a good cpumask for its sched_domain to span. It - * should be one that prevents unnecessary balancing, but also spreads tasks - * out optimally. 
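find_next_best_node() is a simple greedy step: each call returns the nearest node (by node_distance()) not yet set in @used_nodes and marks it used, so repeated calls grow the span outwards from @node. A short trace under assumed distances (the numbers are hypothetical):

    /* node_distance(0, n): n=1 -> 20, n=2 -> 10, n=3 -> 30 */
    find_next_best_node(0, &used);  /* returns 2 (closest), marks it used */
    find_next_best_node(0, &used);  /* returns 1 */
    find_next_best_node(0, &used);  /* returns 3 */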
- */ -static void sched_domain_node_span(int node, struct cpumask *span) -{ - nodemask_t used_nodes; - int i; - - cpumask_clear(span); - nodes_clear(used_nodes); - - cpumask_or(span, span, cpumask_of_node(node)); - node_set(node, used_nodes); - - for (i = 1; i < SD_NODES_PER_DOMAIN; i++) { - int next_node = find_next_best_node(node, &used_nodes); - - cpumask_or(span, span, cpumask_of_node(next_node)); - } -} -#endif /* CONFIG_NUMA */ - -int sched_smt_power_savings = 0, sched_mc_power_savings = 0; - -/* - * The cpus mask in sched_group and sched_domain hangs off the end. - * - * ( See the the comments in include/linux/sched.h:struct sched_group - * and struct sched_domain. ) - */ -struct static_sched_group { - struct sched_group sg; - DECLARE_BITMAP(cpus, CONFIG_NR_CPUS); -}; - -struct static_sched_domain { - struct sched_domain sd; - DECLARE_BITMAP(span, CONFIG_NR_CPUS); -}; - -/* - * SMT sched-domains: - */ -#ifdef CONFIG_SCHED_SMT -static DEFINE_PER_CPU(struct static_sched_domain, cpu_domains); -static DEFINE_PER_CPU(struct static_sched_group, sched_group_cpus); - -static int -cpu_to_cpu_group(int cpu, const struct cpumask *cpu_map, - struct sched_group **sg, struct cpumask *unused) -{ - if (sg) - *sg = &per_cpu(sched_group_cpus, cpu).sg; - return cpu; -} -#endif /* CONFIG_SCHED_SMT */ - -/* - * multi-core sched-domains: - */ -#ifdef CONFIG_SCHED_MC -static DEFINE_PER_CPU(struct static_sched_domain, core_domains); -static DEFINE_PER_CPU(struct static_sched_group, sched_group_core); -#endif /* CONFIG_SCHED_MC */ - -#if defined(CONFIG_SCHED_MC) && defined(CONFIG_SCHED_SMT) -static int -cpu_to_core_group(int cpu, const struct cpumask *cpu_map, - struct sched_group **sg, struct cpumask *mask) -{ - int group; - - cpumask_and(mask, topology_thread_cpumask(cpu), cpu_map); - group = cpumask_first(mask); - if (sg) - *sg = &per_cpu(sched_group_core, group).sg; - return group; -} -#elif defined(CONFIG_SCHED_MC) -static int -cpu_to_core_group(int cpu, const struct cpumask *cpu_map, - struct sched_group **sg, struct cpumask *unused) -{ - if (sg) - *sg = &per_cpu(sched_group_core, cpu).sg; - return cpu; -} -#endif - -static DEFINE_PER_CPU(struct static_sched_domain, phys_domains); -static DEFINE_PER_CPU(struct static_sched_group, sched_group_phys); - -static int -cpu_to_phys_group(int cpu, const struct cpumask *cpu_map, - struct sched_group **sg, struct cpumask *mask) -{ - int group; -#ifdef CONFIG_SCHED_MC - cpumask_and(mask, cpu_coregroup_mask(cpu), cpu_map); - group = cpumask_first(mask); -#elif defined(CONFIG_SCHED_SMT) - cpumask_and(mask, topology_thread_cpumask(cpu), cpu_map); - group = cpumask_first(mask); -#else - group = cpu; -#endif - if (sg) - *sg = &per_cpu(sched_group_phys, group).sg; - return group; -} - -/** - * group_first_cpu - Returns the first cpu in the cpumask of a sched_group. - * @group: The group whose first cpu is to be returned. - */ -static inline unsigned int group_first_cpu(struct sched_group *group) -{ - return cpumask_first(sched_group_cpus(group)); -} - -#ifdef CONFIG_NUMA -/* - * The init_sched_build_groups can't handle what we want to do with node - * groups, so roll our own. Now each node has its own list of groups which - * gets dynamically allocated. 
- */ -static DEFINE_PER_CPU(struct static_sched_domain, node_domains); -static struct sched_group ***sched_group_nodes_bycpu; - -static DEFINE_PER_CPU(struct static_sched_domain, allnodes_domains); -static DEFINE_PER_CPU(struct static_sched_group, sched_group_allnodes); - -static int cpu_to_allnodes_group(int cpu, const struct cpumask *cpu_map, - struct sched_group **sg, - struct cpumask *nodemask) -{ - int group; - - cpumask_and(nodemask, cpumask_of_node(cpu_to_node(cpu)), cpu_map); - group = cpumask_first(nodemask); - - if (sg) - *sg = &per_cpu(sched_group_allnodes, group).sg; - return group; -} - -static void init_numa_sched_groups_power(struct sched_group *group_head) -{ - struct sched_group *sg = group_head; - int j; - - if (!sg) - return; - do { - for_each_cpu(j, sched_group_cpus(sg)) { - struct sched_domain *sd; - - sd = &per_cpu(phys_domains, j).sd; - if (j != group_first_cpu(sd->groups)) { - /* - * Only add "power" once for each - * physical package. - */ - continue; - } - - sg_inc_cpu_power(sg, sd->groups->__cpu_power); - } - sg = sg->next; - } while (sg != group_head); -} -#endif /* CONFIG_NUMA */ - -#ifdef CONFIG_NUMA -/* Free memory allocated for various sched_group structures */ -static void free_sched_groups(const struct cpumask *cpu_map, - struct cpumask *nodemask) -{ - int cpu, i; - - for_each_cpu(cpu, cpu_map) { - struct sched_group **sched_group_nodes - = sched_group_nodes_bycpu[cpu]; - - if (!sched_group_nodes) - continue; - - for (i = 0; i < nr_node_ids; i++) { - struct sched_group *oldsg, *sg = sched_group_nodes[i]; - - cpumask_and(nodemask, cpumask_of_node(i), cpu_map); - if (cpumask_empty(nodemask)) - continue; - - if (sg == NULL) - continue; - sg = sg->next; -next_sg: - oldsg = sg; - sg = sg->next; - kfree(oldsg); - if (oldsg != sched_group_nodes[i]) - goto next_sg; - } - kfree(sched_group_nodes); - sched_group_nodes_bycpu[cpu] = NULL; - } -} -#else /* !CONFIG_NUMA */ -static void free_sched_groups(const struct cpumask *cpu_map, - struct cpumask *nodemask) -{ -} -#endif /* CONFIG_NUMA */ - -/* - * Initialise sched groups cpu_power. - * - * cpu_power indicates the capacity of sched group, which is used while - * distributing the load between different sched groups in a sched domain. - * Typically cpu_power for all the groups in a sched domain will be same unless - * there are asymmetries in the topology. If there are asymmetries, group - * having more cpu_power will pickup more load compared to the group having - * less cpu_power. - * - * cpu_power will be a multiple of SCHED_LOAD_SCALE. This multiple represents - * the maximum number of tasks a group can handle in the presence of other idle - * or lightly loaded groups in the same sched domain. - */ -static void init_sched_groups_power(int cpu, struct sched_domain *sd) -{ - struct sched_domain *child; - struct sched_group *group; - - WARN_ON(!sd || !sd->groups); - - if (cpu != group_first_cpu(sd->groups)) - return; - - child = sd->child; - - sd->groups->__cpu_power = 0; - - /* - * For perf policy, if the groups in child domain share resources - * (for example cores sharing some portions of the cache hierarchy - * or SMT), then set this domain groups cpu_power such that each group - * can handle only one task, when there are other idle groups in the - * same sched domain. 
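Concretely, with SCHED_LOAD_SCALE at its conventional value of 1024, the two branches in the function body below work out as in this worked example (no particular machine assumed):

    /*
     * SMT domain, two siblings sharing one core:
     *     group power = SCHED_LOAD_SCALE = 1024
     *     (capacity for one task while other groups are idle)
     *
     * Physical domain above it, two such cores:
     *     group power = 1024 + 1024 = 2048 = 2 * SCHED_LOAD_SCALE
     *     (capacity for two independent tasks)
     */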
- */ - if (!child || (!(sd->flags & SD_POWERSAVINGS_BALANCE) && - (child->flags & - (SD_SHARE_CPUPOWER | SD_SHARE_PKG_RESOURCES)))) { - sg_inc_cpu_power(sd->groups, SCHED_LOAD_SCALE); - return; - } - - /* - * add cpu_power of each child group to this groups cpu_power - */ - group = child->groups; - do { - sg_inc_cpu_power(sd->groups, group->__cpu_power); - group = group->next; - } while (group != child->groups); -} - -/* - * Initialisers for schedule domains - * Non-inlined to reduce accumulated stack pressure in build_sched_domains() - */ - -#ifdef CONFIG_SCHED_DEBUG -# define SD_INIT_NAME(sd, type) sd->name = #type -#else -# define SD_INIT_NAME(sd, type) do { } while (0) -#endif - -#define SD_INIT(sd, type) sd_init_##type(sd) - -#define SD_INIT_FUNC(type) \ -static noinline void sd_init_##type(struct sched_domain *sd) \ -{ \ - memset(sd, 0, sizeof(*sd)); \ - *sd = SD_##type##_INIT; \ - sd->level = SD_LV_##type; \ - SD_INIT_NAME(sd, type); \ -} - -SD_INIT_FUNC(CPU) -#ifdef CONFIG_NUMA - SD_INIT_FUNC(ALLNODES) - SD_INIT_FUNC(NODE) -#endif -#ifdef CONFIG_SCHED_SMT - SD_INIT_FUNC(SIBLING) -#endif -#ifdef CONFIG_SCHED_MC - SD_INIT_FUNC(MC) -#endif - -static int default_relax_domain_level = -1; - -static int __init setup_relax_domain_level(char *str) -{ - unsigned long val; - - val = simple_strtoul(str, NULL, 0); - if (val < SD_LV_MAX) - default_relax_domain_level = val; - - return 1; -} -__setup("relax_domain_level=", setup_relax_domain_level); - -static void set_domain_attribute(struct sched_domain *sd, - struct sched_domain_attr *attr) -{ - int request; - - if (!attr || attr->relax_domain_level < 0) { - if (default_relax_domain_level < 0) - return; - else - request = default_relax_domain_level; - } else - request = attr->relax_domain_level; - if (request < sd->level) { - /* turn off idle balance on this domain */ - sd->flags &= ~(SD_WAKE_IDLE|SD_BALANCE_NEWIDLE); - } else { - /* turn on idle balance on this domain */ - sd->flags |= (SD_WAKE_IDLE_FAR|SD_BALANCE_NEWIDLE); - } -} - -/* - * Build sched domains for a given set of cpus and attach the sched domains - * to the individual cpus - */ -static int __build_sched_domains(const struct cpumask *cpu_map, - struct sched_domain_attr *attr) -{ - int i, err = -ENOMEM; - struct root_domain *rd; - cpumask_var_t nodemask, this_sibling_map, this_core_map, send_covered, - tmpmask; -#ifdef CONFIG_NUMA - cpumask_var_t domainspan, covered, notcovered; - struct sched_group **sched_group_nodes = NULL; - int sd_allnodes = 0; - - if (!alloc_cpumask_var(&domainspan, GFP_KERNEL)) - goto out; - if (!alloc_cpumask_var(&covered, GFP_KERNEL)) - goto free_domainspan; - if (!alloc_cpumask_var(¬covered, GFP_KERNEL)) - goto free_covered; -#endif - - if (!alloc_cpumask_var(&nodemask, GFP_KERNEL)) - goto free_notcovered; - if (!alloc_cpumask_var(&this_sibling_map, GFP_KERNEL)) - goto free_nodemask; - if (!alloc_cpumask_var(&this_core_map, GFP_KERNEL)) - goto free_this_sibling_map; - if (!alloc_cpumask_var(&send_covered, GFP_KERNEL)) - goto free_this_core_map; - if (!alloc_cpumask_var(&tmpmask, GFP_KERNEL)) - goto free_send_covered; - -#ifdef CONFIG_NUMA - /* - * Allocate the per-node list of sched groups - */ - sched_group_nodes = kcalloc(nr_node_ids, sizeof(struct sched_group *), - GFP_KERNEL); - if (!sched_group_nodes) { - printk(KERN_WARNING "Can not alloc sched group node list\n"); - goto free_tmpmask; - } -#endif - - rd = alloc_rootdomain(); - if (!rd) { - printk(KERN_WARNING "Cannot alloc root domain\n"); - goto free_sched_groups; - } - -#ifdef CONFIG_NUMA - 
sched_group_nodes_bycpu[cpumask_first(cpu_map)] = sched_group_nodes; -#endif - - /* - * Set up domains for cpus specified by the cpu_map. - */ - for_each_cpu(i, cpu_map) { - struct sched_domain *sd = NULL, *p; - - cpumask_and(nodemask, cpumask_of_node(cpu_to_node(i)), cpu_map); - -#ifdef CONFIG_NUMA - if (cpumask_weight(cpu_map) > - SD_NODES_PER_DOMAIN*cpumask_weight(nodemask)) { - sd = &per_cpu(allnodes_domains, i).sd; - SD_INIT(sd, ALLNODES); - set_domain_attribute(sd, attr); - cpumask_copy(sched_domain_span(sd), cpu_map); - cpu_to_allnodes_group(i, cpu_map, &sd->groups, tmpmask); - p = sd; - sd_allnodes = 1; - } else - p = NULL; - - sd = &per_cpu(node_domains, i).sd; - SD_INIT(sd, NODE); - set_domain_attribute(sd, attr); - sched_domain_node_span(cpu_to_node(i), sched_domain_span(sd)); - sd->parent = p; - if (p) - p->child = sd; - cpumask_and(sched_domain_span(sd), - sched_domain_span(sd), cpu_map); -#endif - - p = sd; - sd = &per_cpu(phys_domains, i).sd; - SD_INIT(sd, CPU); - set_domain_attribute(sd, attr); - cpumask_copy(sched_domain_span(sd), nodemask); - sd->parent = p; - if (p) - p->child = sd; - cpu_to_phys_group(i, cpu_map, &sd->groups, tmpmask); - -#ifdef CONFIG_SCHED_MC - p = sd; - sd = &per_cpu(core_domains, i).sd; - SD_INIT(sd, MC); - set_domain_attribute(sd, attr); - cpumask_and(sched_domain_span(sd), cpu_map, - cpu_coregroup_mask(i)); - sd->parent = p; - p->child = sd; - cpu_to_core_group(i, cpu_map, &sd->groups, tmpmask); -#endif - -#ifdef CONFIG_SCHED_SMT - p = sd; - sd = &per_cpu(cpu_domains, i).sd; - SD_INIT(sd, SIBLING); - set_domain_attribute(sd, attr); - cpumask_and(sched_domain_span(sd), - topology_thread_cpumask(i), cpu_map); - sd->parent = p; - p->child = sd; - cpu_to_cpu_group(i, cpu_map, &sd->groups, tmpmask); -#endif - } - -#ifdef CONFIG_SCHED_SMT - /* Set up CPU (sibling) groups */ - for_each_cpu(i, cpu_map) { - cpumask_and(this_sibling_map, - topology_thread_cpumask(i), cpu_map); - if (i != cpumask_first(this_sibling_map)) - continue; - - init_sched_build_groups(this_sibling_map, cpu_map, - &cpu_to_cpu_group, - send_covered, tmpmask); - } -#endif - -#ifdef CONFIG_SCHED_MC - /* Set up multi-core groups */ - for_each_cpu(i, cpu_map) { - cpumask_and(this_core_map, cpu_coregroup_mask(i), cpu_map); - if (i != cpumask_first(this_core_map)) - continue; - - init_sched_build_groups(this_core_map, cpu_map, - &cpu_to_core_group, - send_covered, tmpmask); - } -#endif - - /* Set up physical groups */ - for (i = 0; i < nr_node_ids; i++) { - cpumask_and(nodemask, cpumask_of_node(i), cpu_map); - if (cpumask_empty(nodemask)) - continue; - - init_sched_build_groups(nodemask, cpu_map, - &cpu_to_phys_group, - send_covered, tmpmask); - } - -#ifdef CONFIG_NUMA - /* Set up node groups */ - if (sd_allnodes) { - init_sched_build_groups(cpu_map, cpu_map, - &cpu_to_allnodes_group, - send_covered, tmpmask); - } - - for (i = 0; i < nr_node_ids; i++) { - /* Set up node groups */ - struct sched_group *sg, *prev; - int j; - - cpumask_clear(covered); - cpumask_and(nodemask, cpumask_of_node(i), cpu_map); - if (cpumask_empty(nodemask)) { - sched_group_nodes[i] = NULL; - continue; - } - - sched_domain_node_span(i, domainspan); - cpumask_and(domainspan, domainspan, cpu_map); - - sg = kmalloc_node(sizeof(struct sched_group) + cpumask_size(), - GFP_KERNEL, i); - if (!sg) { - printk(KERN_WARNING "Can not alloc domain group for " - "node %d\n", i); - goto error; - } - sched_group_nodes[i] = sg; - for_each_cpu(j, nodemask) { - struct sched_domain *sd; - - sd = &per_cpu(node_domains, j).sd; - 
sd->groups = sg; - } - sg->__cpu_power = 0; - cpumask_copy(sched_group_cpus(sg), nodemask); - sg->next = sg; - cpumask_or(covered, covered, nodemask); - prev = sg; - - for (j = 0; j < nr_node_ids; j++) { - int n = (i + j) % nr_node_ids; - - cpumask_complement(notcovered, covered); - cpumask_and(tmpmask, notcovered, cpu_map); - cpumask_and(tmpmask, tmpmask, domainspan); - if (cpumask_empty(tmpmask)) - break; - - cpumask_and(tmpmask, tmpmask, cpumask_of_node(n)); - if (cpumask_empty(tmpmask)) - continue; - - sg = kmalloc_node(sizeof(struct sched_group) + - cpumask_size(), - GFP_KERNEL, i); - if (!sg) { - printk(KERN_WARNING - "Can not alloc domain group for node %d\n", j); - goto error; - } - sg->__cpu_power = 0; - cpumask_copy(sched_group_cpus(sg), tmpmask); - sg->next = prev->next; - cpumask_or(covered, covered, tmpmask); - prev->next = sg; - prev = sg; - } - } -#endif - - /* Calculate CPU power for physical packages and nodes */ -#ifdef CONFIG_SCHED_SMT - for_each_cpu(i, cpu_map) { - struct sched_domain *sd = &per_cpu(cpu_domains, i).sd; - - init_sched_groups_power(i, sd); - } -#endif -#ifdef CONFIG_SCHED_MC - for_each_cpu(i, cpu_map) { - struct sched_domain *sd = &per_cpu(core_domains, i).sd; - - init_sched_groups_power(i, sd); - } -#endif - - for_each_cpu(i, cpu_map) { - struct sched_domain *sd = &per_cpu(phys_domains, i).sd; - - init_sched_groups_power(i, sd); - } - -#ifdef CONFIG_NUMA - for (i = 0; i < nr_node_ids; i++) - init_numa_sched_groups_power(sched_group_nodes[i]); - - if (sd_allnodes) { - struct sched_group *sg; - - cpu_to_allnodes_group(cpumask_first(cpu_map), cpu_map, &sg, - tmpmask); - init_numa_sched_groups_power(sg); - } -#endif - - /* Attach the domains */ - for_each_cpu(i, cpu_map) { - struct sched_domain *sd; -#ifdef CONFIG_SCHED_SMT - sd = &per_cpu(cpu_domains, i).sd; -#elif defined(CONFIG_SCHED_MC) - sd = &per_cpu(core_domains, i).sd; -#else - sd = &per_cpu(phys_domains, i).sd; -#endif - cpu_attach_domain(sd, rd, i); - } - - err = 0; - -free_tmpmask: - free_cpumask_var(tmpmask); -free_send_covered: - free_cpumask_var(send_covered); -free_this_core_map: - free_cpumask_var(this_core_map); -free_this_sibling_map: - free_cpumask_var(this_sibling_map); -free_nodemask: - free_cpumask_var(nodemask); -free_notcovered: -#ifdef CONFIG_NUMA - free_cpumask_var(notcovered); -free_covered: - free_cpumask_var(covered); -free_domainspan: - free_cpumask_var(domainspan); -out: -#endif - return err; - -free_sched_groups: -#ifdef CONFIG_NUMA - kfree(sched_group_nodes); -#endif - goto free_tmpmask; - -#ifdef CONFIG_NUMA -error: - free_sched_groups(cpu_map, tmpmask); - free_rootdomain(rd); - goto free_tmpmask; -#endif -} - -static int build_sched_domains(const struct cpumask *cpu_map) -{ - return __build_sched_domains(cpu_map, NULL); -} - -static struct cpumask *doms_cur; /* current sched domains */ -static int ndoms_cur; /* number of sched domains in 'doms_cur' */ -static struct sched_domain_attr *dattr_cur; - /* attribues of custom domains in 'doms_cur' */ - -/* - * Special case: If a kmalloc of a doms_cur partition (array of - * cpumask) fails, then fallback to a single sched domain, - * as determined by the single cpumask fallback_doms. - */ -static cpumask_var_t fallback_doms; - -/* - * arch_update_cpu_topology lets virtualised architectures update the - * cpu core maps. It is supposed to return 1 if the topology changed - * or 0 if it stayed the same. - */ -int __attribute__((weak)) arch_update_cpu_topology(void) -{ - return 0; -} - -/* - * Set up scheduler domains and groups. 
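Every group list built above is circular and singly linked: the last group's ->next points back at the first, which is why the walks in this file are do/while loops keyed on the head. The canonical traversal, as a sketch:

    struct sched_group *sg = group_head;

    do {
            /* visit sg, e.g. accumulate sg->__cpu_power */
            sg = sg->next;
    } while (sg != group_head);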
Callers must hold the hotplug lock. - * For now this just excludes isolated cpus, but could be used to - * exclude other special cases in the future. - */ -static int arch_init_sched_domains(const struct cpumask *cpu_map) -{ - int err; - - arch_update_cpu_topology(); - ndoms_cur = 1; - doms_cur = kmalloc(cpumask_size(), GFP_KERNEL); - if (!doms_cur) - doms_cur = fallback_doms; - cpumask_andnot(doms_cur, cpu_map, cpu_isolated_map); - dattr_cur = NULL; - err = build_sched_domains(doms_cur); - register_sched_domain_sysctl(); - - return err; -} - -static void arch_destroy_sched_domains(const struct cpumask *cpu_map, - struct cpumask *tmpmask) -{ - free_sched_groups(cpu_map, tmpmask); -} - -/* - * Detach sched domains from a group of cpus specified in cpu_map. - * These cpus will now be attached to the NULL domain. - */ -static void detach_destroy_domains(const struct cpumask *cpu_map) -{ - /* Save because hotplug lock held. */ - static DECLARE_BITMAP(tmpmask, CONFIG_NR_CPUS); - int i; - - for_each_cpu(i, cpu_map) - cpu_attach_domain(NULL, &def_root_domain, i); - synchronize_sched(); - arch_destroy_sched_domains(cpu_map, to_cpumask(tmpmask)); -} - -/* handle null as "default" */ -static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur, - struct sched_domain_attr *new, int idx_new) -{ - struct sched_domain_attr tmp; - - /* fast path */ - if (!new && !cur) - return 1; - - tmp = SD_ATTR_INIT; - return !memcmp(cur ? (cur + idx_cur) : &tmp, - new ? (new + idx_new) : &tmp, - sizeof(struct sched_domain_attr)); -} - -/* - * Partition sched domains as specified by the 'ndoms_new' - * cpumasks in the array doms_new[] of cpumasks. This compares - * doms_new[] to the current sched domain partitioning, doms_cur[]. - * It destroys each deleted domain and builds each new domain. - * - * 'doms_new' is an array of cpumasks of length 'ndoms_new'. - * The masks don't intersect (don't overlap); we should set up one - * sched domain for each mask. CPUs not in any of the cpumasks will - * not be load balanced. If the same cpumask appears both in the - * current 'doms_cur' domains and in the new 'doms_new', we can leave - * it as it is. - * - * The passed in 'doms_new' should be kmalloc'd. This routine takes - * ownership of it and will kfree it when done with it. If the caller - * failed the kmalloc call, then it can pass in doms_new == NULL && - * ndoms_new == 1, and partition_sched_domains() will fall back to - * the single partition 'fallback_doms'; this also forces the domains - * to be rebuilt. - * - * If doms_new == NULL it will be replaced with cpu_online_mask. - * ndoms_new == 0 is a special case for destroying existing domains, - * and it will not create the default domain. - * - * Call with hotplug lock held. - */ -/* FIXME: Change to struct cpumask *doms_new[] */ -void partition_sched_domains(int ndoms_new, struct cpumask *doms_new, - struct sched_domain_attr *dattr_new) -{ - int i, j, n; - int new_topology; - - mutex_lock(&sched_domains_mutex); - - /* always unregister in case we don't destroy any domains */ - unregister_sched_domain_sysctl(); - - /* Let architecture update cpu core mappings. */ - new_topology = arch_update_cpu_topology(); - - n = doms_new ?
ndoms_new : 0; - - /* Destroy deleted domains */ - for (i = 0; i < ndoms_cur; i++) { - for (j = 0; j < n && !new_topology; j++) { - if (cpumask_equal(&doms_cur[i], &doms_new[j]) - && dattrs_equal(dattr_cur, i, dattr_new, j)) - goto match1; - } - /* no match - a current sched domain not in new doms_new[] */ - detach_destroy_domains(doms_cur + i); -match1: - ; - } - - if (doms_new == NULL) { - ndoms_cur = 0; - doms_new = fallback_doms; - cpumask_andnot(&doms_new[0], cpu_online_mask, cpu_isolated_map); - WARN_ON_ONCE(dattr_new); - } - - /* Build new domains */ - for (i = 0; i < ndoms_new; i++) { - for (j = 0; j < ndoms_cur && !new_topology; j++) { - if (cpumask_equal(&doms_new[i], &doms_cur[j]) - && dattrs_equal(dattr_new, i, dattr_cur, j)) - goto match2; - } - /* no match - add a new doms_new */ - __build_sched_domains(doms_new + i, - dattr_new ? dattr_new + i : NULL); -match2: - ; - } - - /* Remember the new sched domains */ - if (doms_cur != fallback_doms) - kfree(doms_cur); - kfree(dattr_cur); /* kfree(NULL) is safe */ - doms_cur = doms_new; - dattr_cur = dattr_new; - ndoms_cur = ndoms_new; - - register_sched_domain_sysctl(); - - mutex_unlock(&sched_domains_mutex); -} - -#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT) -static void arch_reinit_sched_domains(void) -{ - get_online_cpus(); - - /* Destroy domains first to force the rebuild */ - partition_sched_domains(0, NULL, NULL); - - rebuild_sched_domains(); - put_online_cpus(); -} - -static ssize_t sched_power_savings_store(const char *buf, size_t count, int smt) -{ - unsigned int level = 0; - - if (sscanf(buf, "%u", &level) != 1) - return -EINVAL; - - /* - * level is always positive, so don't check for - * level < POWERSAVINGS_BALANCE_NONE, which is 0. - * What happens on a 0 or 1 byte write? Do we need to - * check count as well? - */ - - if (level >= MAX_POWERSAVINGS_BALANCE_LEVELS) - return -EINVAL; - - if (smt) - sched_smt_power_savings = level; - else - sched_mc_power_savings = level; - - arch_reinit_sched_domains(); - - return count; -} - -#ifdef CONFIG_SCHED_MC -static ssize_t sched_mc_power_savings_show(struct sysdev_class *class, - char *page) -{ - return sprintf(page, "%u\n", sched_mc_power_savings); -} -static ssize_t sched_mc_power_savings_store(struct sysdev_class *class, - const char *buf, size_t count) -{ - return sched_power_savings_store(buf, count, 0); -} -static SYSDEV_CLASS_ATTR(sched_mc_power_savings, 0644, - sched_mc_power_savings_show, - sched_mc_power_savings_store); -#endif - -#ifdef CONFIG_SCHED_SMT -static ssize_t sched_smt_power_savings_show(struct sysdev_class *dev, - char *page) -{ - return sprintf(page, "%u\n", sched_smt_power_savings); -} -static ssize_t sched_smt_power_savings_store(struct sysdev_class *dev, - const char *buf, size_t count) -{ - return sched_power_savings_store(buf, count, 1); -} -static SYSDEV_CLASS_ATTR(sched_smt_power_savings, 0644, - sched_smt_power_savings_show, - sched_smt_power_savings_store); -#endif - -int __init sched_create_sysfs_power_savings_entries(struct sysdev_class *cls) -{ - int err = 0; - -#ifdef CONFIG_SCHED_SMT - if (smt_capable()) - err = sysfs_create_file(&cls->kset.kobj, - &attr_sched_smt_power_savings.attr); -#endif -#ifdef CONFIG_SCHED_MC - if (!err && mc_capable()) - err = sysfs_create_file(&cls->kset.kobj, - &attr_sched_mc_power_savings.attr); -#endif - return err; -} -#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */ - -#ifndef CONFIG_CPUSETS -/* - * Add online and remove offline CPUs from the scheduler domains.
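Both the power-savings store above and the hotplug path below funnel into the same single-partition rebuild; the usual calling pattern, with the hotplug lock held via get_online_cpus(), is:

    get_online_cpus();
    /* one partition spanning all online, non-isolated CPUs */
    partition_sched_domains(1, NULL, NULL);
    put_online_cpus();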
- * When cpusets are enabled they take over this function. - */ -static int update_sched_domains(struct notifier_block *nfb, - unsigned long action, void *hcpu) -{ - switch (action) { - case CPU_ONLINE: - case CPU_ONLINE_FROZEN: - case CPU_DEAD: - case CPU_DEAD_FROZEN: - partition_sched_domains(1, NULL, NULL); - return NOTIFY_OK; - - default: - return NOTIFY_DONE; - } -} -#endif - -static int update_runtime(struct notifier_block *nfb, - unsigned long action, void *hcpu) -{ - switch (action) { - case CPU_DOWN_PREPARE: - case CPU_DOWN_PREPARE_FROZEN: - return NOTIFY_OK; - - case CPU_DOWN_FAILED: - case CPU_DOWN_FAILED_FROZEN: - case CPU_ONLINE: - case CPU_ONLINE_FROZEN: - return NOTIFY_OK; - - default: - return NOTIFY_DONE; - } -} - -#if defined(CONFIG_SCHED_SMT) || defined(CONFIG_SCHED_MC) -/* - * Cheaper version of the below functions in case support for SMT and MC is - * compiled in but CPUs have no siblings. - */ -static int sole_cpu_idle(unsigned long cpu) -{ - return rq_idle(cpu_rq(cpu)); -} -#endif -#ifdef CONFIG_SCHED_SMT -/* All this CPU's SMT siblings are idle */ -static int siblings_cpu_idle(unsigned long cpu) -{ - return cpumask_subset(&(cpu_rq(cpu)->smt_siblings), - &grq.cpu_idle_map); -} -#endif -#ifdef CONFIG_SCHED_MC -/* All this CPU's shared cache siblings are idle */ -static int cache_cpu_idle(unsigned long cpu) -{ - return cpumask_subset(&(cpu_rq(cpu)->cache_siblings), - &grq.cpu_idle_map); -} -#endif - -void __init sched_init_smp(void) -{ - struct sched_domain *sd; - int cpu; - - cpumask_var_t non_isolated_cpus; - - alloc_cpumask_var(&non_isolated_cpus, GFP_KERNEL); - -#if defined(CONFIG_NUMA) - sched_group_nodes_bycpu = kzalloc(nr_cpu_ids * sizeof(void **), - GFP_KERNEL); - BUG_ON(sched_group_nodes_bycpu == NULL); -#endif - get_online_cpus(); - mutex_lock(&sched_domains_mutex); - arch_init_sched_domains(cpu_online_mask); - cpumask_andnot(non_isolated_cpus, cpu_possible_mask, cpu_isolated_map); - if (cpumask_empty(non_isolated_cpus)) - cpumask_set_cpu(smp_processor_id(), non_isolated_cpus); - mutex_unlock(&sched_domains_mutex); - put_online_cpus(); - -#ifndef CONFIG_CPUSETS - /* XXX: Theoretical race here - CPU may be hotplugged now */ - hotcpu_notifier(update_sched_domains, 0); -#endif - - /* RT runtime code needs to handle some hotplug events */ - hotcpu_notifier(update_runtime, 0); - - /* Move init over to a non-isolated CPU */ - if (set_cpus_allowed_ptr(current, non_isolated_cpus) < 0) - BUG(); - free_cpumask_var(non_isolated_cpus); - - alloc_cpumask_var(&fallback_doms, GFP_KERNEL); - - grq_lock_irq(); - /* - * Set up the relative cache distance of each online cpu from each - * other in a simple array for quick lookup. Locality is determined - * by the closest sched_domain that CPUs are separated by. CPUs with - * shared cache in SMT and MC are treated as local. Separate CPUs - * (within the same package or physically) within the same node are - * treated as not local. CPUs not even in the same domain (different - * nodes) are treated as very distant. 
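The loop that follows encodes that ordering as a per-runqueue distance table, rq->cpu_locality[], indexed by the other CPU. The resulting values:

    /*
     * 0  the CPU itself
     * 1  SMT sibling                 (SD_LV_SIBLING)
     * 2  shares cache in the package (SD_LV_MC)
     * 3  same NUMA node              (SD_LV_NODE)
     * 4  further away: the "distant" default set in sched_init()
     */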
- */ - for_each_online_cpu(cpu) { - struct rq *rq = cpu_rq(cpu); - for_each_domain(cpu, sd) { - unsigned long locality; - int other_cpu; - -#ifdef CONFIG_SCHED_SMT - if (sd->level == SD_LV_SIBLING) { - for_each_cpu_mask(other_cpu, *sched_domain_span(sd)) - cpumask_set_cpu(other_cpu, &rq->smt_siblings); - } -#endif -#ifdef CONFIG_SCHED_MC - if (sd->level == SD_LV_MC) { - for_each_cpu_mask(other_cpu, *sched_domain_span(sd)) - cpumask_set_cpu(other_cpu, &rq->cache_siblings); - } -#endif - if (sd->level <= SD_LV_SIBLING) - locality = 1; - else if (sd->level <= SD_LV_MC) - locality = 2; - else if (sd->level <= SD_LV_NODE) - locality = 3; - else - continue; - - for_each_cpu_mask(other_cpu, *sched_domain_span(sd)) { - if (locality < rq->cpu_locality[other_cpu]) - rq->cpu_locality[other_cpu] = locality; - } - } - -/* - * Each runqueue has its own function in case it doesn't have - * siblings of its own, allowing mixed topologies. - */ -#ifdef CONFIG_SCHED_SMT - if (cpus_weight(rq->smt_siblings) > 1) - rq->siblings_idle = siblings_cpu_idle; -#endif -#ifdef CONFIG_SCHED_MC - if (cpus_weight(rq->cache_siblings) > 1) - rq->cache_idle = cache_cpu_idle; -#endif - } - grq_unlock_irq(); -} -#else -void __init sched_init_smp(void) -{ -} -#endif /* CONFIG_SMP */ - -unsigned int sysctl_timer_migration = 1; - -int in_sched_functions(unsigned long addr) -{ - return in_lock_functions(addr) || - (addr >= (unsigned long)__sched_text_start - && addr < (unsigned long)__sched_text_end); -} - -void __init sched_init(void) -{ - int i; - struct rq *rq; - - prio_ratios[0] = 128; - for (i = 1 ; i < PRIO_RANGE ; i++) - prio_ratios[i] = prio_ratios[i - 1] * 11 / 10; - - spin_lock_init(&grq.lock); - grq.nr_running = grq.nr_uninterruptible = grq.nr_switches = 0; - grq.niffies = 0; - grq.last_jiffy = jiffies; - spin_lock_init(&grq.iso_lock); - grq.iso_ticks = grq.iso_refractory = 0; - grq.noc = 1; -#ifdef CONFIG_SMP - init_defrootdomain(); - grq.qnr = grq.idle_cpus = 0; - cpumask_clear(&grq.cpu_idle_map); -#else - uprq = &per_cpu(runqueues, 0); -#endif - for_each_possible_cpu(i) { - rq = cpu_rq(i); - rq->user_pc = rq->nice_pc = rq->softirq_pc = rq->system_pc = - rq->iowait_pc = rq->idle_pc = 0; - rq->dither = 0; -#ifdef CONFIG_SMP - rq->sticky_task = NULL; - rq->last_niffy = 0; - rq->sd = NULL; - rq->rd = NULL; - rq->online = 0; - rq->cpu = i; - rq_attach_root(rq, &def_root_domain); -#endif - atomic_set(&rq->nr_iowait, 0); - } - -#ifdef CONFIG_SMP - nr_cpu_ids = i; - /* - * Set the base locality for cpu cache distance calculation to - * "distant" (4). Make sure the distance from a CPU to itself is 0.
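Earlier in sched_init(), the prio_ratios[] table is filled by repeated integer multiplication; the first few entries evaluate to:

    prio_ratios[0] = 128
    prio_ratios[1] = 128 * 11 / 10 = 140
    prio_ratios[2] = 140 * 11 / 10 = 154
    prio_ratios[3] = 154 * 11 / 10 = 169
    /* ... each step is roughly +10%, rounding down */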
- */ - for_each_possible_cpu(i) { - int j; - - rq = cpu_rq(i); -#ifdef CONFIG_SCHED_SMT - cpumask_clear(&rq->smt_siblings); - cpumask_set_cpu(i, &rq->smt_siblings); - rq->siblings_idle = sole_cpu_idle; - cpumask_set_cpu(i, &rq->smt_siblings); -#endif -#ifdef CONFIG_SCHED_MC - cpumask_clear(&rq->cache_siblings); - cpumask_set_cpu(i, &rq->cache_siblings); - rq->cache_idle = sole_cpu_idle; - cpumask_set_cpu(i, &rq->cache_siblings); -#endif - rq->cpu_locality = kmalloc(nr_cpu_ids * sizeof(unsigned long), - GFP_NOWAIT); - for_each_possible_cpu(j) { - if (i == j) - rq->cpu_locality[j] = 0; - else - rq->cpu_locality[j] = 4; - } - } -#endif - - for (i = 0; i < PRIO_LIMIT; i++) - INIT_LIST_HEAD(grq.queue + i); - /* delimiter for bitsearch */ - __set_bit(PRIO_LIMIT, grq.prio_bitmap); - -#ifdef CONFIG_PREEMPT_NOTIFIERS - INIT_HLIST_HEAD(&init_task.preempt_notifiers); -#endif - -#ifdef CONFIG_RT_MUTEXES - plist_head_init(&init_task.pi_waiters, &init_task.pi_lock); -#endif - - /* - * The boot idle thread does lazy MMU switching as well: - */ - atomic_inc(&init_mm.mm_count); - enter_lazy_tlb(&init_mm, current); - - /* - * Make us the idle thread. Technically, schedule() should not be - * called from this thread, however somewhere below it might be, - * but because we are the idle thread, we just pick up running again - * when this runqueue becomes "idle". - */ - init_idle(current, smp_processor_id()); - - /* Allocate the nohz_cpu_mask if CONFIG_CPUMASK_OFFSTACK */ - alloc_cpumask_var(&nohz_cpu_mask, GFP_NOWAIT); -#ifdef CONFIG_SMP -#ifdef CONFIG_NO_HZ - alloc_cpumask_var(&nohz.cpu_mask, GFP_NOWAIT); - alloc_cpumask_var(&nohz.ilb_grp_nohz_mask, GFP_NOWAIT); -#endif - alloc_cpumask_var(&cpu_isolated_map, GFP_NOWAIT); -#endif /* SMP */ - perf_counter_init(); -} - -#ifdef CONFIG_DEBUG_SPINLOCK_SLEEP -void __might_sleep(char *file, int line) -{ -#ifdef in_atomic - static unsigned long prev_jiffy; /* ratelimiting */ - - if ((in_atomic() || irqs_disabled()) && - system_state == SYSTEM_RUNNING && !oops_in_progress) { - if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy) - return; - prev_jiffy = jiffies; - printk(KERN_ERR "BUG: sleeping function called from invalid" - " context at %s:%d\n", file, line); - printk("in_atomic():%d, irqs_disabled():%d\n", - in_atomic(), irqs_disabled()); - debug_show_held_locks(current); - if (irqs_disabled()) - print_irqtrace_events(current); - dump_stack(); - } -#endif -} -EXPORT_SYMBOL(__might_sleep); -#endif - -#ifdef CONFIG_MAGIC_SYSRQ -void normalize_rt_tasks(void) -{ - struct task_struct *g, *p; - unsigned long flags; - struct rq *rq; - int queued; - - read_lock_irq(&tasklist_lock); - - do_each_thread(g, p) { - if (!rt_task(p) && !iso_task(p)) - continue; - - spin_lock_irqsave(&p->pi_lock, flags); - rq = __task_grq_lock(p); - - queued = task_queued(p); - if (queued) - dequeue_task(p); - __setscheduler(p, rq, SCHED_NORMAL, 0); - if (queued) { - enqueue_task(p); - try_preempt(p, rq); - } - - __task_grq_unlock(); - spin_unlock_irqrestore(&p->pi_lock, flags); - } while_each_thread(g, p); - - read_unlock_irq(&tasklist_lock); -} -#endif /* CONFIG_MAGIC_SYSRQ */ - -#ifdef CONFIG_IA64 -/* - * These functions are only useful for the IA64 MCA handling. - * - * They can only be called when the whole system has been - * stopped - every CPU needs to be quiescent, and no scheduling - * activity can take place. Using them for anything else would - * be a serious bug, and as a result, they aren't even visible - * under any other configuration. 
- */ - -/** - * curr_task - return the current task for a given cpu. - * @cpu: the processor in question. - * - * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED! - */ -struct task_struct *curr_task(int cpu) -{ - return cpu_curr(cpu); -} - -/** - * set_curr_task - set the current task for a given cpu. - * @cpu: the processor in question. - * @p: the task pointer to set. - * - * Description: This function must only be used when non-maskable interrupts - * are serviced on a separate stack. It allows the architecture to switch the - * notion of the current task on a cpu in a non-blocking manner. This function - * must be called with all CPUs synchronised and interrupts disabled; the - * caller must save the original value of the current task (see - * curr_task() above) and restore that value before re-enabling interrupts and - * restarting the system. - * - * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED! - */ -void set_curr_task(int cpu, struct task_struct *p) -{ - cpu_curr(cpu) = p; -} - -#endif - -/* - * Use precise platform statistics if available: - */ -#ifdef CONFIG_VIRT_CPU_ACCOUNTING -cputime_t task_utime(struct task_struct *p) -{ - return p->utime; -} - -cputime_t task_stime(struct task_struct *p) -{ - return p->stime; -} -#else -cputime_t task_utime(struct task_struct *p) -{ - clock_t utime = cputime_to_clock_t(p->utime), - total = utime + cputime_to_clock_t(p->stime); - u64 temp; - - temp = (u64)nsec_to_clock_t(p->sched_time); - - if (total) { - temp *= utime; - do_div(temp, total); - } - utime = (clock_t)temp; - - p->prev_utime = max(p->prev_utime, clock_t_to_cputime(utime)); - return p->prev_utime; -} - -cputime_t task_stime(struct task_struct *p) -{ - clock_t stime; - - stime = nsec_to_clock_t(p->sched_time) - - cputime_to_clock_t(task_utime(p)); - - if (stime >= 0) - p->prev_stime = max(p->prev_stime, clock_t_to_cputime(stime)); - - return p->prev_stime; -} -#endif - -inline cputime_t task_gtime(struct task_struct *p) -{ - return p->gtime; -} - -void __cpuinit init_idle_bootup_task(struct task_struct *idle) -{} - -#ifdef CONFIG_SCHED_DEBUG -void proc_sched_show_task(struct task_struct *p, struct seq_file *m) -{} - -void proc_sched_set_task(struct task_struct *p) -{} -#endif diff --git a/kernel/sysctl.c b/kernel/sysctl.c index d1c5b23e46e..58be76017fd 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -100,15 +100,10 @@ static int neg_one = -1; #endif static int zero; +static int __maybe_unused one = 1; static int __maybe_unused two = 2; static unsigned long one_ul = 1; -static int __read_mostly one = 1; -static int __read_mostly one_hundred = 100; -#ifdef CONFIG_SCHED_BFS -extern int rr_interval; -extern int sched_iso_cpu; -static int __read_mostly one_thousand = 1000; -#endif +static int one_hundred = 100; /* this is needed for the proc_doulongvec_minmax of vm_dirty_bytes */ static unsigned long dirty_bytes_min = 2 * PAGE_SIZE; @@ -243,7 +238,7 @@ static struct ctl_table root_table[] = { { .ctl_name = 0 } }; -#if defined(CONFIG_SCHED_DEBUG) && !defined(CONFIG_SCHED_BFS) +#ifdef CONFIG_SCHED_DEBUG static int min_sched_granularity_ns = 100000; /* 100 usecs */ static int max_sched_granularity_ns = NSEC_PER_SEC; /* 1 second */ static int min_wakeup_granularity_ns; /* 0 usecs */ @@ -251,15 +246,6 @@ static int max_wakeup_granularity_ns = NSEC_PER_SEC; /* 1 second */ #endif static struct ctl_table kern_table[] = { -#ifndef CONFIG_SCHED_BFS - { - .ctl_name = CTL_UNNUMBERED, - .procname = "sched_child_runs_first", - .data = &sysctl_sched_child_runs_first, - .maxlen =
sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec, - }, #ifdef CONFIG_SCHED_DEBUG { .ctl_name = CTL_UNNUMBERED, @@ -312,6 +298,14 @@ static struct ctl_table kern_table[] = { .strategy = &sysctl_intvec, .extra1 = &zero, }, + { + .ctl_name = CTL_UNNUMBERED, + .procname = "sched_child_runs_first", + .data = &sysctl_sched_child_runs_first, + .maxlen = sizeof(unsigned int), + .mode = 0644, + .proc_handler = &proc_dointvec, + }, { .ctl_name = CTL_UNNUMBERED, .procname = "sched_features", @@ -336,14 +330,6 @@ static struct ctl_table kern_table[] = { .mode = 0644, .proc_handler = &proc_dointvec, }, - { - .ctl_name = CTL_UNNUMBERED, - .procname = "sched_time_avg", - .data = &sysctl_sched_time_avg, - .maxlen = sizeof(unsigned int), - .mode = 0644, - .proc_handler = &proc_dointvec, - }, { .ctl_name = CTL_UNNUMBERED, .procname = "timer_migration", @@ -380,7 +366,6 @@ static struct ctl_table kern_table[] = { .mode = 0644, .proc_handler = &proc_dointvec, }, -#endif /* !CONFIG_SCHED_BFS */ #ifdef CONFIG_PROVE_LOCKING { .ctl_name = CTL_UNNUMBERED, @@ -813,30 +798,6 @@ static struct ctl_table kern_table[] = { .proc_handler = &proc_dointvec, }, #endif -#ifdef CONFIG_SCHED_BFS - { - .ctl_name = CTL_UNNUMBERED, - .procname = "rr_interval", - .data = &rr_interval, - .maxlen = sizeof (int), - .mode = 0644, - .proc_handler = &proc_dointvec_minmax, - .strategy = &sysctl_intvec, - .extra1 = &one, - .extra2 = &one_thousand, - }, - { - .ctl_name = CTL_UNNUMBERED, - .procname = "iso_cpu", - .data = &sched_iso_cpu, - .maxlen = sizeof (int), - .mode = 0644, - .proc_handler = &proc_dointvec_minmax, - .strategy = &sysctl_intvec, - .extra1 = &zero, - .extra2 = &one_hundred, - }, -#endif #if defined(CONFIG_S390) && defined(CONFIG_SMP) { .ctl_name = KERN_SPIN_RETRY, diff --git a/kernel/timer.c b/kernel/timer.c index 77c14aa33d2..23de35c548d 100644 --- a/kernel/timer.c +++ b/kernel/timer.c @@ -1153,7 +1153,8 @@ void update_process_times(int user_tick) struct task_struct *p = current; int cpu = smp_processor_id(); - /* Accounting is done within sched_bfs.c */ + /* Note: this timer irq context must be accounted for as well. 
*/ + account_process_tick(p, user_tick); run_local_timers(); if (rcu_pending(cpu)) rcu_check_callbacks(cpu, user_tick); @@ -1197,7 +1198,7 @@ void do_timer(unsigned long ticks) { jiffies_64 += ticks; update_wall_time(); - calc_global_load(); + calc_global_load(ticks); } #ifdef __ARCH_WANT_SYS_ALARM diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 811f667360a..8c358395d33 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -275,10 +275,10 @@ unsigned long trace_flags = TRACE_ITER_PRINT_PARENT | TRACE_ITER_PRINTK | void trace_wake_up(void) { /* - * The grunqueue_is_locked() can fail, but this is the best we + * The runqueue_is_locked() can fail, but this is the best we * have for now: */ - if (!(trace_flags & TRACE_ITER_BLOCK) && !grunqueue_is_locked()) + if (!(trace_flags & TRACE_ITER_BLOCK) && !runqueue_is_locked()) wake_up(&trace_wait); } diff --git a/kernel/workqueue.c b/kernel/workqueue.c index ea1b4e7674d..0668795d881 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -317,6 +317,8 @@ static int worker_thread(void *__cwq) if (cwq->wq->freezeable) set_freezable(); + set_user_nice(current, -5); + for (;;) { prepare_to_wait(&cwq->more_work, &wait, TASK_INTERRUPTIBLE); if (!freezing(current) && diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index cd686743e66..5e91973f5e9 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -723,37 +723,6 @@ config RCU_TORTURE_TEST_RUNNABLE Say N here if you want the RCU torture tests to start only after being manually enabled via /proc. -config RCU_TORTURE_TEST - tristate "torture tests for RCU" - depends on DEBUG_KERNEL && !SCHED_BFS - default n - help - This option provides a kernel module that runs torture tests - on the RCU infrastructure. The kernel module may be built - after the fact on the running kernel to be tested, if desired. - - Say Y here if you want RCU torture tests to be built into - the kernel. - Say M if you want the RCU torture tests to build as a module. - Say N if you are unsure. - -config RCU_TORTURE_TEST_RUNNABLE - bool "torture tests for RCU runnable by default" - depends on RCU_TORTURE_TEST = y - default n - help - This option provides a way to build the RCU torture tests - directly into the kernel without them starting up at boot - time. You can use /proc/sys/kernel/rcutorture_runnable - to manually override this setting. This /proc file is - available only when the RCU torture tests have been built - into the kernel. - - Say Y here if you want the RCU torture tests to start during - boot (you probably don't). - Say N here if you want the RCU torture tests to start only - after being manually enabled via /proc. - config RCU_CPU_STALL_DETECTOR bool "Check for stalled CPUs delaying RCU grace periods" depends on CLASSIC_RCU || TREE_RCU diff --git a/mm/oom_kill.c b/mm/oom_kill.c index e9b6b326f97..ed452e9485d 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -338,7 +338,7 @@ static void __oom_kill_task(struct task_struct *p, int verbose) * all the memory it needs. That way it should be able to * exit() and clear out its resources quickly... */ - p->time_slice = HZ; + p->rt.time_slice = HZ; set_tsk_thread_flag(p, TIF_MEMDIE); force_sig(SIGKILL, p);
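For reference, the two BFS sysctls deleted from kern_table[] above (rr_interval and iso_cpu) both follow the standard range-clamped integer pattern of this era's ctl_table: proc_dointvec_minmax() with extra1/extra2 as the bounds. A generic sketch with illustrative names and limits:

    static int example_knob = 6;
    static int example_min = 1;
    static int example_max = 1000;

    static struct ctl_table example_table[] = {
            {
                    .ctl_name       = CTL_UNNUMBERED,
                    .procname       = "example_knob",
                    .data           = &example_knob,
                    .maxlen         = sizeof(int),
                    .mode           = 0644,
                    .proc_handler   = &proc_dointvec_minmax,
                    .strategy       = &sysctl_intvec,
                    .extra1         = &example_min,   /* writes below 1 rejected */
                    .extra2         = &example_max,   /* writes above 1000 rejected */
            },
            { .ctl_name = 0 }       /* table terminator */
    };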