-diff --git a/Documentation/sysrq.txt b/Documentation/sysrq.txt
-index 3a3b30ac2a75..9e0745cafbd8 100644
---- a/Documentation/sysrq.txt
-+++ b/Documentation/sysrq.txt
-@@ -59,10 +59,17 @@ On PowerPC - Press 'ALT - Print Screen (or F13) - <command key>,
- On other - If you know of the key combos for other architectures, please
- let me know so I can add them to this section.
-
--On all - write a character to /proc/sysrq-trigger. e.g.:
--
-+On all - write a character to /proc/sysrq-trigger, e.g.:
- echo t > /proc/sysrq-trigger
-
-+On all - Enable network SysRq by writing a cookie to icmp_echo_sysrq, e.g.
-+ echo 0x01020304 >/proc/sys/net/ipv4/icmp_echo_sysrq
-+ Send an ICMP echo request with this pattern plus the particular
-+ SysRq command key. Example:
-+ # ping -c1 -s57 -p0102030468
-+ will trigger the SysRq-H (help) command.
-+
-+
- * What are the 'command' keys?
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- 'b' - Will immediately reboot the system without syncing or unmounting
-diff --git a/Documentation/trace/histograms.txt b/Documentation/trace/histograms.txt
-new file mode 100644
-index 000000000000..6f2aeabf7faa
---- /dev/null
-+++ b/Documentation/trace/histograms.txt
-@@ -0,0 +1,186 @@
-+ Using the Linux Kernel Latency Histograms
-+
-+
-+This document gives a short explanation of how to enable, configure, and use
-+latency histograms. Latency histograms are primarily relevant in the
-+context of real-time enabled kernels (CONFIG_PREEMPT/CONFIG_PREEMPT_RT)
-+and are used in the quality management of the Linux real-time
-+capabilities.
-+
-+
-+* Purpose of latency histograms
-+
-+A latency histogram continuously accumulates the frequencies of latency
-+data. There are two types of histograms:
-+- potential sources of latencies
-+- effective latencies
-+
-+
-+* Potential sources of latencies
-+
-+Potential sources of latencies are code segments where interrupts,
-+preemption or both are disabled (aka critical sections). To create
-+histograms of potential sources of latency, the kernel stores the time
-+stamp at the start of a critical section, determines the time elapsed
-+when the end of the section is reached, and increments the frequency
-+counter of that latency value - irrespective of whether any concurrently
-+running process is affected by latency or not.
-+- Configuration items (in the Kernel hacking/Tracers submenu)
-+ CONFIG_INTERRUPT_OFF_LATENCY
-+ CONFIG_PREEMPT_OFF_LATENCY
-+
-+
-+* Effective latencies
-+
-+Effective latencies actually occur during the wakeup of a process. To
-+determine effective latencies, the kernel stores the time stamp when a
-+process is scheduled to be woken up, and determines the duration of the
-+wakeup time shortly before control is passed over to this process. Note
-+that the apparent latency in user space may be somewhat longer, since the
-+process may be interrupted after control is passed over to it but before
-+the execution in user space takes place. Simply measuring the interval
-+between enqueuing and wakeup may also not be appropriate in cases where a
-+process is scheduled as a result of a timer expiration. The timer may have
-+missed its deadline, e.g. due to disabled interrupts, but this latency
-+would not be registered. Therefore, the offsets of missed timers are
-+recorded in a separate histogram. If both wakeup latency and missed timer
-+offsets are configured and enabled, a third histogram may be enabled that
-+records the overall latency as a sum of the timer latency, if any, and the
-+wakeup latency. This histogram is called "timerandwakeup".
-+- Configuration items (in the Kernel hacking/Tracers submenu)
-+ CONFIG_WAKEUP_LATENCY
-+ CONFIG_MISSED_TIMER_OFFSETS
-+
-+
-+* Usage
-+
-+The interface to the administration of the latency histograms is located
-+in the debugfs file system. To mount it, either enter
-+
-+mount -t sysfs nodev /sys
-+mount -t debugfs nodev /sys/kernel/debug
-+
-+from the shell command line, or add
-+
-+nodev /sys sysfs defaults 0 0
-+nodev /sys/kernel/debug debugfs defaults 0 0
-+
-+to the file /etc/fstab. All latency histogram related files are then
-+available in the directory /sys/kernel/debug/tracing/latency_hist. A
-+particular histogram type is enabled by writing non-zero to the related
-+variable in the /sys/kernel/debug/tracing/latency_hist/enable directory.
-+Select "preemptirqsoff" for the histograms of potential sources of
-+latencies and "wakeup" for histograms of effective latencies etc. The
-+histogram data - one per CPU - are available in the files
-+
-+/sys/kernel/debug/tracing/latency_hist/preemptoff/CPUx
-+/sys/kernel/debug/tracing/latency_hist/irqsoff/CPUx
-+/sys/kernel/debug/tracing/latency_hist/preemptirqsoff/CPUx
-+/sys/kernel/debug/tracing/latency_hist/wakeup/CPUx
-+/sys/kernel/debug/tracing/latency_hist/wakeup/sharedprio/CPUx
-+/sys/kernel/debug/tracing/latency_hist/missed_timer_offsets/CPUx
-+/sys/kernel/debug/tracing/latency_hist/timerandwakeup/CPUx
-+
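-+For example, assuming debugfs is mounted as described above, the wakeup
-+histograms could be enabled with
-+
-+echo 1 >/sys/kernel/debug/tracing/latency_hist/enable/wakeup
-+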
-+The histograms are reset by writing non-zero to the file "reset" in a
-+particular latency directory. To reset all latency data, use
-+
-+#!/bin/sh
-+
-+TRACINGDIR=/sys/kernel/debug/tracing
-+HISTDIR=$TRACINGDIR/latency_hist
-+
-+if test -d $HISTDIR
-+then
-+ cd $HISTDIR
-+ for i in `find . | grep /reset$`
-+ do
-+ echo 1 >$i
-+ done
-+fi
-+
-+
-+* Data format
-+
-+Latency data are stored with a resolution of one microsecond. The
-+maximum latency is 10,240 microseconds. The data are only valid if the
-+overflow register is empty. Every output line contains the latency in
-+microseconds in the first column and the number of samples in the second
-+column. To display only lines with a positive latency count, use, for
-+example,
-+
-+grep -v " 0$" /sys/kernel/debug/tracing/latency_hist/preemptoff/CPU0
-+
-+#Minimum latency: 0 microseconds.
-+#Average latency: 0 microseconds.
-+#Maximum latency: 25 microseconds.
-+#Total samples: 3104770694
-+#There are 0 samples greater or equal than 10240 microseconds
-+#usecs samples
-+ 0 2984486876
-+ 1 49843506
-+ 2 58219047
-+ 3 5348126
-+ 4 2187960
-+ 5 3388262
-+ 6 959289
-+ 7 208294
-+ 8 40420
-+ 9 4485
-+ 10 14918
-+ 11 18340
-+ 12 25052
-+ 13 19455
-+ 14 5602
-+ 15 969
-+ 16 47
-+ 17 18
-+ 18 14
-+ 19 1
-+ 20 3
-+ 21 2
-+ 22 5
-+ 23 2
-+ 25 1
-+
-+
-+* Wakeup latency of a selected process
-+
-+To only collect wakeup latency data of a particular process, write the
-+PID of the requested process to
-+
-+/sys/kernel/debug/tracing/latency_hist/wakeup/pid
-+
-+PIDs are not considered if this variable is set to 0.
-+
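-+For example, to restrict the wakeup histograms to a process with the
-+(arbitrary, hypothetical) PID 1234, one could use
-+
-+echo 1234 >/sys/kernel/debug/tracing/latency_hist/wakeup/pid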
-+
-+* Details of the process with the highest wakeup latency so far
-+
-+Selected data of the process that suffered from the highest wakeup
-+latency that occurred on a particular CPU are available in the file
-+
-+/sys/kernel/debug/tracing/latency_hist/wakeup/max_latency-CPUx.
-+
-+In addition, other relevant system data at the time when the
-+latency occurred are given.
-+
-+The format of the data is (all in one line):
-+<PID> <Priority> <Latency> (<Timeroffset>) <Command> \
-+<- <PID> <Priority> <Command> <Timestamp>
-+
-+The value of <Timeroffset> is only relevant in the combined timer
-+and wakeup latency recording. In the wakeup recording, it is
-+always 0; in the missed_timer_offsets recording, it is the same
-+as <Latency>.
-+
-+When retrospectively searching for the origin of a latency while
-+tracing was not enabled, it may be helpful to know the name and
-+some basic data of the task that (finally) switched to the late
-+real-time task. In addition to the victim's data, the data of
-+the possible culprit are therefore also displayed after the
-+"<-" symbol.
-+
-+Finally, the timestamp of when the latency occurred
-+in <seconds>.<microseconds> after the most recent system boot
-+is provided.
-+
-+These data are also reset when the wakeup histogram is reset.
-diff --git a/arch/Kconfig b/arch/Kconfig
-index 659bdd079277..099fc0f5155e 100644
---- a/arch/Kconfig
-+++ b/arch/Kconfig
-@@ -9,6 +9,7 @@ config OPROFILE
- tristate "OProfile system profiling"
- depends on PROFILING
- depends on HAVE_OPROFILE
-+ depends on !PREEMPT_RT_FULL
- select RING_BUFFER
- select RING_BUFFER_ALLOW_SWAP
- help
-@@ -52,6 +53,7 @@ config KPROBES
- config JUMP_LABEL
- bool "Optimize very unlikely/likely branches"
- depends on HAVE_ARCH_JUMP_LABEL
-+ depends on (!INTERRUPT_OFF_HIST && !PREEMPT_OFF_HIST && !WAKEUP_LATENCY_HIST && !MISSED_TIMER_OFFSETS_HIST)
- help
- This option enables a transparent branch optimization that
- makes certain almost-always-true or almost-always-false branch
-diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
-index b5d529fdffab..5715844e83e3 100644
---- a/arch/arm/Kconfig
-+++ b/arch/arm/Kconfig
-@@ -36,7 +36,7 @@ config ARM
- select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
- select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
- select HAVE_ARCH_HARDENED_USERCOPY
-- select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
-+ select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU && !PREEMPT_RT_BASE
- select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
- select HAVE_ARCH_MMAP_RND_BITS if MMU
- select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
-@@ -75,6 +75,7 @@ config ARM
- select HAVE_PERF_EVENTS
- select HAVE_PERF_REGS
- select HAVE_PERF_USER_STACK_DUMP
-+ select HAVE_PREEMPT_LAZY
- select HAVE_RCU_TABLE_FREE if (SMP && ARM_LPAE)
- select HAVE_REGS_AND_STACK_ACCESS_API
- select HAVE_SYSCALL_TRACEPOINTS
-diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h
-index e53638c8ed8a..6095a1649865 100644
---- a/arch/arm/include/asm/irq.h
-+++ b/arch/arm/include/asm/irq.h
-@@ -22,6 +22,8 @@
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/alpha/include/asm/spinlock_types.h linux-4.14/arch/alpha/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/alpha/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/alpha/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef _ALPHA_SPINLOCK_TYPES_H
+ #define _ALPHA_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct {
+ volatile unsigned int lock;
+ } arch_spinlock_t;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/include/asm/irq.h linux-4.14/arch/arm/include/asm/irq.h
+--- linux-4.14.orig/arch/arm/include/asm/irq.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/include/asm/irq.h 2018-09-05 11:05:07.000000000 +0200
+@@ -23,6 +23,8 @@
#endif
#ifndef __ASSEMBLY__
struct irqaction;
struct pt_regs;
extern void migrate_irqs(void);
-diff --git a/arch/arm/include/asm/switch_to.h b/arch/arm/include/asm/switch_to.h
-index 12ebfcc1d539..c962084605bc 100644
---- a/arch/arm/include/asm/switch_to.h
-+++ b/arch/arm/include/asm/switch_to.h
-@@ -3,6 +3,13 @@
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/include/asm/spinlock_types.h linux-4.14/arch/arm/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/arm/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ #define TICKET_SHIFT 16
+
+ typedef struct {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/include/asm/switch_to.h linux-4.14/arch/arm/include/asm/switch_to.h
+--- linux-4.14.orig/arch/arm/include/asm/switch_to.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/include/asm/switch_to.h 2018-09-05 11:05:07.000000000 +0200
+@@ -4,6 +4,13 @@
#include <linux/thread_info.h>
/*
* For v7 SMP cores running a preemptible kernel we may be pre-empted
* during a TLB maintenance operation, so execute an inner-shareable dsb
-@@ -25,6 +32,7 @@ extern struct task_struct *__switch_to(struct task_struct *, struct thread_info
+@@ -26,6 +33,7 @@
#define switch_to(prev,next,last) \
do { \
__complete_pending_tlbi(); \
last = __switch_to(prev,task_thread_info(prev), task_thread_info(next)); \
} while (0)
-diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
-index 776757d1604a..1f36a4eccc72 100644
---- a/arch/arm/include/asm/thread_info.h
-+++ b/arch/arm/include/asm/thread_info.h
-@@ -49,6 +49,7 @@ struct cpu_context_save {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/include/asm/thread_info.h linux-4.14/arch/arm/include/asm/thread_info.h
+--- linux-4.14.orig/arch/arm/include/asm/thread_info.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/include/asm/thread_info.h 2018-09-05 11:05:07.000000000 +0200
+@@ -49,6 +49,7 @@
struct thread_info {
unsigned long flags; /* low level flags */
int preempt_count; /* 0 => preemptable, <0 => bug */
mm_segment_t addr_limit; /* address limit */
struct task_struct *task; /* main task structure */
__u32 cpu; /* cpu */
-@@ -142,7 +143,8 @@ extern int vfp_restore_user_hwstate(struct user_vfp __user *,
+@@ -142,7 +143,8 @@
#define TIF_SYSCALL_TRACE 4 /* syscall trace active */
#define TIF_SYSCALL_AUDIT 5 /* syscall auditing active */
#define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */
#define TIF_NOHZ 12 /* in adaptive nohz mode */
#define TIF_USING_IWMMXT 17
-@@ -152,6 +154,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp __user *,
+@@ -152,6 +154,7 @@
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_UPROBE (1 << TIF_UPROBE)
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
-@@ -167,7 +170,8 @@ extern int vfp_restore_user_hwstate(struct user_vfp __user *,
+@@ -167,7 +170,8 @@
* Change these and you break ASM code in entry-common.S
*/
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
#endif /* __KERNEL__ */
#endif /* __ASM_ARM_THREAD_INFO_H */
-diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
-index 608008229c7d..3866da3f7bb7 100644
---- a/arch/arm/kernel/asm-offsets.c
-+++ b/arch/arm/kernel/asm-offsets.c
-@@ -65,6 +65,7 @@ int main(void)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/Kconfig linux-4.14/arch/arm/Kconfig
+--- linux-4.14.orig/arch/arm/Kconfig 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -45,7 +45,7 @@
+ select HARDIRQS_SW_RESEND
+ select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
+ select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
+- select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
++ select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU && !PREEMPT_RT_BASE
+ select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
+ select HAVE_ARCH_MMAP_RND_BITS if MMU
+ select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
+@@ -85,6 +85,7 @@
+ select HAVE_PERF_EVENTS
+ select HAVE_PERF_REGS
+ select HAVE_PERF_USER_STACK_DUMP
++ select HAVE_PREEMPT_LAZY
+ select HAVE_RCU_TABLE_FREE if (SMP && ARM_LPAE)
+ select HAVE_REGS_AND_STACK_ACCESS_API
+ select HAVE_SYSCALL_TRACEPOINTS
+@@ -2164,7 +2165,7 @@
+
+ config KERNEL_MODE_NEON
+ bool "Support for NEON in kernel mode"
+- depends on NEON && AEABI
++ depends on NEON && AEABI && !PREEMPT_RT_BASE
+ help
+ Say Y to include support for NEON in kernel mode.
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/kernel/asm-offsets.c linux-4.14/arch/arm/kernel/asm-offsets.c
+--- linux-4.14.orig/arch/arm/kernel/asm-offsets.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/kernel/asm-offsets.c 2018-09-05 11:05:07.000000000 +0200
+@@ -65,6 +65,7 @@
BLANK();
DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
DEFINE(TI_PREEMPT, offsetof(struct thread_info, preempt_count));
DEFINE(TI_ADDR_LIMIT, offsetof(struct thread_info, addr_limit));
DEFINE(TI_TASK, offsetof(struct thread_info, task));
DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
-diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
-index 9f157e7c51e7..468e224d76aa 100644
---- a/arch/arm/kernel/entry-armv.S
-+++ b/arch/arm/kernel/entry-armv.S
-@@ -220,11 +220,18 @@ ENDPROC(__dabt_svc)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/kernel/entry-armv.S linux-4.14/arch/arm/kernel/entry-armv.S
+--- linux-4.14.orig/arch/arm/kernel/entry-armv.S 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/kernel/entry-armv.S 2018-09-05 11:05:07.000000000 +0200
+@@ -220,11 +220,18 @@
#ifdef CONFIG_PREEMPT
ldr r8, [tsk, #TI_PREEMPT] @ get preempt count
#endif
svc_exit r5, irq = 1 @ return from exception
-@@ -239,8 +246,14 @@ ENDPROC(__irq_svc)
+@@ -239,8 +246,14 @@
1: bl preempt_schedule_irq @ irq en/disable is done inside
ldr r0, [tsk, #TI_FLAGS] @ get new tasks TI_FLAGS
tst r0, #_TIF_NEED_RESCHED
#endif
__und_fault:
-diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
-index 10c3283d6c19..8872937862cc 100644
---- a/arch/arm/kernel/entry-common.S
-+++ b/arch/arm/kernel/entry-common.S
-@@ -36,7 +36,9 @@
- UNWIND(.cantunwind )
- disable_irq_notrace @ disable interrupts
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/kernel/entry-common.S linux-4.14/arch/arm/kernel/entry-common.S
+--- linux-4.14.orig/arch/arm/kernel/entry-common.S 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/kernel/entry-common.S 2018-09-05 11:05:07.000000000 +0200
+@@ -53,7 +53,9 @@
+ cmp r2, #TASK_SIZE
+ blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
- tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
+ tst r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
+ tst r1, #_TIF_SECCOMP
bne fast_work_pending
- /* perform architecture specific actions before user return */
-@@ -62,8 +64,11 @@ ENDPROC(ret_fast_syscall)
- str r0, [sp, #S_R0 + S_OFF]! @ save returned r0
- disable_irq_notrace @ disable interrupts
+
+@@ -83,8 +85,11 @@
+ cmp r2, #TASK_SIZE
+ blne addr_limit_check_failed
ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing
- tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK
+ tst r1, #((_TIF_SYSCALL_WORK | _TIF_WORK_MASK) & ~_TIF_SECCOMP)
-+ bne do_slower_path
++ bne do_slower_path
+ tst r1, #_TIF_SECCOMP
beq no_work_pending
+do_slower_path:
UNWIND(.fnend )
ENDPROC(ret_fast_syscall)
-diff --git a/arch/arm/kernel/patch.c b/arch/arm/kernel/patch.c
-index 69bda1a5707e..1f665acaa6a9 100644
---- a/arch/arm/kernel/patch.c
-+++ b/arch/arm/kernel/patch.c
-@@ -15,7 +15,7 @@ struct patch {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/kernel/patch.c linux-4.14/arch/arm/kernel/patch.c
+--- linux-4.14.orig/arch/arm/kernel/patch.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/kernel/patch.c 2018-09-05 11:05:07.000000000 +0200
+@@ -16,7 +16,7 @@
unsigned int insn;
};
static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
__acquires(&patch_lock)
-@@ -32,7 +32,7 @@ static void __kprobes *patch_map(void *addr, int fixmap, unsigned long *flags)
+@@ -33,7 +33,7 @@
return addr;
if (flags)
else
__acquire(&patch_lock);
-@@ -47,7 +47,7 @@ static void __kprobes patch_unmap(int fixmap, unsigned long *flags)
+@@ -48,7 +48,7 @@
clear_fixmap(fixmap);
if (flags)
else
__release(&patch_lock);
}
-diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
-index 91d2d5b01414..750550098b59 100644
---- a/arch/arm/kernel/process.c
-+++ b/arch/arm/kernel/process.c
-@@ -322,6 +322,30 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/kernel/process.c linux-4.14/arch/arm/kernel/process.c
+--- linux-4.14.orig/arch/arm/kernel/process.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/kernel/process.c 2018-09-05 11:05:07.000000000 +0200
+@@ -325,6 +325,30 @@
}
#ifdef CONFIG_MMU
#ifdef CONFIG_KUSER_HELPERS
/*
* The vectors page is always readable from user space for the
-diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
-index 7b8f2141427b..96541e00b74a 100644
---- a/arch/arm/kernel/signal.c
-+++ b/arch/arm/kernel/signal.c
-@@ -572,7 +572,8 @@ do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/kernel/signal.c linux-4.14/arch/arm/kernel/signal.c
+--- linux-4.14.orig/arch/arm/kernel/signal.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/kernel/signal.c 2018-09-05 11:05:07.000000000 +0200
+@@ -615,7 +615,8 @@
*/
trace_hardirqs_off();
do {
schedule();
} else {
if (unlikely(!user_mode(regs)))
-diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
-index 7dd14e8395e6..4cd7e3d98035 100644
---- a/arch/arm/kernel/smp.c
-+++ b/arch/arm/kernel/smp.c
-@@ -234,8 +234,6 @@ int __cpu_disable(void)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/kernel/smp.c linux-4.14/arch/arm/kernel/smp.c
+--- linux-4.14.orig/arch/arm/kernel/smp.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/kernel/smp.c 2018-09-05 11:05:07.000000000 +0200
+@@ -236,8 +236,6 @@
flush_cache_louis();
local_flush_tlb_all();
return 0;
}
-@@ -251,6 +249,9 @@ void __cpu_die(unsigned int cpu)
- pr_err("CPU%u: cpu didn't die\n", cpu);
- return;
+@@ -255,6 +253,7 @@
}
-+
-+ clear_tasks_mm_cpumask(cpu);
-+
- pr_notice("CPU%u: shutdown\n", cpu);
+ pr_debug("CPU%u: shutdown\n", cpu);
++ clear_tasks_mm_cpumask(cpu);
/*
-diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c
-index 0bee233fef9a..314cfb232a63 100644
---- a/arch/arm/kernel/unwind.c
-+++ b/arch/arm/kernel/unwind.c
-@@ -93,7 +93,7 @@ extern const struct unwind_idx __start_unwind_idx[];
+ * platform_cpu_kill() is generally expected to do the powering off
+ * and/or cutting of clocks to the dying CPU. Optionally, this may
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/kernel/unwind.c linux-4.14/arch/arm/kernel/unwind.c
+--- linux-4.14.orig/arch/arm/kernel/unwind.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/kernel/unwind.c 2018-09-05 11:05:07.000000000 +0200
+@@ -93,7 +93,7 @@
static const struct unwind_idx *__origin_unwind_idx;
extern const struct unwind_idx __stop_unwind_idx[];
static LIST_HEAD(unwind_tables);
/* Convert a prel31 symbol to an absolute address */
-@@ -201,7 +201,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
+@@ -201,7 +201,7 @@
/* module unwind tables */
struct unwind_table *table;
list_for_each_entry(table, &unwind_tables, list) {
if (addr >= table->begin_addr &&
addr < table->end_addr) {
-@@ -213,7 +213,7 @@ static const struct unwind_idx *unwind_find_idx(unsigned long addr)
+@@ -213,7 +213,7 @@
break;
}
}
}
pr_debug("%s: idx = %p\n", __func__, idx);
-@@ -529,9 +529,9 @@ struct unwind_table *unwind_table_add(unsigned long start, unsigned long size,
+@@ -529,9 +529,9 @@
tab->begin_addr = text_addr;
tab->end_addr = text_addr + text_size;
return tab;
}
-@@ -543,9 +543,9 @@ void unwind_table_del(struct unwind_table *tab)
+@@ -543,9 +543,9 @@
if (!tab)
return;
kfree(tab);
}
-diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
-index 19b5f5c1c0ff..82aa639e6737 100644
---- a/arch/arm/kvm/arm.c
-+++ b/arch/arm/kvm/arm.c
-@@ -619,7 +619,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
- * involves poking the GIC, which must be done in a
- * non-preemptible context.
- */
-- preempt_disable();
-+ migrate_disable();
- kvm_pmu_flush_hwstate(vcpu);
- kvm_timer_flush_hwstate(vcpu);
- kvm_vgic_flush_hwstate(vcpu);
-@@ -640,7 +640,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
- kvm_pmu_sync_hwstate(vcpu);
- kvm_timer_sync_hwstate(vcpu);
- kvm_vgic_sync_hwstate(vcpu);
-- preempt_enable();
-+ migrate_enable();
- continue;
- }
-
-@@ -696,7 +696,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
-
- kvm_vgic_sync_hwstate(vcpu);
-
-- preempt_enable();
-+ migrate_enable();
-
- ret = handle_exit(vcpu, run, ret);
- }
-diff --git a/arch/arm/mach-exynos/platsmp.c b/arch/arm/mach-exynos/platsmp.c
-index 98ffe1e62ad5..df9769ddece5 100644
---- a/arch/arm/mach-exynos/platsmp.c
-+++ b/arch/arm/mach-exynos/platsmp.c
-@@ -229,7 +229,7 @@ static void __iomem *scu_base_addr(void)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/mach-exynos/platsmp.c linux-4.14/arch/arm/mach-exynos/platsmp.c
+--- linux-4.14.orig/arch/arm/mach-exynos/platsmp.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/mach-exynos/platsmp.c 2018-09-05 11:05:07.000000000 +0200
+@@ -229,7 +229,7 @@
return (void __iomem *)(S5P_VA_SCU);
}
static void exynos_secondary_init(unsigned int cpu)
{
-@@ -242,8 +242,8 @@ static void exynos_secondary_init(unsigned int cpu)
+@@ -242,8 +242,8 @@
/*
* Synchronise with the boot thread.
*/
}
int exynos_set_boot_addr(u32 core_id, unsigned long boot_addr)
-@@ -307,7 +307,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -307,7 +307,7 @@
* Set synchronisation state between this boot processor
* and the secondary one
*/
/*
* The secondary processor is waiting to be released from
-@@ -334,7 +334,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -334,7 +334,7 @@
if (timeout == 0) {
printk(KERN_ERR "cpu1 power enable failed");
return -ETIMEDOUT;
}
}
-@@ -380,7 +380,7 @@ static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -380,7 +380,7 @@
* calibrations, then wait for it to finish
*/
fail:
return pen_release != -1 ? ret : 0;
}
-diff --git a/arch/arm/mach-hisi/platmcpm.c b/arch/arm/mach-hisi/platmcpm.c
-index 4b653a8cb75c..b03d5a922cb1 100644
---- a/arch/arm/mach-hisi/platmcpm.c
-+++ b/arch/arm/mach-hisi/platmcpm.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/mach-hisi/platmcpm.c linux-4.14/arch/arm/mach-hisi/platmcpm.c
+--- linux-4.14.orig/arch/arm/mach-hisi/platmcpm.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/mach-hisi/platmcpm.c 2018-09-05 11:05:07.000000000 +0200
@@ -61,7 +61,7 @@
static void __iomem *sysctrl, *fabric;
static u32 fabric_phys_addr;
/*
* [0]: bootwrapper physical address
-@@ -113,7 +113,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle)
+@@ -113,7 +113,7 @@
if (cluster >= HIP04_MAX_CLUSTERS || cpu >= HIP04_MAX_CPUS_PER_CLUSTER)
return -EINVAL;
if (hip04_cpu_table[cluster][cpu])
goto out;
-@@ -147,7 +147,7 @@ static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle)
+@@ -147,7 +147,7 @@
out:
hip04_cpu_table[cluster][cpu]++;
return 0;
}
-@@ -162,11 +162,11 @@ static void hip04_cpu_die(unsigned int l_cpu)
+@@ -162,11 +162,11 @@
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
return;
} else if (hip04_cpu_table[cluster][cpu] > 1) {
pr_err("Cluster %d CPU%d boots multiple times\n", cluster, cpu);
-@@ -174,7 +174,7 @@ static void hip04_cpu_die(unsigned int l_cpu)
+@@ -174,7 +174,7 @@
}
last_man = hip04_cluster_is_down(cluster);
if (last_man) {
/* Since it's Cortex A15, disable L2 prefetching. */
asm volatile(
-@@ -203,7 +203,7 @@ static int hip04_cpu_kill(unsigned int l_cpu)
+@@ -203,7 +203,7 @@
cpu >= HIP04_MAX_CPUS_PER_CLUSTER);
count = TIMEOUT_MSEC / POLL_MSEC;
for (tries = 0; tries < count; tries++) {
if (hip04_cpu_table[cluster][cpu])
goto err;
-@@ -211,10 +211,10 @@ static int hip04_cpu_kill(unsigned int l_cpu)
+@@ -211,10 +211,10 @@
data = readl_relaxed(sysctrl + SC_CPU_RESET_STATUS(cluster));
if (data & CORE_WFI_STATUS(cpu))
break;
}
if (tries >= count)
goto err;
-@@ -231,10 +231,10 @@ static int hip04_cpu_kill(unsigned int l_cpu)
+@@ -231,10 +231,10 @@
goto err;
if (hip04_cluster_is_down(cluster))
hip04_set_snoop_filter(cluster, 0);
return 0;
}
#endif
-diff --git a/arch/arm/mach-omap2/omap-smp.c b/arch/arm/mach-omap2/omap-smp.c
-index b4de3da6dffa..b52893319d75 100644
---- a/arch/arm/mach-omap2/omap-smp.c
-+++ b/arch/arm/mach-omap2/omap-smp.c
-@@ -64,7 +64,7 @@ static const struct omap_smp_config omap5_cfg __initconst = {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/mach-omap2/omap-smp.c linux-4.14/arch/arm/mach-omap2/omap-smp.c
+--- linux-4.14.orig/arch/arm/mach-omap2/omap-smp.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/arm/mach-omap2/omap-smp.c 2018-09-05 11:05:07.000000000 +0200
+@@ -69,7 +69,7 @@
.startup_addr = omap5_secondary_startup,
};
void __iomem *omap4_get_scu_base(void)
{
-@@ -131,8 +131,8 @@ static void omap4_secondary_init(unsigned int cpu)
+@@ -177,8 +177,8 @@
/*
* Synchronise with the boot thread.
*/
}
static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
-@@ -146,7 +146,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -191,7 +191,7 @@
* Set synchronisation state between this boot processor
* and the secondary one
*/
/*
* Update the AuxCoreBoot0 with boot state for secondary core.
-@@ -223,7 +223,7 @@ static int omap4_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -270,7 +270,7 @@
* Now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/
return 0;
}
-diff --git a/arch/arm/mach-prima2/platsmp.c b/arch/arm/mach-prima2/platsmp.c
-index 0875b99add18..18b6d98d2581 100644
---- a/arch/arm/mach-prima2/platsmp.c
-+++ b/arch/arm/mach-prima2/platsmp.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/mach-prima2/platsmp.c linux-4.14/arch/arm/mach-prima2/platsmp.c
+--- linux-4.14.orig/arch/arm/mach-prima2/platsmp.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/mach-prima2/platsmp.c 2018-09-05 11:05:07.000000000 +0200
@@ -22,7 +22,7 @@
static void __iomem *clk_base;
static void sirfsoc_secondary_init(unsigned int cpu)
{
-@@ -36,8 +36,8 @@ static void sirfsoc_secondary_init(unsigned int cpu)
+@@ -36,8 +36,8 @@
/*
* Synchronise with the boot thread.
*/
}
static const struct of_device_id clk_ids[] = {
-@@ -75,7 +75,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -75,7 +75,7 @@
/* make sure write buffer is drained */
mb();
/*
* The secondary processor is waiting to be released from
-@@ -107,7 +107,7 @@ static int sirfsoc_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -107,7 +107,7 @@
* now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/
return pen_release != -1 ? -ENOSYS : 0;
}
-diff --git a/arch/arm/mach-qcom/platsmp.c b/arch/arm/mach-qcom/platsmp.c
-index 5494c9e0c909..e8ce157d3548 100644
---- a/arch/arm/mach-qcom/platsmp.c
-+++ b/arch/arm/mach-qcom/platsmp.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/mach-qcom/platsmp.c linux-4.14/arch/arm/mach-qcom/platsmp.c
+--- linux-4.14.orig/arch/arm/mach-qcom/platsmp.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/mach-qcom/platsmp.c 2018-09-05 11:05:07.000000000 +0200
@@ -46,7 +46,7 @@
extern void secondary_startup_arm(void);
#ifdef CONFIG_HOTPLUG_CPU
static void qcom_cpu_die(unsigned int cpu)
-@@ -60,8 +60,8 @@ static void qcom_secondary_init(unsigned int cpu)
+@@ -60,8 +60,8 @@
/*
* Synchronise with the boot thread.
*/
}
static int scss_release_secondary(unsigned int cpu)
-@@ -284,7 +284,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int))
+@@ -284,7 +284,7 @@
* set synchronisation state between this boot processor
* and the secondary one
*/
/*
* Send the secondary CPU a soft interrupt, thereby causing
-@@ -297,7 +297,7 @@ static int qcom_boot_secondary(unsigned int cpu, int (*func)(unsigned int))
+@@ -297,7 +297,7 @@
* now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/
return ret;
}
-diff --git a/arch/arm/mach-spear/platsmp.c b/arch/arm/mach-spear/platsmp.c
-index 8d1e2d551786..7fa56cc78118 100644
---- a/arch/arm/mach-spear/platsmp.c
-+++ b/arch/arm/mach-spear/platsmp.c
-@@ -32,7 +32,7 @@ static void write_pen_release(int val)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/mach-spear/platsmp.c linux-4.14/arch/arm/mach-spear/platsmp.c
+--- linux-4.14.orig/arch/arm/mach-spear/platsmp.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/mach-spear/platsmp.c 2018-09-05 11:05:07.000000000 +0200
+@@ -32,7 +32,7 @@
sync_cache_w(&pen_release);
}
static void __iomem *scu_base = IOMEM(VA_SCU_BASE);
-@@ -47,8 +47,8 @@ static void spear13xx_secondary_init(unsigned int cpu)
+@@ -47,8 +47,8 @@
/*
* Synchronise with the boot thread.
*/
}
static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
-@@ -59,7 +59,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -59,7 +59,7 @@
* set synchronisation state between this boot processor
* and the secondary one
*/
/*
* The secondary processor is waiting to be released from
-@@ -84,7 +84,7 @@ static int spear13xx_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -84,7 +84,7 @@
* now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/
return pen_release != -1 ? -ENOSYS : 0;
}
-diff --git a/arch/arm/mach-sti/platsmp.c b/arch/arm/mach-sti/platsmp.c
-index ea5a2277ee46..b988e081ac79 100644
---- a/arch/arm/mach-sti/platsmp.c
-+++ b/arch/arm/mach-sti/platsmp.c
-@@ -35,7 +35,7 @@ static void write_pen_release(int val)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/mach-sti/platsmp.c linux-4.14/arch/arm/mach-sti/platsmp.c
+--- linux-4.14.orig/arch/arm/mach-sti/platsmp.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/mach-sti/platsmp.c 2018-09-05 11:05:07.000000000 +0200
+@@ -35,7 +35,7 @@
sync_cache_w(&pen_release);
}
static void sti_secondary_init(unsigned int cpu)
{
-@@ -48,8 +48,8 @@ static void sti_secondary_init(unsigned int cpu)
+@@ -48,8 +48,8 @@
/*
* Synchronise with the boot thread.
*/
}
static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
-@@ -60,7 +60,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -60,7 +60,7 @@
* set synchronisation state between this boot processor
* and the secondary one
*/
/*
* The secondary processor is waiting to be released from
-@@ -91,7 +91,7 @@ static int sti_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -91,7 +91,7 @@
* now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/
return pen_release != -1 ? -ENOSYS : 0;
}
-diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
-index 3a2e678b8d30..3ed1e9ba6a01 100644
---- a/arch/arm/mm/fault.c
-+++ b/arch/arm/mm/fault.c
-@@ -430,6 +430,9 @@ do_translation_fault(unsigned long addr, unsigned int fsr,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/mm/fault.c linux-4.14/arch/arm/mm/fault.c
+--- linux-4.14.orig/arch/arm/mm/fault.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/mm/fault.c 2018-09-05 11:05:07.000000000 +0200
+@@ -434,6 +434,9 @@
if (addr < TASK_SIZE)
return do_page_fault(addr, fsr, regs);
if (user_mode(regs))
goto bad_area;
-@@ -497,6 +500,9 @@ do_translation_fault(unsigned long addr, unsigned int fsr,
+@@ -501,6 +504,9 @@
static int
do_sect_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
{
do_bad_area(addr, fsr, regs);
return 0;
}
-diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
-index d02f8187b1cc..542692dbd40a 100644
---- a/arch/arm/mm/highmem.c
-+++ b/arch/arm/mm/highmem.c
-@@ -34,6 +34,11 @@ static inline pte_t get_fixmap_pte(unsigned long vaddr)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/mm/highmem.c linux-4.14/arch/arm/mm/highmem.c
+--- linux-4.14.orig/arch/arm/mm/highmem.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/mm/highmem.c 2018-09-05 11:05:07.000000000 +0200
+@@ -34,6 +34,11 @@
return *ptep;
}
void *kmap(struct page *page)
{
might_sleep();
-@@ -54,12 +59,13 @@ EXPORT_SYMBOL(kunmap);
+@@ -54,12 +59,13 @@
void *kmap_atomic(struct page *page)
{
pagefault_disable();
if (!PageHighMem(page))
return page_address(page);
-@@ -79,7 +85,7 @@ void *kmap_atomic(struct page *page)
+@@ -79,7 +85,7 @@
type = kmap_atomic_idx_push();
vaddr = __fix_to_virt(idx);
#ifdef CONFIG_DEBUG_HIGHMEM
/*
-@@ -93,7 +99,10 @@ void *kmap_atomic(struct page *page)
+@@ -93,7 +99,10 @@
* in place, so the contained TLB flush ensures the TLB is updated
* with the new mapping.
*/
return (void *)vaddr;
}
-@@ -106,44 +115,75 @@ void __kunmap_atomic(void *kvaddr)
+@@ -106,44 +115,75 @@
if (kvaddr >= (void *)FIXADDR_START) {
type = kmap_atomic_idx();
+ }
+}
+#endif
-diff --git a/arch/arm/plat-versatile/platsmp.c b/arch/arm/plat-versatile/platsmp.c
-index c2366510187a..6b60f582b738 100644
---- a/arch/arm/plat-versatile/platsmp.c
-+++ b/arch/arm/plat-versatile/platsmp.c
-@@ -32,7 +32,7 @@ static void write_pen_release(int val)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm/plat-versatile/platsmp.c linux-4.14/arch/arm/plat-versatile/platsmp.c
+--- linux-4.14.orig/arch/arm/plat-versatile/platsmp.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm/plat-versatile/platsmp.c 2018-09-05 11:05:07.000000000 +0200
+@@ -32,7 +32,7 @@
sync_cache_w(&pen_release);
}
void versatile_secondary_init(unsigned int cpu)
{
-@@ -45,8 +45,8 @@ void versatile_secondary_init(unsigned int cpu)
+@@ -45,8 +45,8 @@
/*
* Synchronise with the boot thread.
*/
}
int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
-@@ -57,7 +57,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -57,7 +57,7 @@
* Set synchronisation state between this boot processor
* and the secondary one
*/
/*
* This is really belt and braces; we hold unintended secondary
-@@ -87,7 +87,7 @@ int versatile_boot_secondary(unsigned int cpu, struct task_struct *idle)
+@@ -87,7 +87,7 @@
* now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/
return pen_release != -1 ? -ENOSYS : 0;
}
-diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
-index 969ef880d234..1182fe883771 100644
---- a/arch/arm64/Kconfig
-+++ b/arch/arm64/Kconfig
-@@ -91,6 +91,7 @@ config ARM64
- select HAVE_PERF_EVENTS
- select HAVE_PERF_REGS
- select HAVE_PERF_USER_STACK_DUMP
-+ select HAVE_PREEMPT_LAZY
- select HAVE_REGS_AND_STACK_ACCESS_API
- select HAVE_RCU_TABLE_FREE
- select HAVE_SYSCALL_TRACEPOINTS
-@@ -694,7 +695,7 @@ config XEN_DOM0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm64/crypto/crc32-ce-glue.c linux-4.14/arch/arm64/crypto/crc32-ce-glue.c
+--- linux-4.14.orig/arch/arm64/crypto/crc32-ce-glue.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/arm64/crypto/crc32-ce-glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -208,7 +208,8 @@
+
+ static int __init crc32_pmull_mod_init(void)
+ {
+- if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && (elf_hwcap & HWCAP_PMULL)) {
++ if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
++ !IS_ENABLED(CONFIG_PREEMPT_RT_BASE) && (elf_hwcap & HWCAP_PMULL)) {
+ crc32_pmull_algs[0].update = crc32_pmull_update;
+ crc32_pmull_algs[1].update = crc32c_pmull_update;
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm64/crypto/Kconfig linux-4.14/arch/arm64/crypto/Kconfig
+--- linux-4.14.orig/arch/arm64/crypto/Kconfig 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm64/crypto/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -19,19 +19,19 @@
+
+ config CRYPTO_SHA1_ARM64_CE
+ tristate "SHA-1 digest algorithm (ARMv8 Crypto Extensions)"
+- depends on KERNEL_MODE_NEON
++ depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
+ select CRYPTO_HASH
+ select CRYPTO_SHA1
+
+ config CRYPTO_SHA2_ARM64_CE
+ tristate "SHA-224/SHA-256 digest algorithm (ARMv8 Crypto Extensions)"
+- depends on KERNEL_MODE_NEON
++ depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
+ select CRYPTO_HASH
+ select CRYPTO_SHA256_ARM64
+
+ config CRYPTO_GHASH_ARM64_CE
+ tristate "GHASH/AES-GCM using ARMv8 Crypto Extensions"
+- depends on KERNEL_MODE_NEON
++ depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
+ select CRYPTO_HASH
+ select CRYPTO_GF128MUL
+ select CRYPTO_AES
+@@ -39,7 +39,7 @@
+
+ config CRYPTO_CRCT10DIF_ARM64_CE
+ tristate "CRCT10DIF digest algorithm using PMULL instructions"
+- depends on KERNEL_MODE_NEON && CRC_T10DIF
++ depends on KERNEL_MODE_NEON && CRC_T10DIF && !PREEMPT_RT_BASE
+ select CRYPTO_HASH
+
+ config CRYPTO_CRC32_ARM64_CE
+@@ -53,13 +53,13 @@
+
+ config CRYPTO_AES_ARM64_CE
+ tristate "AES core cipher using ARMv8 Crypto Extensions"
+- depends on ARM64 && KERNEL_MODE_NEON
++ depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
+ select CRYPTO_ALGAPI
+ select CRYPTO_AES_ARM64
+
+ config CRYPTO_AES_ARM64_CE_CCM
+ tristate "AES in CCM mode using ARMv8 Crypto Extensions"
+- depends on ARM64 && KERNEL_MODE_NEON
++ depends on ARM64 && KERNEL_MODE_NEON && !PREEMPT_RT_BASE
+ select CRYPTO_ALGAPI
+ select CRYPTO_AES_ARM64_CE
+ select CRYPTO_AES_ARM64
+@@ -67,7 +67,7 @@
+
+ config CRYPTO_AES_ARM64_CE_BLK
+ tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions"
+- depends on KERNEL_MODE_NEON
++ depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_AES_ARM64_CE
+ select CRYPTO_AES_ARM64
+@@ -75,7 +75,7 @@
+
+ config CRYPTO_AES_ARM64_NEON_BLK
+ tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions"
+- depends on KERNEL_MODE_NEON
++ depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_AES_ARM64
+ select CRYPTO_AES
+@@ -83,13 +83,13 @@
+
+ config CRYPTO_CHACHA20_NEON
+ tristate "NEON accelerated ChaCha20 symmetric cipher"
+- depends on KERNEL_MODE_NEON
++ depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_CHACHA20
+
+ config CRYPTO_AES_ARM64_BS
+ tristate "AES in ECB/CBC/CTR/XTS modes using bit-sliced NEON algorithm"
+- depends on KERNEL_MODE_NEON
++ depends on KERNEL_MODE_NEON && !PREEMPT_RT_BASE
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_AES_ARM64_NEON_BLK
+ select CRYPTO_AES_ARM64
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm64/include/asm/spinlock_types.h linux-4.14/arch/arm64/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/arm64/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/arm64/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -16,10 +16,6 @@
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+-#if !defined(__LINUX_SPINLOCK_TYPES_H) && !defined(__ASM_SPINLOCK_H)
+-# error "please don't include this file directly"
+-#endif
+-
+ #include <linux/types.h>
- config XEN
- bool "Xen guest support on ARM64"
-- depends on ARM64 && OF
-+ depends on ARM64 && OF && !PREEMPT_RT_FULL
- select SWIOTLB_XEN
- select PARAVIRT
- help
-diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
-index e9ea5a6bd449..6c500ad63c6a 100644
---- a/arch/arm64/include/asm/thread_info.h
-+++ b/arch/arm64/include/asm/thread_info.h
-@@ -49,6 +49,7 @@ struct thread_info {
- mm_segment_t addr_limit; /* address limit */
- struct task_struct *task; /* main task structure */
+ #define TICKET_SHIFT 16
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm64/include/asm/thread_info.h linux-4.14/arch/arm64/include/asm/thread_info.h
+--- linux-4.14.orig/arch/arm64/include/asm/thread_info.h 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/arm64/include/asm/thread_info.h 2018-09-05 11:05:07.000000000 +0200
+@@ -43,6 +43,7 @@
+ u64 ttbr0; /* saved TTBR0_EL1 */
+ #endif
int preempt_count; /* 0 => preemptable, <0 => bug */
+ int preempt_lazy_count; /* 0 => preemptable, <0 => bug */
- int cpu; /* cpu */
};
-@@ -112,6 +113,7 @@ static inline struct thread_info *current_thread_info(void)
- #define TIF_NEED_RESCHED 1
- #define TIF_NOTIFY_RESUME 2 /* callback before returning to user */
+ #define INIT_THREAD_INFO(tsk) \
+@@ -82,6 +83,7 @@
#define TIF_FOREIGN_FPSTATE 3 /* CPU's FP state is not current's */
-+#define TIF_NEED_RESCHED_LAZY 4
+ #define TIF_UPROBE 4 /* uprobe breakpoint or singlestep */
+ #define TIF_FSCHECK 5 /* Check FS is USER_DS on return */
++#define TIF_NEED_RESCHED_LAZY 6
#define TIF_NOHZ 7
#define TIF_SYSCALL_TRACE 8
#define TIF_SYSCALL_AUDIT 9
-@@ -127,6 +129,7 @@ static inline struct thread_info *current_thread_info(void)
+@@ -98,6 +100,7 @@
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE)
#define _TIF_NOHZ (1 << TIF_NOHZ)
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
-@@ -135,7 +138,9 @@ static inline struct thread_info *current_thread_info(void)
- #define _TIF_32BIT (1 << TIF_32BIT)
+@@ -109,8 +112,9 @@
#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
-- _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
-+ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
-+ _TIF_NEED_RESCHED_LAZY)
-+#define _TIF_NEED_RESCHED_MASK (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)
+ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
+- _TIF_UPROBE | _TIF_FSCHECK)
++ _TIF_UPROBE | _TIF_FSCHECK | _TIF_NEED_RESCHED_LAZY)
++#define _TIF_NEED_RESCHED_MASK (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)
#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
_TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
-diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
-index 4a2f0f0fef32..6bf2bc17c400 100644
---- a/arch/arm64/kernel/asm-offsets.c
-+++ b/arch/arm64/kernel/asm-offsets.c
-@@ -38,6 +38,7 @@ int main(void)
+ _TIF_NOHZ)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm64/Kconfig linux-4.14/arch/arm64/Kconfig
+--- linux-4.14.orig/arch/arm64/Kconfig 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/arm64/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -103,6 +103,7 @@
+ select HAVE_PERF_EVENTS
+ select HAVE_PERF_REGS
+ select HAVE_PERF_USER_STACK_DUMP
++ select HAVE_PREEMPT_LAZY
+ select HAVE_REGS_AND_STACK_ACCESS_API
+ select HAVE_RCU_TABLE_FREE
+ select HAVE_SYSCALL_TRACEPOINTS
+@@ -791,7 +792,7 @@
+
+ config XEN
+ bool "Xen guest support on ARM64"
+- depends on ARM64 && OF
++ depends on ARM64 && OF && !PREEMPT_RT_FULL
+ select SWIOTLB_XEN
+ select PARAVIRT
+ help
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm64/kernel/asm-offsets.c linux-4.14/arch/arm64/kernel/asm-offsets.c
+--- linux-4.14.orig/arch/arm64/kernel/asm-offsets.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/arm64/kernel/asm-offsets.c 2018-09-05 11:05:07.000000000 +0200
+@@ -39,6 +39,7 @@
BLANK();
- DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
- DEFINE(TI_PREEMPT, offsetof(struct thread_info, preempt_count));
-+ DEFINE(TI_PREEMPT_LAZY, offsetof(struct thread_info, preempt_lazy_count));
- DEFINE(TI_ADDR_LIMIT, offsetof(struct thread_info, addr_limit));
- DEFINE(TI_TASK, offsetof(struct thread_info, task));
- DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
-diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
-index 79b0fe24d5b7..f3c959ade308 100644
---- a/arch/arm64/kernel/entry.S
-+++ b/arch/arm64/kernel/entry.S
-@@ -428,11 +428,16 @@ ENDPROC(el1_sync)
+ DEFINE(TSK_TI_FLAGS, offsetof(struct task_struct, thread_info.flags));
+ DEFINE(TSK_TI_PREEMPT, offsetof(struct task_struct, thread_info.preempt_count));
++ DEFINE(TSK_TI_PREEMPT_LAZY, offsetof(struct task_struct, thread_info.preempt_lazy_count));
+ DEFINE(TSK_TI_ADDR_LIMIT, offsetof(struct task_struct, thread_info.addr_limit));
+ #ifdef CONFIG_ARM64_SW_TTBR0_PAN
+ DEFINE(TSK_TI_TTBR0, offsetof(struct task_struct, thread_info.ttbr0));
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm64/kernel/entry.S linux-4.14/arch/arm64/kernel/entry.S
+--- linux-4.14.orig/arch/arm64/kernel/entry.S 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/arm64/kernel/entry.S 2018-09-05 11:05:07.000000000 +0200
+@@ -637,11 +637,16 @@
#ifdef CONFIG_PREEMPT
- ldr w24, [tsk, #TI_PREEMPT] // get preempt count
+ ldr w24, [tsk, #TSK_TI_PREEMPT] // get preempt count
- cbnz w24, 1f // preempt count != 0
+ cbnz w24, 2f // preempt count != 0
- ldr x0, [tsk, #TI_FLAGS] // get flags
+ ldr x0, [tsk, #TSK_TI_FLAGS] // get flags
- tbz x0, #TIF_NEED_RESCHED, 1f // needs rescheduling?
- bl el1_preempt
+ tbnz x0, #TIF_NEED_RESCHED, 1f // needs rescheduling?
+
-+ ldr w24, [tsk, #TI_PREEMPT_LAZY] // get preempt lazy count
++ ldr w24, [tsk, #TSK_TI_PREEMPT_LAZY] // get preempt lazy count
+ cbnz w24, 2f // preempt lazy count != 0
+ tbz x0, #TIF_NEED_RESCHED_LAZY, 2f // needs rescheduling?
1:
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
bl trace_hardirqs_on
-@@ -446,6 +451,7 @@ ENDPROC(el1_irq)
+@@ -655,6 +660,7 @@
1: bl preempt_schedule_irq // irq en/disable is done inside
- ldr x0, [tsk, #TI_FLAGS] // get new tasks TI_FLAGS
+ ldr x0, [tsk, #TSK_TI_FLAGS] // get new tasks TI_FLAGS
tbnz x0, #TIF_NEED_RESCHED, 1b // needs rescheduling?
+ tbnz x0, #TIF_NEED_RESCHED_LAZY, 1b // needs rescheduling?
ret x24
#endif
-diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
-index 404dd67080b9..639dc6d12e72 100644
---- a/arch/arm64/kernel/signal.c
-+++ b/arch/arm64/kernel/signal.c
-@@ -409,7 +409,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
- */
- trace_hardirqs_off();
- do {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/arm64/kernel/signal.c linux-4.14/arch/arm64/kernel/signal.c
+--- linux-4.14.orig/arch/arm64/kernel/signal.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/arm64/kernel/signal.c 2018-09-05 11:05:07.000000000 +0200
+@@ -756,7 +756,7 @@
+ /* Check valid user FS if needed */
+ addr_limit_user_check();
+
- if (thread_flags & _TIF_NEED_RESCHED) {
+ if (thread_flags & _TIF_NEED_RESCHED_MASK) {
schedule();
} else {
local_irq_enable();
-diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
-index b3c5bde43d34..8122bf058de0 100644
---- a/arch/mips/Kconfig
-+++ b/arch/mips/Kconfig
-@@ -2514,7 +2514,7 @@ config MIPS_ASID_BITS_VARIABLE
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/blackfin/include/asm/spinlock_types.h linux-4.14/arch/blackfin/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/blackfin/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/blackfin/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -7,10 +7,6 @@
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ #include <asm/rwlock.h>
+
+ typedef struct {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/hexagon/include/asm/spinlock_types.h linux-4.14/arch/hexagon/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/hexagon/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/hexagon/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -21,10 +21,6 @@
+ #ifndef _ASM_SPINLOCK_TYPES_H
+ #define _ASM_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct {
+ volatile unsigned int lock;
+ } arch_spinlock_t;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/ia64/include/asm/spinlock_types.h linux-4.14/arch/ia64/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/ia64/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/ia64/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef _ASM_IA64_SPINLOCK_TYPES_H
+ #define _ASM_IA64_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct {
+ volatile unsigned int lock;
+ } arch_spinlock_t;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/ia64/kernel/mca.c linux-4.14/arch/ia64/kernel/mca.c
+--- linux-4.14.orig/arch/ia64/kernel/mca.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/ia64/kernel/mca.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1824,7 +1824,7 @@
+ ti->cpu = cpu;
+ p->stack = ti;
+ p->state = TASK_UNINTERRUPTIBLE;
+- cpumask_set_cpu(cpu, &p->cpus_allowed);
++ cpumask_set_cpu(cpu, &p->cpus_mask);
+ INIT_LIST_HEAD(&p->tasks);
+ p->parent = p->real_parent = p->group_leader = p;
+ INIT_LIST_HEAD(&p->children);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/Kconfig linux-4.14/arch/Kconfig
+--- linux-4.14.orig/arch/Kconfig 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -20,6 +20,7 @@
+ tristate "OProfile system profiling"
+ depends on PROFILING
+ depends on HAVE_OPROFILE
++ depends on !PREEMPT_RT_FULL
+ select RING_BUFFER
+ select RING_BUFFER_ALLOW_SWAP
+ help
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/m32r/include/asm/spinlock_types.h linux-4.14/arch/m32r/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/m32r/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/m32r/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef _ASM_M32R_SPINLOCK_TYPES_H
+ #define _ASM_M32R_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct {
+ volatile int slock;
+ } arch_spinlock_t;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/metag/include/asm/spinlock_types.h linux-4.14/arch/metag/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/metag/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/metag/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef _ASM_METAG_SPINLOCK_TYPES_H
+ #define _ASM_METAG_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct {
+ volatile unsigned int lock;
+ } arch_spinlock_t;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/mips/include/asm/switch_to.h linux-4.14/arch/mips/include/asm/switch_to.h
+--- linux-4.14.orig/arch/mips/include/asm/switch_to.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/mips/include/asm/switch_to.h 2018-09-05 11:05:07.000000000 +0200
+@@ -42,7 +42,7 @@
+ * inline to try to keep the overhead down. If we have been forced to run on
+ * a "CPU" with an FPU because of a previous high level of FP computation,
+ * but did not actually use the FPU during the most recent time-slice (CU1
+- * isn't set), we undo the restriction on cpus_allowed.
++ * isn't set), we undo the restriction on cpus_mask.
+ *
+ * We're not calling set_cpus_allowed() here, because we have no need to
+ * force prompt migration - we're already switching the current CPU to a
+@@ -57,7 +57,7 @@
+ test_ti_thread_flag(__prev_ti, TIF_FPUBOUND) && \
+ (!(KSTK_STATUS(prev) & ST0_CU1))) { \
+ clear_ti_thread_flag(__prev_ti, TIF_FPUBOUND); \
+- prev->cpus_allowed = prev->thread.user_cpus_allowed; \
++ prev->cpus_mask = prev->thread.user_cpus_allowed; \
+ } \
+ next->thread.emulated_fp = 0; \
+ } while(0)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/mips/Kconfig linux-4.14/arch/mips/Kconfig
+--- linux-4.14.orig/arch/mips/Kconfig 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/mips/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -2519,7 +2519,7 @@
#
config HIGHMEM
bool "High Memory Support"
config CPU_SUPPORTS_HIGHMEM
bool
-diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
-index 65fba4c34cd7..4b5ba68910e0 100644
---- a/arch/powerpc/Kconfig
-+++ b/arch/powerpc/Kconfig
-@@ -52,10 +52,11 @@ config LOCKDEP_SUPPORT
-
- config RWSEM_GENERIC_SPINLOCK
- bool
-+ default y if PREEMPT_RT_FULL
-
- config RWSEM_XCHGADD_ALGORITHM
- bool
-- default y
-+ default y if !PREEMPT_RT_FULL
-
- config GENERIC_LOCKBREAK
- bool
-@@ -134,6 +135,7 @@ config PPC
- select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
- select GENERIC_STRNCPY_FROM_USER
- select GENERIC_STRNLEN_USER
-+ select HAVE_PREEMPT_LAZY
- select HAVE_MOD_ARCH_SPECIFIC
- select MODULES_USE_ELF_RELA
- select CLONE_BACKWARDS
-@@ -321,7 +323,7 @@ menu "Kernel options"
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/mips/kernel/mips-mt-fpaff.c linux-4.14/arch/mips/kernel/mips-mt-fpaff.c
+--- linux-4.14.orig/arch/mips/kernel/mips-mt-fpaff.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/mips/kernel/mips-mt-fpaff.c 2018-09-05 11:05:07.000000000 +0200
+@@ -177,7 +177,7 @@
+ if (retval)
+ goto out_unlock;
- config HIGHMEM
- bool "High memory support"
-- depends on PPC32
-+ depends on PPC32 && !PREEMPT_RT_FULL
+- cpumask_or(&allowed, &p->thread.user_cpus_allowed, &p->cpus_allowed);
++ cpumask_or(&allowed, &p->thread.user_cpus_allowed, p->cpus_ptr);
+ cpumask_and(&mask, &allowed, cpu_active_mask);
- source kernel/Kconfig.hz
- source kernel/Kconfig.preempt
-diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
-index 87e4b2d8dcd4..981e501a4359 100644
---- a/arch/powerpc/include/asm/thread_info.h
-+++ b/arch/powerpc/include/asm/thread_info.h
-@@ -43,6 +43,8 @@ struct thread_info {
+ out_unlock:
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/mips/kernel/traps.c linux-4.14/arch/mips/kernel/traps.c
+--- linux-4.14.orig/arch/mips/kernel/traps.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/mips/kernel/traps.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1193,12 +1193,12 @@
+ * restricted the allowed set to exclude any CPUs with FPUs,
+ * we'll skip the procedure.
+ */
+- if (cpumask_intersects(&current->cpus_allowed, &mt_fpu_cpumask)) {
++ if (cpumask_intersects(&current->cpus_mask, &mt_fpu_cpumask)) {
+ cpumask_t tmask;
+
+ current->thread.user_cpus_allowed
+- = current->cpus_allowed;
+- cpumask_and(&tmask, &current->cpus_allowed,
++ = current->cpus_mask;
++ cpumask_and(&tmask, &current->cpus_mask,
+ &mt_fpu_cpumask);
+ set_cpus_allowed_ptr(current, &tmask);
+ set_thread_flag(TIF_FPUBOUND);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/mn10300/include/asm/spinlock_types.h linux-4.14/arch/mn10300/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/mn10300/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/mn10300/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef _ASM_SPINLOCK_TYPES_H
+ #define _ASM_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct arch_spinlock {
+ unsigned int slock;
+ } arch_spinlock_t;
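The metag and mn10300 hunks above, and the powerpc, s390, sh and tile spinlock_types.h hunks further down, all drop the same guard so that the raw arch_spinlock_t definitions can be included directly by the RT locking headers. The removed lines are in every case exactly:

	#ifndef __LINUX_SPINLOCK_TYPES_H
	# error "please don't include this file directly"
	#endif

Nothing else in those headers changes.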
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/include/asm/spinlock_types.h linux-4.14/arch/powerpc/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/powerpc/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/powerpc/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef _ASM_POWERPC_SPINLOCK_TYPES_H
+ #define _ASM_POWERPC_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct {
+ volatile unsigned int slock;
+ } arch_spinlock_t;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/include/asm/thread_info.h linux-4.14/arch/powerpc/include/asm/thread_info.h
+--- linux-4.14.orig/arch/powerpc/include/asm/thread_info.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/powerpc/include/asm/thread_info.h 2018-09-05 11:05:07.000000000 +0200
+@@ -36,6 +36,8 @@
int cpu; /* cpu we're on */
int preempt_count; /* 0 => preemptable,
<0 => BUG */
unsigned long local_flags; /* private flags for thread */
#ifdef CONFIG_LIVEPATCH
unsigned long *livepatch_sp;
-@@ -88,8 +90,7 @@ static inline struct thread_info *current_thread_info(void)
+@@ -81,8 +83,7 @@
#define TIF_SYSCALL_TRACE 0 /* syscall trace active */
#define TIF_SIGPENDING 1 /* signal pending */
#define TIF_NEED_RESCHED 2 /* rescheduling necessary */
+#define TIF_NEED_RESCHED_LAZY 3 /* lazy rescheduling necessary */
#define TIF_32BIT 4 /* 32 bit binary */
#define TIF_RESTORE_TM 5 /* need to restore TM FP/VEC/VSX */
- #define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */
-@@ -107,6 +108,8 @@ static inline struct thread_info *current_thread_info(void)
+ #define TIF_PATCH_PENDING 6 /* pending live patching update */
+@@ -101,6 +102,8 @@
#if defined(CONFIG_PPC64)
#define TIF_ELF2ABI 18 /* function descriptors must die! */
#endif
/* as above, but as bit values */
#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
-@@ -125,14 +128,16 @@ static inline struct thread_info *current_thread_info(void)
+@@ -120,14 +123,16 @@
#define _TIF_SYSCALL_TRACEPOINT (1<<TIF_SYSCALL_TRACEPOINT)
#define _TIF_EMULATE_STACK_STORE (1<<TIF_EMULATE_STACK_STORE)
#define _TIF_NOHZ (1<<TIF_NOHZ)
#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NEED_RESCHED | \
_TIF_NOTIFY_RESUME | _TIF_UPROBE | \
-- _TIF_RESTORE_TM)
-+ _TIF_RESTORE_TM | _TIF_NEED_RESCHED_LAZY)
+- _TIF_RESTORE_TM | _TIF_PATCH_PENDING)
++ _TIF_RESTORE_TM | _TIF_PATCH_PENDING | _TIF_NEED_RESCHED_LAZY)
#define _TIF_PERSYSCALL_MASK (_TIF_RESTOREALL|_TIF_NOERROR)
+#define _TIF_NEED_RESCHED_MASK (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)
/* Bits in local_flags */
/* Don't move TLF_NAPPING without adjusting the code in entry_32.S */
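The thread_info.h hunk above adds the second reschedule bit and widens the user-return mask. A hedged sketch of the intended semantics (the names are the ones defined in the hunk; the matching preempt_lazy_count field is exported to assembly as TI_PREEMPT_LAZY by the asm-offsets.c hunk below):

	/* Illustrative sketch, not part of the patch. */
	#define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)		/* bit 2 */
	#define _TIF_NEED_RESCHED_LAZY	(1 << TIF_NEED_RESCHED_LAZY)	/* bit 3 */
	#define _TIF_NEED_RESCHED_MASK	(_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)

	static inline bool resched_wanted(unsigned long ti_flags)
	{
		/* on return to user space both bits force a reschedule; in
		 * kernel mode a lazy request is honoured only once
		 * preempt_lazy_count has dropped back to zero */
		return ti_flags & _TIF_NEED_RESCHED_MASK;
	}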
-diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
-index c833d88c423d..96e9fbc3f684 100644
---- a/arch/powerpc/kernel/asm-offsets.c
-+++ b/arch/powerpc/kernel/asm-offsets.c
-@@ -156,6 +156,7 @@ int main(void)
- DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
- DEFINE(TI_LOCAL_FLAGS, offsetof(struct thread_info, local_flags));
- DEFINE(TI_PREEMPT, offsetof(struct thread_info, preempt_count));
-+ DEFINE(TI_PREEMPT_LAZY, offsetof(struct thread_info, preempt_lazy_count));
- DEFINE(TI_TASK, offsetof(struct thread_info, task));
- DEFINE(TI_CPU, offsetof(struct thread_info, cpu));
-
-diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
-index 3841d749a430..6dbaeff192b9 100644
---- a/arch/powerpc/kernel/entry_32.S
-+++ b/arch/powerpc/kernel/entry_32.S
-@@ -835,7 +835,14 @@ user_exc_return: /* r10 contains MSR_KERNEL here */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/Kconfig linux-4.14/arch/powerpc/Kconfig
+--- linux-4.14.orig/arch/powerpc/Kconfig 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/powerpc/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -111,10 +111,11 @@
+
+ config RWSEM_GENERIC_SPINLOCK
+ bool
++ default y if PREEMPT_RT_FULL
+
+ config RWSEM_XCHGADD_ALGORITHM
+ bool
+- default y
++ default y if !PREEMPT_RT_FULL
+
+ config GENERIC_LOCKBREAK
+ bool
+@@ -215,6 +216,7 @@
+ select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
+ select HAVE_PERF_REGS
+ select HAVE_PERF_USER_STACK_DUMP
++ select HAVE_PREEMPT_LAZY
+ select HAVE_RCU_TABLE_FREE if SMP
+ select HAVE_REGS_AND_STACK_ACCESS_API
+ select HAVE_SYSCALL_TRACEPOINTS
+@@ -390,7 +392,7 @@
+
+ config HIGHMEM
+ bool "High memory support"
+- depends on PPC32
++ depends on PPC32 && !PREEMPT_RT_FULL
+
+ source kernel/Kconfig.hz
+ source kernel/Kconfig.preempt
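The RWSEM_GENERIC_SPINLOCK / RWSEM_XCHGADD_ALGORITHM flip above (repeated for x86 later in the series) selects the generic, spinlock-backed rw_semaphore whenever PREEMPT_RT_FULL is enabled, since the xadd fast path does not cooperate with sleeping locks. A rough sketch of the generic variant, for orientation only (the real definition lives in the rwsem-spinlock headers):

	/* Illustrative sketch, not part of the patch. */
	struct rw_semaphore_generic_sketch {
		int			count;		/* > 0: readers, -1: one writer */
		spinlock_t		wait_lock;	/* protects count and wait_list */
		struct list_head	wait_list;	/* blocked readers and writers */
	};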
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/kernel/asm-offsets.c linux-4.14/arch/powerpc/kernel/asm-offsets.c
+--- linux-4.14.orig/arch/powerpc/kernel/asm-offsets.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/powerpc/kernel/asm-offsets.c 2018-09-05 11:05:07.000000000 +0200
+@@ -156,6 +156,7 @@
+ OFFSET(TI_FLAGS, thread_info, flags);
+ OFFSET(TI_LOCAL_FLAGS, thread_info, local_flags);
+ OFFSET(TI_PREEMPT, thread_info, preempt_count);
++ OFFSET(TI_PREEMPT_LAZY, thread_info, preempt_lazy_count);
+ OFFSET(TI_TASK, thread_info, task);
+ OFFSET(TI_CPU, thread_info, cpu);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/kernel/entry_32.S linux-4.14/arch/powerpc/kernel/entry_32.S
+--- linux-4.14.orig/arch/powerpc/kernel/entry_32.S 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/powerpc/kernel/entry_32.S 2018-09-05 11:05:07.000000000 +0200
+@@ -866,7 +866,14 @@
cmpwi 0,r0,0 /* if non-zero, just restore regs and return */
bne restore
andi. r8,r8,_TIF_NEED_RESCHED
lwz r3,_MSR(r1)
andi. r0,r3,MSR_EE /* interrupts off? */
beq restore /* don't schedule if so */
-@@ -846,11 +853,11 @@ user_exc_return: /* r10 contains MSR_KERNEL here */
+@@ -877,11 +884,11 @@
*/
bl trace_hardirqs_off
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
/* And now, to properly rebalance the above, we tell lockdep they
* are being turned back on, which will happen when we return
-@@ -1171,7 +1178,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
+@@ -1204,7 +1211,7 @@
#endif /* !(CONFIG_4xx || CONFIG_BOOKE) */
do_work: /* r10 contains MSR_KERNEL here */
beq do_user_signal
do_resched: /* r10 contains MSR_KERNEL here */
-@@ -1192,7 +1199,7 @@ do_resched: /* r10 contains MSR_KERNEL here */
+@@ -1225,7 +1232,7 @@
MTMSRD(r10) /* disable interrupts */
CURRENT_THREAD_INFO(r9, r1)
lwz r9,TI_FLAGS(r9)
bne- do_resched
andi. r0,r9,_TIF_USER_WORK_MASK
beq restore_user
-diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
-index 6432d4bf08c8..5509a26f1070 100644
---- a/arch/powerpc/kernel/entry_64.S
-+++ b/arch/powerpc/kernel/entry_64.S
-@@ -656,7 +656,7 @@ _GLOBAL(ret_from_except_lite)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/kernel/entry_64.S linux-4.14/arch/powerpc/kernel/entry_64.S
+--- linux-4.14.orig/arch/powerpc/kernel/entry_64.S 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/powerpc/kernel/entry_64.S 2018-09-05 11:05:07.000000000 +0200
+@@ -690,7 +690,7 @@
bl restore_math
b restore
#endif
beq 2f
bl restore_interrupts
SCHEDULE_USER
-@@ -718,10 +718,18 @@ _GLOBAL(ret_from_except_lite)
+@@ -752,10 +752,18 @@
#ifdef CONFIG_PREEMPT
/* Check if we need to preempt */
-- andi. r0,r4,_TIF_NEED_RESCHED
-- beq+ restore
-- /* Check that preempt_count() == 0 and interrupts are enabled */
- lwz r8,TI_PREEMPT(r9)
++ lwz r8,TI_PREEMPT(r9)
+ cmpwi 0,r8,0 /* if non-zero, just restore regs and return */
+ bne restore
-+ andi. r0,r4,_TIF_NEED_RESCHED
+ andi. r0,r4,_TIF_NEED_RESCHED
+ bne+ check_count
+
+ andi. r0,r4,_TIF_NEED_RESCHED_LAZY
-+ beq+ restore
+ beq+ restore
+ lwz r8,TI_PREEMPT_LAZY(r9)
+
-+ /* Check that preempt_count() == 0 and interrupts are enabled */
+ /* Check that preempt_count() == 0 and interrupts are enabled */
+- lwz r8,TI_PREEMPT(r9)
+check_count:
cmpwi cr1,r8,0
ld r0,SOFTE(r1)
cmpdi r0,0
-@@ -738,7 +746,7 @@ _GLOBAL(ret_from_except_lite)
+@@ -772,7 +780,7 @@
/* Re-test flags and eventually loop */
CURRENT_THREAD_INFO(r9, r1)
ld r4,TI_FLAGS(r9)
bne 1b
/*
-diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
-index 3c05c311e35e..f83f6ac1274d 100644
---- a/arch/powerpc/kernel/irq.c
-+++ b/arch/powerpc/kernel/irq.c
-@@ -638,6 +638,7 @@ void irq_ctx_init(void)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/kernel/irq.c linux-4.14/arch/powerpc/kernel/irq.c
+--- linux-4.14.orig/arch/powerpc/kernel/irq.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/powerpc/kernel/irq.c 2018-09-05 11:05:07.000000000 +0200
+@@ -693,6 +693,7 @@
}
}
void do_softirq_own_stack(void)
{
struct thread_info *curtp, *irqtp;
-@@ -655,6 +656,7 @@ void do_softirq_own_stack(void)
+@@ -710,6 +711,7 @@
if (irqtp->flags)
set_bits(irqtp->flags, &curtp->flags);
}
irq_hw_number_t virq_to_hw(unsigned int virq)
{
-diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
-index 030d72df5dd5..b471a709e100 100644
---- a/arch/powerpc/kernel/misc_32.S
-+++ b/arch/powerpc/kernel/misc_32.S
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/kernel/misc_32.S linux-4.14/arch/powerpc/kernel/misc_32.S
+--- linux-4.14.orig/arch/powerpc/kernel/misc_32.S 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/powerpc/kernel/misc_32.S 2018-09-05 11:05:07.000000000 +0200
@@ -41,6 +41,7 @@
* We store the saved ksp_limit in the unused part
* of the STACK_FRAME_OVERHEAD
_GLOBAL(call_do_softirq)
mflr r0
stw r0,4(r1)
-@@ -57,6 +58,7 @@ _GLOBAL(call_do_softirq)
+@@ -57,6 +58,7 @@
stw r10,THREAD+KSP_LIMIT(r2)
mtlr r0
blr
/*
* void call_do_irq(struct pt_regs *regs, struct thread_info *irqtp);
-diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
-index 4f178671f230..39e7d84a3492 100644
---- a/arch/powerpc/kernel/misc_64.S
-+++ b/arch/powerpc/kernel/misc_64.S
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/kernel/misc_64.S linux-4.14/arch/powerpc/kernel/misc_64.S
+--- linux-4.14.orig/arch/powerpc/kernel/misc_64.S 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/powerpc/kernel/misc_64.S 2018-09-05 11:05:07.000000000 +0200
@@ -31,6 +31,7 @@
.text
_GLOBAL(call_do_softirq)
mflr r0
std r0,16(r1)
-@@ -41,6 +42,7 @@ _GLOBAL(call_do_softirq)
+@@ -41,6 +42,7 @@
ld r0,16(r1)
mtlr r0
blr
_GLOBAL(call_do_irq)
mflr r0
-diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig
-index 029be26b5a17..9528089ea142 100644
---- a/arch/powerpc/kvm/Kconfig
-+++ b/arch/powerpc/kvm/Kconfig
-@@ -175,6 +175,7 @@ config KVM_E500MC
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/kvm/Kconfig linux-4.14/arch/powerpc/kvm/Kconfig
+--- linux-4.14.orig/arch/powerpc/kvm/Kconfig 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/powerpc/kvm/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -177,6 +177,7 @@
config KVM_MPIC
bool "KVM in-kernel MPIC emulation"
depends on KVM && E500
select HAVE_KVM_IRQCHIP
select HAVE_KVM_IRQFD
select HAVE_KVM_IRQ_ROUTING
-diff --git a/arch/powerpc/platforms/ps3/device-init.c b/arch/powerpc/platforms/ps3/device-init.c
-index e48462447ff0..2670cee66064 100644
---- a/arch/powerpc/platforms/ps3/device-init.c
-+++ b/arch/powerpc/platforms/ps3/device-init.c
-@@ -752,7 +752,7 @@ static int ps3_notification_read_write(struct ps3_notification_device *dev,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/platforms/cell/spufs/sched.c linux-4.14/arch/powerpc/platforms/cell/spufs/sched.c
+--- linux-4.14.orig/arch/powerpc/platforms/cell/spufs/sched.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/powerpc/platforms/cell/spufs/sched.c 2018-09-05 11:05:07.000000000 +0200
+@@ -141,7 +141,7 @@
+ * runqueue. The context will be rescheduled on the proper node
+ * if it is timesliced or preempted.
+ */
+- cpumask_copy(&ctx->cpus_allowed, &current->cpus_allowed);
++ cpumask_copy(&ctx->cpus_allowed, current->cpus_ptr);
+
+ /* Save the current cpu id for spu interrupt routing. */
+ ctx->last_ran = raw_smp_processor_id();
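The spufs hunk above only reads the caller's affinity, so it takes the cpus_ptr view; modifications, as in the mips traps.c hunk earlier, still go through set_cpus_allowed_ptr(). A hedged sketch of that read/write split:

	/* Illustrative sketch, not part of the patch. */
	static void narrow_affinity_sketch(const struct cpumask *allowed_subset)
	{
		cpumask_t tmask;

		if (!cpumask_intersects(current->cpus_ptr, allowed_subset))
			return;				/* nothing in common, leave it alone */

		cpumask_and(&tmask, current->cpus_ptr, allowed_subset);
		set_cpus_allowed_ptr(current, &tmask);	/* writers still use the setter */
	}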
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/powerpc/platforms/ps3/device-init.c linux-4.14/arch/powerpc/platforms/ps3/device-init.c
+--- linux-4.14.orig/arch/powerpc/platforms/ps3/device-init.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/powerpc/platforms/ps3/device-init.c 2018-09-05 11:05:07.000000000 +0200
+@@ -752,7 +752,7 @@
}
pr_debug("%s:%u: notification %s issued\n", __func__, __LINE__, op);
dev->done.done || kthread_should_stop());
if (kthread_should_stop())
res = -EINTR;
-diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
-index 6c0378c0b8b5..abd58b4dff97 100644
---- a/arch/sh/kernel/irq.c
-+++ b/arch/sh/kernel/irq.c
-@@ -147,6 +147,7 @@ void irq_ctx_exit(int cpu)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/s390/include/asm/spinlock_types.h linux-4.14/arch/s390/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/s390/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/s390/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct {
+ int lock;
+ } __attribute__ ((aligned (4))) arch_spinlock_t;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/sh/include/asm/spinlock_types.h linux-4.14/arch/sh/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/sh/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/sh/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef __ASM_SH_SPINLOCK_TYPES_H
+ #define __ASM_SH_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct {
+ volatile unsigned int lock;
+ } arch_spinlock_t;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/sh/kernel/irq.c linux-4.14/arch/sh/kernel/irq.c
+--- linux-4.14.orig/arch/sh/kernel/irq.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/sh/kernel/irq.c 2018-09-05 11:05:07.000000000 +0200
+@@ -148,6 +148,7 @@
hardirq_ctx[cpu] = NULL;
}
void do_softirq_own_stack(void)
{
struct thread_info *curctx;
-@@ -174,6 +175,7 @@ void do_softirq_own_stack(void)
+@@ -175,6 +176,7 @@
"r5", "r6", "r7", "r8", "r9", "r15", "t", "pr"
);
}
#else
static inline void handle_one_irq(unsigned int irq)
{
-diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
-index 165ecdd24d22..b68a464a22be 100644
---- a/arch/sparc/Kconfig
-+++ b/arch/sparc/Kconfig
-@@ -194,12 +194,10 @@ config NR_CPUS
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/sparc/Kconfig linux-4.14/arch/sparc/Kconfig
+--- linux-4.14.orig/arch/sparc/Kconfig 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/sparc/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -206,12 +206,10 @@
source kernel/Kconfig.hz
config RWSEM_GENERIC_SPINLOCK
config GENERIC_HWEIGHT
bool
-diff --git a/arch/sparc/kernel/irq_64.c b/arch/sparc/kernel/irq_64.c
-index 34a7930b76ef..773740521008 100644
---- a/arch/sparc/kernel/irq_64.c
-+++ b/arch/sparc/kernel/irq_64.c
-@@ -854,6 +854,7 @@ void __irq_entry handler_irq(int pil, struct pt_regs *regs)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/sparc/kernel/irq_64.c linux-4.14/arch/sparc/kernel/irq_64.c
+--- linux-4.14.orig/arch/sparc/kernel/irq_64.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/sparc/kernel/irq_64.c 2018-09-05 11:05:07.000000000 +0200
+@@ -855,6 +855,7 @@
set_irq_regs(old_regs);
}
void do_softirq_own_stack(void)
{
void *orig_sp, *sp = softirq_stack[smp_processor_id()];
-@@ -868,6 +869,7 @@ void do_softirq_own_stack(void)
+@@ -869,6 +870,7 @@
__asm__ __volatile__("mov %0, %%sp"
: : "r" (orig_sp));
}
#ifdef CONFIG_HOTPLUG_CPU
void fixup_irqs(void)
-diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
-index bada636d1065..f8a995c90c01 100644
---- a/arch/x86/Kconfig
-+++ b/arch/x86/Kconfig
-@@ -17,6 +17,7 @@ config X86_64
- ### Arch settings
- config X86
- def_bool y
-+ select HAVE_PREEMPT_LAZY
- select ACPI_LEGACY_TABLES_LOOKUP if ACPI
- select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
- select ANON_INODES
-@@ -232,8 +233,11 @@ config ARCH_MAY_HAVE_PC_FDC
- def_bool y
- depends on ISA_DMA_API
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/tile/include/asm/setup.h linux-4.14/arch/tile/include/asm/setup.h
+--- linux-4.14.orig/arch/tile/include/asm/setup.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/tile/include/asm/setup.h 2018-09-05 11:05:07.000000000 +0200
+@@ -49,7 +49,7 @@
+
+ /* Hook hardwall code into changes in affinity. */
+ #define arch_set_cpus_allowed(p, new_mask) do { \
+- if (!cpumask_equal(&p->cpus_allowed, new_mask)) \
++ if (!cpumask_equal(p->cpus_ptr, new_mask)) \
+ hardwall_deactivate_all(p); \
+ } while (0)
+ #endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/tile/include/asm/spinlock_types.h linux-4.14/arch/tile/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/tile/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/tile/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -15,10 +15,6 @@
+ #ifndef _ASM_TILE_SPINLOCK_TYPES_H
+ #define _ASM_TILE_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ #ifdef __tilegx__
+
+ /* Low 15 bits are "next"; high 15 bits are "current". */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/tile/kernel/hardwall.c linux-4.14/arch/tile/kernel/hardwall.c
+--- linux-4.14.orig/arch/tile/kernel/hardwall.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/tile/kernel/hardwall.c 2018-09-05 11:05:07.000000000 +0200
+@@ -590,12 +590,12 @@
+ * Get our affinity; if we're not bound to this tile uniquely,
+ * we can't access the network registers.
+ */
+- if (cpumask_weight(&p->cpus_allowed) != 1)
++ if (p->nr_cpus_allowed != 1)
+ return -EPERM;
-+config RWSEM_GENERIC_SPINLOCK
-+ def_bool PREEMPT_RT_FULL
-+
- config RWSEM_XCHGADD_ALGORITHM
-- def_bool y
-+ def_bool !RWSEM_GENERIC_SPINLOCK && !PREEMPT_RT_FULL
+ /* Make sure we are bound to a cpu assigned to this resource. */
+ cpu = smp_processor_id();
+- BUG_ON(cpumask_first(&p->cpus_allowed) != cpu);
++ BUG_ON(cpumask_first(p->cpus_ptr) != cpu);
+ if (!cpumask_test_cpu(cpu, &info->cpumask))
+ return -EINVAL;
- config GENERIC_CALIBRATE_DELAY
- def_bool y
-@@ -897,7 +901,7 @@ config IOMMU_HELPER
- config MAXSMP
- bool "Enable Maximum number of SMP Processors and NUMA Nodes"
- depends on X86_64 && SMP && DEBUG_KERNEL
-- select CPUMASK_OFFSTACK
-+ select CPUMASK_OFFSTACK if !PREEMPT_RT_FULL
- ---help---
- Enable maximum number of CPUS and NUMA Nodes for this architecture.
- If unsure, say N.
-diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
-index aa8b0672f87a..2429414bfc71 100644
---- a/arch/x86/crypto/aesni-intel_glue.c
-+++ b/arch/x86/crypto/aesni-intel_glue.c
-@@ -372,14 +372,14 @@ static int ecb_encrypt(struct blkcipher_desc *desc,
- err = blkcipher_walk_virt(desc, &walk);
- desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+@@ -621,17 +621,17 @@
+ * Deactivate a task's hardwall. Must hold lock for hardwall_type.
+ * This method may be called from exit_thread(), so we don't want to
+ * rely on too many fields of struct task_struct still being valid.
+- * We assume the cpus_allowed, pid, and comm fields are still valid.
++ * We assume the nr_cpus_allowed, pid, and comm fields are still valid.
+ */
+ static void _hardwall_deactivate(struct hardwall_type *hwt,
+ struct task_struct *task)
+ {
+ struct thread_struct *ts = &task->thread;
+
+- if (cpumask_weight(&task->cpus_allowed) != 1) {
++ if (task->nr_cpus_allowed != 1) {
+ pr_err("pid %d (%s) releasing %s hardwall with an affinity mask containing %d cpus!\n",
+ task->pid, task->comm, hwt->name,
+- cpumask_weight(&task->cpus_allowed));
++ task->nr_cpus_allowed);
+ BUG();
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/aesni-intel_glue.c linux-4.14/arch/x86/crypto/aesni-intel_glue.c
+--- linux-4.14.orig/arch/x86/crypto/aesni-intel_glue.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/crypto/aesni-intel_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -387,14 +387,14 @@
+
+ err = skcipher_walk_virt(&walk, req, true);
- kernel_fpu_begin();
while ((nbytes = walk.nbytes)) {
+ kernel_fpu_begin();
aesni_ecb_enc(ctx, walk.dst.virt.addr, walk.src.virt.addr,
-- nbytes & AES_BLOCK_MASK);
-+ nbytes & AES_BLOCK_MASK);
+ nbytes & AES_BLOCK_MASK);
+ kernel_fpu_end();
nbytes &= AES_BLOCK_SIZE - 1;
- err = blkcipher_walk_done(desc, &walk, nbytes);
+ err = skcipher_walk_done(&walk, nbytes);
}
- kernel_fpu_end();
return err;
}
-@@ -396,14 +396,14 @@ static int ecb_decrypt(struct blkcipher_desc *desc,
- err = blkcipher_walk_virt(desc, &walk);
- desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+@@ -409,14 +409,14 @@
+
+ err = skcipher_walk_virt(&walk, req, true);
- kernel_fpu_begin();
while ((nbytes = walk.nbytes)) {
nbytes & AES_BLOCK_MASK);
+ kernel_fpu_end();
nbytes &= AES_BLOCK_SIZE - 1;
- err = blkcipher_walk_done(desc, &walk, nbytes);
+ err = skcipher_walk_done(&walk, nbytes);
}
- kernel_fpu_end();
return err;
}
-@@ -420,14 +420,14 @@ static int cbc_encrypt(struct blkcipher_desc *desc,
- err = blkcipher_walk_virt(desc, &walk);
- desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+@@ -431,14 +431,14 @@
+
+ err = skcipher_walk_virt(&walk, req, true);
- kernel_fpu_begin();
while ((nbytes = walk.nbytes)) {
nbytes & AES_BLOCK_MASK, walk.iv);
+ kernel_fpu_end();
nbytes &= AES_BLOCK_SIZE - 1;
- err = blkcipher_walk_done(desc, &walk, nbytes);
+ err = skcipher_walk_done(&walk, nbytes);
}
- kernel_fpu_end();
return err;
}
-@@ -444,14 +444,14 @@ static int cbc_decrypt(struct blkcipher_desc *desc,
- err = blkcipher_walk_virt(desc, &walk);
- desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+@@ -453,14 +453,14 @@
+
+ err = skcipher_walk_virt(&walk, req, true);
- kernel_fpu_begin();
while ((nbytes = walk.nbytes)) {
nbytes & AES_BLOCK_MASK, walk.iv);
+ kernel_fpu_end();
nbytes &= AES_BLOCK_SIZE - 1;
- err = blkcipher_walk_done(desc, &walk, nbytes);
+ err = skcipher_walk_done(&walk, nbytes);
}
- kernel_fpu_end();
return err;
}
-@@ -503,18 +503,20 @@ static int ctr_crypt(struct blkcipher_desc *desc,
- err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
- desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+@@ -510,18 +510,20 @@
+
+ err = skcipher_walk_virt(&walk, req, true);
- kernel_fpu_begin();
while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
nbytes & AES_BLOCK_MASK, walk.iv);
+ kernel_fpu_end();
nbytes &= AES_BLOCK_SIZE - 1;
- err = blkcipher_walk_done(desc, &walk, nbytes);
+ err = skcipher_walk_done(&walk, nbytes);
}
if (walk.nbytes) {
+ kernel_fpu_begin();
ctr_crypt_final(ctx, &walk);
+ kernel_fpu_end();
- err = blkcipher_walk_done(desc, &walk, 0);
+ err = skcipher_walk_done(&walk, 0);
}
- kernel_fpu_end();
return err;
}
-diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c
-index 8648158f3916..d7699130ee36 100644
---- a/arch/x86/crypto/cast5_avx_glue.c
-+++ b/arch/x86/crypto/cast5_avx_glue.c
-@@ -59,7 +59,7 @@ static inline void cast5_fpu_end(bool fpu_enabled)
- static int ecb_crypt(struct blkcipher_desc *desc, struct blkcipher_walk *walk,
- bool enc)
- {
-- bool fpu_enabled = false;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/camellia_aesni_avx2_glue.c linux-4.14/arch/x86/crypto/camellia_aesni_avx2_glue.c
+--- linux-4.14.orig/arch/x86/crypto/camellia_aesni_avx2_glue.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/crypto/camellia_aesni_avx2_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -206,6 +206,20 @@
+ bool fpu_enabled;
+ };
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void camellia_fpu_end_rt(struct crypt_priv *ctx)
++{
++ bool fpu_enabled = ctx->fpu_enabled;
++
++ if (!fpu_enabled)
++ return;
++ camellia_fpu_end(fpu_enabled);
++ ctx->fpu_enabled = false;
++}
++#else
++static void camellia_fpu_end_rt(struct crypt_priv *ctx) { }
++#endif
++
+ static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+ {
+ const unsigned int bsize = CAMELLIA_BLOCK_SIZE;
+@@ -221,16 +235,19 @@
+ }
+
+ if (nbytes >= CAMELLIA_AESNI_PARALLEL_BLOCKS * bsize) {
++ kernel_fpu_resched();
+ camellia_ecb_enc_16way(ctx->ctx, srcdst, srcdst);
+ srcdst += bsize * CAMELLIA_AESNI_PARALLEL_BLOCKS;
+ nbytes -= bsize * CAMELLIA_AESNI_PARALLEL_BLOCKS;
+ }
+
+ while (nbytes >= CAMELLIA_PARALLEL_BLOCKS * bsize) {
++ kernel_fpu_resched();
+ camellia_enc_blk_2way(ctx->ctx, srcdst, srcdst);
+ srcdst += bsize * CAMELLIA_PARALLEL_BLOCKS;
+ nbytes -= bsize * CAMELLIA_PARALLEL_BLOCKS;
+ }
++ camellia_fpu_end_rt(ctx);
+
+ for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+ camellia_enc_blk(ctx->ctx, srcdst, srcdst);
+@@ -251,16 +268,19 @@
+ }
+
+ if (nbytes >= CAMELLIA_AESNI_PARALLEL_BLOCKS * bsize) {
++ kernel_fpu_resched();
+ camellia_ecb_dec_16way(ctx->ctx, srcdst, srcdst);
+ srcdst += bsize * CAMELLIA_AESNI_PARALLEL_BLOCKS;
+ nbytes -= bsize * CAMELLIA_AESNI_PARALLEL_BLOCKS;
+ }
+
+ while (nbytes >= CAMELLIA_PARALLEL_BLOCKS * bsize) {
++ kernel_fpu_resched();
+ camellia_dec_blk_2way(ctx->ctx, srcdst, srcdst);
+ srcdst += bsize * CAMELLIA_PARALLEL_BLOCKS;
+ nbytes -= bsize * CAMELLIA_PARALLEL_BLOCKS;
+ }
++ camellia_fpu_end_rt(ctx);
+
+ for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+ camellia_dec_blk(ctx->ctx, srcdst, srcdst);
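Every crypto glue change in the series enforces the same RT rule: keep the kernel_fpu_begin()/kernel_fpu_end() region short. The aesni hunks above move the begin/end pair inside the walk loop; the camellia hunks here (and the serpent/twofish ones below) call kernel_fpu_resched() between batches and close the FPU section with a *_fpu_end_rt() helper before the scalar tail. A hedged sketch of the transformed loop shape (the helper names are the ones the patch introduces; the batch call is a placeholder):

	/* Illustrative sketch, not part of the patch. */
	static void ecb_callback_sketch(struct crypt_priv *ctx, u8 *srcdst,
					unsigned int nbytes, unsigned int bsize,
					unsigned int parallel)
	{
		while (nbytes >= parallel * bsize) {
			kernel_fpu_resched();	/* RT: allow preemption between batches */
			/* ...process one batch of 'parallel' blocks with SIMD... */
			srcdst += parallel * bsize;
			nbytes -= parallel * bsize;
		}
		camellia_fpu_end_rt(ctx);	/* leave the FPU section before the tail */
		/* the remaining blocks go through the scalar implementation */
	}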
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/camellia_aesni_avx_glue.c linux-4.14/arch/x86/crypto/camellia_aesni_avx_glue.c
+--- linux-4.14.orig/arch/x86/crypto/camellia_aesni_avx_glue.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/crypto/camellia_aesni_avx_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -210,6 +210,21 @@
+ bool fpu_enabled;
+ };
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void camellia_fpu_end_rt(struct crypt_priv *ctx)
++{
++ bool fpu_enabled = ctx->fpu_enabled;
++
++ if (!fpu_enabled)
++ return;
++ camellia_fpu_end(fpu_enabled);
++ ctx->fpu_enabled = false;
++}
++
++#else
++static void camellia_fpu_end_rt(struct crypt_priv *ctx) { }
++#endif
++
+ static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+ {
+ const unsigned int bsize = CAMELLIA_BLOCK_SIZE;
+@@ -225,10 +240,12 @@
+ }
+
+ while (nbytes >= CAMELLIA_PARALLEL_BLOCKS * bsize) {
++ kernel_fpu_resched();
+ camellia_enc_blk_2way(ctx->ctx, srcdst, srcdst);
+ srcdst += bsize * CAMELLIA_PARALLEL_BLOCKS;
+ nbytes -= bsize * CAMELLIA_PARALLEL_BLOCKS;
+ }
++ camellia_fpu_end_rt(ctx);
+
+ for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+ camellia_enc_blk(ctx->ctx, srcdst, srcdst);
+@@ -249,10 +266,12 @@
+ }
+
+ while (nbytes >= CAMELLIA_PARALLEL_BLOCKS * bsize) {
++ kernel_fpu_resched();
+ camellia_dec_blk_2way(ctx->ctx, srcdst, srcdst);
+ srcdst += bsize * CAMELLIA_PARALLEL_BLOCKS;
+ nbytes -= bsize * CAMELLIA_PARALLEL_BLOCKS;
+ }
++ camellia_fpu_end_rt(ctx);
+
+ for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+ camellia_dec_blk(ctx->ctx, srcdst, srcdst);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/cast5_avx_glue.c linux-4.14/arch/x86/crypto/cast5_avx_glue.c
+--- linux-4.14.orig/arch/x86/crypto/cast5_avx_glue.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/crypto/cast5_avx_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -59,7 +59,7 @@
+ static int ecb_crypt(struct blkcipher_desc *desc, struct blkcipher_walk *walk,
+ bool enc)
+ {
+- bool fpu_enabled = false;
+ bool fpu_enabled;
struct cast5_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
const unsigned int bsize = CAST5_BLOCK_SIZE;
unsigned int nbytes;
-@@ -75,7 +75,7 @@ static int ecb_crypt(struct blkcipher_desc *desc, struct blkcipher_walk *walk,
+@@ -73,7 +73,7 @@
u8 *wsrc = walk->src.virt.addr;
u8 *wdst = walk->dst.virt.addr;
/* Process multi-block batch */
if (nbytes >= bsize * CAST5_PARALLEL_BLOCKS) {
-@@ -103,10 +103,9 @@ static int ecb_crypt(struct blkcipher_desc *desc, struct blkcipher_walk *walk,
+@@ -102,10 +102,9 @@
} while (nbytes >= bsize);
done:
return err;
}
-@@ -227,7 +226,7 @@ static unsigned int __cbc_decrypt(struct blkcipher_desc *desc,
+@@ -226,7 +225,7 @@
static int cbc_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
struct scatterlist *src, unsigned int nbytes)
{
struct blkcipher_walk walk;
int err;
-@@ -236,12 +235,11 @@ static int cbc_decrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+@@ -235,12 +234,11 @@
desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
while ((nbytes = walk.nbytes)) {
return err;
}
-@@ -311,7 +309,7 @@ static unsigned int __ctr_crypt(struct blkcipher_desc *desc,
+@@ -309,7 +307,7 @@
static int ctr_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
struct scatterlist *src, unsigned int nbytes)
{
struct blkcipher_walk walk;
int err;
-@@ -320,13 +318,12 @@ static int ctr_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
+@@ -318,13 +316,12 @@
desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
while ((nbytes = walk.nbytes) >= CAST5_BLOCK_SIZE) {
if (walk.nbytes) {
ctr_crypt_final(desc, &walk);
err = blkcipher_walk_done(desc, &walk, 0);
-diff --git a/arch/x86/crypto/glue_helper.c b/arch/x86/crypto/glue_helper.c
-index 6a85598931b5..3a506ce7ed93 100644
---- a/arch/x86/crypto/glue_helper.c
-+++ b/arch/x86/crypto/glue_helper.c
-@@ -39,7 +39,7 @@ static int __glue_ecb_crypt_128bit(const struct common_glue_ctx *gctx,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/cast6_avx_glue.c linux-4.14/arch/x86/crypto/cast6_avx_glue.c
+--- linux-4.14.orig/arch/x86/crypto/cast6_avx_glue.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/crypto/cast6_avx_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -205,19 +205,33 @@
+ bool fpu_enabled;
+ };
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void cast6_fpu_end_rt(struct crypt_priv *ctx)
++{
++ bool fpu_enabled = ctx->fpu_enabled;
++
++ if (!fpu_enabled)
++ return;
++ cast6_fpu_end(fpu_enabled);
++ ctx->fpu_enabled = false;
++}
++
++#else
++static void cast6_fpu_end_rt(struct crypt_priv *ctx) { }
++#endif
++
+ static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+ {
+ const unsigned int bsize = CAST6_BLOCK_SIZE;
+ struct crypt_priv *ctx = priv;
+ int i;
+
+- ctx->fpu_enabled = cast6_fpu_begin(ctx->fpu_enabled, nbytes);
+-
+ if (nbytes == bsize * CAST6_PARALLEL_BLOCKS) {
++ ctx->fpu_enabled = cast6_fpu_begin(ctx->fpu_enabled, nbytes);
+ cast6_ecb_enc_8way(ctx->ctx, srcdst, srcdst);
++ cast6_fpu_end_rt(ctx);
+ return;
+ }
+-
+ for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+ __cast6_encrypt(ctx->ctx, srcdst, srcdst);
+ }
+@@ -228,10 +242,10 @@
+ struct crypt_priv *ctx = priv;
+ int i;
+
+- ctx->fpu_enabled = cast6_fpu_begin(ctx->fpu_enabled, nbytes);
+-
+ if (nbytes == bsize * CAST6_PARALLEL_BLOCKS) {
++ ctx->fpu_enabled = cast6_fpu_begin(ctx->fpu_enabled, nbytes);
+ cast6_ecb_dec_8way(ctx->ctx, srcdst, srcdst);
++ cast6_fpu_end_rt(ctx);
+ return;
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/chacha20_glue.c linux-4.14/arch/x86/crypto/chacha20_glue.c
+--- linux-4.14.orig/arch/x86/crypto/chacha20_glue.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/crypto/chacha20_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -81,23 +81,24 @@
+
+ crypto_chacha20_init(state, ctx, walk.iv);
+
+- kernel_fpu_begin();
+-
+ while (walk.nbytes >= CHACHA20_BLOCK_SIZE) {
++ kernel_fpu_begin();
++
+ chacha20_dosimd(state, walk.dst.virt.addr, walk.src.virt.addr,
+ rounddown(walk.nbytes, CHACHA20_BLOCK_SIZE));
++ kernel_fpu_end();
+ err = skcipher_walk_done(&walk,
+ walk.nbytes % CHACHA20_BLOCK_SIZE);
+ }
+
+ if (walk.nbytes) {
++ kernel_fpu_begin();
+ chacha20_dosimd(state, walk.dst.virt.addr, walk.src.virt.addr,
+ walk.nbytes);
++ kernel_fpu_end();
+ err = skcipher_walk_done(&walk, 0);
+ }
+
+- kernel_fpu_end();
+-
+ return err;
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/glue_helper.c linux-4.14/arch/x86/crypto/glue_helper.c
+--- linux-4.14.orig/arch/x86/crypto/glue_helper.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/crypto/glue_helper.c 2018-09-05 11:05:07.000000000 +0200
+@@ -40,7 +40,7 @@
void *ctx = crypto_blkcipher_ctx(desc->tfm);
const unsigned int bsize = 128 / 8;
unsigned int nbytes, i, func_bytes;
int err;
err = blkcipher_walk_virt(desc, walk);
-@@ -49,7 +49,7 @@ static int __glue_ecb_crypt_128bit(const struct common_glue_ctx *gctx,
+@@ -50,7 +50,7 @@
u8 *wdst = walk->dst.virt.addr;
fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit,
for (i = 0; i < gctx->num_funcs; i++) {
func_bytes = bsize * gctx->funcs[i].num_blocks;
-@@ -71,10 +71,10 @@ static int __glue_ecb_crypt_128bit(const struct common_glue_ctx *gctx,
+@@ -72,10 +72,10 @@
}
done:
return err;
}
-@@ -194,7 +194,7 @@ int glue_cbc_decrypt_128bit(const struct common_glue_ctx *gctx,
+@@ -192,7 +192,7 @@
struct scatterlist *src, unsigned int nbytes)
{
const unsigned int bsize = 128 / 8;
struct blkcipher_walk walk;
int err;
-@@ -203,12 +203,12 @@ int glue_cbc_decrypt_128bit(const struct common_glue_ctx *gctx,
+@@ -201,12 +201,12 @@
while ((nbytes = walk.nbytes)) {
fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit,
return err;
}
EXPORT_SYMBOL_GPL(glue_cbc_decrypt_128bit);
-@@ -277,7 +277,7 @@ int glue_ctr_crypt_128bit(const struct common_glue_ctx *gctx,
+@@ -275,7 +275,7 @@
struct scatterlist *src, unsigned int nbytes)
{
const unsigned int bsize = 128 / 8;
struct blkcipher_walk walk;
int err;
-@@ -286,13 +286,12 @@ int glue_ctr_crypt_128bit(const struct common_glue_ctx *gctx,
+@@ -284,13 +284,12 @@
while ((nbytes = walk.nbytes) >= bsize) {
fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit,
if (walk.nbytes) {
glue_ctr_crypt_final_128bit(
gctx->funcs[gctx->num_funcs - 1].fn_u.ctr, desc, &walk);
-@@ -347,7 +346,7 @@ int glue_xts_crypt_128bit(const struct common_glue_ctx *gctx,
+@@ -380,7 +379,7 @@
void *tweak_ctx, void *crypt_ctx)
{
const unsigned int bsize = 128 / 8;
struct blkcipher_walk walk;
int err;
-@@ -360,21 +359,21 @@ int glue_xts_crypt_128bit(const struct common_glue_ctx *gctx,
+@@ -393,21 +392,21 @@
/* set minimum length to bsize, for tweak_fn */
fpu_enabled = glue_fpu_begin(bsize, gctx->fpu_blocks_limit,
return err;
}
EXPORT_SYMBOL_GPL(glue_xts_crypt_128bit);
-diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
-index bdd9cc59d20f..56d01a339ba4 100644
---- a/arch/x86/entry/common.c
-+++ b/arch/x86/entry/common.c
-@@ -129,7 +129,7 @@ static long syscall_trace_enter(struct pt_regs *regs)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/serpent_avx2_glue.c linux-4.14/arch/x86/crypto/serpent_avx2_glue.c
+--- linux-4.14.orig/arch/x86/crypto/serpent_avx2_glue.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/crypto/serpent_avx2_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -184,6 +184,21 @@
+ bool fpu_enabled;
+ };
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void serpent_fpu_end_rt(struct crypt_priv *ctx)
++{
++ bool fpu_enabled = ctx->fpu_enabled;
++
++ if (!fpu_enabled)
++ return;
++ serpent_fpu_end(fpu_enabled);
++ ctx->fpu_enabled = false;
++}
++
++#else
++static void serpent_fpu_end_rt(struct crypt_priv *ctx) { }
++#endif
++
+ static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+ {
+ const unsigned int bsize = SERPENT_BLOCK_SIZE;
+@@ -199,10 +214,12 @@
+ }
+
+ while (nbytes >= SERPENT_PARALLEL_BLOCKS * bsize) {
++ kernel_fpu_resched();
+ serpent_ecb_enc_8way_avx(ctx->ctx, srcdst, srcdst);
+ srcdst += bsize * SERPENT_PARALLEL_BLOCKS;
+ nbytes -= bsize * SERPENT_PARALLEL_BLOCKS;
+ }
++ serpent_fpu_end_rt(ctx);
+
+ for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+ __serpent_encrypt(ctx->ctx, srcdst, srcdst);
+@@ -223,10 +240,12 @@
+ }
+
+ while (nbytes >= SERPENT_PARALLEL_BLOCKS * bsize) {
++ kernel_fpu_resched();
+ serpent_ecb_dec_8way_avx(ctx->ctx, srcdst, srcdst);
+ srcdst += bsize * SERPENT_PARALLEL_BLOCKS;
+ nbytes -= bsize * SERPENT_PARALLEL_BLOCKS;
+ }
++ serpent_fpu_end_rt(ctx);
+
+ for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+ __serpent_decrypt(ctx->ctx, srcdst, srcdst);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/serpent_avx_glue.c linux-4.14/arch/x86/crypto/serpent_avx_glue.c
+--- linux-4.14.orig/arch/x86/crypto/serpent_avx_glue.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/crypto/serpent_avx_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -218,16 +218,31 @@
+ bool fpu_enabled;
+ };
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void serpent_fpu_end_rt(struct crypt_priv *ctx)
++{
++ bool fpu_enabled = ctx->fpu_enabled;
++
++ if (!fpu_enabled)
++ return;
++ serpent_fpu_end(fpu_enabled);
++ ctx->fpu_enabled = false;
++}
++
++#else
++static void serpent_fpu_end_rt(struct crypt_priv *ctx) { }
++#endif
++
+ static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+ {
+ const unsigned int bsize = SERPENT_BLOCK_SIZE;
+ struct crypt_priv *ctx = priv;
+ int i;
+
+- ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+-
+ if (nbytes == bsize * SERPENT_PARALLEL_BLOCKS) {
++ ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+ serpent_ecb_enc_8way_avx(ctx->ctx, srcdst, srcdst);
++ serpent_fpu_end_rt(ctx);
+ return;
+ }
+
+@@ -241,10 +256,10 @@
+ struct crypt_priv *ctx = priv;
+ int i;
+
+- ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+-
+ if (nbytes == bsize * SERPENT_PARALLEL_BLOCKS) {
++ ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+ serpent_ecb_dec_8way_avx(ctx->ctx, srcdst, srcdst);
++ serpent_fpu_end_rt(ctx);
+ return;
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/serpent_sse2_glue.c linux-4.14/arch/x86/crypto/serpent_sse2_glue.c
+--- linux-4.14.orig/arch/x86/crypto/serpent_sse2_glue.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/crypto/serpent_sse2_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -187,16 +187,31 @@
+ bool fpu_enabled;
+ };
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void serpent_fpu_end_rt(struct crypt_priv *ctx)
++{
++ bool fpu_enabled = ctx->fpu_enabled;
++
++ if (!fpu_enabled)
++ return;
++ serpent_fpu_end(fpu_enabled);
++ ctx->fpu_enabled = false;
++}
++
++#else
++static void serpent_fpu_end_rt(struct crypt_priv *ctx) { }
++#endif
++
+ static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+ {
+ const unsigned int bsize = SERPENT_BLOCK_SIZE;
+ struct crypt_priv *ctx = priv;
+ int i;
+
+- ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+-
+ if (nbytes == bsize * SERPENT_PARALLEL_BLOCKS) {
++ ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+ serpent_enc_blk_xway(ctx->ctx, srcdst, srcdst);
++ serpent_fpu_end_rt(ctx);
+ return;
+ }
+
+@@ -210,10 +225,10 @@
+ struct crypt_priv *ctx = priv;
+ int i;
+
+- ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+-
+ if (nbytes == bsize * SERPENT_PARALLEL_BLOCKS) {
++ ctx->fpu_enabled = serpent_fpu_begin(ctx->fpu_enabled, nbytes);
+ serpent_dec_blk_xway(ctx->ctx, srcdst, srcdst);
++ serpent_fpu_end_rt(ctx);
+ return;
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/crypto/twofish_avx_glue.c linux-4.14/arch/x86/crypto/twofish_avx_glue.c
+--- linux-4.14.orig/arch/x86/crypto/twofish_avx_glue.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/crypto/twofish_avx_glue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -218,6 +218,21 @@
+ bool fpu_enabled;
+ };
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void twofish_fpu_end_rt(struct crypt_priv *ctx)
++{
++ bool fpu_enabled = ctx->fpu_enabled;
++
++ if (!fpu_enabled)
++ return;
++ twofish_fpu_end(fpu_enabled);
++ ctx->fpu_enabled = false;
++}
++
++#else
++static void twofish_fpu_end_rt(struct crypt_priv *ctx) { }
++#endif
++
+ static void encrypt_callback(void *priv, u8 *srcdst, unsigned int nbytes)
+ {
+ const unsigned int bsize = TF_BLOCK_SIZE;
+@@ -228,12 +243,16 @@
+
+ if (nbytes == bsize * TWOFISH_PARALLEL_BLOCKS) {
+ twofish_ecb_enc_8way(ctx->ctx, srcdst, srcdst);
++ twofish_fpu_end_rt(ctx);
+ return;
+ }
+
+- for (i = 0; i < nbytes / (bsize * 3); i++, srcdst += bsize * 3)
++ for (i = 0; i < nbytes / (bsize * 3); i++, srcdst += bsize * 3) {
++ kernel_fpu_resched();
+ twofish_enc_blk_3way(ctx->ctx, srcdst, srcdst);
++ }
+
++ twofish_fpu_end_rt(ctx);
+ nbytes %= bsize * 3;
+
+ for (i = 0; i < nbytes / bsize; i++, srcdst += bsize)
+@@ -250,11 +269,15 @@
+
+ if (nbytes == bsize * TWOFISH_PARALLEL_BLOCKS) {
+ twofish_ecb_dec_8way(ctx->ctx, srcdst, srcdst);
++ twofish_fpu_end_rt(ctx);
+ return;
+ }
+
+- for (i = 0; i < nbytes / (bsize * 3); i++, srcdst += bsize * 3)
++ for (i = 0; i < nbytes / (bsize * 3); i++, srcdst += bsize * 3) {
++ kernel_fpu_resched();
+ twofish_dec_blk_3way(ctx->ctx, srcdst, srcdst);
++ }
++ twofish_fpu_end_rt(ctx);
+
+ nbytes %= bsize * 3;
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/entry/common.c linux-4.14/arch/x86/entry/common.c
+--- linux-4.14.orig/arch/x86/entry/common.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/entry/common.c 2018-09-05 11:05:07.000000000 +0200
+@@ -133,7 +133,7 @@
#define EXIT_TO_USERMODE_LOOP_FLAGS \
(_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
-- _TIF_NEED_RESCHED | _TIF_USER_RETURN_NOTIFY)
-+ _TIF_NEED_RESCHED_MASK | _TIF_USER_RETURN_NOTIFY)
+- _TIF_NEED_RESCHED | _TIF_USER_RETURN_NOTIFY | _TIF_PATCH_PENDING)
++ _TIF_NEED_RESCHED_MASK | _TIF_USER_RETURN_NOTIFY | _TIF_PATCH_PENDING)
static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
{
-@@ -145,9 +145,16 @@ static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
+@@ -148,9 +148,16 @@
/* We have work to do. */
local_irq_enable();
if (cached_flags & _TIF_UPROBE)
uprobe_notify_resume(regs);
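With _TIF_NEED_RESCHED_MASK in EXIT_TO_USERMODE_LOOP_FLAGS, a lazy reschedule request is also served before the return to user space. A hedged sketch of the loop this mask feeds (the structure follows exit_to_usermode_loop(); only the widened mask comes from the patch):

	/* Illustrative sketch, not part of the patch. */
	static void exit_to_usermode_loop_sketch(struct pt_regs *regs, u32 cached_flags)
	{
		while (cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS) {
			local_irq_enable();

			if (cached_flags & _TIF_NEED_RESCHED_MASK)	/* regular or lazy */
				schedule();

			/* ...uprobes, signal delivery, notify-resume work... */

			local_irq_disable();
			cached_flags = READ_ONCE(current_thread_info()->flags);
		}
	}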
-diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
-index edba8606b99a..4a3389535fc6 100644
---- a/arch/x86/entry/entry_32.S
-+++ b/arch/x86/entry/entry_32.S
-@@ -308,8 +308,25 @@ END(ret_from_exception)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/entry/entry_32.S linux-4.14/arch/x86/entry/entry_32.S
+--- linux-4.14.orig/arch/x86/entry/entry_32.S 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/entry/entry_32.S 2018-09-05 11:05:07.000000000 +0200
+@@ -350,8 +350,25 @@
ENTRY(resume_kernel)
DISABLE_INTERRUPTS(CLBR_ANY)
- need_resched:
+ .Lneed_resched:
+ # preempt count == 0 + NEED_RS set?
cmpl $0, PER_CPU_VAR(__preempt_count)
+#ifndef CONFIG_PREEMPT_LAZY
+ cmpl $_PREEMPT_ENABLED,PER_CPU_VAR(__preempt_count)
+ jne restore_all
+
-+ movl PER_CPU_VAR(current_task), %ebp
-+ cmpl $0,TASK_TI_preempt_lazy_count(%ebp) # non-zero preempt_lazy_count ?
-+ jnz restore_all
++ movl PER_CPU_VAR(current_task), %ebp
++ cmpl $0,TASK_TI_preempt_lazy_count(%ebp) # non-zero preempt_lazy_count ?
++ jnz restore_all
+
-+ testl $_TIF_NEED_RESCHED_LAZY, TASK_TI_flags(%ebp)
-+ jz restore_all
++ testl $_TIF_NEED_RESCHED_LAZY, TASK_TI_flags(%ebp)
++ jz restore_all
+test_int_off:
+#endif
testl $X86_EFLAGS_IF, PT_EFLAGS(%esp) # interrupts off (exception path) ?
jz restore_all
call preempt_schedule_irq
-diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
-index ef766a358b37..28401f826ab1 100644
---- a/arch/x86/entry/entry_64.S
-+++ b/arch/x86/entry/entry_64.S
-@@ -546,7 +546,23 @@ GLOBAL(retint_user)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/entry/entry_64.S linux-4.14/arch/x86/entry/entry_64.S
+--- linux-4.14.orig/arch/x86/entry/entry_64.S 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/entry/entry_64.S 2018-09-05 11:05:07.000000000 +0200
+@@ -633,7 +633,23 @@
bt $9, EFLAGS(%rsp) /* were interrupts off? */
jnc 1f
0: cmpl $0, PER_CPU_VAR(__preempt_count)
+#ifndef CONFIG_PREEMPT_LAZY
- jnz 1f
++ jnz 1f
+#else
+ jz do_preempt_schedule_irq
+
+
+ movq PER_CPU_VAR(current_task), %rcx
+ cmpl $0, TASK_TI_preempt_lazy_count(%rcx)
-+ jnz 1f
+ jnz 1f
+
+ bt $TIF_NEED_RESCHED_LAZY,TASK_TI_flags(%rcx)
+ jnc 1f
call preempt_schedule_irq
jmp 0b
1:
-@@ -894,6 +910,7 @@ EXPORT_SYMBOL(native_load_gs_index)
+@@ -988,6 +1004,7 @@
jmp 2b
.previous
/* Call softirq on interrupt stack. Interrupts are off. */
ENTRY(do_softirq_own_stack)
pushq %rbp
-@@ -906,6 +923,7 @@ ENTRY(do_softirq_own_stack)
- decl PER_CPU_VAR(irq_count)
+@@ -998,6 +1015,7 @@
+ leaveq
ret
- END(do_softirq_own_stack)
+ ENDPROC(do_softirq_own_stack)
+#endif
#ifdef CONFIG_XEN
- idtentry xen_hypervisor_callback xen_do_hypervisor_callback has_error_code=0
-diff --git a/arch/x86/include/asm/preempt.h b/arch/x86/include/asm/preempt.h
-index 17f218645701..11bd1b7ee6eb 100644
---- a/arch/x86/include/asm/preempt.h
-+++ b/arch/x86/include/asm/preempt.h
-@@ -79,17 +79,46 @@ static __always_inline void __preempt_count_sub(int val)
+ idtentry hypervisor_callback xen_do_hypervisor_callback has_error_code=0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/include/asm/fpu/api.h linux-4.14/arch/x86/include/asm/fpu/api.h
+--- linux-4.14.orig/arch/x86/include/asm/fpu/api.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/include/asm/fpu/api.h 2018-09-05 11:05:07.000000000 +0200
+@@ -25,6 +25,7 @@
+ extern void __kernel_fpu_end(void);
+ extern void kernel_fpu_begin(void);
+ extern void kernel_fpu_end(void);
++extern void kernel_fpu_resched(void);
+ extern bool irq_fpu_usable(void);
+
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/include/asm/preempt.h linux-4.14/arch/x86/include/asm/preempt.h
+--- linux-4.14.orig/arch/x86/include/asm/preempt.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/include/asm/preempt.h 2018-09-05 11:05:07.000000000 +0200
+@@ -86,17 +86,46 @@
* a decrement which hits zero means we have no preempt_count and should
* reschedule.
*/
}
#ifdef CONFIG_PREEMPT
-diff --git a/arch/x86/include/asm/signal.h b/arch/x86/include/asm/signal.h
-index 8af22be0fe61..d1328789b759 100644
---- a/arch/x86/include/asm/signal.h
-+++ b/arch/x86/include/asm/signal.h
-@@ -27,6 +27,19 @@ typedef struct {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/include/asm/signal.h linux-4.14/arch/x86/include/asm/signal.h
+--- linux-4.14.orig/arch/x86/include/asm/signal.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/include/asm/signal.h 2018-09-05 11:05:07.000000000 +0200
+@@ -28,6 +28,19 @@
#define SA_IA32_ABI 0x02000000u
#define SA_X32_ABI 0x01000000u
#ifndef CONFIG_COMPAT
typedef sigset_t compat_sigset_t;
#endif
-diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
-index 58505f01962f..02fa39652cd6 100644
---- a/arch/x86/include/asm/stackprotector.h
-+++ b/arch/x86/include/asm/stackprotector.h
-@@ -59,7 +59,7 @@
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/include/asm/stackprotector.h linux-4.14/arch/x86/include/asm/stackprotector.h
+--- linux-4.14.orig/arch/x86/include/asm/stackprotector.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/include/asm/stackprotector.h 2018-09-05 11:05:07.000000000 +0200
+@@ -60,7 +60,7 @@
*/
static __always_inline void boot_init_stack_canary(void)
{
u64 tsc;
#ifdef CONFIG_X86_64
-@@ -70,8 +70,15 @@ static __always_inline void boot_init_stack_canary(void)
+@@ -71,8 +71,14 @@
* of randomness. The TSC only matters for very early init,
* there it already has some randomness on most systems. Later
* on during the bootup the random pool has true entropy too.
-+ *
+ * For preempt-rt we need to weaken the randomness a bit, as
+ * we can't call into the random generator from atomic context
+ * due to locking constraints. We just leave canary
+#endif
tsc = rdtsc();
canary += tsc + (tsc << 32UL);
-
-diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
-index ad6f5eb07a95..5ceb3a1c2b1a 100644
---- a/arch/x86/include/asm/thread_info.h
-+++ b/arch/x86/include/asm/thread_info.h
-@@ -54,11 +54,14 @@ struct task_struct;
-
+ canary &= CANARY_MASK;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/include/asm/thread_info.h linux-4.14/arch/x86/include/asm/thread_info.h
+--- linux-4.14.orig/arch/x86/include/asm/thread_info.h 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/include/asm/thread_info.h 2018-09-05 11:05:07.000000000 +0200
+@@ -56,11 +56,14 @@
struct thread_info {
unsigned long flags; /* low level flags */
-+ int preempt_lazy_count; /* 0 => lazy preemptable
-+ <0 => BUG */
+ u32 status; /* thread synchronous flags */
++ int preempt_lazy_count; /* 0 => lazy preemptable
++ <0 => BUG */
};
#define INIT_THREAD_INFO(tsk) \
}
#define init_stack (init_thread_union.stack)
-@@ -67,6 +70,10 @@ struct thread_info {
+@@ -69,6 +72,10 @@
#include <asm/asm-offsets.h>
#endif
/*
-@@ -85,6 +92,7 @@ struct thread_info {
+@@ -85,6 +92,7 @@
#define TIF_SYSCALL_EMU 6 /* syscall emulation active */
#define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */
#define TIF_SECCOMP 8 /* secure computing */
+#define TIF_NEED_RESCHED_LAZY 9 /* lazy rescheduling necessary */
#define TIF_USER_RETURN_NOTIFY 11 /* notify kernel of userspace return */
#define TIF_UPROBE 12 /* breakpointed or singlestepping */
- #define TIF_NOTSC 16 /* TSC is not accessible in userland */
-@@ -108,6 +116,7 @@ struct thread_info {
+ #define TIF_PATCH_PENDING 13 /* pending live patching update */
+@@ -112,6 +120,7 @@
#define _TIF_SYSCALL_EMU (1 << TIF_SYSCALL_EMU)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
+#define _TIF_NEED_RESCHED_LAZY (1 << TIF_NEED_RESCHED_LAZY)
#define _TIF_USER_RETURN_NOTIFY (1 << TIF_USER_RETURN_NOTIFY)
#define _TIF_UPROBE (1 << TIF_UPROBE)
- #define _TIF_NOTSC (1 << TIF_NOTSC)
-@@ -143,6 +152,8 @@ struct thread_info {
+ #define _TIF_PATCH_PENDING (1 << TIF_PATCH_PENDING)
+@@ -153,6 +162,8 @@
#define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
#define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
#define STACK_WARN (THREAD_SIZE/8)
/*
-diff --git a/arch/x86/include/asm/uv/uv_bau.h b/arch/x86/include/asm/uv/uv_bau.h
-index 57ab86d94d64..35d25e27180f 100644
---- a/arch/x86/include/asm/uv/uv_bau.h
-+++ b/arch/x86/include/asm/uv/uv_bau.h
-@@ -624,9 +624,9 @@ struct bau_control {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/include/asm/uv/uv_bau.h linux-4.14/arch/x86/include/asm/uv/uv_bau.h
+--- linux-4.14.orig/arch/x86/include/asm/uv/uv_bau.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/include/asm/uv/uv_bau.h 2018-09-05 11:05:07.000000000 +0200
+@@ -643,9 +643,9 @@
cycles_t send_message;
cycles_t period_end;
cycles_t period_time;
/* tunables */
int max_concurr;
int max_concurr_const;
-@@ -815,15 +815,15 @@ static inline int atom_asr(short i, struct atomic_short *v)
+@@ -847,15 +847,15 @@
* to be lowered below the current 'v'. atomic_add_unless can only stop
* on equal.
*/
return 1;
}
-diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
-index 931ced8ca345..167975ac8af7 100644
---- a/arch/x86/kernel/acpi/boot.c
-+++ b/arch/x86/kernel/acpi/boot.c
-@@ -87,7 +87,9 @@ static u64 acpi_lapic_addr __initdata = APIC_DEFAULT_PHYS_BASE;
- * ->ioapic_mutex
- * ->ioapic_lock
- */
-+#ifdef CONFIG_X86_IO_APIC
- static DEFINE_MUTEX(acpi_ioapic_lock);
-+#endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/Kconfig linux-4.14/arch/x86/Kconfig
+--- linux-4.14.orig/arch/x86/Kconfig 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -169,6 +169,7 @@
+ select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
+ select HAVE_PERF_REGS
+ select HAVE_PERF_USER_STACK_DUMP
++ select HAVE_PREEMPT_LAZY
+ select HAVE_RCU_TABLE_FREE
+ select HAVE_REGS_AND_STACK_ACCESS_API
+ select HAVE_RELIABLE_STACKTRACE if X86_64 && UNWINDER_FRAME_POINTER && STACK_VALIDATION
+@@ -256,8 +257,11 @@
+ def_bool y
+ depends on ISA_DMA_API
+
++config RWSEM_GENERIC_SPINLOCK
++ def_bool PREEMPT_RT_FULL
++
+ config RWSEM_XCHGADD_ALGORITHM
+- def_bool y
++ def_bool !RWSEM_GENERIC_SPINLOCK && !PREEMPT_RT_FULL
- /* --------------------------------------------------------------------------
- Boot-time Configuration
-diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
-index 3d8ff40ecc6f..2e96d4e0295b 100644
---- a/arch/x86/kernel/apic/io_apic.c
-+++ b/arch/x86/kernel/apic/io_apic.c
-@@ -1712,7 +1712,8 @@ static bool io_apic_level_ack_pending(struct mp_chip_data *data)
+ config GENERIC_CALIBRATE_DELAY
+ def_bool y
+@@ -932,7 +936,7 @@
+ config MAXSMP
+ bool "Enable Maximum number of SMP Processors and NUMA Nodes"
+ depends on X86_64 && SMP && DEBUG_KERNEL
+- select CPUMASK_OFFSTACK
++ select CPUMASK_OFFSTACK if !PREEMPT_RT_FULL
+ ---help---
+ Enable maximum number of CPUS and NUMA Nodes for this architecture.
+ If unsure, say N.
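Both x86 Kconfig tweaks above are RT-motivated: rwsems fall back to the generic spinlock-backed variant (as on powerpc earlier), and MAXSMP stops selecting CPUMASK_OFFSTACK, because off-stack cpumasks turn every cpumask_var_t into an allocation. A small sketch of the difference, for orientation only (it mirrors the cpumask_var_t definitions in linux/cpumask.h):

	/* Illustrative sketch, not part of the patch. */
	#ifdef CONFIG_CPUMASK_OFFSTACK
	typedef struct cpumask *cpumask_var_t_sketch;	/* must be allocated and freed */
	#else
	typedef struct cpumask cpumask_var_t_sketch[1];	/* embedded in the enclosing object */
	#endif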
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/kernel/apic/io_apic.c linux-4.14/arch/x86/kernel/apic/io_apic.c
+--- linux-4.14.orig/arch/x86/kernel/apic/io_apic.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/kernel/apic/io_apic.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1691,7 +1691,8 @@
static inline bool ioapic_irqd_mask(struct irq_data *data)
{
/* If we are moving the irq we need to mask it */
mask_ioapic_irq(data);
return true;
}
-diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
-index c62e015b126c..0cc71257fca6 100644
---- a/arch/x86/kernel/asm-offsets.c
-+++ b/arch/x86/kernel/asm-offsets.c
-@@ -36,6 +36,7 @@ void common(void) {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/kernel/asm-offsets.c linux-4.14/arch/x86/kernel/asm-offsets.c
+--- linux-4.14.orig/arch/x86/kernel/asm-offsets.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/kernel/asm-offsets.c 2018-09-05 11:05:07.000000000 +0200
+@@ -38,6 +38,7 @@
BLANK();
OFFSET(TASK_TI_flags, task_struct, thread_info.flags);
OFFSET(TASK_addr_limit, task_struct, thread.addr_limit);
BLANK();
-@@ -91,4 +92,5 @@ void common(void) {
+@@ -94,6 +95,7 @@
BLANK();
DEFINE(PTREGS_SIZE, sizeof(struct pt_regs));
+ DEFINE(_PREEMPT_ENABLED, PREEMPT_ENABLED);
- }
-diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
-index a7fdf453d895..e3a0e969a66e 100644
---- a/arch/x86/kernel/cpu/mcheck/mce.c
-+++ b/arch/x86/kernel/cpu/mcheck/mce.c
-@@ -41,6 +41,8 @@
- #include <linux/debugfs.h>
- #include <linux/irq_work.h>
- #include <linux/export.h>
-+#include <linux/jiffies.h>
-+#include <linux/swork.h>
- #include <linux/jump_label.h>
- #include <asm/processor.h>
-@@ -1317,7 +1319,7 @@ void mce_log_therm_throt_event(__u64 status)
- static unsigned long check_interval = INITIAL_CHECK_INTERVAL;
+ /* TLB state for the entry code */
+ OFFSET(TLB_STATE_user_pcid_flush_mask, tlb_state, user_pcid_flush_mask);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/kernel/cpu/mcheck/dev-mcelog.c linux-4.14/arch/x86/kernel/cpu/mcheck/dev-mcelog.c
+--- linux-4.14.orig/arch/x86/kernel/cpu/mcheck/dev-mcelog.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/kernel/cpu/mcheck/dev-mcelog.c 2018-09-05 11:05:07.000000000 +0200
+@@ -14,6 +14,7 @@
+ #include <linux/slab.h>
+ #include <linux/kmod.h>
+ #include <linux/poll.h>
++#include <linux/swork.h>
- static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
--static DEFINE_PER_CPU(struct timer_list, mce_timer);
-+static DEFINE_PER_CPU(struct hrtimer, mce_timer);
+ #include "mce-internal.h"
- static unsigned long mce_adjust_timer_default(unsigned long interval)
- {
-@@ -1326,32 +1328,18 @@ static unsigned long mce_adjust_timer_default(unsigned long interval)
+@@ -86,13 +87,43 @@
- static unsigned long (*mce_adjust_timer)(unsigned long interval) = mce_adjust_timer_default;
+ static DECLARE_WORK(mce_trigger_work, mce_do_trigger);
--static void __restart_timer(struct timer_list *t, unsigned long interval)
-+static enum hrtimer_restart __restart_timer(struct hrtimer *timer, unsigned long interval)
- {
-- unsigned long when = jiffies + interval;
-- unsigned long flags;
--
-- local_irq_save(flags);
-
-- if (timer_pending(t)) {
-- if (time_before(when, t->expires))
-- mod_timer(t, when);
-- } else {
-- t->expires = round_jiffies(when);
-- add_timer_on(t, smp_processor_id());
-- }
--
-- local_irq_restore(flags);
-+ if (!interval)
-+ return HRTIMER_NORESTART;
-+ hrtimer_forward_now(timer, ns_to_ktime(jiffies_to_nsecs(interval)));
-+ return HRTIMER_RESTART;
+-void mce_work_trigger(void)
++static void __mce_work_trigger(struct swork_event *event)
+ {
+ if (mce_helper[0])
+ schedule_work(&mce_trigger_work);
}
--static void mce_timer_fn(unsigned long data)
-+static enum hrtimer_restart mce_timer_fn(struct hrtimer *timer)
- {
-- struct timer_list *t = this_cpu_ptr(&mce_timer);
-- int cpu = smp_processor_id();
- unsigned long iv;
-
-- WARN_ON(cpu != data);
--
- iv = __this_cpu_read(mce_next_interval);
-
- if (mce_available(this_cpu_ptr(&cpu_info))) {
-@@ -1374,7 +1362,7 @@ static void mce_timer_fn(unsigned long data)
-
- done:
- __this_cpu_write(mce_next_interval, iv);
-- __restart_timer(t, iv);
-+ return __restart_timer(timer, iv);
- }
-
- /*
-@@ -1382,7 +1370,7 @@ static void mce_timer_fn(unsigned long data)
- */
- void mce_timer_kick(unsigned long interval)
- {
-- struct timer_list *t = this_cpu_ptr(&mce_timer);
-+ struct hrtimer *t = this_cpu_ptr(&mce_timer);
- unsigned long iv = __this_cpu_read(mce_next_interval);
-
- __restart_timer(t, interval);
-@@ -1397,7 +1385,7 @@ static void mce_timer_delete_all(void)
- int cpu;
-
- for_each_online_cpu(cpu)
-- del_timer_sync(&per_cpu(mce_timer, cpu));
-+ hrtimer_cancel(&per_cpu(mce_timer, cpu));
- }
-
- static void mce_do_trigger(struct work_struct *work)
-@@ -1407,6 +1395,56 @@ static void mce_do_trigger(struct work_struct *work)
-
- static DECLARE_WORK(mce_trigger_work, mce_do_trigger);
-
-+static void __mce_notify_work(struct swork_event *event)
-+{
-+ /* Not more than two messages every minute */
-+ static DEFINE_RATELIMIT_STATE(ratelimit, 60*HZ, 2);
-+
-+ /* wake processes polling /dev/mcelog */
-+ wake_up_interruptible(&mce_chrdev_wait);
-+
-+ /*
-+ * There is no risk of missing notifications because
-+ * work_pending is always cleared before the function is
-+ * executed.
-+ */
-+ if (mce_helper[0] && !work_pending(&mce_trigger_work))
-+ schedule_work(&mce_trigger_work);
-+
-+ if (__ratelimit(&ratelimit))
-+ pr_info(HW_ERR "Machine check events logged\n");
-+}
-+
+#ifdef CONFIG_PREEMPT_RT_FULL
+static bool notify_work_ready __read_mostly;
+static struct swork_event notify_work;
+ if (err)
+ return err;
+
-+ INIT_SWORK(&notify_work, __mce_notify_work);
++ INIT_SWORK(&notify_work, __mce_work_trigger);
+ notify_work_ready = true;
+ return 0;
+}
+
-+static void mce_notify_work(void)
++void mce_work_trigger(void)
+{
+ if (notify_work_ready)
+ swork_queue(&notify_work);
+}
++
+#else
-+static void mce_notify_work(void)
++void mce_work_trigger(void)
+{
-+ __mce_notify_work(NULL);
++ __mce_work_trigger(NULL);
+}
+static inline int mce_notify_work_init(void) { return 0; }
+#endif
+
- /*
- * Notify the user(s) about new machine check events.
- * Can be called from interrupt context, but not from machine check/NMI
-@@ -1414,19 +1452,8 @@ static DECLARE_WORK(mce_trigger_work, mce_do_trigger);
- */
- int mce_notify_irq(void)
+ static ssize_t
+ show_trigger(struct device *s, struct device_attribute *attr, char *buf)
{
-- /* Not more than two messages every minute */
-- static DEFINE_RATELIMIT_STATE(ratelimit, 60*HZ, 2);
+@@ -356,7 +387,7 @@
+
+ return err;
+ }
-
- if (test_and_clear_bit(0, &mce_need_notify)) {
-- /* wake processes polling /dev/mcelog */
-- wake_up_interruptible(&mce_chrdev_wait);
++ mce_notify_work_init();
+ mce_register_decode_chain(&dev_mcelog_nb);
+ return 0;
+ }
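
A minimal sketch (editorial, not taken from the patch) of how the "swork" deferred-work
interface used in the dev-mcelog.c hunk above is wired up. It uses only the calls visible
in the patch (swork_get(), INIT_SWORK(), swork_queue(), <linux/swork.h>); the my_* names
are hypothetical and the semantics are assumed from the way the mcelog code uses them:
take the worker once at init time, then queue events from atomic context and let the
callback run in sleepable context.

    #include <linux/swork.h>
    #include <linux/printk.h>

    static struct swork_event my_event;        /* hypothetical example event */

    /* runs in the swork kernel thread, i.e. in sleepable (non-atomic) context */
    static void my_event_fn(struct swork_event *event)
    {
            pr_info("deferred work executed\n");
    }

    static int my_init(void)
    {
            int err;

            /* bring up the shared swork worker thread (RT-only API) */
            err = swork_get();
            if (err)
                    return err;

            INIT_SWORK(&my_event, my_event_fn);
            return 0;
    }

    /* safe to call from atomic context, e.g. an interrupt or MCE path */
    static void my_trigger(void)
    {
            swork_queue(&my_event);
    }
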
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/kernel/cpu/mcheck/mce.c linux-4.14/arch/x86/kernel/cpu/mcheck/mce.c
+--- linux-4.14.orig/arch/x86/kernel/cpu/mcheck/mce.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/kernel/cpu/mcheck/mce.c 2018-09-05 11:05:07.000000000 +0200
+@@ -42,6 +42,7 @@
+ #include <linux/debugfs.h>
+ #include <linux/irq_work.h>
+ #include <linux/export.h>
++#include <linux/jiffies.h>
+ #include <linux/jump_label.h>
+
+ #include <asm/intel-family.h>
+@@ -1365,7 +1366,7 @@
+ static unsigned long check_interval = INITIAL_CHECK_INTERVAL;
+
+ static DEFINE_PER_CPU(unsigned long, mce_next_interval); /* in jiffies */
+-static DEFINE_PER_CPU(struct timer_list, mce_timer);
++static DEFINE_PER_CPU(struct hrtimer, mce_timer);
+
+ static unsigned long mce_adjust_timer_default(unsigned long interval)
+ {
+@@ -1374,27 +1375,19 @@
+
+ static unsigned long (*mce_adjust_timer)(unsigned long interval) = mce_adjust_timer_default;
+
+-static void __start_timer(struct timer_list *t, unsigned long interval)
++static void __start_timer(struct hrtimer *t, unsigned long iv)
+ {
+- unsigned long when = jiffies + interval;
+- unsigned long flags;
-
-- if (mce_helper[0])
-- schedule_work(&mce_trigger_work);
+- local_irq_save(flags);
-
-- if (__ratelimit(&ratelimit))
-- pr_info(HW_ERR "Machine check events logged\n");
+- if (!timer_pending(t) || time_before(when, t->expires))
+- mod_timer(t, round_jiffies(when));
++ if (!iv)
++ return;
+
+- local_irq_restore(flags);
++ hrtimer_start_range_ns(t, ns_to_ktime(jiffies_to_usecs(iv) * 1000ULL),
++ 0, HRTIMER_MODE_REL_PINNED);
+ }
+
+-static void mce_timer_fn(unsigned long data)
++static enum hrtimer_restart mce_timer_fn(struct hrtimer *timer)
+ {
+- struct timer_list *t = this_cpu_ptr(&mce_timer);
+- int cpu = smp_processor_id();
+ unsigned long iv;
+
+- WARN_ON(cpu != data);
-
-+ mce_notify_work();
- return 1;
- }
- return 0;
-@@ -1732,7 +1759,7 @@ static void __mcheck_cpu_clear_vendor(struct cpuinfo_x86 *c)
- }
+ iv = __this_cpu_read(mce_next_interval);
+
+ if (mce_available(this_cpu_ptr(&cpu_info))) {
+@@ -1417,7 +1410,11 @@
+
+ done:
+ __this_cpu_write(mce_next_interval, iv);
+- __start_timer(t, iv);
++ if (!iv)
++ return HRTIMER_NORESTART;
++
++ hrtimer_forward_now(timer, ns_to_ktime(jiffies_to_nsecs(iv)));
++ return HRTIMER_RESTART;
}
--static void mce_start_timer(unsigned int cpu, struct timer_list *t)
-+static void mce_start_timer(unsigned int cpu, struct hrtimer *t)
+ /*
+@@ -1425,7 +1422,7 @@
+ */
+ void mce_timer_kick(unsigned long interval)
{
- unsigned long iv = check_interval * HZ;
+- struct timer_list *t = this_cpu_ptr(&mce_timer);
++ struct hrtimer *t = this_cpu_ptr(&mce_timer);
+ unsigned long iv = __this_cpu_read(mce_next_interval);
-@@ -1741,16 +1768,17 @@ static void mce_start_timer(unsigned int cpu, struct timer_list *t)
+ __start_timer(t, interval);
+@@ -1440,7 +1437,7 @@
+ int cpu;
- per_cpu(mce_next_interval, cpu) = iv;
+ for_each_online_cpu(cpu)
+- del_timer_sync(&per_cpu(mce_timer, cpu));
++ hrtimer_cancel(&per_cpu(mce_timer, cpu));
+ }
-- t->expires = round_jiffies(jiffies + iv);
-- add_timer_on(t, cpu);
-+ hrtimer_start_range_ns(t, ns_to_ktime(jiffies_to_usecs(iv) * 1000ULL),
-+ 0, HRTIMER_MODE_REL_PINNED);
+ /*
+@@ -1769,7 +1766,7 @@
+ }
}
- static void __mcheck_cpu_init_timer(void)
+-static void mce_start_timer(struct timer_list *t)
++static void mce_start_timer(struct hrtimer *t)
+ {
+ unsigned long iv = check_interval * HZ;
+
+@@ -1782,18 +1779,19 @@
+
+ static void __mcheck_cpu_setup_timer(void)
{
- struct timer_list *t = this_cpu_ptr(&mce_timer);
+- unsigned int cpu = smp_processor_id();
+ struct hrtimer *t = this_cpu_ptr(&mce_timer);
- unsigned int cpu = smp_processor_id();
- setup_pinned_timer(t, mce_timer_fn, cpu);
+ hrtimer_init(t, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+ t->function = mce_timer_fn;
- mce_start_timer(cpu, t);
}
-@@ -2475,6 +2503,8 @@ static void mce_disable_cpu(void *h)
- if (!mce_available(raw_cpu_ptr(&cpu_info)))
- return;
-
-+ hrtimer_cancel(this_cpu_ptr(&mce_timer));
+ static void __mcheck_cpu_init_timer(void)
+ {
+- struct timer_list *t = this_cpu_ptr(&mce_timer);
+- unsigned int cpu = smp_processor_id();
++ struct hrtimer *t = this_cpu_ptr(&mce_timer);
+
- if (!(action & CPU_TASKS_FROZEN))
- cmci_clear();
++ hrtimer_init(t, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ t->function = mce_timer_fn;
-@@ -2497,6 +2527,7 @@ static void mce_reenable_cpu(void *h)
- if (b->init)
- wrmsrl(msr_ops.ctl(i), b->ctl);
- }
-+ __mcheck_cpu_init_timer();
+- setup_pinned_timer(t, mce_timer_fn, cpu);
+ mce_start_timer(t);
}
- /* Get notified when a cpu comes on/off. Be hotplug friendly. */
-@@ -2504,7 +2535,6 @@ static int
- mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
+@@ -2309,7 +2307,7 @@
+
+ static int mce_cpu_online(unsigned int cpu)
{
- unsigned int cpu = (unsigned long)hcpu;
-- struct timer_list *t = &per_cpu(mce_timer, cpu);
+- struct timer_list *t = this_cpu_ptr(&mce_timer);
++ struct hrtimer *t = this_cpu_ptr(&mce_timer);
+ int ret;
- switch (action & ~CPU_TASKS_FROZEN) {
- case CPU_ONLINE:
-@@ -2524,11 +2554,9 @@ mce_cpu_callback(struct notifier_block *nfb, unsigned long action, void *hcpu)
- break;
- case CPU_DOWN_PREPARE:
- smp_call_function_single(cpu, mce_disable_cpu, &action, 1);
-- del_timer_sync(t);
- break;
- case CPU_DOWN_FAILED:
- smp_call_function_single(cpu, mce_reenable_cpu, &action, 1);
-- mce_start_timer(cpu, t);
- break;
- }
+ mce_device_create(cpu);
+@@ -2326,10 +2324,10 @@
-@@ -2567,6 +2595,10 @@ static __init int mcheck_init_device(void)
- goto err_out;
- }
+ static int mce_cpu_pre_down(unsigned int cpu)
+ {
+- struct timer_list *t = this_cpu_ptr(&mce_timer);
++ struct hrtimer *t = this_cpu_ptr(&mce_timer);
-+ err = mce_notify_work_init();
-+ if (err)
-+ goto err_out;
-+
- if (!zalloc_cpumask_var(&mce_device_initialized, GFP_KERNEL)) {
- err = -ENOMEM;
- goto err_out;
-diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
-index 1f38d9a4d9de..053bf3b2ef39 100644
---- a/arch/x86/kernel/irq_32.c
-+++ b/arch/x86/kernel/irq_32.c
-@@ -127,6 +127,7 @@ void irq_ctx_init(int cpu)
+ mce_disable_cpu();
+- del_timer_sync(t);
++ hrtimer_cancel(t);
+ mce_threshold_remove_device(cpu);
+ mce_device_remove(cpu);
+ return 0;
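
The mce.c conversion above replaces a per-CPU timer_list with a per-CPU hrtimer. Condensed
into one place, and with hypothetical poll_* names, the resulting pattern looks roughly like
the sketch below: the jiffies interval is converted to nanoseconds, the timer is started
pinned to the local CPU, and the callback either re-arms itself relative to "now" or stops
by returning HRTIMER_NORESTART. This is an editorial illustration of the pattern, not code
from the patch.

    #include <linux/hrtimer.h>
    #include <linux/jiffies.h>
    #include <linux/ktime.h>

    static struct hrtimer poll_timer;          /* hypothetical example timer */
    static unsigned long poll_interval = HZ;   /* period in jiffies */

    static enum hrtimer_restart poll_timer_fn(struct hrtimer *timer)
    {
            unsigned long iv = poll_interval;

            /* ... periodic work would go here ... */

            if (!iv)
                    return HRTIMER_NORESTART;

            hrtimer_forward_now(timer, ns_to_ktime(jiffies_to_nsecs(iv)));
            return HRTIMER_RESTART;
    }

    static void poll_timer_start(void)
    {
            hrtimer_init(&poll_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
            poll_timer.function = poll_timer_fn;
            hrtimer_start_range_ns(&poll_timer,
                            ns_to_ktime(jiffies_to_usecs(poll_interval) * 1000ULL),
                            0, HRTIMER_MODE_REL_PINNED);
    }

    static void poll_timer_stop(void)
    {
            hrtimer_cancel(&poll_timer);
    }
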
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/kernel/fpu/core.c linux-4.14/arch/x86/kernel/fpu/core.c
+--- linux-4.14.orig/arch/x86/kernel/fpu/core.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/kernel/fpu/core.c 2018-09-05 11:05:07.000000000 +0200
+@@ -138,6 +138,18 @@
+ }
+ EXPORT_SYMBOL_GPL(kernel_fpu_end);
+
++void kernel_fpu_resched(void)
++{
++ WARN_ON_FPU(!this_cpu_read(in_kernel_fpu));
++
++ if (should_resched(PREEMPT_OFFSET)) {
++ kernel_fpu_end();
++ cond_resched();
++ kernel_fpu_begin();
++ }
++}
++EXPORT_SYMBOL_GPL(kernel_fpu_resched);
++
+ /*
+ * Save the FPU state (mark it for reload if necessary):
+ *
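
The new kernel_fpu_resched() helper above drops the FPU section and calls cond_resched()
only when a reschedule is actually pending, so long SIMD loops do not keep preemption
disabled for their whole runtime. A minimal usage sketch follows (editorial, not from the
patch; the crunch_blocks() name and buffer layout are hypothetical, and the declaration is
assumed to be exported next to kernel_fpu_begin()/kernel_fpu_end()):

    #include <linux/types.h>
    #include <asm/fpu/api.h>

    static void crunch_blocks(const u8 *buf, size_t blocks)
    {
            size_t i;

            kernel_fpu_begin();
            for (i = 0; i < blocks; i++) {
                    /* ... SIMD work on block i of buf ... */

                    /*
                     * Give the scheduler a chance between blocks instead of
                     * holding the preempt-off FPU region for the whole loop.
                     */
                    kernel_fpu_resched();
            }
            kernel_fpu_end();
    }
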
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/kernel/irq_32.c linux-4.14/arch/x86/kernel/irq_32.c
+--- linux-4.14.orig/arch/x86/kernel/irq_32.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/kernel/irq_32.c 2018-09-05 11:05:07.000000000 +0200
+@@ -130,6 +130,7 @@
cpu, per_cpu(hardirq_stack, cpu), per_cpu(softirq_stack, cpu));
}
void do_softirq_own_stack(void)
{
struct irq_stack *irqstk;
-@@ -143,6 +144,7 @@ void do_softirq_own_stack(void)
+@@ -146,6 +147,7 @@
call_on_stack(__do_softirq, isp);
}
bool handle_irq(struct irq_desc *desc, struct pt_regs *regs)
{
-diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
-index bd7be8efdc4c..b3b0a7f7b1ca 100644
---- a/arch/x86/kernel/process_32.c
-+++ b/arch/x86/kernel/process_32.c
-@@ -35,6 +35,7 @@
- #include <linux/uaccess.h>
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/kernel/process_32.c linux-4.14/arch/x86/kernel/process_32.c
+--- linux-4.14.orig/arch/x86/kernel/process_32.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/kernel/process_32.c 2018-09-05 11:05:07.000000000 +0200
+@@ -38,6 +38,7 @@
#include <linux/io.h>
#include <linux/kdebug.h>
+ #include <linux/syscalls.h>
+#include <linux/highmem.h>
#include <asm/pgtable.h>
#include <asm/ldt.h>
-@@ -195,6 +196,35 @@ start_thread(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp)
+@@ -198,6 +199,35 @@
}
EXPORT_SYMBOL_GPL(start_thread);
/*
* switch_to(x,y) should switch tasks from x to y.
-@@ -271,6 +301,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
+@@ -273,6 +303,8 @@
task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
__switch_to_xtra(prev_p, next_p, tss);
/*
* Leave lazy mode, flushing any hypercalls made here.
* This must be done before restoring TLS segments so
-diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
-index 3f05c044720b..fe68afd37162 100644
---- a/arch/x86/kvm/lapic.c
-+++ b/arch/x86/kvm/lapic.c
-@@ -1939,6 +1939,7 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/kvm/lapic.c linux-4.14/arch/x86/kvm/lapic.c
+--- linux-4.14.orig/arch/x86/kvm/lapic.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/kvm/lapic.c 2018-09-05 11:05:07.000000000 +0200
+@@ -2120,7 +2120,7 @@
+ apic->vcpu = vcpu;
+
hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC,
- HRTIMER_MODE_ABS_PINNED);
+- HRTIMER_MODE_ABS_PINNED);
++ HRTIMER_MODE_ABS_PINNED_HARD);
apic->lapic_timer.timer.function = apic_timer_fn;
-+ apic->lapic_timer.timer.irqsafe = 1;
/*
- * APIC is created enabled. This will prevent kvm_lapic_set_base from
-diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
-index 487b957e7802..a144b8cb358b 100644
---- a/arch/x86/kvm/x86.c
-+++ b/arch/x86/kvm/x86.c
-@@ -5932,6 +5932,13 @@ int kvm_arch_init(void *opaque)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/kvm/x86.c linux-4.14/arch/x86/kvm/x86.c
+--- linux-4.14.orig/arch/x86/kvm/x86.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/kvm/x86.c 2018-09-05 11:05:07.000000000 +0200
+@@ -6285,6 +6285,13 @@
goto out;
}
r = kvm_mmu_module_init();
if (r)
goto out_free_percpu;
-diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
-index 6d18b70ed5a9..f752724c22e8 100644
---- a/arch/x86/mm/highmem_32.c
-+++ b/arch/x86/mm/highmem_32.c
-@@ -32,10 +32,11 @@ EXPORT_SYMBOL(kunmap);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/mm/highmem_32.c linux-4.14/arch/x86/mm/highmem_32.c
+--- linux-4.14.orig/arch/x86/mm/highmem_32.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/mm/highmem_32.c 2018-09-05 11:05:07.000000000 +0200
+@@ -32,10 +32,11 @@
*/
void *kmap_atomic_prot(struct page *page, pgprot_t prot)
{
pagefault_disable();
if (!PageHighMem(page))
-@@ -45,7 +46,10 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+@@ -45,7 +46,10 @@
idx = type + KM_TYPE_NR*smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
BUG_ON(!pte_none(*(kmap_pte-idx)));
arch_flush_lazy_mmu_mode();
return (void *)vaddr;
-@@ -88,6 +92,9 @@ void __kunmap_atomic(void *kvaddr)
+@@ -88,6 +92,9 @@
* is a bad idea also, in case the page changes cacheability
* attributes or becomes a protected page in a hypervisor.
*/
kpte_clear_flush(kmap_pte-idx, vaddr);
kmap_atomic_idx_pop();
arch_flush_lazy_mmu_mode();
-@@ -100,7 +107,7 @@ void __kunmap_atomic(void *kvaddr)
+@@ -100,7 +107,7 @@
#endif
pagefault_enable();
}
EXPORT_SYMBOL(__kunmap_atomic);
-diff --git a/arch/x86/mm/iomap_32.c b/arch/x86/mm/iomap_32.c
-index ada98b39b8ad..585f6829653b 100644
---- a/arch/x86/mm/iomap_32.c
-+++ b/arch/x86/mm/iomap_32.c
-@@ -56,6 +56,7 @@ EXPORT_SYMBOL_GPL(iomap_free);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/mm/iomap_32.c linux-4.14/arch/x86/mm/iomap_32.c
+--- linux-4.14.orig/arch/x86/mm/iomap_32.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/mm/iomap_32.c 2018-09-05 11:05:07.000000000 +0200
+@@ -56,6 +56,7 @@
void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
{
unsigned long vaddr;
int idx, type;
-@@ -65,7 +66,12 @@ void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot)
+@@ -65,7 +66,12 @@
type = kmap_atomic_idx_push();
idx = type + KM_TYPE_NR * smp_processor_id();
vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
arch_flush_lazy_mmu_mode();
return (void *)vaddr;
-@@ -113,6 +119,9 @@ iounmap_atomic(void __iomem *kvaddr)
+@@ -113,6 +119,9 @@
* is a bad idea also, in case the page changes cacheability
* attributes or becomes a protected page in a hypervisor.
*/
kpte_clear_flush(kmap_pte-idx, vaddr);
kmap_atomic_idx_pop();
}
-diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
-index e3353c97d086..01664968555c 100644
---- a/arch/x86/mm/pageattr.c
-+++ b/arch/x86/mm/pageattr.c
-@@ -214,7 +214,15 @@ static void cpa_flush_array(unsigned long *start, int numpages, int cache,
- int in_flags, struct page **pages)
- {
- unsigned int i, level;
-+#ifdef CONFIG_PREEMPT
-+ /*
-+ * Avoid wbinvd() because it causes latencies on all CPUs,
-+ * regardless of any CPU isolation that may be in effect.
-+ */
-+ unsigned long do_wbinvd = 0;
-+#else
- unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */
-+#endif
-
- BUG_ON(irqs_disabled());
-
-diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
-index 9e42842e924a..5398f97172f9 100644
---- a/arch/x86/platform/uv/tlb_uv.c
-+++ b/arch/x86/platform/uv/tlb_uv.c
-@@ -748,9 +748,9 @@ static void destination_plugged(struct bau_desc *bau_desc,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/platform/uv/tlb_uv.c linux-4.14/arch/x86/platform/uv/tlb_uv.c
+--- linux-4.14.orig/arch/x86/platform/uv/tlb_uv.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/arch/x86/platform/uv/tlb_uv.c 2018-09-05 11:05:07.000000000 +0200
+@@ -740,9 +740,9 @@
quiesce_local_uvhub(hmaster);
end_uvhub_quiesce(hmaster);
-@@ -770,9 +770,9 @@ static void destination_timeout(struct bau_desc *bau_desc,
+@@ -762,9 +762,9 @@
quiesce_local_uvhub(hmaster);
end_uvhub_quiesce(hmaster);
-@@ -793,7 +793,7 @@ static void disable_for_period(struct bau_control *bcp, struct ptc_stats *stat)
+@@ -785,7 +785,7 @@
cycles_t tm1;
hmaster = bcp->uvhub_master;
if (!bcp->baudisabled) {
stat->s_bau_disabled++;
tm1 = get_cycles();
-@@ -806,7 +806,7 @@ static void disable_for_period(struct bau_control *bcp, struct ptc_stats *stat)
+@@ -798,7 +798,7 @@
}
}
}
}
static void count_max_concurr(int stat, struct bau_control *bcp,
-@@ -869,7 +869,7 @@ static void record_send_stats(cycles_t time1, cycles_t time2,
+@@ -861,7 +861,7 @@
*/
static void uv1_throttle(struct bau_control *hmaster, struct ptc_stats *stat)
{
atomic_t *v;
v = &hmaster->active_descriptor_count;
-@@ -1002,7 +1002,7 @@ static int check_enable(struct bau_control *bcp, struct ptc_stats *stat)
+@@ -995,7 +995,7 @@
struct bau_control *hmaster;
hmaster = bcp->uvhub_master;
if (bcp->baudisabled && (get_cycles() >= bcp->set_bau_on_time)) {
stat->s_bau_reenabled++;
for_each_present_cpu(tcpu) {
-@@ -1014,10 +1014,10 @@ static int check_enable(struct bau_control *bcp, struct ptc_stats *stat)
+@@ -1007,10 +1007,10 @@
tbcp->period_giveups = 0;
}
}
return -1;
}
-@@ -1940,9 +1940,9 @@ static void __init init_per_cpu_tunables(void)
+@@ -1942,9 +1942,9 @@
bcp->cong_reps = congested_reps;
bcp->disabled_period = sec_2_cycles(disabled_period);
bcp->giveup_limit = giveup_limit;
}
}
-diff --git a/arch/x86/platform/uv/uv_time.c b/arch/x86/platform/uv/uv_time.c
-index b333fc45f9ec..8b85916e6986 100644
---- a/arch/x86/platform/uv/uv_time.c
-+++ b/arch/x86/platform/uv/uv_time.c
-@@ -57,7 +57,7 @@ static DEFINE_PER_CPU(struct clock_event_device, cpu_ced);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/x86/platform/uv/uv_time.c linux-4.14/arch/x86/platform/uv/uv_time.c
+--- linux-4.14.orig/arch/x86/platform/uv/uv_time.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/x86/platform/uv/uv_time.c 2018-09-05 11:05:07.000000000 +0200
+@@ -57,7 +57,7 @@
/* There is one of these allocated per node */
struct uv_rtc_timer_head {
/* next cpu waiting for timer, local node relative: */
int next_cpu;
/* number of cpus on this node: */
-@@ -177,7 +177,7 @@ static __init int uv_rtc_allocate_timers(void)
+@@ -177,7 +177,7 @@
uv_rtc_deallocate_timers();
return -ENOMEM;
}
head->ncpus = uv_blade_nr_possible_cpus(bid);
head->next_cpu = -1;
blade_info[bid] = head;
-@@ -231,7 +231,7 @@ static int uv_rtc_set_timer(int cpu, u64 expires)
+@@ -231,7 +231,7 @@
unsigned long flags;
int next_cpu;
next_cpu = head->next_cpu;
*t = expires;
-@@ -243,12 +243,12 @@ static int uv_rtc_set_timer(int cpu, u64 expires)
+@@ -243,12 +243,12 @@
if (uv_setup_intr(cpu, expires)) {
*t = ULLONG_MAX;
uv_rtc_find_next_timer(head, pnode);
return 0;
}
-@@ -267,7 +267,7 @@ static int uv_rtc_unset_timer(int cpu, int force)
+@@ -267,7 +267,7 @@
unsigned long flags;
int rc = 0;
if ((head->next_cpu == bcpu && uv_read_rtc(NULL) >= *t) || force)
rc = 1;
-@@ -279,7 +279,7 @@ static int uv_rtc_unset_timer(int cpu, int force)
+@@ -279,7 +279,7 @@
uv_rtc_find_next_timer(head, pnode);
}
return rc;
}
-@@ -299,13 +299,18 @@ static int uv_rtc_unset_timer(int cpu, int force)
- static cycle_t uv_read_rtc(struct clocksource *cs)
+@@ -299,13 +299,17 @@
+ static u64 uv_read_rtc(struct clocksource *cs)
{
unsigned long offset;
-+ cycle_t cycles;
++ u64 cycles;
+ preempt_disable();
if (uv_get_min_hub_revision_id() == 1)
else
offset = (uv_blade_processor_id() * L1_CACHE_BYTES) % PAGE_SIZE;
-- return (cycle_t)uv_read_local_mmr(UVH_RTC | offset);
-+ cycles = (cycle_t)uv_read_local_mmr(UVH_RTC | offset);
+- return (u64)uv_read_local_mmr(UVH_RTC | offset);
++ cycles = (u64)uv_read_local_mmr(UVH_RTC | offset);
+ preempt_enable();
-+
+ return cycles;
}
/*
-diff --git a/block/blk-core.c b/block/blk-core.c
-index 14d7c0740dc0..dfd905bea77c 100644
---- a/block/blk-core.c
-+++ b/block/blk-core.c
-@@ -125,6 +125,9 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/arch/xtensa/include/asm/spinlock_types.h linux-4.14/arch/xtensa/include/asm/spinlock_types.h
+--- linux-4.14.orig/arch/xtensa/include/asm/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/arch/xtensa/include/asm/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -2,10 +2,6 @@
+ #ifndef __ASM_SPINLOCK_TYPES_H
+ #define __ASM_SPINLOCK_TYPES_H
+
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ typedef struct {
+ volatile unsigned int slock;
+ } arch_spinlock_t;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/block/blk-core.c linux-4.14/block/blk-core.c
+--- linux-4.14.orig/block/blk-core.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/block/blk-core.c 2018-09-05 11:05:07.000000000 +0200
+@@ -116,6 +116,9 @@
INIT_LIST_HEAD(&rq->queuelist);
INIT_LIST_HEAD(&rq->timeout_list);
rq->cpu = -1;
rq->q = q;
rq->__sector = (sector_t) -1;
-@@ -233,7 +236,7 @@ EXPORT_SYMBOL(blk_start_queue_async);
- **/
+@@ -280,7 +283,7 @@
void blk_start_queue(struct request_queue *q)
{
-- WARN_ON(!irqs_disabled());
-+ WARN_ON_NONRT(!irqs_disabled());
+ lockdep_assert_held(q->queue_lock);
+- WARN_ON(!in_interrupt() && !irqs_disabled());
++ WARN_ON_NONRT(!in_interrupt() && !irqs_disabled());
+ WARN_ON_ONCE(q->mq_ops);
queue_flag_clear(QUEUE_FLAG_STOPPED, q);
- __blk_run_queue(q);
-@@ -659,7 +662,7 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
- if (nowait)
- return -EBUSY;
-
-- ret = wait_event_interruptible(q->mq_freeze_wq,
-+ ret = swait_event_interruptible(q->mq_freeze_wq,
- !atomic_read(&q->mq_freeze_depth) ||
- blk_queue_dying(q));
- if (blk_queue_dying(q))
-@@ -679,7 +682,7 @@ static void blk_queue_usage_counter_release(struct percpu_ref *ref)
+@@ -808,12 +811,21 @@
+ percpu_ref_put(&q->q_usage_counter);
+ }
+
++static void blk_queue_usage_counter_release_swork(struct swork_event *sev)
++{
++ struct request_queue *q =
++ container_of(sev, struct request_queue, mq_pcpu_wake);
++
++ wake_up_all(&q->mq_freeze_wq);
++}
++
+ static void blk_queue_usage_counter_release(struct percpu_ref *ref)
+ {
struct request_queue *q =
container_of(ref, struct request_queue, q_usage_counter);
- wake_up_all(&q->mq_freeze_wq);
-+ swake_up_all(&q->mq_freeze_wq);
++ if (wq_has_sleeper(&q->mq_freeze_wq))
++ swork_queue(&q->mq_pcpu_wake);
}
static void blk_rq_timed_out_timer(unsigned long data)
-@@ -748,7 +751,7 @@ struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
- q->bypass_depth = 1;
+@@ -890,6 +902,7 @@
__set_bit(QUEUE_FLAG_BYPASS, &q->queue_flags);
-- init_waitqueue_head(&q->mq_freeze_wq);
-+ init_swait_queue_head(&q->mq_freeze_wq);
+ init_waitqueue_head(&q->mq_freeze_wq);
++ INIT_SWORK(&q->mq_pcpu_wake, blk_queue_usage_counter_release_swork);
/*
* Init percpu_ref in atomic mode so that it's faster to shutdown.
-@@ -3177,7 +3180,7 @@ static void queue_unplugged(struct request_queue *q, unsigned int depth,
+@@ -3308,7 +3321,7 @@
blk_run_queue_async(q);
else
__blk_run_queue(q);
}
static void flush_plug_callbacks(struct blk_plug *plug, bool from_schedule)
-@@ -3225,7 +3228,6 @@ EXPORT_SYMBOL(blk_check_plugged);
+@@ -3356,7 +3369,6 @@
void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
{
struct request_queue *q;
struct request *rq;
LIST_HEAD(list);
unsigned int depth;
-@@ -3245,11 +3247,6 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+@@ -3376,11 +3388,6 @@
q = NULL;
depth = 0;
while (!list_empty(&list)) {
rq = list_entry_rq(list.next);
list_del_init(&rq->queuelist);
-@@ -3262,7 +3259,7 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+@@ -3393,7 +3400,7 @@
queue_unplugged(q, depth, from_schedule);
q = rq->q;
depth = 0;
}
/*
-@@ -3289,8 +3286,6 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+@@ -3420,8 +3427,6 @@
*/
if (q)
queue_unplugged(q, depth, from_schedule);
}
void blk_finish_plug(struct blk_plug *plug)
-diff --git a/block/blk-ioc.c b/block/blk-ioc.c
-index 381cb50a673c..dc8785233d94 100644
---- a/block/blk-ioc.c
-+++ b/block/blk-ioc.c
-@@ -7,6 +7,7 @@
- #include <linux/bio.h>
+@@ -3631,6 +3636,8 @@
+ if (!kblockd_workqueue)
+ panic("Failed to create kblockd\n");
+
++ BUG_ON(swork_get());
++
+ request_cachep = kmem_cache_create("blkdev_requests",
+ sizeof(struct request), 0, SLAB_PANIC, NULL);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/block/blk-ioc.c linux-4.14/block/blk-ioc.c
+--- linux-4.14.orig/block/blk-ioc.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/block/blk-ioc.c 2018-09-05 11:05:07.000000000 +0200
+@@ -9,6 +9,7 @@
#include <linux/blkdev.h>
#include <linux/slab.h>
+ #include <linux/sched/task.h>
+#include <linux/delay.h>
#include "blk.h"
-@@ -109,7 +110,7 @@ static void ioc_release_fn(struct work_struct *work)
+@@ -118,7 +119,7 @@
spin_unlock(q->queue_lock);
} else {
spin_unlock_irqrestore(&ioc->lock, flags);
spin_lock_irqsave_nested(&ioc->lock, flags, 1);
}
}
-@@ -187,7 +188,7 @@ void put_io_context_active(struct io_context *ioc)
- spin_unlock(icq->q->queue_lock);
- } else {
- spin_unlock_irqrestore(&ioc->lock, flags);
-- cpu_relax();
-+ cpu_chill();
- goto retry;
+@@ -202,7 +203,7 @@
+ spin_unlock(icq->q->queue_lock);
+ } else {
+ spin_unlock_irqrestore(&ioc->lock, flags);
+- cpu_relax();
++ cpu_chill();
+ goto retry;
+ }
}
- }
-diff --git a/block/blk-mq.c b/block/blk-mq.c
-index 81caceb96c3c..b12b0ab005a9 100644
---- a/block/blk-mq.c
-+++ b/block/blk-mq.c
-@@ -72,7 +72,7 @@ EXPORT_SYMBOL_GPL(blk_mq_freeze_queue_start);
-
- static void blk_mq_freeze_queue_wait(struct request_queue *q)
- {
-- wait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
-+ swait_event(q->mq_freeze_wq, percpu_ref_is_zero(&q->q_usage_counter));
- }
-
- /*
-@@ -110,7 +110,7 @@ void blk_mq_unfreeze_queue(struct request_queue *q)
- WARN_ON_ONCE(freeze_depth < 0);
- if (!freeze_depth) {
- percpu_ref_reinit(&q->q_usage_counter);
-- wake_up_all(&q->mq_freeze_wq);
-+ swake_up_all(&q->mq_freeze_wq);
- }
- }
- EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
-@@ -129,7 +129,7 @@ void blk_mq_wake_waiters(struct request_queue *q)
- * dying, we need to ensure that processes currently waiting on
- * the queue are notified as well.
- */
-- wake_up_all(&q->mq_freeze_wq);
-+ swake_up_all(&q->mq_freeze_wq);
- }
-
- bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
-@@ -177,6 +177,9 @@ static void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
- rq->resid_len = 0;
- rq->sense = NULL;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/block/blk-mq.c linux-4.14/block/blk-mq.c
+--- linux-4.14.orig/block/blk-mq.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/block/blk-mq.c 2018-09-05 11:05:07.000000000 +0200
+@@ -339,6 +339,9 @@
+ /* tag was already set */
+ rq->extra_len = 0;
+#ifdef CONFIG_PREEMPT_RT_FULL
+ INIT_WORK(&rq->work, __blk_mq_complete_request_remote_work);
INIT_LIST_HEAD(&rq->timeout_list);
rq->timeout = 0;
-@@ -345,6 +348,17 @@ void blk_mq_end_request(struct request *rq, int error)
+@@ -533,12 +536,24 @@
}
EXPORT_SYMBOL(blk_mq_end_request);
static void __blk_mq_complete_request_remote(void *data)
{
struct request *rq = data;
-@@ -352,6 +366,8 @@ static void __blk_mq_complete_request_remote(void *data)
+
rq->q->softirq_done_fn(rq);
}
-
+#endif
-+
- static void blk_mq_ipi_complete_request(struct request *rq)
+
+ static void __blk_mq_complete_request(struct request *rq)
{
- struct blk_mq_ctx *ctx = rq->mq_ctx;
-@@ -363,19 +379,23 @@ static void blk_mq_ipi_complete_request(struct request *rq)
+@@ -558,19 +573,27 @@
return;
}
if (cpu != ctx->cpu && !shared && cpu_online(ctx->cpu)) {
+#ifdef CONFIG_PREEMPT_RT_FULL
++ /*
++ * We could force QUEUE_FLAG_SAME_FORCE then we would not get in
++ * here. But we could try to invoke it on the CPU like this.
++ */
+ schedule_work_on(ctx->cpu, &rq->work);
+#else
rq->csd.func = __blk_mq_complete_request_remote;
+ put_cpu_light();
}
- static void __blk_mq_complete_request(struct request *rq)
-@@ -915,14 +935,14 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
+ /**
+@@ -1238,14 +1261,14 @@
return;
if (!async && !(hctx->flags & BLK_MQ_F_BLOCKING)) {
+ put_cpu_light();
}
- kblockd_schedule_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work);
-diff --git a/block/blk-mq.h b/block/blk-mq.h
-index e5d25249028c..1e846b842eab 100644
---- a/block/blk-mq.h
-+++ b/block/blk-mq.h
-@@ -72,12 +72,12 @@ static inline struct blk_mq_ctx *__blk_mq_get_ctx(struct request_queue *q,
+ kblockd_schedule_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
+@@ -2863,10 +2886,9 @@
+ kt = nsecs;
+
+ mode = HRTIMER_MODE_REL;
+- hrtimer_init_on_stack(&hs.timer, CLOCK_MONOTONIC, mode);
++ hrtimer_init_sleeper_on_stack(&hs, CLOCK_MONOTONIC, mode, current);
+ hrtimer_set_expires(&hs.timer, kt);
+
+- hrtimer_init_sleeper(&hs, current);
+ do {
+ if (test_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags))
+ break;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/block/blk-mq.h linux-4.14/block/blk-mq.h
+--- linux-4.14.orig/block/blk-mq.h 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/block/blk-mq.h 2018-09-05 11:05:07.000000000 +0200
+@@ -98,12 +98,12 @@
*/
static inline struct blk_mq_ctx *blk_mq_get_ctx(struct request_queue *q)
{
}
struct blk_mq_alloc_data {
-diff --git a/block/blk-softirq.c b/block/blk-softirq.c
-index 06cf9807f49a..c40342643ca0 100644
---- a/block/blk-softirq.c
-+++ b/block/blk-softirq.c
-@@ -51,6 +51,7 @@ static void trigger_softirq(void *data)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/block/blk-softirq.c linux-4.14/block/blk-softirq.c
+--- linux-4.14.orig/block/blk-softirq.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/block/blk-softirq.c 2018-09-05 11:05:07.000000000 +0200
+@@ -53,6 +53,7 @@
raise_softirq_irqoff(BLOCK_SOFTIRQ);
local_irq_restore(flags);
}
/*
-@@ -89,6 +90,7 @@ static int blk_softirq_cpu_dead(unsigned int cpu)
+@@ -91,6 +92,7 @@
this_cpu_ptr(&blk_cpu_done));
raise_softirq_irqoff(BLOCK_SOFTIRQ);
local_irq_enable();
return 0;
}
-@@ -141,6 +143,7 @@ void __blk_complete_request(struct request *req)
+@@ -143,6 +145,7 @@
goto do_local;
local_irq_restore(flags);
}
/**
-diff --git a/block/bounce.c b/block/bounce.c
-index 1cb5dd3a5da1..2f1ec8a67cbe 100644
---- a/block/bounce.c
-+++ b/block/bounce.c
-@@ -55,11 +55,11 @@ static void bounce_copy_vec(struct bio_vec *to, unsigned char *vfrom)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/block/bounce.c linux-4.14/block/bounce.c
+--- linux-4.14.orig/block/bounce.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/block/bounce.c 2018-09-05 11:05:07.000000000 +0200
+@@ -66,11 +66,11 @@
unsigned long flags;
unsigned char *vto;
}
#else /* CONFIG_HIGHMEM */
-diff --git a/crypto/algapi.c b/crypto/algapi.c
-index df939b54b09f..efe5e06adcf7 100644
---- a/crypto/algapi.c
-+++ b/crypto/algapi.c
-@@ -718,13 +718,13 @@ EXPORT_SYMBOL_GPL(crypto_spawn_tfm2);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/crypto/algapi.c linux-4.14/crypto/algapi.c
+--- linux-4.14.orig/crypto/algapi.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/crypto/algapi.c 2018-09-05 11:05:07.000000000 +0200
+@@ -731,13 +731,13 @@
int crypto_register_notifier(struct notifier_block *nb)
{
}
EXPORT_SYMBOL_GPL(crypto_unregister_notifier);
-diff --git a/crypto/api.c b/crypto/api.c
-index bbc147cb5dec..bc1a848f02ec 100644
---- a/crypto/api.c
-+++ b/crypto/api.c
-@@ -31,7 +31,7 @@ EXPORT_SYMBOL_GPL(crypto_alg_list);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/crypto/api.c linux-4.14/crypto/api.c
+--- linux-4.14.orig/crypto/api.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/crypto/api.c 2018-09-05 11:05:07.000000000 +0200
+@@ -31,7 +31,7 @@
DECLARE_RWSEM(crypto_alg_sem);
EXPORT_SYMBOL_GPL(crypto_alg_sem);
EXPORT_SYMBOL_GPL(crypto_chain);
static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg);
-@@ -236,10 +236,10 @@ int crypto_probing_notify(unsigned long val, void *v)
+@@ -236,10 +236,10 @@
{
int ok;
}
return ok;
-diff --git a/crypto/internal.h b/crypto/internal.h
-index 7eefcdb00227..0ecc7f5a2f40 100644
---- a/crypto/internal.h
-+++ b/crypto/internal.h
-@@ -47,7 +47,7 @@ struct crypto_larval {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/crypto/internal.h linux-4.14/crypto/internal.h
+--- linux-4.14.orig/crypto/internal.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/crypto/internal.h 2018-09-05 11:05:07.000000000 +0200
+@@ -47,7 +47,7 @@
extern struct list_head crypto_alg_list;
extern struct rw_semaphore crypto_alg_sem;
#ifdef CONFIG_PROC_FS
void __init crypto_init_proc(void);
-@@ -146,7 +146,7 @@ static inline int crypto_is_moribund(struct crypto_alg *alg)
+@@ -143,7 +143,7 @@
static inline void crypto_notify(unsigned long val, void *v)
{
}
#endif /* _CRYPTO_INTERNAL_H */
-diff --git a/drivers/acpi/acpica/acglobal.h b/drivers/acpi/acpica/acglobal.h
-index 750fa824d42c..441edf51484a 100644
---- a/drivers/acpi/acpica/acglobal.h
-+++ b/drivers/acpi/acpica/acglobal.h
-@@ -116,7 +116,7 @@ ACPI_GLOBAL(u8, acpi_gbl_global_lock_pending);
- * interrupt level
- */
- ACPI_GLOBAL(acpi_spinlock, acpi_gbl_gpe_lock); /* For GPE data structs and registers */
--ACPI_GLOBAL(acpi_spinlock, acpi_gbl_hardware_lock); /* For ACPI H/W except GPE registers */
-+ACPI_GLOBAL(acpi_raw_spinlock, acpi_gbl_hardware_lock); /* For ACPI H/W except GPE registers */
- ACPI_GLOBAL(acpi_spinlock, acpi_gbl_reference_count_lock);
-
- /* Mutex for _OSI support */
-diff --git a/drivers/acpi/acpica/hwregs.c b/drivers/acpi/acpica/hwregs.c
-index 3b7fb99362b6..696bf8e62afb 100644
---- a/drivers/acpi/acpica/hwregs.c
-+++ b/drivers/acpi/acpica/hwregs.c
-@@ -363,14 +363,14 @@ acpi_status acpi_hw_clear_acpi_status(void)
- ACPI_BITMASK_ALL_FIXED_STATUS,
- ACPI_FORMAT_UINT64(acpi_gbl_xpm1a_status.address)));
-
-- lock_flags = acpi_os_acquire_lock(acpi_gbl_hardware_lock);
-+ raw_spin_lock_irqsave(acpi_gbl_hardware_lock, lock_flags);
-
- /* Clear the fixed events in PM1 A/B */
-
- status = acpi_hw_register_write(ACPI_REGISTER_PM1_STATUS,
- ACPI_BITMASK_ALL_FIXED_STATUS);
-
-- acpi_os_release_lock(acpi_gbl_hardware_lock, lock_flags);
-+ raw_spin_unlock_irqrestore(acpi_gbl_hardware_lock, lock_flags);
-
- if (ACPI_FAILURE(status)) {
- goto exit;
-diff --git a/drivers/acpi/acpica/hwxface.c b/drivers/acpi/acpica/hwxface.c
-index 98c26ff39409..6e236f2ea791 100644
---- a/drivers/acpi/acpica/hwxface.c
-+++ b/drivers/acpi/acpica/hwxface.c
-@@ -373,7 +373,7 @@ acpi_status acpi_write_bit_register(u32 register_id, u32 value)
- return_ACPI_STATUS(AE_BAD_PARAMETER);
- }
-
-- lock_flags = acpi_os_acquire_lock(acpi_gbl_hardware_lock);
-+ raw_spin_lock_irqsave(acpi_gbl_hardware_lock, lock_flags);
-
- /*
- * At this point, we know that the parent register is one of the
-@@ -434,7 +434,7 @@ acpi_status acpi_write_bit_register(u32 register_id, u32 value)
-
- unlock_and_exit:
-
-- acpi_os_release_lock(acpi_gbl_hardware_lock, lock_flags);
-+ raw_spin_unlock_irqrestore(acpi_gbl_hardware_lock, lock_flags);
- return_ACPI_STATUS(status);
- }
-
-diff --git a/drivers/acpi/acpica/utmutex.c b/drivers/acpi/acpica/utmutex.c
-index 15073375bd00..357e7ca5a587 100644
---- a/drivers/acpi/acpica/utmutex.c
-+++ b/drivers/acpi/acpica/utmutex.c
-@@ -88,7 +88,7 @@ acpi_status acpi_ut_mutex_initialize(void)
- return_ACPI_STATUS (status);
- }
-
-- status = acpi_os_create_lock (&acpi_gbl_hardware_lock);
-+ status = acpi_os_create_raw_lock (&acpi_gbl_hardware_lock);
- if (ACPI_FAILURE (status)) {
- return_ACPI_STATUS (status);
- }
-@@ -145,7 +145,7 @@ void acpi_ut_mutex_terminate(void)
- /* Delete the spinlocks */
-
- acpi_os_delete_lock(acpi_gbl_gpe_lock);
-- acpi_os_delete_lock(acpi_gbl_hardware_lock);
-+ acpi_os_delete_raw_lock(acpi_gbl_hardware_lock);
- acpi_os_delete_lock(acpi_gbl_reference_count_lock);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/Documentation/trace/events.txt linux-4.14/Documentation/trace/events.txt
+--- linux-4.14.orig/Documentation/trace/events.txt 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/Documentation/trace/events.txt 2018-09-05 11:05:07.000000000 +0200
+@@ -517,1550 +517,4 @@
+ totals derived from one or more trace event format fields and/or
+ event counts (hitcount).
+
+- The format of a hist trigger is as follows:
+-
+- hist:keys=<field1[,field2,...]>[:values=<field1[,field2,...]>]
+- [:sort=<field1[,field2,...]>][:size=#entries][:pause][:continue]
+- [:clear][:name=histname1] [if <filter>]
+-
+- When a matching event is hit, an entry is added to a hash table
+- using the key(s) and value(s) named. Keys and values correspond to
+- fields in the event's format description. Values must correspond to
+- numeric fields - on an event hit, the value(s) will be added to a
+- sum kept for that field. The special string 'hitcount' can be used
+- in place of an explicit value field - this is simply a count of
+- event hits. If 'values' isn't specified, an implicit 'hitcount'
+- value will be automatically created and used as the only value.
+- Keys can be any field, or the special string 'stacktrace', which
+- will use the event's kernel stacktrace as the key. The keywords
+- 'keys' or 'key' can be used to specify keys, and the keywords
+- 'values', 'vals', or 'val' can be used to specify values. Compound
+- keys consisting of up to two fields can be specified by the 'keys'
+- keyword. Hashing a compound key produces a unique entry in the
+- table for each unique combination of component keys, and can be
+- useful for providing more fine-grained summaries of event data.
+- Additionally, sort keys consisting of up to two fields can be
+- specified by the 'sort' keyword. If more than one field is
+- specified, the result will be a 'sort within a sort': the first key
+- is taken to be the primary sort key and the second the secondary
+- key. If a hist trigger is given a name using the 'name' parameter,
+- its histogram data will be shared with other triggers of the same
+- name, and trigger hits will update this common data. Only triggers
+- with 'compatible' fields can be combined in this way; triggers are
+- 'compatible' if the fields named in the trigger share the same
+- number and type of fields and those fields also have the same names.
+- Note that any two events always share the compatible 'hitcount' and
+- 'stacktrace' fields and can therefore be combined using those
+- fields, however pointless that may be.
+-
+- 'hist' triggers add a 'hist' file to each event's subdirectory.
+- Reading the 'hist' file for the event will dump the hash table in
+- its entirety to stdout. If there are multiple hist triggers
+- attached to an event, there will be a table for each trigger in the
+- output. The table displayed for a named trigger will be the same as
+- any other instance having the same name. Each printed hash table
+- entry is a simple list of the keys and values comprising the entry;
+- keys are printed first and are delineated by curly braces, and are
+- followed by the set of value fields for the entry. By default,
+- numeric fields are displayed as base-10 integers. This can be
+- modified by appending any of the following modifiers to the field
+- name:
+-
+- .hex display a number as a hex value
+- .sym display an address as a symbol
+- .sym-offset display an address as a symbol and offset
+- .syscall display a syscall id as a system call name
+- .execname display a common_pid as a program name
+-
+- Note that in general the semantics of a given field aren't
+- interpreted when applying a modifier to it, but there are some
+- restrictions to be aware of in this regard:
+-
+- - only the 'hex' modifier can be used for values (because values
+- are essentially sums, and the other modifiers don't make sense
+- in that context).
+- - the 'execname' modifier can only be used on a 'common_pid'. The
+- reason for this is that the execname is simply the 'comm' value
+- saved for the 'current' process when an event was triggered,
+- which is the same as the common_pid value saved by the event
+- tracing code. Trying to apply that comm value to other pid
+- values wouldn't be correct, and typically events that care save
+- pid-specific comm fields in the event itself.
+-
+- A typical usage scenario would be the following to enable a hist
+- trigger, read its current contents, and then turn it off:
+-
+- # echo 'hist:keys=skbaddr.hex:vals=len' > \
+- /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
+-
+- # echo '!hist:keys=skbaddr.hex:vals=len' > \
+- /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+-
+- The trigger file itself can be read to show the details of the
+- currently attached hist trigger. This information is also displayed
+- at the top of the 'hist' file when read.
+-
+- By default, the size of the hash table is 2048 entries. The 'size'
+- parameter can be used to specify more or fewer than that. The units
+- are in terms of hashtable entries - if a run uses more entries than
+- specified, the results will show the number of 'drops', the number
+- of hits that were ignored. The size should be a power of 2 between
+- 128 and 131072 (any non- power-of-2 number specified will be rounded
+- up).
+-
+- The 'sort' parameter can be used to specify a value field to sort
+- on. The default if unspecified is 'hitcount' and the default sort
+- order is 'ascending'. To sort in the opposite direction, append
+- .descending' to the sort key.
+-
+- The 'pause' parameter can be used to pause an existing hist trigger
+- or to start a hist trigger but not log any events until told to do
+- so. 'continue' or 'cont' can be used to start or restart a paused
+- hist trigger.
+-
+- The 'clear' parameter will clear the contents of a running hist
+- trigger and leave its current paused/active state.
+-
+- Note that the 'pause', 'cont', and 'clear' parameters should be
+- applied using 'append' shell operator ('>>') if applied to an
+- existing trigger, rather than via the '>' operator, which will cause
+- the trigger to be removed through truncation.
+-
+-- enable_hist/disable_hist
+-
+- The enable_hist and disable_hist triggers can be used to have one
+- event conditionally start and stop another event's already-attached
+- hist trigger. Any number of enable_hist and disable_hist triggers
+- can be attached to a given event, allowing that event to kick off
+- and stop aggregations on a host of other events.
+-
+- The format is very similar to the enable/disable_event triggers:
+-
+- enable_hist:<system>:<event>[:count]
+- disable_hist:<system>:<event>[:count]
+-
+- Instead of enabling or disabling the tracing of the target event
+- into the trace buffer as the enable/disable_event triggers do, the
+- enable/disable_hist triggers enable or disable the aggregation of
+- the target event into a hash table.
+-
+- A typical usage scenario for the enable_hist/disable_hist triggers
+- would be to first set up a paused hist trigger on some event,
+- followed by an enable_hist/disable_hist pair that turns the hist
+- aggregation on and off when conditions of interest are hit:
+-
+- # echo 'hist:keys=skbaddr.hex:vals=len:pause' > \
+- /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+-
+- # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
+- /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+-
+- # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
+- /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+-
+- The above sets up an initially paused hist trigger which is unpaused
+- and starts aggregating events when a given program is executed, and
+- which stops aggregating when the process exits and the hist trigger
+- is paused again.
+-
+- The examples below provide a more concrete illustration of the
+- concepts and typical usage patterns discussed above.
+-
+-
+-6.2 'hist' trigger examples
+----------------------------
+-
+- The first set of examples creates aggregations using the kmalloc
+- event. The fields that can be used for the hist trigger are listed
+- in the kmalloc event's format file:
+-
+- # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/format
+- name: kmalloc
+- ID: 374
+- format:
+- field:unsigned short common_type; offset:0; size:2; signed:0;
+- field:unsigned char common_flags; offset:2; size:1; signed:0;
+- field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
+- field:int common_pid; offset:4; size:4; signed:1;
+-
+- field:unsigned long call_site; offset:8; size:8; signed:0;
+- field:const void * ptr; offset:16; size:8; signed:0;
+- field:size_t bytes_req; offset:24; size:8; signed:0;
+- field:size_t bytes_alloc; offset:32; size:8; signed:0;
+- field:gfp_t gfp_flags; offset:40; size:4; signed:0;
+-
+- We'll start by creating a hist trigger that generates a simple table
+- that lists the total number of bytes requested for each function in
+- the kernel that made one or more calls to kmalloc:
+-
+- # echo 'hist:key=call_site:val=bytes_req' > \
+- /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+-
+- This tells the tracing system to create a 'hist' trigger using the
+- call_site field of the kmalloc event as the key for the table, which
+- just means that each unique call_site address will have an entry
+- created for it in the table. The 'val=bytes_req' parameter tells
+- the hist trigger that for each unique entry (call_site) in the
+- table, it should keep a running total of the number of bytes
+- requested by that call_site.
+-
+- We'll let it run for awhile and then dump the contents of the 'hist'
+- file in the kmalloc event's subdirectory (for readability, a number
+- of entries have been omitted):
+-
+- # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+- # trigger info: hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
+-
+- { call_site: 18446744072106379007 } hitcount: 1 bytes_req: 176
+- { call_site: 18446744071579557049 } hitcount: 1 bytes_req: 1024
+- { call_site: 18446744071580608289 } hitcount: 1 bytes_req: 16384
+- { call_site: 18446744071581827654 } hitcount: 1 bytes_req: 24
+- { call_site: 18446744071580700980 } hitcount: 1 bytes_req: 8
+- { call_site: 18446744071579359876 } hitcount: 1 bytes_req: 152
+- { call_site: 18446744071580795365 } hitcount: 3 bytes_req: 144
+- { call_site: 18446744071581303129 } hitcount: 3 bytes_req: 144
+- { call_site: 18446744071580713234 } hitcount: 4 bytes_req: 2560
+- { call_site: 18446744071580933750 } hitcount: 4 bytes_req: 736
+- .
+- .
+- .
+- { call_site: 18446744072106047046 } hitcount: 69 bytes_req: 5576
+- { call_site: 18446744071582116407 } hitcount: 73 bytes_req: 2336
+- { call_site: 18446744072106054684 } hitcount: 136 bytes_req: 140504
+- { call_site: 18446744072106224230 } hitcount: 136 bytes_req: 19584
+- { call_site: 18446744072106078074 } hitcount: 153 bytes_req: 2448
+- { call_site: 18446744072106062406 } hitcount: 153 bytes_req: 36720
+- { call_site: 18446744071582507929 } hitcount: 153 bytes_req: 37088
+- { call_site: 18446744072102520590 } hitcount: 273 bytes_req: 10920
+- { call_site: 18446744071582143559 } hitcount: 358 bytes_req: 716
+- { call_site: 18446744072106465852 } hitcount: 417 bytes_req: 56712
+- { call_site: 18446744072102523378 } hitcount: 485 bytes_req: 27160
+- { call_site: 18446744072099568646 } hitcount: 1676 bytes_req: 33520
+-
+- Totals:
+- Hits: 4610
+- Entries: 45
+- Dropped: 0
+-
+- The output displays a line for each entry, beginning with the key
+- specified in the trigger, followed by the value(s) also specified in
+- the trigger. At the beginning of the output is a line that displays
+- the trigger info, which can also be displayed by reading the
+- 'trigger' file:
+-
+- # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+- hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
+-
+- At the end of the output are a few lines that display the overall
+- totals for the run. The 'Hits' field shows the total number of
+- times the event trigger was hit, the 'Entries' field shows the total
+- number of used entries in the hash table, and the 'Dropped' field
+- shows the number of hits that were dropped because the number of
+- used entries for the run exceeded the maximum number of entries
+- allowed for the table (normally 0, but if not a hint that you may
+- want to increase the size of the table using the 'size' parameter).
+-
+- Notice in the above output that there's an extra field, 'hitcount',
+- which wasn't specified in the trigger. Also notice that in the
+- trigger info output, there's a parameter, 'sort=hitcount', which
+- wasn't specified in the trigger either. The reason for that is that
+- every trigger implicitly keeps a count of the total number of hits
+- attributed to a given entry, called the 'hitcount'. That hitcount
+- information is explicitly displayed in the output, and in the
+- absence of a user-specified sort parameter, is used as the default
+- sort field.
+-
+- The value 'hitcount' can be used in place of an explicit value in
+- the 'values' parameter if you don't really need to have any
+- particular field summed and are mainly interested in hit
+- frequencies.
+-
+- To turn the hist trigger off, simply call up the trigger in the
+- command history and re-execute it with a '!' prepended:
+-
+- # echo '!hist:key=call_site:val=bytes_req' > \
+- /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+-
+- Finally, notice that the call_site as displayed in the output above
+- isn't really very useful. It's an address, but normally addresses
+- are displayed in hex. To have a numeric field displayed as a hex
+- value, simply append '.hex' to the field name in the trigger:
+-
+- # echo 'hist:key=call_site.hex:val=bytes_req' > \
+- /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+- # trigger info: hist:keys=call_site.hex:vals=bytes_req:sort=hitcount:size=2048 [active]
+-
+- { call_site: ffffffffa026b291 } hitcount: 1 bytes_req: 433
+- { call_site: ffffffffa07186ff } hitcount: 1 bytes_req: 176
+- { call_site: ffffffff811ae721 } hitcount: 1 bytes_req: 16384
+- { call_site: ffffffff811c5134 } hitcount: 1 bytes_req: 8
+- { call_site: ffffffffa04a9ebb } hitcount: 1 bytes_req: 511
+- { call_site: ffffffff8122e0a6 } hitcount: 1 bytes_req: 12
+- { call_site: ffffffff8107da84 } hitcount: 1 bytes_req: 152
+- { call_site: ffffffff812d8246 } hitcount: 1 bytes_req: 24
+- { call_site: ffffffff811dc1e5 } hitcount: 3 bytes_req: 144
+- { call_site: ffffffffa02515e8 } hitcount: 3 bytes_req: 648
+- { call_site: ffffffff81258159 } hitcount: 3 bytes_req: 144
+- { call_site: ffffffff811c80f4 } hitcount: 4 bytes_req: 544
+- .
+- .
+- .
+- { call_site: ffffffffa06c7646 } hitcount: 106 bytes_req: 8024
+- { call_site: ffffffffa06cb246 } hitcount: 132 bytes_req: 31680
+- { call_site: ffffffffa06cef7a } hitcount: 132 bytes_req: 2112
+- { call_site: ffffffff8137e399 } hitcount: 132 bytes_req: 23232
+- { call_site: ffffffffa06c941c } hitcount: 185 bytes_req: 171360
+- { call_site: ffffffffa06f2a66 } hitcount: 185 bytes_req: 26640
+- { call_site: ffffffffa036a70e } hitcount: 265 bytes_req: 10600
+- { call_site: ffffffff81325447 } hitcount: 292 bytes_req: 584
+- { call_site: ffffffffa072da3c } hitcount: 446 bytes_req: 60656
+- { call_site: ffffffffa036b1f2 } hitcount: 526 bytes_req: 29456
+- { call_site: ffffffffa0099c06 } hitcount: 1780 bytes_req: 35600
+-
+- Totals:
+- Hits: 4775
+- Entries: 46
+- Dropped: 0
+-
+- Even that's only marginally more useful - while hex values do look
+- more like addresses, what users are typically more interested in
+- when looking at text addresses are the corresponding symbols
+- instead. To have an address displayed as symbolic value instead,
+- simply append '.sym' or '.sym-offset' to the field name in the
+- trigger:
+-
+- # echo 'hist:key=call_site.sym:val=bytes_req' > \
+- /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+- # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=hitcount:size=2048 [active]
+-
+- { call_site: [ffffffff810adcb9] syslog_print_all } hitcount: 1 bytes_req: 1024
+- { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
+- { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7
+- { call_site: [ffffffff8154acbe] usb_alloc_urb } hitcount: 1 bytes_req: 192
+- { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
+- { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
+- { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
+- { call_site: [ffffffff811febd5] fsnotify_alloc_group } hitcount: 2 bytes_req: 528
+- { call_site: [ffffffff81440f58] __tty_buffer_request_room } hitcount: 2 bytes_req: 2624
+- { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 2 bytes_req: 96
+- { call_site: [ffffffffa05e19af] ieee80211_start_tx_ba_session [mac80211] } hitcount: 2 bytes_req: 464
+- { call_site: [ffffffff81672406] tcp_get_metrics } hitcount: 2 bytes_req: 304
+- { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
+- { call_site: [ffffffff81089b05] sched_create_group } hitcount: 2 bytes_req: 1424
+- .
+- .
+- .
+- { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1185 bytes_req: 123240
+- { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 1185 bytes_req: 104280
+- { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 1402 bytes_req: 190672
+- { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 1518 bytes_req: 146208
+- { call_site: [ffffffffa029070e] drm_vma_node_allow [drm] } hitcount: 1746 bytes_req: 69840
+- { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 2021 bytes_req: 792312
+- { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 2592 bytes_req: 145152
+- { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2629 bytes_req: 378576
+- { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2629 bytes_req: 3783248
+- { call_site: [ffffffff81325607] apparmor_file_alloc_security } hitcount: 5192 bytes_req: 10384
+- { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 5529 bytes_req: 110584
+- { call_site: [ffffffff8131ebf7] aa_alloc_task_context } hitcount: 21943 bytes_req: 702176
+- { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 55759 bytes_req: 5074265
+-
+- Totals:
+- Hits: 109928
+- Entries: 71
+- Dropped: 0
+-
+- Because the default sort key above is 'hitcount', the above shows
+- the list of call_sites by increasing hitcount, so that at the bottom
+- we see the functions that made the most kmalloc calls during the
+- run. If instead we wanted to see the top kmalloc callers in terms
+- of the number of bytes requested rather than the number of calls,
+- and we wanted the top caller to appear at the top, we can use the
+- 'sort' parameter, along with the 'descending' modifier:
+-
+- # echo 'hist:key=call_site.sym:val=bytes_req:sort=bytes_req.descending' > \
+- /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+- # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]
+-
+- { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2186 bytes_req: 3397464
+- { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1790 bytes_req: 712176
+- { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 8132 bytes_req: 513135
+- { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 106 bytes_req: 440128
+- { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2186 bytes_req: 314784
+- { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 2174 bytes_req: 208992
+- { call_site: [ffffffff811ae8e1] __kmalloc } hitcount: 8 bytes_req: 131072
+- { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 859 bytes_req: 116824
+- { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 1834 bytes_req: 102704
+- { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 972 bytes_req: 101088
+- { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 972 bytes_req: 85536
+- { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 3333 bytes_req: 66664
+- { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 209 bytes_req: 61632
+- .
+- .
+- .
+- { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
+- { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
+- { call_site: [ffffffff812d8406] copy_semundo } hitcount: 2 bytes_req: 48
+- { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 1 bytes_req: 48
+- { call_site: [ffffffffa027121a] drm_getmagic [drm] } hitcount: 1 bytes_req: 48
+- { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
+- { call_site: [ffffffff811c52f4] bprm_change_interp } hitcount: 2 bytes_req: 16
+- { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
+- { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
+- { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7
+-
+- Totals:
+- Hits: 32133
+- Entries: 81
+- Dropped: 0
+-
+- To display the offset and size information in addition to the symbol
+- name, just use 'sym-offset' instead:
+-
+- # echo 'hist:key=call_site.sym-offset:val=bytes_req:sort=bytes_req.descending' > \
+- /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+- # trigger info: hist:keys=call_site.sym-offset:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]
+-
+- { call_site: [ffffffffa046041c] i915_gem_execbuffer2+0x6c/0x2c0 [i915] } hitcount: 4569 bytes_req: 3163720
+- { call_site: [ffffffffa0489a66] intel_ring_begin+0xc6/0x1f0 [i915] } hitcount: 4569 bytes_req: 657936
+- { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23+0x694/0x1020 [i915] } hitcount: 1519 bytes_req: 472936
+- { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23+0x516/0x1020 [i915] } hitcount: 3050 bytes_req: 211832
+- { call_site: [ffffffff811e2a1b] seq_buf_alloc+0x1b/0x50 } hitcount: 34 bytes_req: 148384
+- { call_site: [ffffffffa04a580c] intel_crtc_page_flip+0xbc/0x870 [i915] } hitcount: 1385 bytes_req: 144040
+- { call_site: [ffffffff811ae8e1] __kmalloc+0x191/0x1b0 } hitcount: 8 bytes_req: 131072
+- { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl+0x282/0x360 [drm] } hitcount: 1385 bytes_req: 121880
+- { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc+0x32/0x100 [drm] } hitcount: 1848 bytes_req: 103488
+- { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state+0x2c/0xa0 [i915] } hitcount: 461 bytes_req: 62696
+- { call_site: [ffffffffa029070e] drm_vma_node_allow+0x2e/0xd0 [drm] } hitcount: 1541 bytes_req: 61640
+- { call_site: [ffffffff815f8d7b] sk_prot_alloc+0xcb/0x1b0 } hitcount: 57 bytes_req: 57456
+- .
+- .
+- .
+- { call_site: [ffffffff8109524a] alloc_fair_sched_group+0x5a/0x1a0 } hitcount: 2 bytes_req: 128
+- { call_site: [ffffffffa027b921] drm_vm_open_locked+0x31/0xa0 [drm] } hitcount: 3 bytes_req: 96
+- { call_site: [ffffffff8122e266] proc_self_follow_link+0x76/0xb0 } hitcount: 8 bytes_req: 96
+- { call_site: [ffffffff81213e80] load_elf_binary+0x240/0x1650 } hitcount: 3 bytes_req: 84
+- { call_site: [ffffffff8154bc62] usb_control_msg+0x42/0x110 } hitcount: 1 bytes_req: 8
+- { call_site: [ffffffffa00bf6fe] hidraw_send_report+0x7e/0x1a0 [hid] } hitcount: 1 bytes_req: 7
+- { call_site: [ffffffffa00bf1ca] hidraw_report_event+0x8a/0x120 [hid] } hitcount: 1 bytes_req: 7
+-
+- Totals:
+- Hits: 26098
+- Entries: 64
+- Dropped: 0
+-
+- We can also add multiple fields to the 'values' parameter. For
+- example, we might want to see the total number of bytes allocated
+- alongside bytes requested, and display the result sorted by bytes
+- allocated in descending order:
+-
+- # echo 'hist:keys=call_site.sym:values=bytes_req,bytes_alloc:sort=bytes_alloc.descending' > \
+- /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+- # trigger info: hist:keys=call_site.sym:vals=bytes_req,bytes_alloc:sort=bytes_alloc.descending:size=2048 [active]
+-
+- { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 7403 bytes_req: 4084360 bytes_alloc: 5958016
+- { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 541 bytes_req: 2213968 bytes_alloc: 2228224
+- { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 7404 bytes_req: 1066176 bytes_alloc: 1421568
+- { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1565 bytes_req: 557368 bytes_alloc: 1037760
+- { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 9557 bytes_req: 595778 bytes_alloc: 695744
+- { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 5839 bytes_req: 430680 bytes_alloc: 470400
+- { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 2388 bytes_req: 324768 bytes_alloc: 458496
+- { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 3911 bytes_req: 219016 bytes_alloc: 250304
+- { call_site: [ffffffff815f8d7b] sk_prot_alloc } hitcount: 235 bytes_req: 236880 bytes_alloc: 240640
+- { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 557 bytes_req: 169024 bytes_alloc: 221760
+- { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 9378 bytes_req: 187548 bytes_alloc: 206312
+- { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1519 bytes_req: 157976 bytes_alloc: 194432
+- .
+- .
+- .
+- { call_site: [ffffffff8109bd3b] sched_autogroup_create_attach } hitcount: 2 bytes_req: 144 bytes_alloc: 192
+- { call_site: [ffffffff81097ee8] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+- { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+- { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+- { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+- { call_site: [ffffffff81213e80] load_elf_binary } hitcount: 3 bytes_req: 84 bytes_alloc: 96
+- { call_site: [ffffffff81079a2e] kthread_create_on_node } hitcount: 1 bytes_req: 56 bytes_alloc: 64
+- { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8
+- { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8 bytes_alloc: 8
+- { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8
+-
+- Totals:
+- Hits: 66598
+- Entries: 65
+- Dropped: 0
+-
+- Finally, to finish off our kmalloc example, instead of simply having
+- the hist trigger display symbolic call_sites, we can have the hist
+- trigger additionally display the complete set of kernel stack traces
+- that led to each call_site. To do that, we simply use the special
+- value 'stacktrace' for the key parameter:
+-
+- # echo 'hist:keys=stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
+- /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+-
+- The above trigger will use the kernel stack trace in effect when an
+- event is triggered as the key for the hash table. This allows the
+- enumeration of every kernel callpath that led up to a particular
+- event, along with a running total of any of the event fields for
+- that event. Here we tally bytes requested and bytes allocated for
+- every callpath in the system that led up to a kmalloc (in this case
+- every callpath to a kmalloc for a kernel compile):
+-
+- # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+- # trigger info: hist:keys=stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
+-
+- { stacktrace:
+- __kmalloc_track_caller+0x10b/0x1a0
+- kmemdup+0x20/0x50
+- hidraw_report_event+0x8a/0x120 [hid]
+- hid_report_raw_event+0x3ea/0x440 [hid]
+- hid_input_report+0x112/0x190 [hid]
+- hid_irq_in+0xc2/0x260 [usbhid]
+- __usb_hcd_giveback_urb+0x72/0x120
+- usb_giveback_urb_bh+0x9e/0xe0
+- tasklet_hi_action+0xf8/0x100
+- __do_softirq+0x114/0x2c0
+- irq_exit+0xa5/0xb0
+- do_IRQ+0x5a/0xf0
+- ret_from_intr+0x0/0x30
+- cpuidle_enter+0x17/0x20
+- cpu_startup_entry+0x315/0x3e0
+- rest_init+0x7c/0x80
+- } hitcount: 3 bytes_req: 21 bytes_alloc: 24
+- { stacktrace:
+- __kmalloc_track_caller+0x10b/0x1a0
+- kmemdup+0x20/0x50
+- hidraw_report_event+0x8a/0x120 [hid]
+- hid_report_raw_event+0x3ea/0x440 [hid]
+- hid_input_report+0x112/0x190 [hid]
+- hid_irq_in+0xc2/0x260 [usbhid]
+- __usb_hcd_giveback_urb+0x72/0x120
+- usb_giveback_urb_bh+0x9e/0xe0
+- tasklet_hi_action+0xf8/0x100
+- __do_softirq+0x114/0x2c0
+- irq_exit+0xa5/0xb0
+- do_IRQ+0x5a/0xf0
+- ret_from_intr+0x0/0x30
+- } hitcount: 3 bytes_req: 21 bytes_alloc: 24
+- { stacktrace:
+- kmem_cache_alloc_trace+0xeb/0x150
+- aa_alloc_task_context+0x27/0x40
+- apparmor_cred_prepare+0x1f/0x50
+- security_prepare_creds+0x16/0x20
+- prepare_creds+0xdf/0x1a0
+- SyS_capset+0xb5/0x200
+- system_call_fastpath+0x12/0x6a
+- } hitcount: 1 bytes_req: 32 bytes_alloc: 32
+- .
+- .
+- .
+- { stacktrace:
+- __kmalloc+0x11b/0x1b0
+- i915_gem_execbuffer2+0x6c/0x2c0 [i915]
+- drm_ioctl+0x349/0x670 [drm]
+- do_vfs_ioctl+0x2f0/0x4f0
+- SyS_ioctl+0x81/0xa0
+- system_call_fastpath+0x12/0x6a
+- } hitcount: 17726 bytes_req: 13944120 bytes_alloc: 19593808
+- { stacktrace:
+- __kmalloc+0x11b/0x1b0
+- load_elf_phdrs+0x76/0xa0
+- load_elf_binary+0x102/0x1650
+- search_binary_handler+0x97/0x1d0
+- do_execveat_common.isra.34+0x551/0x6e0
+- SyS_execve+0x3a/0x50
+- return_from_execve+0x0/0x23
+- } hitcount: 33348 bytes_req: 17152128 bytes_alloc: 20226048
+- { stacktrace:
+- kmem_cache_alloc_trace+0xeb/0x150
+- apparmor_file_alloc_security+0x27/0x40
+- security_file_alloc+0x16/0x20
+- get_empty_filp+0x93/0x1c0
+- path_openat+0x31/0x5f0
+- do_filp_open+0x3a/0x90
+- do_sys_open+0x128/0x220
+- SyS_open+0x1e/0x20
+- system_call_fastpath+0x12/0x6a
+- } hitcount: 4766422 bytes_req: 9532844 bytes_alloc: 38131376
+- { stacktrace:
+- __kmalloc+0x11b/0x1b0
+- seq_buf_alloc+0x1b/0x50
+- seq_read+0x2cc/0x370
+- proc_reg_read+0x3d/0x80
+- __vfs_read+0x28/0xe0
+- vfs_read+0x86/0x140
+- SyS_read+0x46/0xb0
+- system_call_fastpath+0x12/0x6a
+- } hitcount: 19133 bytes_req: 78368768 bytes_alloc: 78368768
+-
+- Totals:
+- Hits: 6085872
+- Entries: 253
+- Dropped: 0
+-
+- If you key a hist trigger on common_pid, for example to gather and
+- display sorted totals for each process, you can use the
+- special .execname modifier to display the executable names for the
+- processes in the table rather than raw pids. The example below
+- keeps a per-process sum of total bytes read:
+-
+- # echo 'hist:key=common_pid.execname:val=count:sort=count.descending' > \
+- /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/hist
+- # trigger info: hist:keys=common_pid.execname:vals=count:sort=count.descending:size=2048 [active]
+-
+- { common_pid: gnome-terminal [ 3196] } hitcount: 280 count: 1093512
+- { common_pid: Xorg [ 1309] } hitcount: 525 count: 256640
+- { common_pid: compiz [ 2889] } hitcount: 59 count: 254400
+- { common_pid: bash [ 8710] } hitcount: 3 count: 66369
+- { common_pid: dbus-daemon-lau [ 8703] } hitcount: 49 count: 47739
+- { common_pid: irqbalance [ 1252] } hitcount: 27 count: 27648
+- { common_pid: 01ifupdown [ 8705] } hitcount: 3 count: 17216
+- { common_pid: dbus-daemon [ 772] } hitcount: 10 count: 12396
+- { common_pid: Socket Thread [ 8342] } hitcount: 11 count: 11264
+- { common_pid: nm-dhcp-client. [ 8701] } hitcount: 6 count: 7424
+- { common_pid: gmain [ 1315] } hitcount: 18 count: 6336
+- .
+- .
+- .
+- { common_pid: postgres [ 1892] } hitcount: 2 count: 32
+- { common_pid: postgres [ 1891] } hitcount: 2 count: 32
+- { common_pid: gmain [ 8704] } hitcount: 2 count: 32
+- { common_pid: upstart-dbus-br [ 2740] } hitcount: 21 count: 21
+- { common_pid: nm-dispatcher.a [ 8696] } hitcount: 1 count: 16
+- { common_pid: indicator-datet [ 2904] } hitcount: 1 count: 16
+- { common_pid: gdbus [ 2998] } hitcount: 1 count: 16
+- { common_pid: rtkit-daemon [ 2052] } hitcount: 1 count: 8
+- { common_pid: init [ 1] } hitcount: 2 count: 2
+-
+- Totals:
+- Hits: 2116
+- Entries: 51
+- Dropped: 0
+-
+- Similarly, if you key a hist trigger on syscall id, for example to
+- gather and display a list of systemwide syscall hits, you can use
+- the special .syscall modifier to display the syscall names rather
+- than raw ids. The example below keeps a running total of syscall
+- counts for the system during the run:
+-
+- # echo 'hist:key=id.syscall:val=hitcount' > \
+- /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
+- # trigger info: hist:keys=id.syscall:vals=hitcount:sort=hitcount:size=2048 [active]
+-
+- { id: sys_fsync [ 74] } hitcount: 1
+- { id: sys_newuname [ 63] } hitcount: 1
+- { id: sys_prctl [157] } hitcount: 1
+- { id: sys_statfs [137] } hitcount: 1
+- { id: sys_symlink [ 88] } hitcount: 1
+- { id: sys_sendmmsg [307] } hitcount: 1
+- { id: sys_semctl [ 66] } hitcount: 1
+- { id: sys_readlink [ 89] } hitcount: 3
+- { id: sys_bind [ 49] } hitcount: 3
+- { id: sys_getsockname [ 51] } hitcount: 3
+- { id: sys_unlink [ 87] } hitcount: 3
+- { id: sys_rename [ 82] } hitcount: 4
+- { id: unknown_syscall [ 58] } hitcount: 4
+- { id: sys_connect [ 42] } hitcount: 4
+- { id: sys_getpid [ 39] } hitcount: 4
+- .
+- .
+- .
+- { id: sys_rt_sigprocmask [ 14] } hitcount: 952
+- { id: sys_futex [202] } hitcount: 1534
+- { id: sys_write [ 1] } hitcount: 2689
+- { id: sys_setitimer [ 38] } hitcount: 2797
+- { id: sys_read [ 0] } hitcount: 3202
+- { id: sys_select [ 23] } hitcount: 3773
+- { id: sys_writev [ 20] } hitcount: 4531
+- { id: sys_poll [ 7] } hitcount: 8314
+- { id: sys_recvmsg [ 47] } hitcount: 13738
+- { id: sys_ioctl [ 16] } hitcount: 21843
+-
+- Totals:
+- Hits: 67612
+- Entries: 72
+- Dropped: 0
+-
+- The syscall counts above provide a rough overall picture of system
+- call activity on the system; we can see for example that the most
+- popular system call on this system was the 'sys_ioctl' system call.
+-
+- We can use 'compound' keys to refine that number and provide some
+- further insight as to which processes exactly contribute to the
+- overall ioctl count.
+-
+- The command below keeps a hitcount for every unique combination of
+- system call id and pid - the end result is essentially a table
+- that keeps a per-pid sum of system call hits. The results are
+- sorted using the system call id as the primary key, and the
+- hitcount sum as the secondary key:
+-
+- # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount' > \
+- /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
+- # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 [active]
+-
+- { id: sys_read [ 0], common_pid: rtkit-daemon [ 1877] } hitcount: 1
+- { id: sys_read [ 0], common_pid: gdbus [ 2976] } hitcount: 1
+- { id: sys_read [ 0], common_pid: console-kit-dae [ 3400] } hitcount: 1
+- { id: sys_read [ 0], common_pid: postgres [ 1865] } hitcount: 1
+- { id: sys_read [ 0], common_pid: deja-dup-monito [ 3543] } hitcount: 2
+- { id: sys_read [ 0], common_pid: NetworkManager [ 890] } hitcount: 2
+- { id: sys_read [ 0], common_pid: evolution-calen [ 3048] } hitcount: 2
+- { id: sys_read [ 0], common_pid: postgres [ 1864] } hitcount: 2
+- { id: sys_read [ 0], common_pid: nm-applet [ 3022] } hitcount: 2
+- { id: sys_read [ 0], common_pid: whoopsie [ 1212] } hitcount: 2
+- .
+- .
+- .
+- { id: sys_ioctl [ 16], common_pid: bash [ 8479] } hitcount: 1
+- { id: sys_ioctl [ 16], common_pid: bash [ 3472] } hitcount: 12
+- { id: sys_ioctl [ 16], common_pid: gnome-terminal [ 3199] } hitcount: 16
+- { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 1808
+- { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 5580
+- .
+- .
+- .
+- { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2690] } hitcount: 3
+- { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2688] } hitcount: 16
+- { id: sys_inotify_add_watch [254], common_pid: gmain [ 975] } hitcount: 2
+- { id: sys_inotify_add_watch [254], common_pid: gmain [ 3204] } hitcount: 4
+- { id: sys_inotify_add_watch [254], common_pid: gmain [ 2888] } hitcount: 4
+- { id: sys_inotify_add_watch [254], common_pid: gmain [ 3003] } hitcount: 4
+- { id: sys_inotify_add_watch [254], common_pid: gmain [ 2873] } hitcount: 4
+- { id: sys_inotify_add_watch [254], common_pid: gmain [ 3196] } hitcount: 6
+- { id: sys_openat [257], common_pid: java [ 2623] } hitcount: 2
+- { id: sys_eventfd2 [290], common_pid: ibus-ui-gtk3 [ 2760] } hitcount: 4
+- { id: sys_eventfd2 [290], common_pid: compiz [ 2994] } hitcount: 6
+-
+- Totals:
+- Hits: 31536
+- Entries: 323
+- Dropped: 0
+-
+- The above list does give us a breakdown of the ioctl syscall by
+- pid, but it also gives us quite a bit more than that, which we
+- don't really care about at the moment. Since we know the syscall
+- id for sys_ioctl (16, displayed next to the sys_ioctl name), we
+- can use that to filter out all the other syscalls:
+-
+- # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount if id == 16' > \
+- /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
+- # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 if id == 16 [active]
+-
+- { id: sys_ioctl [ 16], common_pid: gmain [ 2769] } hitcount: 1
+- { id: sys_ioctl [ 16], common_pid: evolution-addre [ 8571] } hitcount: 1
+- { id: sys_ioctl [ 16], common_pid: gmain [ 3003] } hitcount: 1
+- { id: sys_ioctl [ 16], common_pid: gmain [ 2781] } hitcount: 1
+- { id: sys_ioctl [ 16], common_pid: gmain [ 2829] } hitcount: 1
+- { id: sys_ioctl [ 16], common_pid: bash [ 8726] } hitcount: 1
+- { id: sys_ioctl [ 16], common_pid: bash [ 8508] } hitcount: 1
+- { id: sys_ioctl [ 16], common_pid: gmain [ 2970] } hitcount: 1
+- { id: sys_ioctl [ 16], common_pid: gmain [ 2768] } hitcount: 1
+- .
+- .
+- .
+- { id: sys_ioctl [ 16], common_pid: pool [ 8559] } hitcount: 45
+- { id: sys_ioctl [ 16], common_pid: pool [ 8555] } hitcount: 48
+- { id: sys_ioctl [ 16], common_pid: pool [ 8551] } hitcount: 48
+- { id: sys_ioctl [ 16], common_pid: avahi-daemon [ 896] } hitcount: 66
+- { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 26674
+- { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 73443
+-
+- Totals:
+- Hits: 101162
+- Entries: 103
+- Dropped: 0
+-
+- The above output shows that 'compiz' and 'Xorg' are far and away
+- the heaviest ioctl callers (which might lead to questions about
+- whether they really need to be making all those calls and to
+- possible avenues for further investigation.)
+-
+- The compound key examples used a key and a sum value (hitcount) to
+- sort the output, but we can just as easily use two keys instead.
+- Here's an example where we use a compound key composed of the
+- common_pid and size event fields. Sorting with pid as the primary
+- key and 'size' as the secondary key allows us to display an
+- ordered summary of the recvfrom sizes, with counts, received by
+- each process:
+-
+- # echo 'hist:key=common_pid.execname,size:val=hitcount:sort=common_pid,size' > \
+- /sys/kernel/debug/tracing/events/syscalls/sys_enter_recvfrom/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_recvfrom/hist
+- # trigger info: hist:keys=common_pid.execname,size:vals=hitcount:sort=common_pid.execname,size:size=2048 [active]
+-
+- { common_pid: smbd [ 784], size: 4 } hitcount: 1
+- { common_pid: dnsmasq [ 1412], size: 4096 } hitcount: 672
+- { common_pid: postgres [ 1796], size: 1000 } hitcount: 6
+- { common_pid: postgres [ 1867], size: 1000 } hitcount: 10
+- { common_pid: bamfdaemon [ 2787], size: 28 } hitcount: 2
+- { common_pid: bamfdaemon [ 2787], size: 14360 } hitcount: 1
+- { common_pid: compiz [ 2994], size: 8 } hitcount: 1
+- { common_pid: compiz [ 2994], size: 20 } hitcount: 11
+- { common_pid: gnome-terminal [ 3199], size: 4 } hitcount: 2
+- { common_pid: firefox [ 8817], size: 4 } hitcount: 1
+- { common_pid: firefox [ 8817], size: 8 } hitcount: 5
+- { common_pid: firefox [ 8817], size: 588 } hitcount: 2
+- { common_pid: firefox [ 8817], size: 628 } hitcount: 1
+- { common_pid: firefox [ 8817], size: 6944 } hitcount: 1
+- { common_pid: firefox [ 8817], size: 408880 } hitcount: 2
+- { common_pid: firefox [ 8822], size: 8 } hitcount: 2
+- { common_pid: firefox [ 8822], size: 160 } hitcount: 2
+- { common_pid: firefox [ 8822], size: 320 } hitcount: 2
+- { common_pid: firefox [ 8822], size: 352 } hitcount: 1
+- .
+- .
+- .
+- { common_pid: pool [ 8923], size: 1960 } hitcount: 10
+- { common_pid: pool [ 8923], size: 2048 } hitcount: 10
+- { common_pid: pool [ 8924], size: 1960 } hitcount: 10
+- { common_pid: pool [ 8924], size: 2048 } hitcount: 10
+- { common_pid: pool [ 8928], size: 1964 } hitcount: 4
+- { common_pid: pool [ 8928], size: 1965 } hitcount: 2
+- { common_pid: pool [ 8928], size: 2048 } hitcount: 6
+- { common_pid: pool [ 8929], size: 1982 } hitcount: 1
+- { common_pid: pool [ 8929], size: 2048 } hitcount: 1
+-
+- Totals:
+- Hits: 2016
+- Entries: 224
+- Dropped: 0
+-
+- The above example also illustrates the fact that although a compound
+- key is treated as a single entity for hashing purposes, the sub-keys
+- it's composed of can be accessed independently.
+-
+- The next example uses a string field as the hash key and
+- demonstrates how you can manually pause and continue a hist trigger.
+- In this example, we'll aggregate fork counts; since we don't expect
+- a large number of entries in the hash table, we'll drop the table
+- size to a much smaller number, say 256:
+-
+- # echo 'hist:key=child_comm:val=hitcount:size=256' > \
+- /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+- # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]
+-
+- { child_comm: dconf worker } hitcount: 1
+- { child_comm: ibus-daemon } hitcount: 1
+- { child_comm: whoopsie } hitcount: 1
+- { child_comm: smbd } hitcount: 1
+- { child_comm: gdbus } hitcount: 1
+- { child_comm: kthreadd } hitcount: 1
+- { child_comm: dconf worker } hitcount: 1
+- { child_comm: evolution-alarm } hitcount: 2
+- { child_comm: Socket Thread } hitcount: 2
+- { child_comm: postgres } hitcount: 2
+- { child_comm: bash } hitcount: 3
+- { child_comm: compiz } hitcount: 3
+- { child_comm: evolution-sourc } hitcount: 4
+- { child_comm: dhclient } hitcount: 4
+- { child_comm: pool } hitcount: 5
+- { child_comm: nm-dispatcher.a } hitcount: 8
+- { child_comm: firefox } hitcount: 8
+- { child_comm: dbus-daemon } hitcount: 8
+- { child_comm: glib-pacrunner } hitcount: 10
+- { child_comm: evolution } hitcount: 23
+-
+- Totals:
+- Hits: 89
+- Entries: 20
+- Dropped: 0
+-
+- If we want to pause the hist trigger, we can simply append :pause to
+- the command that started the trigger. Notice that the trigger info
+- displays as [paused]:
+-
+- # echo 'hist:key=child_comm:val=hitcount:size=256:pause' >> \
+- /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+- # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [paused]
+-
+- { child_comm: dconf worker } hitcount: 1
+- { child_comm: kthreadd } hitcount: 1
+- { child_comm: dconf worker } hitcount: 1
+- { child_comm: gdbus } hitcount: 1
+- { child_comm: ibus-daemon } hitcount: 1
+- { child_comm: Socket Thread } hitcount: 2
+- { child_comm: evolution-alarm } hitcount: 2
+- { child_comm: smbd } hitcount: 2
+- { child_comm: bash } hitcount: 3
+- { child_comm: whoopsie } hitcount: 3
+- { child_comm: compiz } hitcount: 3
+- { child_comm: evolution-sourc } hitcount: 4
+- { child_comm: pool } hitcount: 5
+- { child_comm: postgres } hitcount: 6
+- { child_comm: firefox } hitcount: 8
+- { child_comm: dhclient } hitcount: 10
+- { child_comm: emacs } hitcount: 12
+- { child_comm: dbus-daemon } hitcount: 20
+- { child_comm: nm-dispatcher.a } hitcount: 20
+- { child_comm: evolution } hitcount: 35
+- { child_comm: glib-pacrunner } hitcount: 59
+-
+- Totals:
+- Hits: 199
+- Entries: 21
+- Dropped: 0
+-
+- To manually continue having the trigger aggregate events, append
+- :cont instead. Notice that the trigger info displays as [active]
+- again, and the data has changed:
+-
+- # echo 'hist:key=child_comm:val=hitcount:size=256:cont' >> \
+- /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+-
+- # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+- # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]
+-
+- { child_comm: dconf worker } hitcount: 1
+- { child_comm: dconf worker } hitcount: 1
+- { child_comm: kthreadd } hitcount: 1
+- { child_comm: gdbus } hitcount: 1
+- { child_comm: ibus-daemon } hitcount: 1
+- { child_comm: Socket Thread } hitcount: 2
+- { child_comm: evolution-alarm } hitcount: 2
+- { child_comm: smbd } hitcount: 2
+- { child_comm: whoopsie } hitcount: 3
+- { child_comm: compiz } hitcount: 3
+- { child_comm: evolution-sourc } hitcount: 4
+- { child_comm: bash } hitcount: 5
+- { child_comm: pool } hitcount: 5
+- { child_comm: postgres } hitcount: 6
+- { child_comm: firefox } hitcount: 8
+- { child_comm: dhclient } hitcount: 11
+- { child_comm: emacs } hitcount: 12
+- { child_comm: dbus-daemon } hitcount: 22
+- { child_comm: nm-dispatcher.a } hitcount: 22
+- { child_comm: evolution } hitcount: 35
+- { child_comm: glib-pacrunner } hitcount: 59
+-
+- Totals:
+- Hits: 206
+- Entries: 21
+- Dropped: 0
+-
+- The previous example showed how to start and stop a hist trigger by
+- appending 'pause' and 'continue' to the hist trigger command. A
+- hist trigger can also be started in a paused state by initially
+- starting the trigger with ':pause' appended. This allows you to
+- start the trigger only when you're ready to start collecting data
+- and not before. For example, you could start the trigger in a
+- paused state, then unpause it and do something you want to measure,
+- then pause the trigger again when done.
+-
+- Of course, doing this manually can be difficult and error-prone, but
+- it is possible to automatically start and stop a hist trigger based
+- on some condition, via the enable_hist and disable_hist triggers.
+-
+- For example, suppose we wanted to take a look at the relative
+- weights in terms of skb length for each callpath that leads to a
+- netif_receive_skb event when downloading a decent-sized file using
+- wget.
+-
+- First we set up an initially paused stacktrace trigger on the
+- netif_receive_skb event:
+-
+- # echo 'hist:key=stacktrace:vals=len:pause' > \
+- /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+-
+- Next, we set up an 'enable_hist' trigger on the sched_process_exec
+- event, with an 'if filename==/usr/bin/wget' filter. The effect of
+- this new trigger is that it will 'unpause' the hist trigger we just
+- set up on netif_receive_skb if and only if it sees a
+- sched_process_exec event with a filename of '/usr/bin/wget'. When
+- that happens, all netif_receive_skb events are aggregated into a
+- hash table keyed on stacktrace:
+-
+- # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
+- /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+-
+- The aggregation continues until the netif_receive_skb hist trigger
+- is paused again, which is what the following disable_hist trigger
+- does by creating a similar setup on the sched_process_exit event,
+- using the filter 'comm==wget':
+-
+- # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
+- /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+-
+- Whenever a process exits and the comm field of the disable_hist
+- trigger filter matches 'comm==wget', the netif_receive_skb hist
+- trigger is disabled.
+-
+- The overall effect is that netif_receive_skb events are aggregated
+- into the hash table for only the duration of the wget. Executing a
+- wget command and then listing the 'hist' file will display the
+- output generated by the wget command:
+-
+- $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
+-
+- # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
+- # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
+-
+- { stacktrace:
+- __netif_receive_skb_core+0x46d/0x990
+- __netif_receive_skb+0x18/0x60
+- netif_receive_skb_internal+0x23/0x90
+- napi_gro_receive+0xc8/0x100
+- ieee80211_deliver_skb+0xd6/0x270 [mac80211]
+- ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
+- ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
+- ieee80211_rx+0x31d/0x900 [mac80211]
+- iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
+- iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
+- iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
+- irq_thread_fn+0x20/0x50
+- irq_thread+0x11f/0x150
+- kthread+0xd2/0xf0
+- ret_from_fork+0x42/0x70
+- } hitcount: 85 len: 28884
+- { stacktrace:
+- __netif_receive_skb_core+0x46d/0x990
+- __netif_receive_skb+0x18/0x60
+- netif_receive_skb_internal+0x23/0x90
+- napi_gro_complete+0xa4/0xe0
+- dev_gro_receive+0x23a/0x360
+- napi_gro_receive+0x30/0x100
+- ieee80211_deliver_skb+0xd6/0x270 [mac80211]
+- ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
+- ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
+- ieee80211_rx+0x31d/0x900 [mac80211]
+- iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
+- iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
+- iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
+- irq_thread_fn+0x20/0x50
+- irq_thread+0x11f/0x150
+- kthread+0xd2/0xf0
+- } hitcount: 98 len: 664329
+- { stacktrace:
+- __netif_receive_skb_core+0x46d/0x990
+- __netif_receive_skb+0x18/0x60
+- process_backlog+0xa8/0x150
+- net_rx_action+0x15d/0x340
+- __do_softirq+0x114/0x2c0
+- do_softirq_own_stack+0x1c/0x30
+- do_softirq+0x65/0x70
+- __local_bh_enable_ip+0xb5/0xc0
+- ip_finish_output+0x1f4/0x840
+- ip_output+0x6b/0xc0
+- ip_local_out_sk+0x31/0x40
+- ip_send_skb+0x1a/0x50
+- udp_send_skb+0x173/0x2a0
+- udp_sendmsg+0x2bf/0x9f0
+- inet_sendmsg+0x64/0xa0
+- sock_sendmsg+0x3d/0x50
+- } hitcount: 115 len: 13030
+- { stacktrace:
+- __netif_receive_skb_core+0x46d/0x990
+- __netif_receive_skb+0x18/0x60
+- netif_receive_skb_internal+0x23/0x90
+- napi_gro_complete+0xa4/0xe0
+- napi_gro_flush+0x6d/0x90
+- iwl_pcie_irq_handler+0x92a/0x12f0 [iwlwifi]
+- irq_thread_fn+0x20/0x50
+- irq_thread+0x11f/0x150
+- kthread+0xd2/0xf0
+- ret_from_fork+0x42/0x70
+- } hitcount: 934 len: 5512212
+-
+- Totals:
+- Hits: 1232
+- Entries: 4
+- Dropped: 0
+-
+- The above shows all the netif_receive_skb callpaths and their total
+- lengths for the duration of the wget command.
+-
+- The 'clear' hist trigger param can be used to clear the hash table.
+- Suppose we wanted to try another run of the previous example but
+- this time also wanted to see the complete list of events that went
+- into the histogram. In order to avoid having to set everything up
+- again, we can just clear the histogram first:
+-
+- # echo 'hist:key=stacktrace:vals=len:clear' >> \
+- /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+-
+- Just to verify that it is in fact cleared, here's what we now see in
+- the hist file:
+-
+- # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
+- # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
+-
+- Totals:
+- Hits: 0
+- Entries: 0
+- Dropped: 0
+-
+- Since we want to see the detailed list of every netif_receive_skb
+- event occurring during the new run, which are in fact the same
+- events being aggregated into the hash table, we add some additional
+- 'enable_event' events to the triggering sched_process_exec and
+- sched_process_exit events, as follows:
+-
+- # echo 'enable_event:net:netif_receive_skb if filename==/usr/bin/wget' > \
+- /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+-
+- # echo 'disable_event:net:netif_receive_skb if comm==wget' > \
+- /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+-
+- If you read the trigger files for the sched_process_exec and
+- sched_process_exit triggers, you should see two triggers for each:
+- one enabling/disabling the hist aggregation and the other
+- enabling/disabling the logging of events:
+-
+- # cat /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+- enable_event:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
+- enable_hist:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
+-
+- # cat /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+- enable_event:net:netif_receive_skb:unlimited if comm==wget
+- disable_hist:net:netif_receive_skb:unlimited if comm==wget
+-
+- In other words, whenever either of the sched_process_exec or
+- sched_process_exit events is hit and matches 'wget', it enables or
+- disables both the histogram and the event log, and what you end up
+- with is a hash table and set of events just covering the specified
+- duration. Run the wget command again:
+-
+- $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
+-
+- Displaying the 'hist' file should show something similar to what you
+- saw in the last run, but this time you should also see the
+- individual events in the trace file:
+-
+- # cat /sys/kernel/debug/tracing/trace
+-
+- # tracer: nop
+- #
+- # entries-in-buffer/entries-written: 183/1426 #P:4
+- #
+- # _-----=> irqs-off
+- # / _----=> need-resched
+- # | / _---=> hardirq/softirq
+- # || / _--=> preempt-depth
+- # ||| / delay
+- # TASK-PID CPU# |||| TIMESTAMP FUNCTION
+- # | | | |||| | |
+- wget-15108 [000] ..s1 31769.606929: netif_receive_skb: dev=lo skbaddr=ffff88009c353100 len=60
+- wget-15108 [000] ..s1 31769.606999: netif_receive_skb: dev=lo skbaddr=ffff88009c353200 len=60
+- dnsmasq-1382 [000] ..s1 31769.677652: netif_receive_skb: dev=lo skbaddr=ffff88009c352b00 len=130
+- dnsmasq-1382 [000] ..s1 31769.685917: netif_receive_skb: dev=lo skbaddr=ffff88009c352200 len=138
+- ##### CPU 2 buffer started ####
+- irq/29-iwlwifi-559 [002] ..s. 31772.031529: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433d00 len=2948
+- irq/29-iwlwifi-559 [002] ..s. 31772.031572: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432200 len=1500
+- irq/29-iwlwifi-559 [002] ..s. 31772.032196: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433100 len=2948
+- irq/29-iwlwifi-559 [002] ..s. 31772.032761: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433000 len=2948
+- irq/29-iwlwifi-559 [002] ..s. 31772.033220: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432e00 len=1500
+- .
+- .
+- .
+-
+- The following example demonstrates how multiple hist triggers can be
+- attached to a given event. This capability can be useful for
+- creating a set of different summaries derived from the same set of
+- events, or for comparing the effects of different filters, among
+- other things.
+-
+- # echo 'hist:keys=skbaddr.hex:vals=len if len < 0' >> \
+- /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+- # echo 'hist:keys=skbaddr.hex:vals=len if len > 4096' >> \
+- /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+- # echo 'hist:keys=skbaddr.hex:vals=len if len == 256' >> \
+- /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+- # echo 'hist:keys=skbaddr.hex:vals=len' >> \
+- /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+- # echo 'hist:keys=len:vals=common_preempt_count' >> \
+- /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+-
+- The above set of commands creates four triggers differing only in
+- their filters, along with a completely different though fairly
+- nonsensical trigger. Note that in order to append multiple hist
+- triggers to the same file, you should use the '>>' operator to
+- append them ('>' will also add the new hist trigger, but will remove
+- any existing hist triggers beforehand).
+-
+- Displaying the contents of the 'hist' file for the event shows the
+- contents of all five histograms:
+-
+- # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
+-
+- # event histogram
+- #
+- # trigger info: hist:keys=len:vals=hitcount,common_preempt_count:sort=hitcount:size=2048 [active]
+- #
+-
+- { len: 176 } hitcount: 1 common_preempt_count: 0
+- { len: 223 } hitcount: 1 common_preempt_count: 0
+- { len: 4854 } hitcount: 1 common_preempt_count: 0
+- { len: 395 } hitcount: 1 common_preempt_count: 0
+- { len: 177 } hitcount: 1 common_preempt_count: 0
+- { len: 446 } hitcount: 1 common_preempt_count: 0
+- { len: 1601 } hitcount: 1 common_preempt_count: 0
+- .
+- .
+- .
+- { len: 1280 } hitcount: 66 common_preempt_count: 0
+- { len: 116 } hitcount: 81 common_preempt_count: 40
+- { len: 708 } hitcount: 112 common_preempt_count: 0
+- { len: 46 } hitcount: 221 common_preempt_count: 0
+- { len: 1264 } hitcount: 458 common_preempt_count: 0
+-
+- Totals:
+- Hits: 1428
+- Entries: 147
+- Dropped: 0
+-
+-
+- # event histogram
+- #
+- # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
+- #
+-
+- { skbaddr: ffff8800baee5e00 } hitcount: 1 len: 130
+- { skbaddr: ffff88005f3d5600 } hitcount: 1 len: 1280
+- { skbaddr: ffff88005f3d4900 } hitcount: 1 len: 1280
+- { skbaddr: ffff88009fed6300 } hitcount: 1 len: 115
+- { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 115
+- { skbaddr: ffff88008cdb1900 } hitcount: 1 len: 46
+- { skbaddr: ffff880064b5ef00 } hitcount: 1 len: 118
+- { skbaddr: ffff880044e3c700 } hitcount: 1 len: 60
+- { skbaddr: ffff880100065900 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d46bd500 } hitcount: 1 len: 116
+- { skbaddr: ffff88005f3d5f00 } hitcount: 1 len: 1280
+- { skbaddr: ffff880100064700 } hitcount: 1 len: 365
+- { skbaddr: ffff8800badb6f00 } hitcount: 1 len: 60
+- .
+- .
+- .
+- { skbaddr: ffff88009fe0be00 } hitcount: 27 len: 24677
+- { skbaddr: ffff88009fe0a400 } hitcount: 27 len: 23052
+- { skbaddr: ffff88009fe0b700 } hitcount: 31 len: 25589
+- { skbaddr: ffff88009fe0b600 } hitcount: 32 len: 27326
+- { skbaddr: ffff88006a462800 } hitcount: 68 len: 71678
+- { skbaddr: ffff88006a463700 } hitcount: 70 len: 72678
+- { skbaddr: ffff88006a462b00 } hitcount: 71 len: 77589
+- { skbaddr: ffff88006a463600 } hitcount: 73 len: 71307
+- { skbaddr: ffff88006a462200 } hitcount: 81 len: 81032
+-
+- Totals:
+- Hits: 1451
+- Entries: 318
+- Dropped: 0
+-
+-
+- # event histogram
+- #
+- # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len == 256 [active]
+- #
+-
+-
+- Totals:
+- Hits: 0
+- Entries: 0
+- Dropped: 0
+-
+-
+- # event histogram
+- #
+- # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len > 4096 [active]
+- #
+-
+- { skbaddr: ffff88009fd2c300 } hitcount: 1 len: 7212
+- { skbaddr: ffff8800d2bcce00 } hitcount: 1 len: 7212
+- { skbaddr: ffff8800d2bcd700 } hitcount: 1 len: 7212
+- { skbaddr: ffff8800d2bcda00 } hitcount: 1 len: 21492
+- { skbaddr: ffff8800ae2e2d00 } hitcount: 1 len: 7212
+- { skbaddr: ffff8800d2bcdb00 } hitcount: 1 len: 7212
+- { skbaddr: ffff88006a4df500 } hitcount: 1 len: 4854
+- { skbaddr: ffff88008ce47b00 } hitcount: 1 len: 18636
+- { skbaddr: ffff8800ae2e2200 } hitcount: 1 len: 12924
+- { skbaddr: ffff88005f3e1000 } hitcount: 1 len: 4356
+- { skbaddr: ffff8800d2bcdc00 } hitcount: 2 len: 24420
+- { skbaddr: ffff8800d2bcc200 } hitcount: 2 len: 12996
+-
+- Totals:
+- Hits: 14
+- Entries: 12
+- Dropped: 0
+-
+-
+- # event histogram
+- #
+- # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len < 0 [active]
+- #
+-
+-
+- Totals:
+- Hits: 0
+- Entries: 0
+- Dropped: 0
+-
+- Named triggers can be used to have triggers share a common set of
+- histogram data. This capability is mostly useful for combining the
+- output of events generated by tracepoints contained inside inline
+- functions, but names can be used in a hist trigger on any event.
+- For example, these two triggers when hit will update the same 'len'
+- field in the shared 'foo' histogram data:
+-
+- # echo 'hist:name=foo:keys=skbaddr.hex:vals=len' > \
+- /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+- # echo 'hist:name=foo:keys=skbaddr.hex:vals=len' > \
+- /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+-
+- You can see that they're updating common histogram data by reading
+- each event's hist files at the same time:
+-
+- # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist;
+- cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
+-
+- # event histogram
+- #
+- # trigger info: hist:name=foo:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
+- #
+-
+- { skbaddr: ffff88000ad53500 } hitcount: 1 len: 46
+- { skbaddr: ffff8800af5a1500 } hitcount: 1 len: 76
+- { skbaddr: ffff8800d62a1900 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d2bccb00 } hitcount: 1 len: 468
+- { skbaddr: ffff8800d3c69900 } hitcount: 1 len: 46
+- { skbaddr: ffff88009ff09100 } hitcount: 1 len: 52
+- { skbaddr: ffff88010f13ab00 } hitcount: 1 len: 168
+- { skbaddr: ffff88006a54f400 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d2bcc500 } hitcount: 1 len: 260
+- { skbaddr: ffff880064505000 } hitcount: 1 len: 46
+- { skbaddr: ffff8800baf24e00 } hitcount: 1 len: 32
+- { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d3edff00 } hitcount: 1 len: 44
+- { skbaddr: ffff88009fe0b400 } hitcount: 1 len: 168
+- { skbaddr: ffff8800a1c55a00 } hitcount: 1 len: 40
+- { skbaddr: ffff8800d2bcd100 } hitcount: 1 len: 40
+- { skbaddr: ffff880064505f00 } hitcount: 1 len: 174
+- { skbaddr: ffff8800a8bff200 } hitcount: 1 len: 160
+- { skbaddr: ffff880044e3cc00 } hitcount: 1 len: 76
+- { skbaddr: ffff8800a8bfe700 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d2bcdc00 } hitcount: 1 len: 32
+- { skbaddr: ffff8800a1f64800 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d2bcde00 } hitcount: 1 len: 988
+- { skbaddr: ffff88006a5dea00 } hitcount: 1 len: 46
+- { skbaddr: ffff88002e37a200 } hitcount: 1 len: 44
+- { skbaddr: ffff8800a1f32c00 } hitcount: 2 len: 676
+- { skbaddr: ffff88000ad52600 } hitcount: 2 len: 107
+- { skbaddr: ffff8800a1f91e00 } hitcount: 2 len: 92
+- { skbaddr: ffff8800af5a0200 } hitcount: 2 len: 142
+- { skbaddr: ffff8800d2bcc600 } hitcount: 2 len: 220
+- { skbaddr: ffff8800ba36f500 } hitcount: 2 len: 92
+- { skbaddr: ffff8800d021f800 } hitcount: 2 len: 92
+- { skbaddr: ffff8800a1f33600 } hitcount: 2 len: 675
+- { skbaddr: ffff8800a8bfff00 } hitcount: 3 len: 138
+- { skbaddr: ffff8800d62a1300 } hitcount: 3 len: 138
+- { skbaddr: ffff88002e37a100 } hitcount: 4 len: 184
+- { skbaddr: ffff880064504400 } hitcount: 4 len: 184
+- { skbaddr: ffff8800a8bfec00 } hitcount: 4 len: 184
+- { skbaddr: ffff88000ad53700 } hitcount: 5 len: 230
+- { skbaddr: ffff8800d2bcdb00 } hitcount: 5 len: 196
+- { skbaddr: ffff8800a1f90000 } hitcount: 6 len: 276
+- { skbaddr: ffff88006a54f900 } hitcount: 6 len: 276
+-
+- Totals:
+- Hits: 81
+- Entries: 42
+- Dropped: 0
+- # event histogram
+- #
+- # trigger info: hist:name=foo:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
+- #
+-
+- { skbaddr: ffff88000ad53500 } hitcount: 1 len: 46
+- { skbaddr: ffff8800af5a1500 } hitcount: 1 len: 76
+- { skbaddr: ffff8800d62a1900 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d2bccb00 } hitcount: 1 len: 468
+- { skbaddr: ffff8800d3c69900 } hitcount: 1 len: 46
+- { skbaddr: ffff88009ff09100 } hitcount: 1 len: 52
+- { skbaddr: ffff88010f13ab00 } hitcount: 1 len: 168
+- { skbaddr: ffff88006a54f400 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d2bcc500 } hitcount: 1 len: 260
+- { skbaddr: ffff880064505000 } hitcount: 1 len: 46
+- { skbaddr: ffff8800baf24e00 } hitcount: 1 len: 32
+- { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d3edff00 } hitcount: 1 len: 44
+- { skbaddr: ffff88009fe0b400 } hitcount: 1 len: 168
+- { skbaddr: ffff8800a1c55a00 } hitcount: 1 len: 40
+- { skbaddr: ffff8800d2bcd100 } hitcount: 1 len: 40
+- { skbaddr: ffff880064505f00 } hitcount: 1 len: 174
+- { skbaddr: ffff8800a8bff200 } hitcount: 1 len: 160
+- { skbaddr: ffff880044e3cc00 } hitcount: 1 len: 76
+- { skbaddr: ffff8800a8bfe700 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d2bcdc00 } hitcount: 1 len: 32
+- { skbaddr: ffff8800a1f64800 } hitcount: 1 len: 46
+- { skbaddr: ffff8800d2bcde00 } hitcount: 1 len: 988
+- { skbaddr: ffff88006a5dea00 } hitcount: 1 len: 46
+- { skbaddr: ffff88002e37a200 } hitcount: 1 len: 44
+- { skbaddr: ffff8800a1f32c00 } hitcount: 2 len: 676
+- { skbaddr: ffff88000ad52600 } hitcount: 2 len: 107
+- { skbaddr: ffff8800a1f91e00 } hitcount: 2 len: 92
+- { skbaddr: ffff8800af5a0200 } hitcount: 2 len: 142
+- { skbaddr: ffff8800d2bcc600 } hitcount: 2 len: 220
+- { skbaddr: ffff8800ba36f500 } hitcount: 2 len: 92
+- { skbaddr: ffff8800d021f800 } hitcount: 2 len: 92
+- { skbaddr: ffff8800a1f33600 } hitcount: 2 len: 675
+- { skbaddr: ffff8800a8bfff00 } hitcount: 3 len: 138
+- { skbaddr: ffff8800d62a1300 } hitcount: 3 len: 138
+- { skbaddr: ffff88002e37a100 } hitcount: 4 len: 184
+- { skbaddr: ffff880064504400 } hitcount: 4 len: 184
+- { skbaddr: ffff8800a8bfec00 } hitcount: 4 len: 184
+- { skbaddr: ffff88000ad53700 } hitcount: 5 len: 230
+- { skbaddr: ffff8800d2bcdb00 } hitcount: 5 len: 196
+- { skbaddr: ffff8800a1f90000 } hitcount: 6 len: 276
+- { skbaddr: ffff88006a54f900 } hitcount: 6 len: 276
+-
+- Totals:
+- Hits: 81
+- Entries: 42
+- Dropped: 0
+-
+- And here's an example that shows how to combine histogram data from
+- any two events even if they don't share any 'compatible' fields
+- other than 'hitcount' and 'stacktrace'. These commands create a
+- couple of triggers named 'bar' using those fields:
+-
+- # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
+- /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+- # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
+- /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+-
+- And displaying the output of either shows some interesting if
+- somewhat confusing output:
+-
+- # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+- # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
+-
+- # event histogram
+- #
+- # trigger info: hist:name=bar:keys=stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
+- #
+-
+- { stacktrace:
+- _do_fork+0x18e/0x330
+- kernel_thread+0x29/0x30
+- kthreadd+0x154/0x1b0
+- ret_from_fork+0x3f/0x70
+- } hitcount: 1
+- { stacktrace:
+- netif_rx_internal+0xb2/0xd0
+- netif_rx_ni+0x20/0x70
+- dev_loopback_xmit+0xaa/0xd0
+- ip_mc_output+0x126/0x240
+- ip_local_out_sk+0x31/0x40
+- igmp_send_report+0x1e9/0x230
+- igmp_timer_expire+0xe9/0x120
+- call_timer_fn+0x39/0xf0
+- run_timer_softirq+0x1e1/0x290
+- __do_softirq+0xfd/0x290
+- irq_exit+0x98/0xb0
+- smp_apic_timer_interrupt+0x4a/0x60
+- apic_timer_interrupt+0x6d/0x80
+- cpuidle_enter+0x17/0x20
+- call_cpuidle+0x3b/0x60
+- cpu_startup_entry+0x22d/0x310
+- } hitcount: 1
+- { stacktrace:
+- netif_rx_internal+0xb2/0xd0
+- netif_rx_ni+0x20/0x70
+- dev_loopback_xmit+0xaa/0xd0
+- ip_mc_output+0x17f/0x240
+- ip_local_out_sk+0x31/0x40
+- ip_send_skb+0x1a/0x50
+- udp_send_skb+0x13e/0x270
+- udp_sendmsg+0x2bf/0x980
+- inet_sendmsg+0x67/0xa0
+- sock_sendmsg+0x38/0x50
+- SYSC_sendto+0xef/0x170
+- SyS_sendto+0xe/0x10
+- entry_SYSCALL_64_fastpath+0x12/0x6a
+- } hitcount: 2
+- { stacktrace:
+- netif_rx_internal+0xb2/0xd0
+- netif_rx+0x1c/0x60
+- loopback_xmit+0x6c/0xb0
+- dev_hard_start_xmit+0x219/0x3a0
+- __dev_queue_xmit+0x415/0x4f0
+- dev_queue_xmit_sk+0x13/0x20
+- ip_finish_output2+0x237/0x340
+- ip_finish_output+0x113/0x1d0
+- ip_output+0x66/0xc0
+- ip_local_out_sk+0x31/0x40
+- ip_send_skb+0x1a/0x50
+- udp_send_skb+0x16d/0x270
+- udp_sendmsg+0x2bf/0x980
+- inet_sendmsg+0x67/0xa0
+- sock_sendmsg+0x38/0x50
+- ___sys_sendmsg+0x14e/0x270
+- } hitcount: 76
+- { stacktrace:
+- netif_rx_internal+0xb2/0xd0
+- netif_rx+0x1c/0x60
+- loopback_xmit+0x6c/0xb0
+- dev_hard_start_xmit+0x219/0x3a0
+- __dev_queue_xmit+0x415/0x4f0
+- dev_queue_xmit_sk+0x13/0x20
+- ip_finish_output2+0x237/0x340
+- ip_finish_output+0x113/0x1d0
+- ip_output+0x66/0xc0
+- ip_local_out_sk+0x31/0x40
+- ip_send_skb+0x1a/0x50
+- udp_send_skb+0x16d/0x270
+- udp_sendmsg+0x2bf/0x980
+- inet_sendmsg+0x67/0xa0
+- sock_sendmsg+0x38/0x50
+- ___sys_sendmsg+0x269/0x270
+- } hitcount: 77
+- { stacktrace:
+- netif_rx_internal+0xb2/0xd0
+- netif_rx+0x1c/0x60
+- loopback_xmit+0x6c/0xb0
+- dev_hard_start_xmit+0x219/0x3a0
+- __dev_queue_xmit+0x415/0x4f0
+- dev_queue_xmit_sk+0x13/0x20
+- ip_finish_output2+0x237/0x340
+- ip_finish_output+0x113/0x1d0
+- ip_output+0x66/0xc0
+- ip_local_out_sk+0x31/0x40
+- ip_send_skb+0x1a/0x50
+- udp_send_skb+0x16d/0x270
+- udp_sendmsg+0x2bf/0x980
+- inet_sendmsg+0x67/0xa0
+- sock_sendmsg+0x38/0x50
+- SYSC_sendto+0xef/0x170
+- } hitcount: 88
+- { stacktrace:
+- _do_fork+0x18e/0x330
+- SyS_clone+0x19/0x20
+- entry_SYSCALL_64_fastpath+0x12/0x6a
+- } hitcount: 244
+-
+- Totals:
+- Hits: 489
+- Entries: 7
+- Dropped: 0
++ See Documentation/trace/histogram.txt for details and examples.
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/Documentation/trace/ftrace.txt linux-4.14/Documentation/trace/ftrace.txt
+--- linux-4.14.orig/Documentation/trace/ftrace.txt 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/Documentation/trace/ftrace.txt 2018-09-05 11:05:07.000000000 +0200
+@@ -539,6 +539,30 @@
+
+ See events.txt for more information.
+
++ timestamp_mode:
++
++ Certain tracers may change the timestamp mode used when
++ logging trace events into the event buffer. Events with
++ different modes can coexist within a buffer but the mode in
++ effect when an event is logged determines which timestamp mode
++ is used for that event. The default timestamp mode is
++ 'delta'.
++
++ Usual timestamp modes for tracing:
++
++ # cat timestamp_mode
++ [delta] absolute
++
++ The timestamp mode with the square brackets around it is the
++ one in effect.
++
++ delta: Default timestamp mode - timestamp is a delta against
++ a per-buffer timestamp.
++
++ absolute: The timestamp is a full timestamp, not a delta
++ against some other value. As such it takes up more
++ space and is less efficient.
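++
++ The mode is normally switched implicitly by a tracer or trigger
++ that needs absolute timestamps - for example, a hist trigger
++ referencing the 'common_timestamp' field (see
++ Documentation/trace/histogram.txt) may switch the buffer to
++ absolute mode while it is attached. In that case the file would
++ read something like:
++
++ # cat timestamp_mode
++ delta [absolute]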
++
+ hwlat_detector:
+
+ Directory for the Hardware Latency Detector.
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/Documentation/trace/histogram.txt linux-4.14/Documentation/trace/histogram.txt
+--- linux-4.14.orig/Documentation/trace/histogram.txt 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/Documentation/trace/histogram.txt 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,1995 @@
++ Event Histograms
++
++ Documentation written by Tom Zanussi
++
++1. Introduction
++===============
++
++ Histogram triggers are special event triggers that can be used to
++ aggregate trace event data into histograms. For information on
++ trace events and event triggers, see Documentation/trace/events.txt.
++
++
++2. Histogram Trigger Command
++============================
++
++ A histogram trigger command is an event trigger command that
++ aggregates event hits into a hash table keyed on one or more trace
++ event format fields (or stacktrace) and a set of running totals
++ derived from one or more trace event format fields and/or event
++ counts (hitcount).
++
++ The format of a hist trigger is as follows:
++
++ hist:keys=<field1[,field2,...]>[:values=<field1[,field2,...]>]
++ [:sort=<field1[,field2,...]>][:size=#entries][:pause][:continue]
++ [:clear][:name=histname1] [if <filter>]
++
++ When a matching event is hit, an entry is added to a hash table
++ using the key(s) and value(s) named. Keys and values correspond to
++ fields in the event's format description. Values must correspond to
++ numeric fields - on an event hit, the value(s) will be added to a
++ sum kept for that field. The special string 'hitcount' can be used
++ in place of an explicit value field - this is simply a count of
++ event hits. If 'values' isn't specified, an implicit 'hitcount'
++ value will be automatically created and used as the only value.
++ Keys can be any field, or the special string 'stacktrace', which
++ will use the event's kernel stacktrace as the key. The keywords
++ 'keys' or 'key' can be used to specify keys, and the keywords
++ 'values', 'vals', or 'val' can be used to specify values. Compound
++ keys consisting of up to two fields can be specified by the 'keys'
++ keyword. Hashing a compound key produces a unique entry in the
++ table for each unique combination of component keys, and can be
++ useful for providing more fine-grained summaries of event data.
++ Additionally, sort keys consisting of up to two fields can be
++ specified by the 'sort' keyword. If more than one field is
++ specified, the result will be a 'sort within a sort': the first key
++ is taken to be the primary sort key and the second the secondary
++ key. If a hist trigger is given a name using the 'name' parameter,
++ its histogram data will be shared with other triggers of the same
++ name, and trigger hits will update this common data. Only triggers
++ with 'compatible' fields can be combined in this way; triggers are
++ 'compatible' if the fields named in the trigger share the same
++ number and type of fields and those fields also have the same names.
++ Note that any two events always share the compatible 'hitcount' and
++ 'stacktrace' fields and can therefore be combined using those
++ fields, however pointless that may be.
++
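++ As a quick sketch of a compound key combined with a two-field
++ sort (using the raw_syscalls:sys_enter event here; any event with
++ suitable fields would work equally well):
++
++ # echo 'hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id,hitcount' > \
++ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
++
++ This produces one entry per unique (syscall id, pid) pair, sorted
++ first by syscall id and, within each id, by hitcount.
++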
++ 'hist' triggers add a 'hist' file to each event's subdirectory.
++ Reading the 'hist' file for the event will dump the hash table in
++ its entirety to stdout. If there are multiple hist triggers
++ attached to an event, there will be a table for each trigger in the
++ output. The table displayed for a named trigger will be the same as
++ any other instance having the same name. Each printed hash table
++ entry is a simple list of the keys and values comprising the entry;
++ keys are printed first and are delineated by curly braces, and are
++ followed by the set of value fields for the entry. By default,
++ numeric fields are displayed as base-10 integers. This can be
++ modified by appending any of the following modifiers to the field
++ name:
++
++ .hex display a number as a hex value
++ .sym display an address as a symbol
++ .sym-offset display an address as a symbol and offset
++ .syscall display a syscall id as a system call name
++ .execname display a common_pid as a program name
++ .log2 display log2 value rather than raw number
++ .usecs display a common_timestamp in microseconds
++
++ Note that in general the semantics of a given field aren't
++ interpreted when applying a modifier to it, but there are some
++ restrictions to be aware of in this regard:
++
++ - only the 'hex' modifier can be used for values (because values
++ are essentially sums, and the other modifiers don't make sense
++ in that context).
++ - the 'execname' modifier can only be used on a 'common_pid'. The
++ reason for this is that the execname is simply the 'comm' value
++ saved for the 'current' process when an event was triggered,
++ which is the same as the common_pid value saved by the event
++ tracing code. Trying to apply that comm value to other pid
++ values wouldn't be correct; events that care about other pids
++ typically save pid-specific comm fields in the event itself.
++
++ A typical usage scenario would be the following to enable a hist
++ trigger, read its current contents, and then turn it off:
++
++ # echo 'hist:keys=skbaddr.hex:vals=len' > \
++ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
++
++ # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
++
++ # echo '!hist:keys=skbaddr.hex:vals=len' > \
++ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
++
++ The trigger file itself can be read to show the details of the
++ currently attached hist trigger. This information is also displayed
++ at the top of the 'hist' file when read.
++
++ By default, the size of the hash table is 2048 entries. The 'size'
++ parameter can be used to specify more or fewer than that. The units
++ are in terms of hashtable entries - if a run uses more entries than
++ specified, the results will show the number of 'drops', the number
++ of hits that were ignored. The size should be a power of 2 between
++ 128 and 131072 (any non-power-of-2 number specified will be rounded
++ up).
++
++ The 'sort' parameter can be used to specify a value field to sort
++ on. The default if unspecified is 'hitcount' and the default sort
++ order is 'ascending'. To sort in the opposite direction, append
++ '.descending' to the sort key.
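++
++ For example (an illustrative command rather than output from a
++ specific run), a trigger using a larger table and a descending sort
++ on a value field could be set up like this:
++
++ # echo 'hist:keys=call_site:vals=bytes_req:sort=bytes_req.descending:size=4096' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger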
++
++ The 'pause' parameter can be used to pause an existing hist trigger
++ or to start a hist trigger but not log any events until told to do
++ so. 'continue' or 'cont' can be used to start or restart a paused
++ hist trigger.
++
++ The 'clear' parameter will clear the contents of a running hist
++ trigger and leave its current paused/active state unchanged.
++
++ Note that the 'pause', 'cont', and 'clear' parameters should be
++ applied using the 'append' shell operator ('>>') if applied to an
++ existing trigger, rather than via the '>' operator, which will cause
++ the trigger to be removed through truncation.
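++
++ For instance, assuming the kmalloc trigger from the illustration
++ above is already attached, it could be paused and later resumed
++ like this (note the use of '>>'):
++
++ # echo 'hist:keys=call_site:vals=bytes_req:sort=bytes_req.descending:size=4096:pause' >> \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++
++ # echo 'hist:keys=call_site:vals=bytes_req:sort=bytes_req.descending:size=4096:cont' >> \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger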
++
++- enable_hist/disable_hist
++
++ The enable_hist and disable_hist triggers can be used to have one
++ event conditionally start and stop another event's already-attached
++ hist trigger. Any number of enable_hist and disable_hist triggers
++ can be attached to a given event, allowing that event to kick off
++ and stop aggregations on a host of other events.
++
++ The format is very similar to the enable/disable_event triggers:
++
++ enable_hist:<system>:<event>[:count]
++ disable_hist:<system>:<event>[:count]
++
++ Instead of enabling or disabling the tracing of the target event
++ into the trace buffer as the enable/disable_event triggers do, the
++ enable/disable_hist triggers enable or disable the aggregation of
++ the target event into a hash table.
++
++ A typical usage scenario for the enable_hist/disable_hist triggers
++ would be to first set up a paused hist trigger on some event,
++ followed by an enable_hist/disable_hist pair that turns the hist
++ aggregation on and off when conditions of interest are hit:
++
++ # echo 'hist:keys=skbaddr.hex:vals=len:pause' > \
++ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
++
++ # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
++ /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
++
++ # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
++ /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
++
++ The above sets up an initially paused hist trigger which is unpaused
++ and starts aggregating events when a given program is executed, and
++ which stops aggregating when the process exits and the hist trigger
++ is paused again.
++
++ The examples below provide a more concrete illustration of the
++ concepts and typical usage patterns discussed above.
++
++ 'special' event fields
++ ------------------------
++
++ There are a number of 'special event fields' available for use as
++ keys or values in a hist trigger. These look like and behave as if
++ they were actual event fields, but aren't really part of the event's
++ field definition or format file. They are however available for any
++ event, and can be used anywhere an actual event field could be.
++ They are:
++
++ common_timestamp u64 - timestamp (from ring buffer) associated
++ with the event, in nanoseconds. May be
++ modified by .usecs to have timestamps
++ interpreted as microseconds.
++ cpu int - the cpu on which the event occurred.
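++
++ As a simple illustration (again, a sketch rather than output from an
++ actual session), the special 'cpu' field could be used to count how
++ many times an event fires on each cpu:
++
++ # echo 'hist:keys=cpu:vals=hitcount' > \
++ /sys/kernel/debug/tracing/events/sched/sched_switch/trigger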
++
++ Extended error information
++ --------------------------
++
++ For some error conditions encountered when invoking a hist trigger
++ command, extended error information is available via the
++ corresponding event's 'hist' file. Reading the hist file after an
++ error will display more detailed information about what went wrong,
++ if information is available. This extended error information will
++ be available until the next hist trigger command for that event.
++
++ If available for a given error condition, the extended error
++ information and usage takes the following form:
++
++ # echo xxx > /sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger
++ echo: write error: Invalid argument
++
++ # cat /sys/kernel/debug/tracing/events/sched/sched_wakeup/hist
++ ERROR: Couldn't yyy: zzz
++ Last command: xxx
++
++6.2 'hist' trigger examples
++---------------------------
++
++ The first set of examples creates aggregations using the kmalloc
++ event. The fields that can be used for the hist trigger are listed
++ in the kmalloc event's format file:
++
++ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/format
++ name: kmalloc
++ ID: 374
++ format:
++ field:unsigned short common_type; offset:0; size:2; signed:0;
++ field:unsigned char common_flags; offset:2; size:1; signed:0;
++ field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
++ field:int common_pid; offset:4; size:4; signed:1;
++
++ field:unsigned long call_site; offset:8; size:8; signed:0;
++ field:const void * ptr; offset:16; size:8; signed:0;
++ field:size_t bytes_req; offset:24; size:8; signed:0;
++ field:size_t bytes_alloc; offset:32; size:8; signed:0;
++ field:gfp_t gfp_flags; offset:40; size:4; signed:0;
++
++ We'll start by creating a hist trigger that generates a simple table
++ that lists the total number of bytes requested for each function in
++ the kernel that made one or more calls to kmalloc:
++
++ # echo 'hist:key=call_site:val=bytes_req' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++
++ This tells the tracing system to create a 'hist' trigger using the
++ call_site field of the kmalloc event as the key for the table, which
++ just means that each unique call_site address will have an entry
++ created for it in the table. The 'val=bytes_req' parameter tells
++ the hist trigger that for each unique entry (call_site) in the
++ table, it should keep a running total of the number of bytes
++ requested by that call_site.
++
++ We'll let it run for awhile and then dump the contents of the 'hist'
++ file in the kmalloc event's subdirectory (for readability, a number
++ of entries have been omitted):
++
++ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
++ # trigger info: hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
++
++ { call_site: 18446744072106379007 } hitcount: 1 bytes_req: 176
++ { call_site: 18446744071579557049 } hitcount: 1 bytes_req: 1024
++ { call_site: 18446744071580608289 } hitcount: 1 bytes_req: 16384
++ { call_site: 18446744071581827654 } hitcount: 1 bytes_req: 24
++ { call_site: 18446744071580700980 } hitcount: 1 bytes_req: 8
++ { call_site: 18446744071579359876 } hitcount: 1 bytes_req: 152
++ { call_site: 18446744071580795365 } hitcount: 3 bytes_req: 144
++ { call_site: 18446744071581303129 } hitcount: 3 bytes_req: 144
++ { call_site: 18446744071580713234 } hitcount: 4 bytes_req: 2560
++ { call_site: 18446744071580933750 } hitcount: 4 bytes_req: 736
++ .
++ .
++ .
++ { call_site: 18446744072106047046 } hitcount: 69 bytes_req: 5576
++ { call_site: 18446744071582116407 } hitcount: 73 bytes_req: 2336
++ { call_site: 18446744072106054684 } hitcount: 136 bytes_req: 140504
++ { call_site: 18446744072106224230 } hitcount: 136 bytes_req: 19584
++ { call_site: 18446744072106078074 } hitcount: 153 bytes_req: 2448
++ { call_site: 18446744072106062406 } hitcount: 153 bytes_req: 36720
++ { call_site: 18446744071582507929 } hitcount: 153 bytes_req: 37088
++ { call_site: 18446744072102520590 } hitcount: 273 bytes_req: 10920
++ { call_site: 18446744071582143559 } hitcount: 358 bytes_req: 716
++ { call_site: 18446744072106465852 } hitcount: 417 bytes_req: 56712
++ { call_site: 18446744072102523378 } hitcount: 485 bytes_req: 27160
++ { call_site: 18446744072099568646 } hitcount: 1676 bytes_req: 33520
++
++ Totals:
++ Hits: 4610
++ Entries: 45
++ Dropped: 0
++
++ The output displays a line for each entry, beginning with the key
++ specified in the trigger, followed by the value(s) also specified in
++ the trigger. At the beginning of the output is a line that displays
++ the trigger info, which can also be displayed by reading the
++ 'trigger' file:
++
++ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++ hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
++
++ At the end of the output are a few lines that display the overall
++ totals for the run. The 'Hits' field shows the total number of
++ times the event trigger was hit, the 'Entries' field shows the total
++ number of used entries in the hash table, and the 'Dropped' field
++ shows the number of hits that were dropped because the number of
++ used entries for the run exceeded the maximum number of entries
++ allowed for the table (normally 0, but if not, a hint that you may
++ want to increase the size of the table using the 'size' parameter).
++
++ Notice in the above output that there's an extra field, 'hitcount',
++ which wasn't specified in the trigger. Also notice that in the
++ trigger info output, there's a parameter, 'sort=hitcount', which
++ wasn't specified in the trigger either. The reason for that is that
++ every trigger implicitly keeps a count of the total number of hits
++ attributed to a given entry, called the 'hitcount'. That hitcount
++ information is explicitly displayed in the output, and in the
++ absence of a user-specified sort parameter, is used as the default
++ sort field.
++
++ The value 'hitcount' can be used in place of an explicit value in
++ the 'values' parameter if you don't really need to have any
++ particular field summed and are mainly interested in hit
++ frequencies.
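++
++ For example, if all we had wanted here was a count of kmalloc calls
++ per call_site, without summing any field, we could have used
++ something like the following instead (illustrative only):
++
++ # echo 'hist:key=call_site:val=hitcount' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger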
++
++ To turn the hist trigger off, simply call up the trigger in the
++ command history and re-execute it with a '!' prepended:
++
++ # echo '!hist:key=call_site:val=bytes_req' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++
++ Finally, notice that the call_site as displayed in the output above
++ isn't really very useful. It's an address, but normally addresses
++ are displayed in hex. To have a numeric field displayed as a hex
++ value, simply append '.hex' to the field name in the trigger:
++
++ # echo 'hist:key=call_site.hex:val=bytes_req' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++
++ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
++ # trigger info: hist:keys=call_site.hex:vals=bytes_req:sort=hitcount:size=2048 [active]
++
++ { call_site: ffffffffa026b291 } hitcount: 1 bytes_req: 433
++ { call_site: ffffffffa07186ff } hitcount: 1 bytes_req: 176
++ { call_site: ffffffff811ae721 } hitcount: 1 bytes_req: 16384
++ { call_site: ffffffff811c5134 } hitcount: 1 bytes_req: 8
++ { call_site: ffffffffa04a9ebb } hitcount: 1 bytes_req: 511
++ { call_site: ffffffff8122e0a6 } hitcount: 1 bytes_req: 12
++ { call_site: ffffffff8107da84 } hitcount: 1 bytes_req: 152
++ { call_site: ffffffff812d8246 } hitcount: 1 bytes_req: 24
++ { call_site: ffffffff811dc1e5 } hitcount: 3 bytes_req: 144
++ { call_site: ffffffffa02515e8 } hitcount: 3 bytes_req: 648
++ { call_site: ffffffff81258159 } hitcount: 3 bytes_req: 144
++ { call_site: ffffffff811c80f4 } hitcount: 4 bytes_req: 544
++ .
++ .
++ .
++ { call_site: ffffffffa06c7646 } hitcount: 106 bytes_req: 8024
++ { call_site: ffffffffa06cb246 } hitcount: 132 bytes_req: 31680
++ { call_site: ffffffffa06cef7a } hitcount: 132 bytes_req: 2112
++ { call_site: ffffffff8137e399 } hitcount: 132 bytes_req: 23232
++ { call_site: ffffffffa06c941c } hitcount: 185 bytes_req: 171360
++ { call_site: ffffffffa06f2a66 } hitcount: 185 bytes_req: 26640
++ { call_site: ffffffffa036a70e } hitcount: 265 bytes_req: 10600
++ { call_site: ffffffff81325447 } hitcount: 292 bytes_req: 584
++ { call_site: ffffffffa072da3c } hitcount: 446 bytes_req: 60656
++ { call_site: ffffffffa036b1f2 } hitcount: 526 bytes_req: 29456
++ { call_site: ffffffffa0099c06 } hitcount: 1780 bytes_req: 35600
++
++ Totals:
++ Hits: 4775
++ Entries: 46
++ Dropped: 0
++
++ Even that's only marginally more useful - while hex values do look
++ more like addresses, what users are typically more interested in
++ when looking at text addresses are the corresponding symbols
++ instead. To have an address displayed as a symbolic value,
++ simply append '.sym' or '.sym-offset' to the field name in the
++ trigger:
++
++ # echo 'hist:key=call_site.sym:val=bytes_req' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++
++ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
++ # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=hitcount:size=2048 [active]
++
++ { call_site: [ffffffff810adcb9] syslog_print_all } hitcount: 1 bytes_req: 1024
++ { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
++ { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7
++ { call_site: [ffffffff8154acbe] usb_alloc_urb } hitcount: 1 bytes_req: 192
++ { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
++ { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
++ { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
++ { call_site: [ffffffff811febd5] fsnotify_alloc_group } hitcount: 2 bytes_req: 528
++ { call_site: [ffffffff81440f58] __tty_buffer_request_room } hitcount: 2 bytes_req: 2624
++ { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 2 bytes_req: 96
++ { call_site: [ffffffffa05e19af] ieee80211_start_tx_ba_session [mac80211] } hitcount: 2 bytes_req: 464
++ { call_site: [ffffffff81672406] tcp_get_metrics } hitcount: 2 bytes_req: 304
++ { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
++ { call_site: [ffffffff81089b05] sched_create_group } hitcount: 2 bytes_req: 1424
++ .
++ .
++ .
++ { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1185 bytes_req: 123240
++ { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 1185 bytes_req: 104280
++ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 1402 bytes_req: 190672
++ { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 1518 bytes_req: 146208
++ { call_site: [ffffffffa029070e] drm_vma_node_allow [drm] } hitcount: 1746 bytes_req: 69840
++ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 2021 bytes_req: 792312
++ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 2592 bytes_req: 145152
++ { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2629 bytes_req: 378576
++ { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2629 bytes_req: 3783248
++ { call_site: [ffffffff81325607] apparmor_file_alloc_security } hitcount: 5192 bytes_req: 10384
++ { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 5529 bytes_req: 110584
++ { call_site: [ffffffff8131ebf7] aa_alloc_task_context } hitcount: 21943 bytes_req: 702176
++ { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 55759 bytes_req: 5074265
++
++ Totals:
++ Hits: 109928
++ Entries: 71
++ Dropped: 0
++
++ Because the default sort key above is 'hitcount', the above shows
++ the list of call_sites by increasing hitcount, so that at the bottom
++ we see the functions that made the most kmalloc calls during the
++ run. If instead we wanted to see the top kmalloc callers in
++ terms of the number of bytes requested rather than the number of
++ calls, and we wanted the top caller to appear at the top, we can use
++ the 'sort' parameter, along with the 'descending' modifier:
++
++ # echo 'hist:key=call_site.sym:val=bytes_req:sort=bytes_req.descending' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++
++ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
++ # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]
++
++ { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2186 bytes_req: 3397464
++ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1790 bytes_req: 712176
++ { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 8132 bytes_req: 513135
++ { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 106 bytes_req: 440128
++ { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2186 bytes_req: 314784
++ { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 2174 bytes_req: 208992
++ { call_site: [ffffffff811ae8e1] __kmalloc } hitcount: 8 bytes_req: 131072
++ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 859 bytes_req: 116824
++ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 1834 bytes_req: 102704
++ { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 972 bytes_req: 101088
++ { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 972 bytes_req: 85536
++ { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 3333 bytes_req: 66664
++ { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 209 bytes_req: 61632
++ .
++ .
++ .
++ { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
++ { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
++ { call_site: [ffffffff812d8406] copy_semundo } hitcount: 2 bytes_req: 48
++ { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 1 bytes_req: 48
++ { call_site: [ffffffffa027121a] drm_getmagic [drm] } hitcount: 1 bytes_req: 48
++ { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
++ { call_site: [ffffffff811c52f4] bprm_change_interp } hitcount: 2 bytes_req: 16
++ { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
++ { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
++ { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7
++
++ Totals:
++ Hits: 32133
++ Entries: 81
++ Dropped: 0
++
++ To display the offset and size information in addition to the symbol
++ name, just use 'sym-offset' instead:
++
++ # echo 'hist:key=call_site.sym-offset:val=bytes_req:sort=bytes_req.descending' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++
++ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
++ # trigger info: hist:keys=call_site.sym-offset:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]
++
++ { call_site: [ffffffffa046041c] i915_gem_execbuffer2+0x6c/0x2c0 [i915] } hitcount: 4569 bytes_req: 3163720
++ { call_site: [ffffffffa0489a66] intel_ring_begin+0xc6/0x1f0 [i915] } hitcount: 4569 bytes_req: 657936
++ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23+0x694/0x1020 [i915] } hitcount: 1519 bytes_req: 472936
++ { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23+0x516/0x1020 [i915] } hitcount: 3050 bytes_req: 211832
++ { call_site: [ffffffff811e2a1b] seq_buf_alloc+0x1b/0x50 } hitcount: 34 bytes_req: 148384
++ { call_site: [ffffffffa04a580c] intel_crtc_page_flip+0xbc/0x870 [i915] } hitcount: 1385 bytes_req: 144040
++ { call_site: [ffffffff811ae8e1] __kmalloc+0x191/0x1b0 } hitcount: 8 bytes_req: 131072
++ { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl+0x282/0x360 [drm] } hitcount: 1385 bytes_req: 121880
++ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc+0x32/0x100 [drm] } hitcount: 1848 bytes_req: 103488
++ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state+0x2c/0xa0 [i915] } hitcount: 461 bytes_req: 62696
++ { call_site: [ffffffffa029070e] drm_vma_node_allow+0x2e/0xd0 [drm] } hitcount: 1541 bytes_req: 61640
++ { call_site: [ffffffff815f8d7b] sk_prot_alloc+0xcb/0x1b0 } hitcount: 57 bytes_req: 57456
++ .
++ .
++ .
++ { call_site: [ffffffff8109524a] alloc_fair_sched_group+0x5a/0x1a0 } hitcount: 2 bytes_req: 128
++ { call_site: [ffffffffa027b921] drm_vm_open_locked+0x31/0xa0 [drm] } hitcount: 3 bytes_req: 96
++ { call_site: [ffffffff8122e266] proc_self_follow_link+0x76/0xb0 } hitcount: 8 bytes_req: 96
++ { call_site: [ffffffff81213e80] load_elf_binary+0x240/0x1650 } hitcount: 3 bytes_req: 84
++ { call_site: [ffffffff8154bc62] usb_control_msg+0x42/0x110 } hitcount: 1 bytes_req: 8
++ { call_site: [ffffffffa00bf6fe] hidraw_send_report+0x7e/0x1a0 [hid] } hitcount: 1 bytes_req: 7
++ { call_site: [ffffffffa00bf1ca] hidraw_report_event+0x8a/0x120 [hid] } hitcount: 1 bytes_req: 7
++
++ Totals:
++ Hits: 26098
++ Entries: 64
++ Dropped: 0
++
++ We can also add multiple fields to the 'values' parameter. For
++ example, we might want to see the total number of bytes allocated
++ alongside bytes requested, and display the result sorted by bytes
++ allocated in descending order:
++
++ # echo 'hist:keys=call_site.sym:values=bytes_req,bytes_alloc:sort=bytes_alloc.descending' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++
++ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
++ # trigger info: hist:keys=call_site.sym:vals=bytes_req,bytes_alloc:sort=bytes_alloc.descending:size=2048 [active]
++
++ { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 7403 bytes_req: 4084360 bytes_alloc: 5958016
++ { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 541 bytes_req: 2213968 bytes_alloc: 2228224
++ { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 7404 bytes_req: 1066176 bytes_alloc: 1421568
++ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1565 bytes_req: 557368 bytes_alloc: 1037760
++ { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 9557 bytes_req: 595778 bytes_alloc: 695744
++ { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 5839 bytes_req: 430680 bytes_alloc: 470400
++ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 2388 bytes_req: 324768 bytes_alloc: 458496
++ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 3911 bytes_req: 219016 bytes_alloc: 250304
++ { call_site: [ffffffff815f8d7b] sk_prot_alloc } hitcount: 235 bytes_req: 236880 bytes_alloc: 240640
++ { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 557 bytes_req: 169024 bytes_alloc: 221760
++ { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 9378 bytes_req: 187548 bytes_alloc: 206312
++ { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1519 bytes_req: 157976 bytes_alloc: 194432
++ .
++ .
++ .
++ { call_site: [ffffffff8109bd3b] sched_autogroup_create_attach } hitcount: 2 bytes_req: 144 bytes_alloc: 192
++ { call_site: [ffffffff81097ee8] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
++ { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
++ { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
++ { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
++ { call_site: [ffffffff81213e80] load_elf_binary } hitcount: 3 bytes_req: 84 bytes_alloc: 96
++ { call_site: [ffffffff81079a2e] kthread_create_on_node } hitcount: 1 bytes_req: 56 bytes_alloc: 64
++ { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8
++ { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8 bytes_alloc: 8
++ { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8
++
++ Totals:
++ Hits: 66598
++ Entries: 65
++ Dropped: 0
++
++ Finally, to finish off our kmalloc example, instead of simply having
++ the hist trigger display symbolic call_sites, we can have the hist
++ trigger additionally display the complete set of kernel stack traces
++ that led to each call_site. To do that, we simply use the special
++ value 'stacktrace' for the key parameter:
++
++ # echo 'hist:keys=stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
++ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
++
++ The above trigger will use the kernel stack trace in effect when an
++ event is triggered as the key for the hash table. This allows the
++ enumeration of every kernel callpath that led up to a particular
++ event, along with a running total of any of the event fields for
++ that event. Here we tally bytes requested and bytes allocated for
++ every callpath in the system that led up to a kmalloc (in this case
++ every callpath to a kmalloc for a kernel compile):
++
++ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
++ # trigger info: hist:keys=stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
++
++ { stacktrace:
++ __kmalloc_track_caller+0x10b/0x1a0
++ kmemdup+0x20/0x50
++ hidraw_report_event+0x8a/0x120 [hid]
++ hid_report_raw_event+0x3ea/0x440 [hid]
++ hid_input_report+0x112/0x190 [hid]
++ hid_irq_in+0xc2/0x260 [usbhid]
++ __usb_hcd_giveback_urb+0x72/0x120
++ usb_giveback_urb_bh+0x9e/0xe0
++ tasklet_hi_action+0xf8/0x100
++ __do_softirq+0x114/0x2c0
++ irq_exit+0xa5/0xb0
++ do_IRQ+0x5a/0xf0
++ ret_from_intr+0x0/0x30
++ cpuidle_enter+0x17/0x20
++ cpu_startup_entry+0x315/0x3e0
++ rest_init+0x7c/0x80
++ } hitcount: 3 bytes_req: 21 bytes_alloc: 24
++ { stacktrace:
++ __kmalloc_track_caller+0x10b/0x1a0
++ kmemdup+0x20/0x50
++ hidraw_report_event+0x8a/0x120 [hid]
++ hid_report_raw_event+0x3ea/0x440 [hid]
++ hid_input_report+0x112/0x190 [hid]
++ hid_irq_in+0xc2/0x260 [usbhid]
++ __usb_hcd_giveback_urb+0x72/0x120
++ usb_giveback_urb_bh+0x9e/0xe0
++ tasklet_hi_action+0xf8/0x100
++ __do_softirq+0x114/0x2c0
++ irq_exit+0xa5/0xb0
++ do_IRQ+0x5a/0xf0
++ ret_from_intr+0x0/0x30
++ } hitcount: 3 bytes_req: 21 bytes_alloc: 24
++ { stacktrace:
++ kmem_cache_alloc_trace+0xeb/0x150
++ aa_alloc_task_context+0x27/0x40
++ apparmor_cred_prepare+0x1f/0x50
++ security_prepare_creds+0x16/0x20
++ prepare_creds+0xdf/0x1a0
++ SyS_capset+0xb5/0x200
++ system_call_fastpath+0x12/0x6a
++ } hitcount: 1 bytes_req: 32 bytes_alloc: 32
++ .
++ .
++ .
++ { stacktrace:
++ __kmalloc+0x11b/0x1b0
++ i915_gem_execbuffer2+0x6c/0x2c0 [i915]
++ drm_ioctl+0x349/0x670 [drm]
++ do_vfs_ioctl+0x2f0/0x4f0
++ SyS_ioctl+0x81/0xa0
++ system_call_fastpath+0x12/0x6a
++ } hitcount: 17726 bytes_req: 13944120 bytes_alloc: 19593808
++ { stacktrace:
++ __kmalloc+0x11b/0x1b0
++ load_elf_phdrs+0x76/0xa0
++ load_elf_binary+0x102/0x1650
++ search_binary_handler+0x97/0x1d0
++ do_execveat_common.isra.34+0x551/0x6e0
++ SyS_execve+0x3a/0x50
++ return_from_execve+0x0/0x23
++ } hitcount: 33348 bytes_req: 17152128 bytes_alloc: 20226048
++ { stacktrace:
++ kmem_cache_alloc_trace+0xeb/0x150
++ apparmor_file_alloc_security+0x27/0x40
++ security_file_alloc+0x16/0x20
++ get_empty_filp+0x93/0x1c0
++ path_openat+0x31/0x5f0
++ do_filp_open+0x3a/0x90
++ do_sys_open+0x128/0x220
++ SyS_open+0x1e/0x20
++ system_call_fastpath+0x12/0x6a
++ } hitcount: 4766422 bytes_req: 9532844 bytes_alloc: 38131376
++ { stacktrace:
++ __kmalloc+0x11b/0x1b0
++ seq_buf_alloc+0x1b/0x50
++ seq_read+0x2cc/0x370
++ proc_reg_read+0x3d/0x80
++ __vfs_read+0x28/0xe0
++ vfs_read+0x86/0x140
++ SyS_read+0x46/0xb0
++ system_call_fastpath+0x12/0x6a
++ } hitcount: 19133 bytes_req: 78368768 bytes_alloc: 78368768
++
++ Totals:
++ Hits: 6085872
++ Entries: 253
++ Dropped: 0
++
++ If you key a hist trigger on common_pid, in order for example to
++ gather and display sorted totals for each process, you can use the
++ special .execname modifier to display the executable names for the
++ processes in the table rather than raw pids. The example below
++ keeps a per-process sum of total bytes read:
++
++ # echo 'hist:key=common_pid.execname:val=count:sort=count.descending' > \
++ /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
++
++ # cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/hist
++ # trigger info: hist:keys=common_pid.execname:vals=count:sort=count.descending:size=2048 [active]
++
++ { common_pid: gnome-terminal [ 3196] } hitcount: 280 count: 1093512
++ { common_pid: Xorg [ 1309] } hitcount: 525 count: 256640
++ { common_pid: compiz [ 2889] } hitcount: 59 count: 254400
++ { common_pid: bash [ 8710] } hitcount: 3 count: 66369
++ { common_pid: dbus-daemon-lau [ 8703] } hitcount: 49 count: 47739
++ { common_pid: irqbalance [ 1252] } hitcount: 27 count: 27648
++ { common_pid: 01ifupdown [ 8705] } hitcount: 3 count: 17216
++ { common_pid: dbus-daemon [ 772] } hitcount: 10 count: 12396
++ { common_pid: Socket Thread [ 8342] } hitcount: 11 count: 11264
++ { common_pid: nm-dhcp-client. [ 8701] } hitcount: 6 count: 7424
++ { common_pid: gmain [ 1315] } hitcount: 18 count: 6336
++ .
++ .
++ .
++ { common_pid: postgres [ 1892] } hitcount: 2 count: 32
++ { common_pid: postgres [ 1891] } hitcount: 2 count: 32
++ { common_pid: gmain [ 8704] } hitcount: 2 count: 32
++ { common_pid: upstart-dbus-br [ 2740] } hitcount: 21 count: 21
++ { common_pid: nm-dispatcher.a [ 8696] } hitcount: 1 count: 16
++ { common_pid: indicator-datet [ 2904] } hitcount: 1 count: 16
++ { common_pid: gdbus [ 2998] } hitcount: 1 count: 16
++ { common_pid: rtkit-daemon [ 2052] } hitcount: 1 count: 8
++ { common_pid: init [ 1] } hitcount: 2 count: 2
++
++ Totals:
++ Hits: 2116
++ Entries: 51
++ Dropped: 0
++
++ Similarly, if you key a hist trigger on syscall id, for example to
++ gather and display a list of systemwide syscall hits, you can use
++ the special .syscall modifier to display the syscall names rather
++ than raw ids. The example below keeps a running total of syscall
++ counts for the system during the run:
++
++ # echo 'hist:key=id.syscall:val=hitcount' > \
++ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
++
++ # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
++ # trigger info: hist:keys=id.syscall:vals=hitcount:sort=hitcount:size=2048 [active]
++
++ { id: sys_fsync [ 74] } hitcount: 1
++ { id: sys_newuname [ 63] } hitcount: 1
++ { id: sys_prctl [157] } hitcount: 1
++ { id: sys_statfs [137] } hitcount: 1
++ { id: sys_symlink [ 88] } hitcount: 1
++ { id: sys_sendmmsg [307] } hitcount: 1
++ { id: sys_semctl [ 66] } hitcount: 1
++ { id: sys_readlink [ 89] } hitcount: 3
++ { id: sys_bind [ 49] } hitcount: 3
++ { id: sys_getsockname [ 51] } hitcount: 3
++ { id: sys_unlink [ 87] } hitcount: 3
++ { id: sys_rename [ 82] } hitcount: 4
++ { id: unknown_syscall [ 58] } hitcount: 4
++ { id: sys_connect [ 42] } hitcount: 4
++ { id: sys_getpid [ 39] } hitcount: 4
++ .
++ .
++ .
++ { id: sys_rt_sigprocmask [ 14] } hitcount: 952
++ { id: sys_futex [202] } hitcount: 1534
++ { id: sys_write [ 1] } hitcount: 2689
++ { id: sys_setitimer [ 38] } hitcount: 2797
++ { id: sys_read [ 0] } hitcount: 3202
++ { id: sys_select [ 23] } hitcount: 3773
++ { id: sys_writev [ 20] } hitcount: 4531
++ { id: sys_poll [ 7] } hitcount: 8314
++ { id: sys_recvmsg [ 47] } hitcount: 13738
++ { id: sys_ioctl [ 16] } hitcount: 21843
++
++ Totals:
++ Hits: 67612
++ Entries: 72
++ Dropped: 0
++
++ The syscall counts above provide a rough overall picture of system
++ call activity on the system; we can see for example that the most
++ popular system call on this system was the 'sys_ioctl' system call.
++
++ We can use 'compound' keys to refine that number and provide some
++ further insight as to which processes exactly contribute to the
++ overall ioctl count.
++
++ The command below keeps a hitcount for every unique combination of
++ system call id and pid - the end result is essentially a table
++ that keeps a per-pid sum of system call hits. The results are
++ sorted using the system call id as the primary key, and the
++ hitcount sum as the secondary key:
++
++ # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount' > \
++ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
++
++ # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
++ # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 [active]
++
++ { id: sys_read [ 0], common_pid: rtkit-daemon [ 1877] } hitcount: 1
++ { id: sys_read [ 0], common_pid: gdbus [ 2976] } hitcount: 1
++ { id: sys_read [ 0], common_pid: console-kit-dae [ 3400] } hitcount: 1
++ { id: sys_read [ 0], common_pid: postgres [ 1865] } hitcount: 1
++ { id: sys_read [ 0], common_pid: deja-dup-monito [ 3543] } hitcount: 2
++ { id: sys_read [ 0], common_pid: NetworkManager [ 890] } hitcount: 2
++ { id: sys_read [ 0], common_pid: evolution-calen [ 3048] } hitcount: 2
++ { id: sys_read [ 0], common_pid: postgres [ 1864] } hitcount: 2
++ { id: sys_read [ 0], common_pid: nm-applet [ 3022] } hitcount: 2
++ { id: sys_read [ 0], common_pid: whoopsie [ 1212] } hitcount: 2
++ .
++ .
++ .
++ { id: sys_ioctl [ 16], common_pid: bash [ 8479] } hitcount: 1
++ { id: sys_ioctl [ 16], common_pid: bash [ 3472] } hitcount: 12
++ { id: sys_ioctl [ 16], common_pid: gnome-terminal [ 3199] } hitcount: 16
++ { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 1808
++ { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 5580
++ .
++ .
++ .
++ { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2690] } hitcount: 3
++ { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2688] } hitcount: 16
++ { id: sys_inotify_add_watch [254], common_pid: gmain [ 975] } hitcount: 2
++ { id: sys_inotify_add_watch [254], common_pid: gmain [ 3204] } hitcount: 4
++ { id: sys_inotify_add_watch [254], common_pid: gmain [ 2888] } hitcount: 4
++ { id: sys_inotify_add_watch [254], common_pid: gmain [ 3003] } hitcount: 4
++ { id: sys_inotify_add_watch [254], common_pid: gmain [ 2873] } hitcount: 4
++ { id: sys_inotify_add_watch [254], common_pid: gmain [ 3196] } hitcount: 6
++ { id: sys_openat [257], common_pid: java [ 2623] } hitcount: 2
++ { id: sys_eventfd2 [290], common_pid: ibus-ui-gtk3 [ 2760] } hitcount: 4
++ { id: sys_eventfd2 [290], common_pid: compiz [ 2994] } hitcount: 6
++
++ Totals:
++ Hits: 31536
++ Entries: 323
++ Dropped: 0
++
++ The above list does give us a breakdown of the ioctl syscall by
++ pid, but it also gives us quite a bit more than that, which we
++ don't really care about at the moment. Since we know the syscall
++ id for sys_ioctl (16, displayed next to the sys_ioctl name), we
++ can use that to filter out all the other syscalls:
++
++ # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount if id == 16' > \
++ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
++
++ # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
++ # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 if id == 16 [active]
++
++ { id: sys_ioctl [ 16], common_pid: gmain [ 2769] } hitcount: 1
++ { id: sys_ioctl [ 16], common_pid: evolution-addre [ 8571] } hitcount: 1
++ { id: sys_ioctl [ 16], common_pid: gmain [ 3003] } hitcount: 1
++ { id: sys_ioctl [ 16], common_pid: gmain [ 2781] } hitcount: 1
++ { id: sys_ioctl [ 16], common_pid: gmain [ 2829] } hitcount: 1
++ { id: sys_ioctl [ 16], common_pid: bash [ 8726] } hitcount: 1
++ { id: sys_ioctl [ 16], common_pid: bash [ 8508] } hitcount: 1
++ { id: sys_ioctl [ 16], common_pid: gmain [ 2970] } hitcount: 1
++ { id: sys_ioctl [ 16], common_pid: gmain [ 2768] } hitcount: 1
++ .
++ .
++ .
++ { id: sys_ioctl [ 16], common_pid: pool [ 8559] } hitcount: 45
++ { id: sys_ioctl [ 16], common_pid: pool [ 8555] } hitcount: 48
++ { id: sys_ioctl [ 16], common_pid: pool [ 8551] } hitcount: 48
++ { id: sys_ioctl [ 16], common_pid: avahi-daemon [ 896] } hitcount: 66
++ { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 26674
++ { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 73443
++
++ Totals:
++ Hits: 101162
++ Entries: 103
++ Dropped: 0
++
++ The above output shows that 'compiz' and 'Xorg' are far and away
++ the heaviest ioctl callers (which might lead to questions about
++ whether they really need to be making all those calls and to
++ possible avenues for further investigation).
++
++ The compound key examples used a key and a sum value (hitcount) to
++ sort the output, but we can just as easily use two keys instead.
++ Here's an example where we use a compound key composed of the
++ common_pid and size event fields. Sorting with pid as the primary
++ key and 'size' as the secondary key allows us to display an
++ ordered summary of the recvfrom sizes, with counts, received by
++ each process:
++
++ # echo 'hist:key=common_pid.execname,size:val=hitcount:sort=common_pid,size' > \
++ /sys/kernel/debug/tracing/events/syscalls/sys_enter_recvfrom/trigger
++
++ # cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_recvfrom/hist
++ # trigger info: hist:keys=common_pid.execname,size:vals=hitcount:sort=common_pid.execname,size:size=2048 [active]
++
++ { common_pid: smbd [ 784], size: 4 } hitcount: 1
++ { common_pid: dnsmasq [ 1412], size: 4096 } hitcount: 672
++ { common_pid: postgres [ 1796], size: 1000 } hitcount: 6
++ { common_pid: postgres [ 1867], size: 1000 } hitcount: 10
++ { common_pid: bamfdaemon [ 2787], size: 28 } hitcount: 2
++ { common_pid: bamfdaemon [ 2787], size: 14360 } hitcount: 1
++ { common_pid: compiz [ 2994], size: 8 } hitcount: 1
++ { common_pid: compiz [ 2994], size: 20 } hitcount: 11
++ { common_pid: gnome-terminal [ 3199], size: 4 } hitcount: 2
++ { common_pid: firefox [ 8817], size: 4 } hitcount: 1
++ { common_pid: firefox [ 8817], size: 8 } hitcount: 5
++ { common_pid: firefox [ 8817], size: 588 } hitcount: 2
++ { common_pid: firefox [ 8817], size: 628 } hitcount: 1
++ { common_pid: firefox [ 8817], size: 6944 } hitcount: 1
++ { common_pid: firefox [ 8817], size: 408880 } hitcount: 2
++ { common_pid: firefox [ 8822], size: 8 } hitcount: 2
++ { common_pid: firefox [ 8822], size: 160 } hitcount: 2
++ { common_pid: firefox [ 8822], size: 320 } hitcount: 2
++ { common_pid: firefox [ 8822], size: 352 } hitcount: 1
++ .
++ .
++ .
++ { common_pid: pool [ 8923], size: 1960 } hitcount: 10
++ { common_pid: pool [ 8923], size: 2048 } hitcount: 10
++ { common_pid: pool [ 8924], size: 1960 } hitcount: 10
++ { common_pid: pool [ 8924], size: 2048 } hitcount: 10
++ { common_pid: pool [ 8928], size: 1964 } hitcount: 4
++ { common_pid: pool [ 8928], size: 1965 } hitcount: 2
++ { common_pid: pool [ 8928], size: 2048 } hitcount: 6
++ { common_pid: pool [ 8929], size: 1982 } hitcount: 1
++ { common_pid: pool [ 8929], size: 2048 } hitcount: 1
++
++ Totals:
++ Hits: 2016
++ Entries: 224
++ Dropped: 0
++
++ The above example also illustrates the fact that although a compound
++ key is treated as a single entity for hashing purposes, the sub-keys
++ it's composed of can be accessed independently.
++
++ The next example uses a string field as the hash key and
++ demonstrates how you can manually pause and continue a hist trigger.
++ In this example, we'll aggregate fork counts and don't expect a
++ large number of entries in the hash table, so we'll reduce the table
++ size to a much smaller number, say 256:
++
++ # echo 'hist:key=child_comm:val=hitcount:size=256' > \
++ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
++
++ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
++ # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]
++
++ { child_comm: dconf worker } hitcount: 1
++ { child_comm: ibus-daemon } hitcount: 1
++ { child_comm: whoopsie } hitcount: 1
++ { child_comm: smbd } hitcount: 1
++ { child_comm: gdbus } hitcount: 1
++ { child_comm: kthreadd } hitcount: 1
++ { child_comm: dconf worker } hitcount: 1
++ { child_comm: evolution-alarm } hitcount: 2
++ { child_comm: Socket Thread } hitcount: 2
++ { child_comm: postgres } hitcount: 2
++ { child_comm: bash } hitcount: 3
++ { child_comm: compiz } hitcount: 3
++ { child_comm: evolution-sourc } hitcount: 4
++ { child_comm: dhclient } hitcount: 4
++ { child_comm: pool } hitcount: 5
++ { child_comm: nm-dispatcher.a } hitcount: 8
++ { child_comm: firefox } hitcount: 8
++ { child_comm: dbus-daemon } hitcount: 8
++ { child_comm: glib-pacrunner } hitcount: 10
++ { child_comm: evolution } hitcount: 23
++
++ Totals:
++ Hits: 89
++ Entries: 20
++ Dropped: 0
++
++ If we want to pause the hist trigger, we can simply append :pause to
++ the command that started the trigger. Notice that the trigger info
++ displays as [paused]:
++
++ # echo 'hist:key=child_comm:val=hitcount:size=256:pause' >> \
++ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
++
++ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
++ # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [paused]
++
++ { child_comm: dconf worker } hitcount: 1
++ { child_comm: kthreadd } hitcount: 1
++ { child_comm: dconf worker } hitcount: 1
++ { child_comm: gdbus } hitcount: 1
++ { child_comm: ibus-daemon } hitcount: 1
++ { child_comm: Socket Thread } hitcount: 2
++ { child_comm: evolution-alarm } hitcount: 2
++ { child_comm: smbd } hitcount: 2
++ { child_comm: bash } hitcount: 3
++ { child_comm: whoopsie } hitcount: 3
++ { child_comm: compiz } hitcount: 3
++ { child_comm: evolution-sourc } hitcount: 4
++ { child_comm: pool } hitcount: 5
++ { child_comm: postgres } hitcount: 6
++ { child_comm: firefox } hitcount: 8
++ { child_comm: dhclient } hitcount: 10
++ { child_comm: emacs } hitcount: 12
++ { child_comm: dbus-daemon } hitcount: 20
++ { child_comm: nm-dispatcher.a } hitcount: 20
++ { child_comm: evolution } hitcount: 35
++ { child_comm: glib-pacrunner } hitcount: 59
++
++ Totals:
++ Hits: 199
++ Entries: 21
++ Dropped: 0
++
++ To manually continue having the trigger aggregate events, append
++ :cont instead. Notice that the trigger info displays as [active]
++ again, and the data has changed:
++
++ # echo 'hist:key=child_comm:val=hitcount:size=256:cont' >> \
++ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
++
++ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
++ # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]
++
++ { child_comm: dconf worker } hitcount: 1
++ { child_comm: dconf worker } hitcount: 1
++ { child_comm: kthreadd } hitcount: 1
++ { child_comm: gdbus } hitcount: 1
++ { child_comm: ibus-daemon } hitcount: 1
++ { child_comm: Socket Thread } hitcount: 2
++ { child_comm: evolution-alarm } hitcount: 2
++ { child_comm: smbd } hitcount: 2
++ { child_comm: whoopsie } hitcount: 3
++ { child_comm: compiz } hitcount: 3
++ { child_comm: evolution-sourc } hitcount: 4
++ { child_comm: bash } hitcount: 5
++ { child_comm: pool } hitcount: 5
++ { child_comm: postgres } hitcount: 6
++ { child_comm: firefox } hitcount: 8
++ { child_comm: dhclient } hitcount: 11
++ { child_comm: emacs } hitcount: 12
++ { child_comm: dbus-daemon } hitcount: 22
++ { child_comm: nm-dispatcher.a } hitcount: 22
++ { child_comm: evolution } hitcount: 35
++ { child_comm: glib-pacrunner } hitcount: 59
++
++ Totals:
++ Hits: 206
++ Entries: 21
++ Dropped: 0
++
++ The previous example showed how to start and stop a hist trigger by
++ appending 'pause' and 'continue' to the hist trigger command. A
++ hist trigger can also be started in a paused state by initially
++ starting the trigger with ':pause' appended. This allows you to
++ start the trigger only when you're ready to start collecting data
++ and not before. For example, you could start the trigger in a
++ paused state, then unpause it and do something you want to measure,
++ then pause the trigger again when done.
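++
++ Sticking with the fork example above, such a manual sequence might
++ look like this (an illustrative sketch, with the workload to be
++ measured run in between):
++
++ # echo 'hist:key=child_comm:val=hitcount:size=256:pause' > \
++ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
++
++ # echo 'hist:key=child_comm:val=hitcount:size=256:cont' >> \
++ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
++
++ (run the workload to be measured)
++
++ # echo 'hist:key=child_comm:val=hitcount:size=256:pause' >> \
++ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger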
++
++ Of course, doing this manually can be difficult and error-prone, but
++ it is possible to automatically start and stop a hist trigger based
++ on some condition, via the enable_hist and disable_hist triggers.
++
++ For example, suppose we wanted to take a look at the relative
++ weights in terms of skb length for each callpath that leads to a
++ netif_receive_skb event when downloading a decent-sized file using
++ wget.
++
++ First we set up an initially paused stacktrace trigger on the
++ netif_receive_skb event:
++
++ # echo 'hist:key=stacktrace:vals=len:pause' > \
++ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
++
++ Next, we set up an 'enable_hist' trigger on the sched_process_exec
++ event, with an 'if filename==/usr/bin/wget' filter. The effect of
++ this new trigger is that it will 'unpause' the hist trigger we just
++ set up on netif_receive_skb if and only if it sees a
++ sched_process_exec event with a filename of '/usr/bin/wget'. When
++ that happens, all netif_receive_skb events are aggregated into a
++ hash table keyed on stacktrace:
++
++ # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
++ /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
++
++ The aggregation continues until the netif_receive_skb hist trigger
++ is paused again, which is what the following disable_hist event does by
++ creating a similar setup on the sched_process_exit event, using the
++ filter 'comm==wget':
++
++ # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
++ /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
++
++ Whenever a process exits and the comm field of the disable_hist
++ trigger filter matches 'comm==wget', the netif_receive_skb hist
++ trigger is disabled.
++
++ The overall effect is that netif_receive_skb events are aggregated
++ into the hash table for only the duration of the wget. Executing a
++ wget command and then listing the 'hist' file will display the
++ output generated by the wget command:
++
++ $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
++
++ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
++ # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
++
++ { stacktrace:
++ __netif_receive_skb_core+0x46d/0x990
++ __netif_receive_skb+0x18/0x60
++ netif_receive_skb_internal+0x23/0x90
++ napi_gro_receive+0xc8/0x100
++ ieee80211_deliver_skb+0xd6/0x270 [mac80211]
++ ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
++ ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
++ ieee80211_rx+0x31d/0x900 [mac80211]
++ iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
++ iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
++ iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
++ irq_thread_fn+0x20/0x50
++ irq_thread+0x11f/0x150
++ kthread+0xd2/0xf0
++ ret_from_fork+0x42/0x70
++ } hitcount: 85 len: 28884
++ { stacktrace:
++ __netif_receive_skb_core+0x46d/0x990
++ __netif_receive_skb+0x18/0x60
++ netif_receive_skb_internal+0x23/0x90
++ napi_gro_complete+0xa4/0xe0
++ dev_gro_receive+0x23a/0x360
++ napi_gro_receive+0x30/0x100
++ ieee80211_deliver_skb+0xd6/0x270 [mac80211]
++ ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
++ ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
++ ieee80211_rx+0x31d/0x900 [mac80211]
++ iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
++ iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
++ iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
++ irq_thread_fn+0x20/0x50
++ irq_thread+0x11f/0x150
++ kthread+0xd2/0xf0
++ } hitcount: 98 len: 664329
++ { stacktrace:
++ __netif_receive_skb_core+0x46d/0x990
++ __netif_receive_skb+0x18/0x60
++ process_backlog+0xa8/0x150
++ net_rx_action+0x15d/0x340
++ __do_softirq+0x114/0x2c0
++ do_softirq_own_stack+0x1c/0x30
++ do_softirq+0x65/0x70
++ __local_bh_enable_ip+0xb5/0xc0
++ ip_finish_output+0x1f4/0x840
++ ip_output+0x6b/0xc0
++ ip_local_out_sk+0x31/0x40
++ ip_send_skb+0x1a/0x50
++ udp_send_skb+0x173/0x2a0
++ udp_sendmsg+0x2bf/0x9f0
++ inet_sendmsg+0x64/0xa0
++ sock_sendmsg+0x3d/0x50
++ } hitcount: 115 len: 13030
++ { stacktrace:
++ __netif_receive_skb_core+0x46d/0x990
++ __netif_receive_skb+0x18/0x60
++ netif_receive_skb_internal+0x23/0x90
++ napi_gro_complete+0xa4/0xe0
++ napi_gro_flush+0x6d/0x90
++ iwl_pcie_irq_handler+0x92a/0x12f0 [iwlwifi]
++ irq_thread_fn+0x20/0x50
++ irq_thread+0x11f/0x150
++ kthread+0xd2/0xf0
++ ret_from_fork+0x42/0x70
++ } hitcount: 934 len: 5512212
++
++ Totals:
++ Hits: 1232
++ Entries: 4
++ Dropped: 0
++
++ The above shows all the netif_receive_skb callpaths and their total
++ lengths for the duration of the wget command.
++
++ The 'clear' hist trigger param can be used to clear the hash table.
++ Suppose we wanted to try another run of the previous example but
++ this time also wanted to see the complete list of events that went
++ into the histogram. In order to avoid having to set everything up
++ again, we can just clear the histogram first:
++
++ # echo 'hist:key=stacktrace:vals=len:clear' >> \
++ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
++
++ Just to verify that it is in fact cleared, here's what we now see in
++ the hist file:
++
++ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
++ # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
++
++ Totals:
++ Hits: 0
++ Entries: 0
++ Dropped: 0
++
++ Since we want to see the detailed list of every netif_receive_skb
++ event occurring during the new run, which are in fact the same
++ events being aggregated into the hash table, we add 'enable_event'
++ and 'disable_event' triggers to the sched_process_exec and
++ sched_process_exit events used above:
++
++ # echo 'enable_event:net:netif_receive_skb if filename==/usr/bin/wget' > \
++ /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
++
++ # echo 'disable_event:net:netif_receive_skb if comm==wget' > \
++ /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
++
++ If you read the trigger files for the sched_process_exec and
++ sched_process_exit triggers, you should see two triggers for each:
++ one enabling/disabling the hist aggregation and the other
++ enabling/disabling the logging of events:
++
++ # cat /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
++ enable_event:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
++ enable_hist:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
++
++ # cat /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
++ enable_event:net:netif_receive_skb:unlimited if comm==wget
++ disable_hist:net:netif_receive_skb:unlimited if comm==wget
++
++ In other words, whenever either of the sched_process_exec or
++ sched_process_exit events is hit and matches 'wget', it enables or
++ disables both the histogram and the event log, and what you end up
++ with is a hash table and set of events just covering the specified
++ duration. Run the wget command again:
++
++ $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
++
++ Displaying the 'hist' file should show something similar to what you
++ saw in the last run, but this time you should also see the
++ individual events in the trace file:
++
++ # cat /sys/kernel/debug/tracing/trace
++
++ # tracer: nop
++ #
++ # entries-in-buffer/entries-written: 183/1426 #P:4
++ #
++ # _-----=> irqs-off
++ # / _----=> need-resched
++ # | / _---=> hardirq/softirq
++ # || / _--=> preempt-depth
++ # ||| / delay
++ # TASK-PID CPU# |||| TIMESTAMP FUNCTION
++ # | | | |||| | |
++ wget-15108 [000] ..s1 31769.606929: netif_receive_skb: dev=lo skbaddr=ffff88009c353100 len=60
++ wget-15108 [000] ..s1 31769.606999: netif_receive_skb: dev=lo skbaddr=ffff88009c353200 len=60
++ dnsmasq-1382 [000] ..s1 31769.677652: netif_receive_skb: dev=lo skbaddr=ffff88009c352b00 len=130
++ dnsmasq-1382 [000] ..s1 31769.685917: netif_receive_skb: dev=lo skbaddr=ffff88009c352200 len=138
++ ##### CPU 2 buffer started ####
++ irq/29-iwlwifi-559 [002] ..s. 31772.031529: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433d00 len=2948
++ irq/29-iwlwifi-559 [002] ..s. 31772.031572: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432200 len=1500
++ irq/29-iwlwifi-559 [002] ..s. 31772.032196: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433100 len=2948
++ irq/29-iwlwifi-559 [002] ..s. 31772.032761: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433000 len=2948
++ irq/29-iwlwifi-559 [002] ..s. 31772.033220: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432e00 len=1500
++ .
++ .
++ .
++
++ The following example demonstrates how multiple hist triggers can be
++ attached to a given event. This capability can be useful for
++ creating a set of different summaries derived from the same set of
++ events, or for comparing the effects of different filters, among
++ other things.
++
++ # echo 'hist:keys=skbaddr.hex:vals=len if len < 0' >> \
++ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
++ # echo 'hist:keys=skbaddr.hex:vals=len if len > 4096' >> \
++ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
++ # echo 'hist:keys=skbaddr.hex:vals=len if len == 256' >> \
++ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
++ # echo 'hist:keys=skbaddr.hex:vals=len' >> \
++ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
++ # echo 'hist:keys=len:vals=common_preempt_count' >> \
++ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
++
++ The above set of commands creates four triggers differing only in
++ their filters, along with a completely different though fairly
++ nonsensical trigger. Note that in order to append multiple hist
++ triggers to the same file, you should use the '>>' operator to
++ append them ('>' will also add the new hist trigger, but will remove
++ any existing hist triggers beforehand).
++
++ Displaying the contents of the 'hist' file for the event shows the
++ contents of all five histograms:
++
++ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
++
++ # event histogram
++ #
++ # trigger info: hist:keys=len:vals=hitcount,common_preempt_count:sort=hitcount:size=2048 [active]
++ #
++
++ { len: 176 } hitcount: 1 common_preempt_count: 0
++ { len: 223 } hitcount: 1 common_preempt_count: 0
++ { len: 4854 } hitcount: 1 common_preempt_count: 0
++ { len: 395 } hitcount: 1 common_preempt_count: 0
++ { len: 177 } hitcount: 1 common_preempt_count: 0
++ { len: 446 } hitcount: 1 common_preempt_count: 0
++ { len: 1601 } hitcount: 1 common_preempt_count: 0
++ .
++ .
++ .
++ { len: 1280 } hitcount: 66 common_preempt_count: 0
++ { len: 116 } hitcount: 81 common_preempt_count: 40
++ { len: 708 } hitcount: 112 common_preempt_count: 0
++ { len: 46 } hitcount: 221 common_preempt_count: 0
++ { len: 1264 } hitcount: 458 common_preempt_count: 0
++
++ Totals:
++ Hits: 1428
++ Entries: 147
++ Dropped: 0
++
++
++ # event histogram
++ #
++ # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
++ #
++
++ { skbaddr: ffff8800baee5e00 } hitcount: 1 len: 130
++ { skbaddr: ffff88005f3d5600 } hitcount: 1 len: 1280
++ { skbaddr: ffff88005f3d4900 } hitcount: 1 len: 1280
++ { skbaddr: ffff88009fed6300 } hitcount: 1 len: 115
++ { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 115
++ { skbaddr: ffff88008cdb1900 } hitcount: 1 len: 46
++ { skbaddr: ffff880064b5ef00 } hitcount: 1 len: 118
++ { skbaddr: ffff880044e3c700 } hitcount: 1 len: 60
++ { skbaddr: ffff880100065900 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d46bd500 } hitcount: 1 len: 116
++ { skbaddr: ffff88005f3d5f00 } hitcount: 1 len: 1280
++ { skbaddr: ffff880100064700 } hitcount: 1 len: 365
++ { skbaddr: ffff8800badb6f00 } hitcount: 1 len: 60
++ .
++ .
++ .
++ { skbaddr: ffff88009fe0be00 } hitcount: 27 len: 24677
++ { skbaddr: ffff88009fe0a400 } hitcount: 27 len: 23052
++ { skbaddr: ffff88009fe0b700 } hitcount: 31 len: 25589
++ { skbaddr: ffff88009fe0b600 } hitcount: 32 len: 27326
++ { skbaddr: ffff88006a462800 } hitcount: 68 len: 71678
++ { skbaddr: ffff88006a463700 } hitcount: 70 len: 72678
++ { skbaddr: ffff88006a462b00 } hitcount: 71 len: 77589
++ { skbaddr: ffff88006a463600 } hitcount: 73 len: 71307
++ { skbaddr: ffff88006a462200 } hitcount: 81 len: 81032
++
++ Totals:
++ Hits: 1451
++ Entries: 318
++ Dropped: 0
++
++
++ # event histogram
++ #
++ # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len == 256 [active]
++ #
++
++
++ Totals:
++ Hits: 0
++ Entries: 0
++ Dropped: 0
++
++
++ # event histogram
++ #
++ # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len > 4096 [active]
++ #
++
++ { skbaddr: ffff88009fd2c300 } hitcount: 1 len: 7212
++ { skbaddr: ffff8800d2bcce00 } hitcount: 1 len: 7212
++ { skbaddr: ffff8800d2bcd700 } hitcount: 1 len: 7212
++ { skbaddr: ffff8800d2bcda00 } hitcount: 1 len: 21492
++ { skbaddr: ffff8800ae2e2d00 } hitcount: 1 len: 7212
++ { skbaddr: ffff8800d2bcdb00 } hitcount: 1 len: 7212
++ { skbaddr: ffff88006a4df500 } hitcount: 1 len: 4854
++ { skbaddr: ffff88008ce47b00 } hitcount: 1 len: 18636
++ { skbaddr: ffff8800ae2e2200 } hitcount: 1 len: 12924
++ { skbaddr: ffff88005f3e1000 } hitcount: 1 len: 4356
++ { skbaddr: ffff8800d2bcdc00 } hitcount: 2 len: 24420
++ { skbaddr: ffff8800d2bcc200 } hitcount: 2 len: 12996
++
++ Totals:
++ Hits: 14
++ Entries: 12
++ Dropped: 0
++
++
++ # event histogram
++ #
++ # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len < 0 [active]
++ #
++
++
++ Totals:
++ Hits: 0
++ Entries: 0
++ Dropped: 0
++
++ Named triggers can be used to have triggers share a common set of
++ histogram data. This capability is mostly useful for combining the
++ output of events generated by tracepoints contained inside inline
++ functions, but names can be used in a hist trigger on any event.
++ For example, these two triggers when hit will update the same 'len'
++ field in the shared 'foo' histogram data:
++
++ # echo 'hist:name=foo:keys=skbaddr.hex:vals=len' > \
++ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
++ # echo 'hist:name=foo:keys=skbaddr.hex:vals=len' > \
++ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
++
++ You can see that they're updating common histogram data by reading
++ each event's hist files at the same time:
++
++ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist;
++ cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
++
++ # event histogram
++ #
++ # trigger info: hist:name=foo:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
++ #
++
++ { skbaddr: ffff88000ad53500 } hitcount: 1 len: 46
++ { skbaddr: ffff8800af5a1500 } hitcount: 1 len: 76
++ { skbaddr: ffff8800d62a1900 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d2bccb00 } hitcount: 1 len: 468
++ { skbaddr: ffff8800d3c69900 } hitcount: 1 len: 46
++ { skbaddr: ffff88009ff09100 } hitcount: 1 len: 52
++ { skbaddr: ffff88010f13ab00 } hitcount: 1 len: 168
++ { skbaddr: ffff88006a54f400 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d2bcc500 } hitcount: 1 len: 260
++ { skbaddr: ffff880064505000 } hitcount: 1 len: 46
++ { skbaddr: ffff8800baf24e00 } hitcount: 1 len: 32
++ { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d3edff00 } hitcount: 1 len: 44
++ { skbaddr: ffff88009fe0b400 } hitcount: 1 len: 168
++ { skbaddr: ffff8800a1c55a00 } hitcount: 1 len: 40
++ { skbaddr: ffff8800d2bcd100 } hitcount: 1 len: 40
++ { skbaddr: ffff880064505f00 } hitcount: 1 len: 174
++ { skbaddr: ffff8800a8bff200 } hitcount: 1 len: 160
++ { skbaddr: ffff880044e3cc00 } hitcount: 1 len: 76
++ { skbaddr: ffff8800a8bfe700 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d2bcdc00 } hitcount: 1 len: 32
++ { skbaddr: ffff8800a1f64800 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d2bcde00 } hitcount: 1 len: 988
++ { skbaddr: ffff88006a5dea00 } hitcount: 1 len: 46
++ { skbaddr: ffff88002e37a200 } hitcount: 1 len: 44
++ { skbaddr: ffff8800a1f32c00 } hitcount: 2 len: 676
++ { skbaddr: ffff88000ad52600 } hitcount: 2 len: 107
++ { skbaddr: ffff8800a1f91e00 } hitcount: 2 len: 92
++ { skbaddr: ffff8800af5a0200 } hitcount: 2 len: 142
++ { skbaddr: ffff8800d2bcc600 } hitcount: 2 len: 220
++ { skbaddr: ffff8800ba36f500 } hitcount: 2 len: 92
++ { skbaddr: ffff8800d021f800 } hitcount: 2 len: 92
++ { skbaddr: ffff8800a1f33600 } hitcount: 2 len: 675
++ { skbaddr: ffff8800a8bfff00 } hitcount: 3 len: 138
++ { skbaddr: ffff8800d62a1300 } hitcount: 3 len: 138
++ { skbaddr: ffff88002e37a100 } hitcount: 4 len: 184
++ { skbaddr: ffff880064504400 } hitcount: 4 len: 184
++ { skbaddr: ffff8800a8bfec00 } hitcount: 4 len: 184
++ { skbaddr: ffff88000ad53700 } hitcount: 5 len: 230
++ { skbaddr: ffff8800d2bcdb00 } hitcount: 5 len: 196
++ { skbaddr: ffff8800a1f90000 } hitcount: 6 len: 276
++ { skbaddr: ffff88006a54f900 } hitcount: 6 len: 276
++
++ Totals:
++ Hits: 81
++ Entries: 42
++ Dropped: 0
++ # event histogram
++ #
++ # trigger info: hist:name=foo:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
++ #
++
++ { skbaddr: ffff88000ad53500 } hitcount: 1 len: 46
++ { skbaddr: ffff8800af5a1500 } hitcount: 1 len: 76
++ { skbaddr: ffff8800d62a1900 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d2bccb00 } hitcount: 1 len: 468
++ { skbaddr: ffff8800d3c69900 } hitcount: 1 len: 46
++ { skbaddr: ffff88009ff09100 } hitcount: 1 len: 52
++ { skbaddr: ffff88010f13ab00 } hitcount: 1 len: 168
++ { skbaddr: ffff88006a54f400 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d2bcc500 } hitcount: 1 len: 260
++ { skbaddr: ffff880064505000 } hitcount: 1 len: 46
++ { skbaddr: ffff8800baf24e00 } hitcount: 1 len: 32
++ { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d3edff00 } hitcount: 1 len: 44
++ { skbaddr: ffff88009fe0b400 } hitcount: 1 len: 168
++ { skbaddr: ffff8800a1c55a00 } hitcount: 1 len: 40
++ { skbaddr: ffff8800d2bcd100 } hitcount: 1 len: 40
++ { skbaddr: ffff880064505f00 } hitcount: 1 len: 174
++ { skbaddr: ffff8800a8bff200 } hitcount: 1 len: 160
++ { skbaddr: ffff880044e3cc00 } hitcount: 1 len: 76
++ { skbaddr: ffff8800a8bfe700 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d2bcdc00 } hitcount: 1 len: 32
++ { skbaddr: ffff8800a1f64800 } hitcount: 1 len: 46
++ { skbaddr: ffff8800d2bcde00 } hitcount: 1 len: 988
++ { skbaddr: ffff88006a5dea00 } hitcount: 1 len: 46
++ { skbaddr: ffff88002e37a200 } hitcount: 1 len: 44
++ { skbaddr: ffff8800a1f32c00 } hitcount: 2 len: 676
++ { skbaddr: ffff88000ad52600 } hitcount: 2 len: 107
++ { skbaddr: ffff8800a1f91e00 } hitcount: 2 len: 92
++ { skbaddr: ffff8800af5a0200 } hitcount: 2 len: 142
++ { skbaddr: ffff8800d2bcc600 } hitcount: 2 len: 220
++ { skbaddr: ffff8800ba36f500 } hitcount: 2 len: 92
++ { skbaddr: ffff8800d021f800 } hitcount: 2 len: 92
++ { skbaddr: ffff8800a1f33600 } hitcount: 2 len: 675
++ { skbaddr: ffff8800a8bfff00 } hitcount: 3 len: 138
++ { skbaddr: ffff8800d62a1300 } hitcount: 3 len: 138
++ { skbaddr: ffff88002e37a100 } hitcount: 4 len: 184
++ { skbaddr: ffff880064504400 } hitcount: 4 len: 184
++ { skbaddr: ffff8800a8bfec00 } hitcount: 4 len: 184
++ { skbaddr: ffff88000ad53700 } hitcount: 5 len: 230
++ { skbaddr: ffff8800d2bcdb00 } hitcount: 5 len: 196
++ { skbaddr: ffff8800a1f90000 } hitcount: 6 len: 276
++ { skbaddr: ffff88006a54f900 } hitcount: 6 len: 276
++
++ Totals:
++ Hits: 81
++ Entries: 42
++ Dropped: 0
++
++ And here's an example that shows how to combine histogram data from
++ any two events even if they don't share any 'compatible' fields
++ other than 'hitcount' and 'stacktrace'. These commands create a
++ couple of triggers named 'bar' using those fields:
++
++ # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
++ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
++ # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
++ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
++
++ And displaying the output of either shows some interesting if
++ somewhat confusing output:
++
++ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
++ # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
++
++ # event histogram
++ #
++ # trigger info: hist:name=bar:keys=stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
++ #
++
++ { stacktrace:
++ _do_fork+0x18e/0x330
++ kernel_thread+0x29/0x30
++ kthreadd+0x154/0x1b0
++ ret_from_fork+0x3f/0x70
++ } hitcount: 1
++ { stacktrace:
++ netif_rx_internal+0xb2/0xd0
++ netif_rx_ni+0x20/0x70
++ dev_loopback_xmit+0xaa/0xd0
++ ip_mc_output+0x126/0x240
++ ip_local_out_sk+0x31/0x40
++ igmp_send_report+0x1e9/0x230
++ igmp_timer_expire+0xe9/0x120
++ call_timer_fn+0x39/0xf0
++ run_timer_softirq+0x1e1/0x290
++ __do_softirq+0xfd/0x290
++ irq_exit+0x98/0xb0
++ smp_apic_timer_interrupt+0x4a/0x60
++ apic_timer_interrupt+0x6d/0x80
++ cpuidle_enter+0x17/0x20
++ call_cpuidle+0x3b/0x60
++ cpu_startup_entry+0x22d/0x310
++ } hitcount: 1
++ { stacktrace:
++ netif_rx_internal+0xb2/0xd0
++ netif_rx_ni+0x20/0x70
++ dev_loopback_xmit+0xaa/0xd0
++ ip_mc_output+0x17f/0x240
++ ip_local_out_sk+0x31/0x40
++ ip_send_skb+0x1a/0x50
++ udp_send_skb+0x13e/0x270
++ udp_sendmsg+0x2bf/0x980
++ inet_sendmsg+0x67/0xa0
++ sock_sendmsg+0x38/0x50
++ SYSC_sendto+0xef/0x170
++ SyS_sendto+0xe/0x10
++ entry_SYSCALL_64_fastpath+0x12/0x6a
++ } hitcount: 2
++ { stacktrace:
++ netif_rx_internal+0xb2/0xd0
++ netif_rx+0x1c/0x60
++ loopback_xmit+0x6c/0xb0
++ dev_hard_start_xmit+0x219/0x3a0
++ __dev_queue_xmit+0x415/0x4f0
++ dev_queue_xmit_sk+0x13/0x20
++ ip_finish_output2+0x237/0x340
++ ip_finish_output+0x113/0x1d0
++ ip_output+0x66/0xc0
++ ip_local_out_sk+0x31/0x40
++ ip_send_skb+0x1a/0x50
++ udp_send_skb+0x16d/0x270
++ udp_sendmsg+0x2bf/0x980
++ inet_sendmsg+0x67/0xa0
++ sock_sendmsg+0x38/0x50
++ ___sys_sendmsg+0x14e/0x270
++ } hitcount: 76
++ { stacktrace:
++ netif_rx_internal+0xb2/0xd0
++ netif_rx+0x1c/0x60
++ loopback_xmit+0x6c/0xb0
++ dev_hard_start_xmit+0x219/0x3a0
++ __dev_queue_xmit+0x415/0x4f0
++ dev_queue_xmit_sk+0x13/0x20
++ ip_finish_output2+0x237/0x340
++ ip_finish_output+0x113/0x1d0
++ ip_output+0x66/0xc0
++ ip_local_out_sk+0x31/0x40
++ ip_send_skb+0x1a/0x50
++ udp_send_skb+0x16d/0x270
++ udp_sendmsg+0x2bf/0x980
++ inet_sendmsg+0x67/0xa0
++ sock_sendmsg+0x38/0x50
++ ___sys_sendmsg+0x269/0x270
++ } hitcount: 77
++ { stacktrace:
++ netif_rx_internal+0xb2/0xd0
++ netif_rx+0x1c/0x60
++ loopback_xmit+0x6c/0xb0
++ dev_hard_start_xmit+0x219/0x3a0
++ __dev_queue_xmit+0x415/0x4f0
++ dev_queue_xmit_sk+0x13/0x20
++ ip_finish_output2+0x237/0x340
++ ip_finish_output+0x113/0x1d0
++ ip_output+0x66/0xc0
++ ip_local_out_sk+0x31/0x40
++ ip_send_skb+0x1a/0x50
++ udp_send_skb+0x16d/0x270
++ udp_sendmsg+0x2bf/0x980
++ inet_sendmsg+0x67/0xa0
++ sock_sendmsg+0x38/0x50
++ SYSC_sendto+0xef/0x170
++ } hitcount: 88
++ { stacktrace:
++ _do_fork+0x18e/0x330
++ SyS_clone+0x19/0x20
++ entry_SYSCALL_64_fastpath+0x12/0x6a
++ } hitcount: 244
++
++ Totals:
++ Hits: 489
++ Entries: 7
++ Dropped: 0
++
++
++2.2 Inter-event hist triggers
++-----------------------------
++
++Inter-event hist triggers are hist triggers that combine values from
++one or more other events and create a histogram using that data. Data
++from an inter-event histogram can in turn become the source for
++further combined histograms, thus providing a chain of related
++histograms, which is important for some applications.
++
++The most important example of an inter-event quantity that can be used
++in this manner is latency, which is simply a difference in timestamps
++between two events. Although latency is the most important
++inter-event quantity, note that because the support is completely
++general across the trace event subsystem, any event field can be used
++in an inter-event quantity.
++
++An example of a histogram that combines data from other histograms
++into a useful chain would be a 'wakeupswitch latency' histogram that
++combines a 'wakeup latency' histogram and a 'switch latency'
++histogram.
++
++Normally, a hist trigger specification consists of a (possibly
++compound) key along with one or more numeric values, which are
++continually updated sums associated with that key. A histogram
++specification in this case consists of individual key and value
++specifications that refer to trace event fields associated with a
++single event type.
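++
++For instance, the netif_receive_skb trigger shown earlier is a
++typical single-event specification, with one key and one summed value
++taken from the same event:
++
++  # echo 'hist:keys=skbaddr.hex:vals=len' >> \
++        /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger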
++
++The inter-event hist trigger extension allows fields from multiple
++events to be referenced and combined into a multi-event histogram
++specification. In support of this overall goal, a few enabling
++features have been added to the hist trigger facility:
++
++ - In order to compute an inter-event quantity, a value from one
++    event needs to be saved and then referenced from another event.  This
++ requires the introduction of support for histogram 'variables'.
++
++ - The computation of inter-event quantities and their combination
++ require some minimal amount of support for applying simple
++ expressions to variables (+ and -).
++
++ - A histogram consisting of inter-event quantities isn't logically a
++ histogram on either event (so having the 'hist' file for either
++ event host the histogram output doesn't really make sense). To
++ address the idea that the histogram is associated with a
++ combination of events, support is added allowing the creation of
++ 'synthetic' events that are events derived from other events.
++ These synthetic events are full-fledged events just like any other
++ and can be used as such, as for instance to create the
++ 'combination' histograms mentioned previously.
++
++ - A set of 'actions' can be associated with histogram entries -
++ these can be used to generate the previously mentioned synthetic
++ events, but can also be used for other purposes, such as for
++ example saving context when a 'max' latency has been hit.
++
++ - Trace events don't have a 'timestamp' associated with them, but
++ there is an implicit timestamp saved along with an event in the
++ underlying ftrace ring buffer. This timestamp is now exposed as a
++    synthetic field named 'common_timestamp' which can be used in
++ histograms as if it were any other event field; it isn't an actual
++ field in the trace format but rather is a synthesized value that
++ nonetheless can be used as if it were an actual field. By default
++ it is in units of nanoseconds; appending '.usecs' to a
++ common_timestamp field changes the units to microseconds.
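++
++    For example, saving an event's buffer timestamp in microseconds
++    into a 'ts0' variable might look like the following sketch (the
++    variable syntax is described in Section 2.2.1 below):
++
++      # echo 'hist:keys=pid:ts0=common_timestamp.usecs ... >> event/trigger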
++
++A note on inter-event timestamps: If common_timestamp is used in a
++histogram, the trace buffer is automatically switched over to using
++absolute timestamps and the "global" trace clock, in order to avoid
++bogus timestamp differences with other clocks that aren't coherent
++across CPUs. This can be overridden by specifying one of the other
++trace clocks instead, using the "clock=XXX" hist trigger attribute,
++where XXX is any of the clocks listed in the tracing/trace_clock
++pseudo-file.
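++
++For example, a sketch of a trigger that uses the monotonic 'mono'
++clock rather than the default (any clock name listed in the
++tracing/trace_clock pseudo-file could be substituted):
++
++  # echo 'hist:keys=pid:ts0=common_timestamp.usecs:clock=mono ... >> event/trigger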
++
++These features are described in more detail in the following sections.
++
++2.2.1 Histogram Variables
++-------------------------
++
++Variables are simply named locations used for saving and retrieving
++values between matching events. A 'matching' event is defined as an
++event that has a matching key - if a variable is saved for a histogram
++entry corresponding to that key, any subsequent event with a matching
++key can access that variable.
++
++A variable's value is normally available to any subsequent event until
++it is set to something else by a subsequent event. The one exception
++to that rule is that any variable used in an expression is essentially
++'read-once' - once it's used by an expression in a subsequent event,
++it's reset to its 'unset' state, which means it can't be used again
++unless it's set again. This ensures not only that an event doesn't
++use an uninitialized variable in a calculation, but that that variable
++is used only once and not for any unrelated subsequent match.
++
++The basic syntax for saving a variable is to prefix any event field
++with a unique variable name (one not corresponding to any keyword)
++followed by an '=' sign.
++
++Either keys or values can be saved and retrieved in this way. This
++creates a variable named 'ts0' for a histogram entry with the key
++'next_pid':
++
++ # echo 'hist:keys=next_pid:vals=$ts0:ts0=common_timestamp ... >> \
++ event/trigger
++
++The ts0 variable can be accessed by any subsequent event having the
++same pid as 'next_pid'.
++
++Variable references are formed by prefixing the variable name with
++the '$' sign. Thus for example, the ts0 variable above would be
++referenced as '$ts0' in expressions.
++
++Because 'vals=' is used, the common_timestamp variable value above
++will also be summed as a normal histogram value would (though for a
++timestamp it makes little sense).
++
++The below shows that a key value can also be saved in the same way:
++
++ # echo 'hist:timer_pid=common_pid:key=timer_pid ...' >> event/trigger
++
++If a variable isn't a key variable or prefixed with 'vals=', the
++associated event field will be saved in a variable but won't be summed
++as a value:
++
++ # echo 'hist:keys=next_pid:ts1=common_timestamp ... >> event/trigger
++
++Multiple variables can be assigned at the same time. The below would
++result in both ts0 and b being created as variables, with both
++common_timestamp and field1 additionally being summed as values:
++
++ # echo 'hist:keys=pid:vals=$ts0,$b:ts0=common_timestamp,b=field1 ... >> \
++ event/trigger
++
++Note that variable assignments can appear either preceding or
++following their use. The command below behaves identically to the
++command above:
++
++ # echo 'hist:keys=pid:ts0=common_timestamp,b=field1:vals=$ts0,$b ... >> \
++ event/trigger
++
++Any number of variables not bound to a 'vals=' prefix can also be
++assigned by simply separating them with colons. Below is the same
++thing but without the values being summed in the histogram:
++
++ # echo 'hist:keys=pid:ts0=common_timestamp:b=field1 ... >> event/trigger
++
++Variables set as above can be referenced and used in expressions on
++another event.
++
++For example, here's how a latency can be calculated:
++
++ # echo 'hist:keys=pid,prio:ts0=common_timestamp ... >> event1/trigger
++ # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp-$ts0 ... >> event2/trigger
++
++In the first line above, the event's timestamp is saved into the
++variable ts0. In the next line, ts0 is subtracted from the second
++event's timestamp to produce the latency, which is then assigned into
++yet another variable, 'wakeup_lat'. The hist trigger below in turn
++makes use of the wakeup_lat variable to compute a combined latency
++using the same key and variable from yet another event:
++
++ # echo 'hist:key=pid:wakeupswitch_lat=$wakeup_lat+$switchtime_lat ... >> event3/trigger
++
++2.2.2 Synthetic Events
++----------------------
++
++Synthetic events are user-defined events generated from hist trigger
++variables or fields associated with one or more other events. Their
++purpose is to provide a mechanism for displaying data spanning
++multiple events consistent with the existing and already familiar
++usage for normal events.
++
++To define a synthetic event, the user writes a simple specification
++consisting of the name of the new event along with one or more
++variables and their types, which can be any valid field type,
++separated by semicolons, to the tracing/synthetic_events file.
++
++For instance, the following creates a new event named 'wakeup_latency'
++with 3 fields: lat, pid, and prio. Each of those fields is simply a
++variable reference to a variable on another event:
++
++ # echo 'wakeup_latency \
++ u64 lat; \
++ pid_t pid; \
++ int prio' >> \
++ /sys/kernel/debug/tracing/synthetic_events
++
++Reading the tracing/synthetic_events file lists all the currently
++defined synthetic events, in this case the event defined above:
++
++ # cat /sys/kernel/debug/tracing/synthetic_events
++ wakeup_latency u64 lat; pid_t pid; int prio
++
++An existing synthetic event definition can be removed by prepending
++the command that defined it with a '!':
++
++ # echo '!wakeup_latency u64 lat pid_t pid int prio' >> \
++ /sys/kernel/debug/tracing/synthetic_events
++
++At this point, there isn't yet an actual 'wakeup_latency' event
++instantiated in the event subsystem - for this to happen, a 'hist
++trigger action' needs to be instantiated and bound to actual fields
++and variables defined on other events (see Section 2.2.3 below).
++
++Once that is done, an event instance is created, and a histogram can
++be defined using it:
++
++ # echo 'hist:keys=pid,prio,lat.log2:sort=pid,lat' >> \
++ /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
++
++The new event is created under the tracing/events/synthetic/ directory
++and looks and behaves just like any other event:
++
++ # ls /sys/kernel/debug/tracing/events/synthetic/wakeup_latency
++ enable filter format hist id trigger
++
++Like any other event, once a histogram is enabled for the event, the
++output can be displayed by reading the event's 'hist' file.
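++
++For example, for the wakeup_latency event defined above:
++
++  # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/hist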
++
++2.2.3 Hist trigger 'actions'
++----------------------------
++
++A hist trigger 'action' is a function that's executed whenever a
++histogram entry is added or updated.
++
++The default 'action', if no special function is explicitly specified,
++is what it has always been: simply updating the set of values
++associated with an entry.  Some applications, however, may want to
++perform additional actions at that point, such as generating another
++event, or comparing and saving a maximum.
++
++The following additional actions are available. To specify an action
++for a given event, simply specify the action between colons in the
++hist trigger specification.
++
++ - onmatch(matching.event).<synthetic_event_name>(param list)
++
++ The 'onmatch(matching.event).<synthetic_event_name>(params)' hist
++ trigger action is invoked whenever an event matches and the
++ histogram entry would be added or updated. It causes the named
++ synthetic event to be generated with the values given in the
++ 'param list'. The result is the generation of a synthetic event
++ that consists of the values contained in those variables at the
++ time the invoking event was hit.
++
++ The 'param list' consists of one or more parameters which may be
++ either variables or fields defined on either the 'matching.event'
++ or the target event. The variables or fields specified in the
++ param list may be either fully-qualified or unqualified. If a
++ variable is specified as unqualified, it must be unique between
++ the two events. A field name used as a param can be unqualified
++ if it refers to the target event, but must be fully qualified if
++ it refers to the matching event. A fully-qualified name is of the
++ form 'system.event_name.$var_name' or 'system.event_name.field'.
++
++ The 'matching.event' specification is simply the fully qualified
++ event name of the event that matches the target event for the
++ onmatch() functionality, in the form 'system.event_name'.
++
++ Finally, the number and type of variables/fields in the 'param
++ list' must match the number and types of the fields in the
++ synthetic event being generated.
++
++ As an example the below defines a simple synthetic event and uses
++ a variable defined on the sched_wakeup_new event as a parameter
++ when invoking the synthetic event. Here we define the synthetic
++ event:
++
++ # echo 'wakeup_new_test pid_t pid' >> \
++ /sys/kernel/debug/tracing/synthetic_events
++
++ # cat /sys/kernel/debug/tracing/synthetic_events
++ wakeup_new_test pid_t pid
++
++ The following hist trigger both defines the missing testpid
++ variable and specifies an onmatch() action that generates a
++ wakeup_new_test synthetic event whenever a sched_wakeup_new event
++ occurs, which because of the 'if comm == "cyclictest"' filter only
++ happens when the executable is cyclictest:
++
++ # echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
++ wakeup_new_test($testpid) if comm=="cyclictest"' >> \
++ /sys/kernel/debug/tracing/events/sched/sched_wakeup_new/trigger
++
++ Creating and displaying a histogram based on those events is now
++ just a matter of using the fields and new synthetic event in the
++ tracing/events/synthetic directory, as usual:
++
++ # echo 'hist:keys=pid:sort=pid' >> \
++ /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/trigger
++
++    Running 'cyclictest' should cause sched_wakeup_new events to generate
++ wakeup_new_test synthetic events which should result in histogram
++ output in the wakeup_new_test event's hist file:
++
++ # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/hist
++
++ A more typical usage would be to use two events to calculate a
++ latency. The following example uses a set of hist triggers to
++ produce a 'wakeup_latency' histogram:
++
++ First, we define a 'wakeup_latency' synthetic event:
++
++ # echo 'wakeup_latency u64 lat; pid_t pid; int prio' >> \
++ /sys/kernel/debug/tracing/synthetic_events
++
++ Next, we specify that whenever we see a sched_waking event for a
++ cyclictest thread, save the timestamp in a 'ts0' variable:
++
++ # echo 'hist:keys=$saved_pid:saved_pid=pid:ts0=common_timestamp.usecs \
++ if comm=="cyclictest"' >> \
++ /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
++
++ Then, when the corresponding thread is actually scheduled onto the
++ CPU by a sched_switch event, calculate the latency and use that
++ along with another variable and an event field to generate a
++ wakeup_latency synthetic event:
++
++ # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:\
++ onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,\
++ $saved_pid,next_prio) if next_comm=="cyclictest"' >> \
++ /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
++
++ We also need to create a histogram on the wakeup_latency synthetic
++ event in order to aggregate the generated synthetic event data:
++
++ # echo 'hist:keys=pid,prio,lat:sort=pid,lat' >> \
++ /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
++
++ Finally, once we've run cyclictest to actually generate some
++ events, we can see the output by looking at the wakeup_latency
++ synthetic event's hist file:
++
++ # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/hist
++
++  - onmax(var).save(field,...)
++
++ The 'onmax(var).save(field,...)' hist trigger action is invoked
++ whenever the value of 'var' associated with a histogram entry
++ exceeds the current maximum contained in that variable.
++
++ The end result is that the trace event fields specified as the
++ onmax.save() params will be saved if 'var' exceeds the current
++ maximum for that hist trigger entry. This allows context from the
++ event that exhibited the new maximum to be saved for later
++ reference. When the histogram is displayed, additional fields
++ displaying the saved values will be printed.
++
++ As an example the below defines a couple of hist triggers, one for
++ sched_waking and another for sched_switch, keyed on pid. Whenever
++ a sched_waking occurs, the timestamp is saved in the entry
++ corresponding to the current pid, and when the scheduler switches
++ back to that pid, the timestamp difference is calculated. If the
++ resulting latency, stored in wakeup_lat, exceeds the current
++ maximum latency, the values specified in the save() fields are
++    recorded:
++
++ # echo 'hist:keys=pid:ts0=common_timestamp.usecs \
++ if comm=="cyclictest"' >> \
++ /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
++
++ # echo 'hist:keys=next_pid:\
++ wakeup_lat=common_timestamp.usecs-$ts0:\
++ onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) \
++ if next_comm=="cyclictest"' >> \
++ /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
++
++ When the histogram is displayed, the max value and the saved
++ values corresponding to the max are displayed following the rest
++ of the fields:
++
++ # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
++ { next_pid: 2255 } hitcount: 239
++ common_timestamp-ts0: 0
++ max: 27
++ next_comm: cyclictest
++ prev_pid: 0 prev_prio: 120 prev_comm: swapper/1
++
++ { next_pid: 2256 } hitcount: 2355
++ common_timestamp-ts0: 0
++ max: 49 next_comm: cyclictest
++ prev_pid: 0 prev_prio: 120 prev_comm: swapper/0
++
++ Totals:
++ Hits: 12970
++ Entries: 2
++ Dropped: 0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/acpi/acpica/acglobal.h linux-4.14/drivers/acpi/acpica/acglobal.h
+--- linux-4.14.orig/drivers/acpi/acpica/acglobal.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/acpi/acpica/acglobal.h 2018-09-05 11:05:07.000000000 +0200
+@@ -116,7 +116,7 @@
+ * interrupt level
+ */
+ ACPI_GLOBAL(acpi_spinlock, acpi_gbl_gpe_lock); /* For GPE data structs and registers */
+-ACPI_GLOBAL(acpi_spinlock, acpi_gbl_hardware_lock); /* For ACPI H/W except GPE registers */
++ACPI_GLOBAL(acpi_raw_spinlock, acpi_gbl_hardware_lock); /* For ACPI H/W except GPE registers */
+ ACPI_GLOBAL(acpi_spinlock, acpi_gbl_reference_count_lock);
- /* Delete the reader/writer lock */
-diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
-index 051b6158d1b7..7ad293bef6ed 100644
---- a/drivers/ata/libata-sff.c
-+++ b/drivers/ata/libata-sff.c
-@@ -678,9 +678,9 @@ unsigned int ata_sff_data_xfer_noirq(struct ata_device *dev, unsigned char *buf,
- unsigned long flags;
- unsigned int consumed;
+ /* Mutex for _OSI support */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/acpi/acpica/hwregs.c linux-4.14/drivers/acpi/acpica/hwregs.c
+--- linux-4.14.orig/drivers/acpi/acpica/hwregs.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/acpi/acpica/hwregs.c 2018-09-05 11:05:07.000000000 +0200
+@@ -428,14 +428,14 @@
+ ACPI_BITMASK_ALL_FIXED_STATUS,
+ ACPI_FORMAT_UINT64(acpi_gbl_xpm1a_status.address)));
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- consumed = ata_sff_data_xfer32(dev, buf, buflen, rw);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
+- lock_flags = acpi_os_acquire_lock(acpi_gbl_hardware_lock);
++ raw_spin_lock_irqsave(acpi_gbl_hardware_lock, lock_flags);
- return consumed;
- }
-@@ -719,7 +719,7 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
- unsigned long flags;
+ /* Clear the fixed events in PM1 A/B */
- /* FIXME: use a bounce buffer */
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- buf = kmap_atomic(page);
+ status = acpi_hw_register_write(ACPI_REGISTER_PM1_STATUS,
+ ACPI_BITMASK_ALL_FIXED_STATUS);
- /* do the actual data transfer */
-@@ -727,7 +727,7 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
- do_write);
+- acpi_os_release_lock(acpi_gbl_hardware_lock, lock_flags);
++ raw_spin_unlock_irqrestore(acpi_gbl_hardware_lock, lock_flags);
- kunmap_atomic(buf);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- } else {
- buf = page_address(page);
- ap->ops->sff_data_xfer(qc->dev, buf + offset, qc->sect_size,
-@@ -864,7 +864,7 @@ static int __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes)
- unsigned long flags;
+ if (ACPI_FAILURE(status)) {
+ goto exit;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/acpi/acpica/hwxface.c linux-4.14/drivers/acpi/acpica/hwxface.c
+--- linux-4.14.orig/drivers/acpi/acpica/hwxface.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/acpi/acpica/hwxface.c 2018-09-05 11:05:07.000000000 +0200
+@@ -373,7 +373,7 @@
+ return_ACPI_STATUS(AE_BAD_PARAMETER);
+ }
- /* FIXME: use bounce buffer */
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- buf = kmap_atomic(page);
+- lock_flags = acpi_os_acquire_lock(acpi_gbl_hardware_lock);
++ raw_spin_lock_irqsave(acpi_gbl_hardware_lock, lock_flags);
- /* do the actual data transfer */
-@@ -872,7 +872,7 @@ static int __atapi_pio_bytes(struct ata_queued_cmd *qc, unsigned int bytes)
- count, rw);
+ /*
+ * At this point, we know that the parent register is one of the
+@@ -434,7 +434,7 @@
- kunmap_atomic(buf);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- } else {
- buf = page_address(page);
- consumed = ap->ops->sff_data_xfer(dev, buf + offset,
-diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
-index 4b5cd3a7b2b6..fa8329ad79fd 100644
---- a/drivers/block/zram/zcomp.c
-+++ b/drivers/block/zram/zcomp.c
-@@ -118,12 +118,19 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
+ unlock_and_exit:
+
+- acpi_os_release_lock(acpi_gbl_hardware_lock, lock_flags);
++ raw_spin_unlock_irqrestore(acpi_gbl_hardware_lock, lock_flags);
+ return_ACPI_STATUS(status);
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/acpi/acpica/utmutex.c linux-4.14/drivers/acpi/acpica/utmutex.c
+--- linux-4.14.orig/drivers/acpi/acpica/utmutex.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/acpi/acpica/utmutex.c 2018-09-05 11:05:07.000000000 +0200
+@@ -88,7 +88,7 @@
+ return_ACPI_STATUS (status);
+ }
+
+- status = acpi_os_create_lock (&acpi_gbl_hardware_lock);
++ status = acpi_os_create_raw_lock (&acpi_gbl_hardware_lock);
+ if (ACPI_FAILURE (status)) {
+ return_ACPI_STATUS (status);
+ }
+@@ -145,7 +145,7 @@
+ /* Delete the spinlocks */
+
+ acpi_os_delete_lock(acpi_gbl_gpe_lock);
+- acpi_os_delete_lock(acpi_gbl_hardware_lock);
++ acpi_os_delete_raw_lock(acpi_gbl_hardware_lock);
+ acpi_os_delete_lock(acpi_gbl_reference_count_lock);
+
+ /* Delete the reader/writer lock */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/ata/libata-sff.c linux-4.14/drivers/ata/libata-sff.c
+--- linux-4.14.orig/drivers/ata/libata-sff.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/ata/libata-sff.c 2018-09-05 11:05:07.000000000 +0200
+@@ -679,9 +679,9 @@
+ unsigned long flags;
+ unsigned int consumed;
+
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ consumed = ata_sff_data_xfer32(qc, buf, buflen, rw);
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+
+ return consumed;
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/block/brd.c linux-4.14/drivers/block/brd.c
+--- linux-4.14.orig/drivers/block/brd.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/block/brd.c 2018-09-05 11:05:07.000000000 +0200
+@@ -60,7 +60,6 @@
+ /*
+ * Look up and return a brd's page for a given sector.
+ */
+-static DEFINE_MUTEX(brd_mutex);
+ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
+ {
+ pgoff_t idx;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/block/zram/zcomp.c linux-4.14/drivers/block/zram/zcomp.c
+--- linux-4.14.orig/drivers/block/zram/zcomp.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/block/zram/zcomp.c 2018-09-05 11:05:07.000000000 +0200
+@@ -116,12 +116,20 @@
struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
{
- return *get_cpu_ptr(comp->stream);
+ struct zcomp_strm *zstrm;
+
-+ zstrm = *this_cpu_ptr(comp->stream);
++ zstrm = *get_local_ptr(comp->stream);
+ spin_lock(&zstrm->zcomp_lock);
+ return zstrm;
}
+
+ zstrm = *this_cpu_ptr(comp->stream);
+ spin_unlock(&zstrm->zcomp_lock);
++ put_local_ptr(zstrm);
}
int zcomp_compress(struct zcomp_strm *zstrm,
-@@ -174,6 +181,7 @@ static int __zcomp_cpu_notifier(struct zcomp *comp,
- pr_err("Can't allocate a compression stream\n");
- return NOTIFY_BAD;
- }
-+ spin_lock_init(&zstrm->zcomp_lock);
- *per_cpu_ptr(comp->stream, cpu) = zstrm;
- break;
- case CPU_DEAD:
-diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
-index 478cac2ed465..f7a6efdc3285 100644
---- a/drivers/block/zram/zcomp.h
-+++ b/drivers/block/zram/zcomp.h
-@@ -14,6 +14,7 @@ struct zcomp_strm {
+@@ -171,6 +179,7 @@
+ pr_err("Can't allocate a compression stream\n");
+ return -ENOMEM;
+ }
++ spin_lock_init(&zstrm->zcomp_lock);
+ *per_cpu_ptr(comp->stream, cpu) = zstrm;
+ return 0;
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/block/zram/zcomp.h linux-4.14/drivers/block/zram/zcomp.h
+--- linux-4.14.orig/drivers/block/zram/zcomp.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/block/zram/zcomp.h 2018-09-05 11:05:07.000000000 +0200
+@@ -14,6 +14,7 @@
/* compression/decompression buffer */
void *buffer;
struct crypto_comp *tfm;
};
/* dynamic per-device compression frontend */
-diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
-index d2ef51ca9cf4..05e749736560 100644
---- a/drivers/block/zram/zram_drv.c
-+++ b/drivers/block/zram/zram_drv.c
-@@ -528,6 +528,8 @@ static struct zram_meta *zram_meta_alloc(char *pool_name, u64 disksize)
- goto out_error;
- }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/block/zram/zram_drv.c linux-4.14/drivers/block/zram/zram_drv.c
+--- linux-4.14.orig/drivers/block/zram/zram_drv.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/block/zram/zram_drv.c 2018-09-05 11:05:07.000000000 +0200
+@@ -756,6 +756,30 @@
+ static DEVICE_ATTR_RO(mm_stat);
+ static DEVICE_ATTR_RO(debug_stat);
-+ zram_meta_init_table_locks(meta, disksize);
++#ifdef CONFIG_PREEMPT_RT_BASE
++static void zram_meta_init_table_locks(struct zram *zram, size_t num_pages)
++{
++ size_t index;
++
++ for (index = 0; index < num_pages; index++)
++ spin_lock_init(&zram->table[index].lock);
++}
++
++static void zram_slot_lock(struct zram *zram, u32 index)
++{
++ spin_lock(&zram->table[index].lock);
++ __set_bit(ZRAM_ACCESS, &zram->table[index].value);
++}
++
++static void zram_slot_unlock(struct zram *zram, u32 index)
++{
++ __clear_bit(ZRAM_ACCESS, &zram->table[index].value);
++ spin_unlock(&zram->table[index].lock);
++}
++
++#else
++static void zram_meta_init_table_locks(struct zram *zram, size_t num_pages) { }
+
- return meta;
+ static void zram_slot_lock(struct zram *zram, u32 index)
+ {
+ bit_spin_lock(ZRAM_ACCESS, &zram->table[index].value);
+@@ -765,6 +789,7 @@
+ {
+ bit_spin_unlock(ZRAM_ACCESS, &zram->table[index].value);
+ }
++#endif
- out_error:
-@@ -575,28 +577,28 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
- struct zram_meta *meta = zram->meta;
+ static void zram_meta_free(struct zram *zram, u64 disksize)
+ {
+@@ -794,6 +819,7 @@
+ return false;
+ }
+
++ zram_meta_init_table_locks(zram, num_pages);
+ return true;
+ }
+
+@@ -845,6 +871,7 @@
unsigned long handle;
unsigned int size;
+ void *src, *dst;
+ struct zcomp_strm *zstrm;
-- bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_lock_table(&meta->table[index]);
- handle = meta->table[index].handle;
- size = zram_get_obj_size(meta, index);
+ if (zram_wb_enabled(zram)) {
+ zram_slot_lock(zram, index);
+@@ -879,6 +906,7 @@
- if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
-- bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_unlock_table(&meta->table[index]);
- clear_page(mem);
- return 0;
- }
+ size = zram_get_obj_size(zram, index);
+ zstrm = zcomp_stream_get(zram->comp);
- cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
+ src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
if (size == PAGE_SIZE) {
- copy_page(mem, cmem);
+ dst = kmap_atomic(page);
+@@ -886,14 +914,13 @@
+ kunmap_atomic(dst);
+ ret = 0;
} else {
- struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp);
--
- ret = zcomp_decompress(zstrm, cmem, size, mem);
+
+ dst = kmap_atomic(page);
+ ret = zcomp_decompress(zstrm, src, size, dst);
+ kunmap_atomic(dst);
- zcomp_stream_put(zram->comp);
}
- zs_unmap_object(meta->mem_pool, handle);
-- bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
+ zs_unmap_object(zram->mem_pool, handle);
+ zcomp_stream_put(zram->comp);
-+ zram_unlock_table(&meta->table[index]);
+ zram_slot_unlock(zram, index);
/* Should NEVER happen. Return bio error if it does. */
- if (unlikely(ret)) {
-@@ -616,14 +618,14 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
- struct zram_meta *meta = zram->meta;
- page = bvec->bv_page;
-
-- bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_lock_table(&meta->table[index]);
- if (unlikely(!meta->table[index].handle) ||
- zram_test_flag(meta, index, ZRAM_ZERO)) {
-- bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_unlock_table(&meta->table[index]);
- handle_zero_page(bvec);
- return 0;
- }
-- bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_unlock_table(&meta->table[index]);
-
- if (is_partial_io(bvec))
- /* Use a temporary buffer to decompress the page */
-@@ -700,10 +702,10 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
- if (user_mem)
- kunmap_atomic(user_mem);
- /* Free memory associated with this sector now. */
-- bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_lock_table(&meta->table[index]);
- zram_free_page(zram, index);
- zram_set_flag(meta, index, ZRAM_ZERO);
-- bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_unlock_table(&meta->table[index]);
-
- atomic64_inc(&zram->stats.zero_pages);
- ret = 0;
-@@ -794,12 +796,12 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
- * Free memory associated with this sector
- * before overwriting unused sectors.
- */
-- bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_lock_table(&meta->table[index]);
- zram_free_page(zram, index);
-
- meta->table[index].handle = handle;
- zram_set_obj_size(meta, index, clen);
-- bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_unlock_table(&meta->table[index]);
-
- /* Update stats */
- atomic64_add(clen, &zram->stats.compr_data_size);
-@@ -842,9 +844,9 @@ static void zram_bio_discard(struct zram *zram, u32 index,
- }
-
- while (n >= PAGE_SIZE) {
-- bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_lock_table(&meta->table[index]);
- zram_free_page(zram, index);
-- bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_unlock_table(&meta->table[index]);
- atomic64_inc(&zram->stats.notify_free);
- index++;
- n -= PAGE_SIZE;
-@@ -973,9 +975,9 @@ static void zram_slot_free_notify(struct block_device *bdev,
- zram = bdev->bd_disk->private_data;
- meta = zram->meta;
-
-- bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_lock_table(&meta->table[index]);
- zram_free_page(zram, index);
-- bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-+ zram_unlock_table(&meta->table[index]);
- atomic64_inc(&zram->stats.notify_free);
- }
-
-diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
-index 74fcf10da374..fd4020c99b9e 100644
---- a/drivers/block/zram/zram_drv.h
-+++ b/drivers/block/zram/zram_drv.h
-@@ -73,6 +73,9 @@ enum zram_pageflags {
- struct zram_table_entry {
- unsigned long handle;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/block/zram/zram_drv.h linux-4.14/drivers/block/zram/zram_drv.h
+--- linux-4.14.orig/drivers/block/zram/zram_drv.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/block/zram/zram_drv.h 2018-09-05 11:05:07.000000000 +0200
+@@ -77,6 +77,9 @@
+ unsigned long element;
+ };
unsigned long value;
+#ifdef CONFIG_PREEMPT_RT_BASE
+ spinlock_t lock;
};
struct zram_stats {
-@@ -120,4 +123,42 @@ struct zram {
- */
- bool claim; /* Protected by bdev->bd_mutex */
- };
-+
-+#ifndef CONFIG_PREEMPT_RT_BASE
-+static inline void zram_lock_table(struct zram_table_entry *table)
-+{
-+ bit_spin_lock(ZRAM_ACCESS, &table->value);
-+}
-+
-+static inline void zram_unlock_table(struct zram_table_entry *table)
-+{
-+ bit_spin_unlock(ZRAM_ACCESS, &table->value);
-+}
-+
-+static inline void zram_meta_init_table_locks(struct zram_meta *meta, u64 disksize) { }
-+#else /* CONFIG_PREEMPT_RT_BASE */
-+static inline void zram_lock_table(struct zram_table_entry *table)
-+{
-+ spin_lock(&table->lock);
-+ __set_bit(ZRAM_ACCESS, &table->value);
-+}
-+
-+static inline void zram_unlock_table(struct zram_table_entry *table)
-+{
-+ __clear_bit(ZRAM_ACCESS, &table->value);
-+ spin_unlock(&table->lock);
-+}
-+
-+static inline void zram_meta_init_table_locks(struct zram_meta *meta, u64 disksize)
-+{
-+ size_t num_pages = disksize >> PAGE_SHIFT;
-+ size_t index;
-+
-+ for (index = 0; index < num_pages; index++) {
-+ spinlock_t *lock = &meta->table[index].lock;
-+ spin_lock_init(lock);
-+ }
-+}
-+#endif /* CONFIG_PREEMPT_RT_BASE */
-+
- #endif
-diff --git a/drivers/char/random.c b/drivers/char/random.c
-index d6876d506220..0c60b1e54579 100644
---- a/drivers/char/random.c
-+++ b/drivers/char/random.c
-@@ -1028,8 +1028,6 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/char/random.c linux-4.14/drivers/char/random.c
+--- linux-4.14.orig/drivers/char/random.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/drivers/char/random.c 2018-09-05 11:05:07.000000000 +0200
+@@ -265,6 +265,7 @@
+ #include <linux/syscalls.h>
+ #include <linux/completion.h>
+ #include <linux/uuid.h>
++#include <linux/locallock.h>
+ #include <crypto/chacha20.h>
+
+ #include <asm/processor.h>
+@@ -856,7 +857,7 @@
+ invalidate_batched_entropy();
+ crng_init = 1;
+ wake_up_interruptible(&crng_init_wait);
+- pr_notice("random: fast init done\n");
++ /* pr_notice("random: fast init done\n"); */
+ }
+ return 1;
+ }
+@@ -941,17 +942,21 @@
+ crng_init = 2;
+ process_random_ready_list();
+ wake_up_interruptible(&crng_init_wait);
+- pr_notice("random: crng init done\n");
++ /* pr_notice("random: crng init done\n"); */
+ if (unseeded_warning.missed) {
++#if 0
+ pr_notice("random: %d get_random_xx warning(s) missed "
+ "due to ratelimiting\n",
+ unseeded_warning.missed);
++#endif
+ unseeded_warning.missed = 0;
+ }
+ if (urandom_warning.missed) {
++#if 0
+ pr_notice("random: %d urandom warning(s) missed "
+ "due to ratelimiting\n",
+ urandom_warning.missed);
++#endif
+ urandom_warning.missed = 0;
+ }
+ }
+@@ -1122,8 +1127,6 @@
} sample;
long delta, delta2, delta3;
sample.jiffies = jiffies;
sample.cycles = random_get_entropy();
sample.num = num;
-@@ -1070,7 +1068,6 @@ static void add_timer_randomness(struct timer_rand_state *state, unsigned num)
+@@ -1164,7 +1167,6 @@
*/
credit_entropy_bits(r, min_t(int, fls(delta>>1), 11));
}
}
void add_input_randomness(unsigned int type, unsigned int code,
-@@ -1123,28 +1120,27 @@ static __u32 get_reg(struct fast_pool *f, struct pt_regs *regs)
- return *(ptr + f->reg_idx++);
+@@ -1221,28 +1223,27 @@
+ return *ptr;
}
-void add_interrupt_randomness(int irq, int irq_flags)
fast_mix(fast_pool);
add_interrupt_bench(cycles);
-diff --git a/drivers/clocksource/tcb_clksrc.c b/drivers/clocksource/tcb_clksrc.c
-index 4da2af9694a2..5b6f57f500b8 100644
---- a/drivers/clocksource/tcb_clksrc.c
-+++ b/drivers/clocksource/tcb_clksrc.c
-@@ -23,8 +23,7 @@
+@@ -2200,6 +2201,7 @@
+ * at any point prior.
+ */
+ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64);
++static DEFINE_LOCAL_IRQ_LOCK(batched_entropy_u64_lock);
+ u64 get_random_u64(void)
+ {
+ u64 ret;
+@@ -2220,7 +2222,7 @@
+ warn_unseeded_randomness(&previous);
+
+ use_lock = READ_ONCE(crng_init) < 2;
+- batch = &get_cpu_var(batched_entropy_u64);
++ batch = &get_locked_var(batched_entropy_u64_lock, batched_entropy_u64);
+ if (use_lock)
+ read_lock_irqsave(&batched_entropy_reset_lock, flags);
+ if (batch->position % ARRAY_SIZE(batch->entropy_u64) == 0) {
+@@ -2230,12 +2232,13 @@
+ ret = batch->entropy_u64[batch->position++];
+ if (use_lock)
+ read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+- put_cpu_var(batched_entropy_u64);
++ put_locked_var(batched_entropy_u64_lock, batched_entropy_u64);
+ return ret;
+ }
+ EXPORT_SYMBOL(get_random_u64);
+
+ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u32);
++static DEFINE_LOCAL_IRQ_LOCK(batched_entropy_u32_lock);
+ u32 get_random_u32(void)
+ {
+ u32 ret;
+@@ -2250,7 +2253,7 @@
+ warn_unseeded_randomness(&previous);
+
+ use_lock = READ_ONCE(crng_init) < 2;
+- batch = &get_cpu_var(batched_entropy_u32);
++ batch = &get_locked_var(batched_entropy_u32_lock, batched_entropy_u32);
+ if (use_lock)
+ read_lock_irqsave(&batched_entropy_reset_lock, flags);
+ if (batch->position % ARRAY_SIZE(batch->entropy_u32) == 0) {
+@@ -2260,7 +2263,7 @@
+ ret = batch->entropy_u32[batch->position++];
+ if (use_lock)
+ read_unlock_irqrestore(&batched_entropy_reset_lock, flags);
+- put_cpu_var(batched_entropy_u32);
++ put_locked_var(batched_entropy_u32_lock, batched_entropy_u32);
+ return ret;
+ }
+ EXPORT_SYMBOL(get_random_u32);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/char/tpm/tpm_tis.c linux-4.14/drivers/char/tpm/tpm_tis.c
+--- linux-4.14.orig/drivers/char/tpm/tpm_tis.c 2018-09-05 11:03:20.000000000 +0200
++++ linux-4.14/drivers/char/tpm/tpm_tis.c 2018-09-05 11:05:07.000000000 +0200
+@@ -52,6 +52,31 @@
+ return container_of(data, struct tpm_tis_tcg_phy, priv);
+ }
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++/*
++ * Flushes previous write operations to chip so that a subsequent
++ * ioread*()s won't stall a cpu.
++ */
++static inline void tpm_tis_flush(void __iomem *iobase)
++{
++ ioread8(iobase + TPM_ACCESS(0));
++}
++#else
++#define tpm_tis_flush(iobase) do { } while (0)
++#endif
++
++static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase, u32 addr)
++{
++ iowrite8(b, iobase + addr);
++ tpm_tis_flush(iobase);
++}
++
++static inline void tpm_tis_iowrite32(u32 b, void __iomem *iobase, u32 addr)
++{
++ iowrite32(b, iobase + addr);
++ tpm_tis_flush(iobase);
++}
++
+ static bool interrupts = true;
+ module_param(interrupts, bool, 0444);
+ MODULE_PARM_DESC(interrupts, "Enable interrupts");
+@@ -149,7 +174,7 @@
+ struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
+
+ while (len--)
+- iowrite8(*value++, phy->iobase + addr);
++ tpm_tis_iowrite8(*value++, phy->iobase, addr);
+
+ return 0;
+ }
+@@ -176,7 +201,7 @@
+ {
+ struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
+
+- iowrite32(value, phy->iobase + addr);
++ tpm_tis_iowrite32(value, phy->iobase, addr);
+
+ return 0;
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/clocksource/tcb_clksrc.c linux-4.14/drivers/clocksource/tcb_clksrc.c
+--- linux-4.14.orig/drivers/clocksource/tcb_clksrc.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/clocksource/tcb_clksrc.c 2018-09-05 11:05:07.000000000 +0200
+@@ -25,8 +25,7 @@
* this 32 bit free-running counter. the second channel is not used.
*
* - The third channel may be used to provide a 16-bit clockevent
*
* A boot clocksource and clockevent source are also currently needed,
* unless the relevant platforms (ARM/AT91, AVR32/AT32) are changed so
-@@ -74,6 +73,8 @@ static struct clocksource clksrc = {
+@@ -126,6 +125,8 @@
struct tc_clkevt_device {
struct clock_event_device clkevt;
struct clk *clk;
void __iomem *regs;
};
-@@ -82,15 +83,26 @@ static struct tc_clkevt_device *to_tc_clkevt(struct clock_event_device *clkevt)
+@@ -134,15 +135,26 @@
return container_of(clkevt, struct tc_clkevt_device, clkevt);
}
static int tc_shutdown(struct clock_event_device *d)
{
struct tc_clkevt_device *tcd = to_tc_clkevt(d);
-@@ -98,8 +110,14 @@ static int tc_shutdown(struct clock_event_device *d)
+@@ -150,8 +162,14 @@
- __raw_writel(0xff, regs + ATMEL_TC_REG(2, IDR));
- __raw_writel(ATMEL_TC_CLKDIS, regs + ATMEL_TC_REG(2, CCR));
+ writel(0xff, regs + ATMEL_TC_REG(2, IDR));
+ writel(ATMEL_TC_CLKDIS, regs + ATMEL_TC_REG(2, CCR));
+ return 0;
+}
+
return 0;
}
-@@ -112,9 +130,9 @@ static int tc_set_oneshot(struct clock_event_device *d)
+@@ -164,9 +182,9 @@
if (clockevent_state_oneshot(d) || clockevent_state_periodic(d))
tc_shutdown(d);
- /* slow clock, count up to RC, then irq and stop */
+ /* count up to RC, then irq and stop */
- __raw_writel(timer_clock | ATMEL_TC_CPCSTOP | ATMEL_TC_WAVE |
+ writel(timer_clock | ATMEL_TC_CPCSTOP | ATMEL_TC_WAVE |
ATMEL_TC_WAVESEL_UP_AUTO, regs + ATMEL_TC_REG(2, CMR));
- __raw_writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER));
-@@ -134,12 +152,12 @@ static int tc_set_periodic(struct clock_event_device *d)
+ writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER));
+@@ -186,12 +204,12 @@
/* By not making the gentime core emulate periodic mode on top
* of oneshot, we get lower overhead and improved accuracy.
*/
- /* slow clock, count up to RC, then irq and restart */
+ /* count up to RC, then irq and restart */
- __raw_writel(timer_clock | ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP_AUTO,
+ writel(timer_clock | ATMEL_TC_WAVE | ATMEL_TC_WAVESEL_UP_AUTO,
regs + ATMEL_TC_REG(2, CMR));
-- __raw_writel((32768 + HZ / 2) / HZ, tcaddr + ATMEL_TC_REG(2, RC));
-+ __raw_writel((tcd->freq + HZ / 2) / HZ, tcaddr + ATMEL_TC_REG(2, RC));
+- writel((32768 + HZ / 2) / HZ, tcaddr + ATMEL_TC_REG(2, RC));
++ writel((tcd->freq + HZ / 2) / HZ, tcaddr + ATMEL_TC_REG(2, RC));
/* Enable clock and interrupts on RC compare */
- __raw_writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER));
-@@ -166,9 +184,13 @@ static struct tc_clkevt_device clkevt = {
+ writel(ATMEL_TC_CPCS, regs + ATMEL_TC_REG(2, IER));
+@@ -218,9 +236,13 @@
.features = CLOCK_EVT_FEAT_PERIODIC |
CLOCK_EVT_FEAT_ONESHOT,
/* Should be lower than at91rm9200's system timer */
.set_state_periodic = tc_set_periodic,
.set_state_oneshot = tc_set_oneshot,
},
-@@ -188,8 +210,9 @@ static irqreturn_t ch2_irq(int irq, void *handle)
+@@ -240,8 +262,9 @@
return IRQ_NONE;
}
int ret;
struct clk *t2_clk = tc->clk[2];
int irq = tc->irq[2];
-@@ -210,7 +233,11 @@ static int __init setup_clkevents(struct atmel_tc *tc, int clk32k_divisor_idx)
+@@ -262,7 +285,11 @@
clkevt.regs = tc->regs;
clkevt.clk = t2_clk;
clkevt.clkevt.cpumask = cpumask_of(0);
-@@ -221,7 +248,7 @@ static int __init setup_clkevents(struct atmel_tc *tc, int clk32k_divisor_idx)
+@@ -273,7 +300,7 @@
return ret;
}
return ret;
}
-@@ -358,7 +385,11 @@ static int __init tcb_clksrc_init(void)
+@@ -410,7 +437,11 @@
goto err_disable_t1;
/* channel 2: periodic and oneshot timer support */
if (ret)
goto err_unregister_clksrc;
-diff --git a/drivers/clocksource/timer-atmel-pit.c b/drivers/clocksource/timer-atmel-pit.c
-index 6555821bbdae..93288849b2bd 100644
---- a/drivers/clocksource/timer-atmel-pit.c
-+++ b/drivers/clocksource/timer-atmel-pit.c
-@@ -46,6 +46,7 @@ struct pit_data {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/clocksource/timer-atmel-pit.c linux-4.14/drivers/clocksource/timer-atmel-pit.c
+--- linux-4.14.orig/drivers/clocksource/timer-atmel-pit.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/clocksource/timer-atmel-pit.c 2018-09-05 11:05:07.000000000 +0200
+@@ -46,6 +46,7 @@
u32 cycle;
u32 cnt;
unsigned int irq;
struct clk *mck;
};
-@@ -96,15 +97,29 @@ static int pit_clkevt_shutdown(struct clock_event_device *dev)
+@@ -96,15 +97,29 @@
/* disable irq, leaving the clocksource active */
pit_write(data->base, AT91_PIT_MR, (data->cycle - 1) | AT91_PIT_PITEN);
/* update clocksource counter */
data->cnt += data->cycle * PIT_PICNT(pit_read(data->base, AT91_PIT_PIVR));
-@@ -230,15 +245,6 @@ static int __init at91sam926x_pit_dt_init(struct device_node *node)
+@@ -230,15 +245,6 @@
return ret;
}
/* Set up and register clockevents */
data->clkevt.name = "pit";
data->clkevt.features = CLOCK_EVT_FEAT_PERIODIC;
-diff --git a/drivers/clocksource/timer-atmel-st.c b/drivers/clocksource/timer-atmel-st.c
-index e90ab5b63a90..9e124087c55f 100644
---- a/drivers/clocksource/timer-atmel-st.c
-+++ b/drivers/clocksource/timer-atmel-st.c
-@@ -115,18 +115,29 @@ static void clkdev32k_disable_and_flush_irq(void)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/clocksource/timer-atmel-st.c linux-4.14/drivers/clocksource/timer-atmel-st.c
+--- linux-4.14.orig/drivers/clocksource/timer-atmel-st.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/clocksource/timer-atmel-st.c 2018-09-05 11:05:07.000000000 +0200
+@@ -115,18 +115,29 @@
last_crtr = read_CRTR();
}
/*
* ALM for oneshot irqs, set by next_event()
* before 32 seconds have passed.
-@@ -139,8 +150,16 @@ static int clkevt32k_set_oneshot(struct clock_event_device *dev)
+@@ -139,8 +150,16 @@
static int clkevt32k_set_periodic(struct clock_event_device *dev)
{
/* PIT for periodic irqs; fixed rate of 1/HZ */
irqmask = AT91_ST_PITS;
regmap_write(regmap_st, AT91_ST_PIMR, timer_latch);
-@@ -198,7 +217,7 @@ static int __init atmel_st_timer_init(struct device_node *node)
+@@ -198,7 +217,7 @@
{
struct clk *sclk;
unsigned int sclk_rate, val;
regmap_st = syscon_node_to_regmap(node);
if (IS_ERR(regmap_st)) {
-@@ -212,21 +231,12 @@ static int __init atmel_st_timer_init(struct device_node *node)
+@@ -212,21 +231,12 @@
regmap_read(regmap_st, AT91_ST_SR, &val);
/* Get the interrupts property */
sclk = of_clk_get(node, 0);
if (IS_ERR(sclk)) {
pr_err("Unable to get slow clock\n");
-diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
-index a782ce87715c..19d265948526 100644
---- a/drivers/connector/cn_proc.c
-+++ b/drivers/connector/cn_proc.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/connector/cn_proc.c linux-4.14/drivers/connector/cn_proc.c
+--- linux-4.14.orig/drivers/connector/cn_proc.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/connector/cn_proc.c 2018-09-05 11:05:07.000000000 +0200
@@ -32,6 +32,7 @@
#include <linux/pid_namespace.h>
/*
* Size of a cn_msg followed by a proc_event structure. Since the
-@@ -54,10 +55,11 @@ static struct cb_id cn_proc_event_id = { CN_IDX_PROC, CN_VAL_PROC };
+@@ -54,10 +55,11 @@
/* proc_event_counts is used as the sequence number of the netlink message */
static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 };
msg->seq = __this_cpu_inc_return(proc_event_counts) - 1;
((struct proc_event *)msg->data)->cpu = smp_processor_id();
-@@ -70,7 +72,7 @@ static inline void send_msg(struct cn_msg *msg)
+@@ -70,7 +72,7 @@
*/
cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT);
}
void proc_fork_connector(struct task_struct *task)
-diff --git a/drivers/cpufreq/Kconfig.x86 b/drivers/cpufreq/Kconfig.x86
-index adbd1de1cea5..1fac5074f2cf 100644
---- a/drivers/cpufreq/Kconfig.x86
-+++ b/drivers/cpufreq/Kconfig.x86
-@@ -124,7 +124,7 @@ config X86_POWERNOW_K7_ACPI
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/cpufreq/Kconfig.x86 linux-4.14/drivers/cpufreq/Kconfig.x86
+--- linux-4.14.orig/drivers/cpufreq/Kconfig.x86 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/cpufreq/Kconfig.x86 2018-09-05 11:05:07.000000000 +0200
+@@ -125,7 +125,7 @@
config X86_POWERNOW_K8
tristate "AMD Opteron/Athlon64 PowerNow!"
help
This adds the CPUFreq driver for K8/early Opteron/Athlon64 processors.
Support for K10 and newer processors is now in acpi-cpufreq.
-diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
-index a218c2e395e7..5273d8f1d5dd 100644
---- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
-+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
-@@ -1537,7 +1537,9 @@ execbuf_submit(struct i915_execbuffer_params *params,
- if (ret)
- return ret;
-
-+#ifndef CONFIG_PREEMPT_RT_BASE
- trace_i915_gem_ring_dispatch(params->request, params->dispatch_flags);
-+#endif
-
- i915_gem_execbuffer_move_to_active(vmas, params->request);
-
-diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
-index 1c237d02f30b..9e9b4404c0d7 100644
---- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
-+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
-@@ -40,7 +40,7 @@ static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
- if (!mutex_is_locked(mutex))
- return false;
-
--#if defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_MUTEX_SPIN_ON_OWNER)
-+#if (defined(CONFIG_DEBUG_MUTEXES) || defined(CONFIG_MUTEX_SPIN_ON_OWNER)) && !defined(CONFIG_PREEMPT_RT_BASE)
- return mutex->owner == task;
- #else
- /* Since UP may be pre-empted, we cannot assume that we own the lock */
-diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
-index 3fc286cd1157..252a1117b103 100644
---- a/drivers/gpu/drm/i915/i915_irq.c
-+++ b/drivers/gpu/drm/i915/i915_irq.c
-@@ -812,6 +812,7 @@ static int i915_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/gpu/drm/i915/i915_gem_timeline.c linux-4.14/drivers/gpu/drm/i915/i915_gem_timeline.c
+--- linux-4.14.orig/drivers/gpu/drm/i915/i915_gem_timeline.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/gpu/drm/i915/i915_gem_timeline.c 2018-09-05 11:05:07.000000000 +0200
+@@ -33,11 +33,8 @@
+ {
+ tl->fence_context = context;
+ tl->common = parent;
+-#ifdef CONFIG_DEBUG_SPINLOCK
+- __raw_spin_lock_init(&tl->lock.rlock, lockname, lockclass);
+-#else
+ spin_lock_init(&tl->lock);
+-#endif
++ lockdep_set_class_and_name(&tl->lock, lockclass, lockname);
+ init_request_active(&tl->last_request, NULL);
+ INIT_LIST_HEAD(&tl->requests);
+ i915_syncmap_init(&tl->sync);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/gpu/drm/i915/i915_irq.c linux-4.14/drivers/gpu/drm/i915/i915_irq.c
+--- linux-4.14.orig/drivers/gpu/drm/i915/i915_irq.c 2018-09-05 11:03:21.000000000 +0200
++++ linux-4.14/drivers/gpu/drm/i915/i915_irq.c 2018-09-05 11:05:07.000000000 +0200
+@@ -867,6 +867,7 @@
spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
/* Get optional system timestamp before query. */
if (stime)
-@@ -863,6 +864,7 @@ static int i915_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe,
+@@ -918,6 +919,7 @@
*etime = ktime_get();
/* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */
spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
-diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
-index 869b29fe9ec4..c8b8788d9d36 100644
---- a/drivers/gpu/drm/i915/intel_display.c
-+++ b/drivers/gpu/drm/i915/intel_display.c
-@@ -12131,7 +12131,7 @@ void intel_check_page_flip(struct drm_i915_private *dev_priv, int pipe)
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- struct intel_flip_work *work;
-
-- WARN_ON(!in_interrupt());
-+ WARN_ON_NONRT(!in_interrupt());
-
- if (crtc == NULL)
- return;
-diff --git a/drivers/gpu/drm/i915/intel_sprite.c b/drivers/gpu/drm/i915/intel_sprite.c
-index dbed12c484c9..5c540b78e8b5 100644
---- a/drivers/gpu/drm/i915/intel_sprite.c
-+++ b/drivers/gpu/drm/i915/intel_sprite.c
-@@ -35,6 +35,7 @@
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/gpu/drm/i915/intel_sprite.c linux-4.14/drivers/gpu/drm/i915/intel_sprite.c
+--- linux-4.14.orig/drivers/gpu/drm/i915/intel_sprite.c 2018-09-05 11:03:21.000000000 +0200
++++ linux-4.14/drivers/gpu/drm/i915/intel_sprite.c 2018-09-05 11:05:07.000000000 +0200
+@@ -36,6 +36,7 @@
#include <drm/drm_rect.h>
#include <drm/drm_atomic.h>
#include <drm/drm_plane_helper.h>
#include "intel_drv.h"
#include "intel_frontbuffer.h"
#include <drm/i915_drm.h>
-@@ -65,6 +66,8 @@ int intel_usecs_to_scanlines(const struct drm_display_mode *adjusted_mode,
- 1000 * adjusted_mode->crtc_htotal);
+@@ -67,7 +68,7 @@
}
+ #define VBLANK_EVASION_TIME_US 100
+-
+static DEFINE_LOCAL_IRQ_LOCK(pipe_update_lock);
-+
/**
* intel_pipe_update_start() - start update of a set of display registers
* @crtc: the crtc of which the registers are going to be updated
-@@ -95,7 +98,7 @@ void intel_pipe_update_start(struct intel_crtc *crtc)
- min = vblank_start - intel_usecs_to_scanlines(adjusted_mode, 100);
+@@ -102,7 +103,7 @@
+ VBLANK_EVASION_TIME_US);
max = vblank_start - 1;
- local_irq_disable();
if (min <= 0 || max <= 0)
return;
-@@ -125,11 +128,11 @@ void intel_pipe_update_start(struct intel_crtc *crtc)
+@@ -132,11 +133,11 @@
break;
}
}
finish_wait(wq, &wait);
-@@ -181,7 +184,7 @@ void intel_pipe_update_end(struct intel_crtc *crtc, struct intel_flip_work *work
+@@ -201,7 +202,7 @@
crtc->base.state->event = NULL;
}
- local_irq_enable();
+ local_unlock_irq(pipe_update_lock);
- if (crtc->debug.start_vbl_count &&
- crtc->debug.start_vbl_count != end_vbl_count) {
-diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
-index 192b2d3a79cb..d5372a207326 100644
---- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
-+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
-@@ -23,7 +23,7 @@ static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
- if (!mutex_is_locked(mutex))
- return false;
-
--#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
-+#if (defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)) && !defined(CONFIG_PREEMPT_RT_BASE)
- return mutex->owner == task;
- #else
- /* Since UP may be pre-empted, we cannot assume that we own the lock */
-diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/radeon/radeon_display.c
-index cdb8cb568c15..b6d7fd964cbc 100644
---- a/drivers/gpu/drm/radeon/radeon_display.c
-+++ b/drivers/gpu/drm/radeon/radeon_display.c
-@@ -1845,6 +1845,7 @@ int radeon_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe,
+ if (intel_vgpu_active(dev_priv))
+ return;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/gpu/drm/radeon/radeon_display.c linux-4.14/drivers/gpu/drm/radeon/radeon_display.c
+--- linux-4.14.orig/drivers/gpu/drm/radeon/radeon_display.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/gpu/drm/radeon/radeon_display.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1839,6 +1839,7 @@
struct radeon_device *rdev = dev->dev_private;
/* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
/* Get optional system timestamp before query. */
if (stime)
-@@ -1937,6 +1938,7 @@ int radeon_get_crtc_scanoutpos(struct drm_device *dev, unsigned int pipe,
+@@ -1931,6 +1932,7 @@
*etime = ktime_get();
/* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */
/* Decode into vertical and horizontal scanout position. */
*vpos = position & 0x1fff;
-diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
-index 0276d2ef06ee..8868045eabde 100644
---- a/drivers/hv/vmbus_drv.c
-+++ b/drivers/hv/vmbus_drv.c
-@@ -761,6 +761,8 @@ static void vmbus_isr(void)
- void *page_addr;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/hv/vmbus_drv.c linux-4.14/drivers/hv/vmbus_drv.c
+--- linux-4.14.orig/drivers/hv/vmbus_drv.c 2018-09-05 11:03:21.000000000 +0200
++++ linux-4.14/drivers/hv/vmbus_drv.c 2018-09-05 11:05:37.000000000 +0200
+@@ -39,6 +39,7 @@
+ #include <asm/hyperv.h>
+ #include <asm/hypervisor.h>
+ #include <asm/mshyperv.h>
++#include <asm/irq_regs.h>
+ #include <linux/notifier.h>
+ #include <linux/ptrace.h>
+ #include <linux/screen_info.h>
+@@ -966,6 +967,8 @@
+ void *page_addr = hv_cpu->synic_event_page;
struct hv_message *msg;
union hv_synic_event_flags *event;
+ struct pt_regs *regs = get_irq_regs();
+ u64 ip = regs ? instruction_pointer(regs) : 0;
bool handled = false;
- page_addr = hv_context.synic_event_page[cpu];
-@@ -808,7 +810,7 @@ static void vmbus_isr(void)
- tasklet_schedule(hv_context.msg_dpc[cpu]);
+ if (unlikely(page_addr == NULL))
+@@ -1009,7 +1012,7 @@
+ tasklet_schedule(&hv_cpu->msg_dpc);
}
- add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR, 0);
}
-diff --git a/drivers/ide/alim15x3.c b/drivers/ide/alim15x3.c
-index 36f76e28a0bf..394f142f90c7 100644
---- a/drivers/ide/alim15x3.c
-+++ b/drivers/ide/alim15x3.c
-@@ -234,7 +234,7 @@ static int init_chipset_ali15x3(struct pci_dev *dev)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/ide/alim15x3.c linux-4.14/drivers/ide/alim15x3.c
+--- linux-4.14.orig/drivers/ide/alim15x3.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/ide/alim15x3.c 2018-09-05 11:05:07.000000000 +0200
+@@ -234,7 +234,7 @@
isa_dev = pci_get_device(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M1533, NULL);
if (m5229_revision < 0xC2) {
/*
-@@ -325,7 +325,7 @@ static int init_chipset_ali15x3(struct pci_dev *dev)
+@@ -325,7 +325,7 @@
}
pci_dev_put(north);
pci_dev_put(isa_dev);
return 0;
}
-diff --git a/drivers/ide/hpt366.c b/drivers/ide/hpt366.c
-index 0ceae5cbd89a..c212e85d7f3e 100644
---- a/drivers/ide/hpt366.c
-+++ b/drivers/ide/hpt366.c
-@@ -1236,7 +1236,7 @@ static int init_dma_hpt366(ide_hwif_t *hwif,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/ide/hpt366.c linux-4.14/drivers/ide/hpt366.c
+--- linux-4.14.orig/drivers/ide/hpt366.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/ide/hpt366.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1236,7 +1236,7 @@
dma_old = inb(base + 2);
dma_new = dma_old;
pci_read_config_byte(dev, hwif->channel ? 0x4b : 0x43, &masterdma);
-@@ -1247,7 +1247,7 @@ static int init_dma_hpt366(ide_hwif_t *hwif,
+@@ -1247,7 +1247,7 @@
if (dma_new != dma_old)
outb(dma_new, base + 2);
printk(KERN_INFO " %s: BM-DMA at 0x%04lx-0x%04lx\n",
hwif->name, base, base + 7);
-diff --git a/drivers/ide/ide-io-std.c b/drivers/ide/ide-io-std.c
-index 19763977568c..4169433faab5 100644
---- a/drivers/ide/ide-io-std.c
-+++ b/drivers/ide/ide-io-std.c
-@@ -175,7 +175,7 @@ void ide_input_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/ide/ide-io.c linux-4.14/drivers/ide/ide-io.c
+--- linux-4.14.orig/drivers/ide/ide-io.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/ide/ide-io.c 2018-09-05 11:05:07.000000000 +0200
+@@ -660,7 +660,7 @@
+ /* disable_irq_nosync ?? */
+ disable_irq(hwif->irq);
+ /* local CPU only, as if we were handling an interrupt */
+- local_irq_disable();
++ local_irq_disable_nort();
+ if (hwif->polling) {
+ startstop = handler(drive);
+ } else if (drive_is_ready(drive)) {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/ide/ide-iops.c linux-4.14/drivers/ide/ide-iops.c
+--- linux-4.14.orig/drivers/ide/ide-iops.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/ide/ide-iops.c 2018-09-05 11:05:07.000000000 +0200
+@@ -129,12 +129,12 @@
+ if ((stat & ATA_BUSY) == 0)
+ break;
+
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ *rstat = stat;
+ return -EBUSY;
+ }
+ }
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ }
+ /*
+ * Allow status to settle, then read it again.
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/ide/ide-io-std.c linux-4.14/drivers/ide/ide-io-std.c
+--- linux-4.14.orig/drivers/ide/ide-io-std.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/ide/ide-io-std.c 2018-09-05 11:05:07.000000000 +0200
+@@ -175,7 +175,7 @@
unsigned long uninitialized_var(flags);
if ((io_32bit & 2) && !mmio) {
ata_vlb_sync(io_ports->nsect_addr);
}
-@@ -186,7 +186,7 @@ void ide_input_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
+@@ -186,7 +186,7 @@
insl(data_addr, buf, words);
if ((io_32bit & 2) && !mmio)
if (((len + 1) & 3) < 2)
return;
-@@ -219,7 +219,7 @@ void ide_output_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
+@@ -219,7 +219,7 @@
unsigned long uninitialized_var(flags);
if ((io_32bit & 2) && !mmio) {
ata_vlb_sync(io_ports->nsect_addr);
}
-@@ -230,7 +230,7 @@ void ide_output_data(ide_drive_t *drive, struct ide_cmd *cmd, void *buf,
+@@ -230,7 +230,7 @@
outsl(data_addr, buf, words);
if ((io_32bit & 2) && !mmio)
if (((len + 1) & 3) < 2)
return;
-diff --git a/drivers/ide/ide-io.c b/drivers/ide/ide-io.c
-index 669ea1e45795..e12e43e62245 100644
---- a/drivers/ide/ide-io.c
-+++ b/drivers/ide/ide-io.c
-@@ -659,7 +659,7 @@ void ide_timer_expiry (unsigned long data)
- /* disable_irq_nosync ?? */
- disable_irq(hwif->irq);
- /* local CPU only, as if we were handling an interrupt */
-- local_irq_disable();
-+ local_irq_disable_nort();
- if (hwif->polling) {
- startstop = handler(drive);
- } else if (drive_is_ready(drive)) {
-diff --git a/drivers/ide/ide-iops.c b/drivers/ide/ide-iops.c
-index 376f2dc410c5..f014dd1b73dc 100644
---- a/drivers/ide/ide-iops.c
-+++ b/drivers/ide/ide-iops.c
-@@ -129,12 +129,12 @@ int __ide_wait_stat(ide_drive_t *drive, u8 good, u8 bad,
- if ((stat & ATA_BUSY) == 0)
- break;
-
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- *rstat = stat;
- return -EBUSY;
- }
- }
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- }
- /*
- * Allow status to settle, then read it again.
-diff --git a/drivers/ide/ide-probe.c b/drivers/ide/ide-probe.c
-index 0b63facd1d87..4ceba37afc0c 100644
---- a/drivers/ide/ide-probe.c
-+++ b/drivers/ide/ide-probe.c
-@@ -196,10 +196,10 @@ static void do_identify(ide_drive_t *drive, u8 cmd, u16 *id)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/ide/ide-probe.c linux-4.14/drivers/ide/ide-probe.c
+--- linux-4.14.orig/drivers/ide/ide-probe.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/ide/ide-probe.c 2018-09-05 11:05:07.000000000 +0200
+@@ -196,10 +196,10 @@
int bswap = 1;
/* local CPU only; some systems need this */
drive->dev_flags |= IDE_DFLAG_ID_READ;
#ifdef DEBUG
-diff --git a/drivers/ide/ide-taskfile.c b/drivers/ide/ide-taskfile.c
-index a716693417a3..be0568c722d6 100644
---- a/drivers/ide/ide-taskfile.c
-+++ b/drivers/ide/ide-taskfile.c
-@@ -250,7 +250,7 @@ void ide_pio_bytes(ide_drive_t *drive, struct ide_cmd *cmd,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/ide/ide-taskfile.c linux-4.14/drivers/ide/ide-taskfile.c
+--- linux-4.14.orig/drivers/ide/ide-taskfile.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/ide/ide-taskfile.c 2018-09-05 11:05:07.000000000 +0200
+@@ -251,7 +251,7 @@
page_is_high = PageHighMem(page);
if (page_is_high)
buf = kmap_atomic(page) + offset;
-@@ -271,7 +271,7 @@ void ide_pio_bytes(ide_drive_t *drive, struct ide_cmd *cmd,
+@@ -272,7 +272,7 @@
kunmap_atomic(buf);
if (page_is_high)
len -= nr_bytes;
}
-@@ -414,7 +414,7 @@ static ide_startstop_t pre_task_out_intr(ide_drive_t *drive,
+@@ -415,7 +415,7 @@
}
if ((drive->dev_flags & IDE_DFLAG_UNMASK) == 0)
ide_set_handler(drive, &task_pio_intr, WAIT_WORSTCASE);
-diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
-index fddff403d5d2..cca1bb4fbfe3 100644
---- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
-+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
-@@ -902,7 +902,7 @@ void ipoib_mcast_restart_task(struct work_struct *work)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/infiniband/hw/hfi1/affinity.c linux-4.14/drivers/infiniband/hw/hfi1/affinity.c
+--- linux-4.14.orig/drivers/infiniband/hw/hfi1/affinity.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/infiniband/hw/hfi1/affinity.c 2018-09-05 11:05:07.000000000 +0200
+@@ -575,7 +575,7 @@
+ struct hfi1_affinity_node *entry;
+ cpumask_var_t diff, hw_thread_mask, available_mask, intrs_mask;
+ const struct cpumask *node_mask,
+-	*proc_mask = &current->cpus_allowed;
++ *proc_mask = current->cpus_ptr;
+ struct hfi1_affinity_node_list *affinity = &node_affinity;
+ struct cpu_mask_set *set = &affinity->proc;
+
+@@ -583,7 +583,7 @@
+ * check whether process/context affinity has already
+ * been set
+ */
+- if (cpumask_weight(proc_mask) == 1) {
++ if (current->nr_cpus_allowed == 1) {
+ hfi1_cdbg(PROC, "PID %u %s affinity set to CPU %*pbl",
+ current->pid, current->comm,
+ cpumask_pr_args(proc_mask));
+@@ -594,7 +594,7 @@
+ cpu = cpumask_first(proc_mask);
+ cpumask_set_cpu(cpu, &set->used);
+ goto done;
+- } else if (cpumask_weight(proc_mask) < cpumask_weight(&set->mask)) {
++ } else if (current->nr_cpus_allowed < cpumask_weight(&set->mask)) {
+ hfi1_cdbg(PROC, "PID %u %s affinity set to CPU set(s) %*pbl",
+ current->pid, current->comm,
+ cpumask_pr_args(proc_mask));
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/infiniband/hw/hfi1/sdma.c linux-4.14/drivers/infiniband/hw/hfi1/sdma.c
+--- linux-4.14.orig/drivers/infiniband/hw/hfi1/sdma.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/infiniband/hw/hfi1/sdma.c 2018-09-05 11:05:07.000000000 +0200
+@@ -856,14 +856,13 @@
+ {
+ struct sdma_rht_node *rht_node;
+ struct sdma_engine *sde = NULL;
+-	const struct cpumask *current_mask = &current->cpus_allowed;
+ unsigned long cpu_id;
+
+ /*
+ * To ensure that always the same sdma engine(s) will be
+ * selected make sure the process is pinned to this CPU only.
+ */
+- if (cpumask_weight(current_mask) != 1)
++ if (current->nr_cpus_allowed != 1)
+ goto out;
+
+ cpu_id = smp_processor_id();
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/infiniband/hw/qib/qib_file_ops.c linux-4.14/drivers/infiniband/hw/qib/qib_file_ops.c
+--- linux-4.14.orig/drivers/infiniband/hw/qib/qib_file_ops.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/infiniband/hw/qib/qib_file_ops.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1167,7 +1167,7 @@
+ static void assign_ctxt_affinity(struct file *fp, struct qib_devdata *dd)
+ {
+ struct qib_filedata *fd = fp->private_data;
+-	const unsigned int weight = cpumask_weight(&current->cpus_allowed);
++ const unsigned int weight = current->nr_cpus_allowed;
+ const struct cpumask *local_mask = cpumask_of_pcibus(dd->pcidev->bus);
+ int local_cpu;
+
+@@ -1648,9 +1648,8 @@
+ ret = find_free_ctxt(i_minor - 1, fp, uinfo);
+ else {
+ int unit;
+-		const unsigned int cpu = cpumask_first(&current->cpus_allowed);
+- const unsigned int weight =
+-			cpumask_weight(&current->cpus_allowed);
++ const unsigned int cpu = cpumask_first(current->cpus_ptr);
++ const unsigned int weight = current->nr_cpus_allowed;
+
+ if (weight == 1 && !test_bit(cpu, qib_cpulist))
+ if (!find_hca(cpu, &unit) && unit >= 0)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/infiniband/ulp/ipoib/ipoib_multicast.c linux-4.14/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+--- linux-4.14.orig/drivers/infiniband/ulp/ipoib/ipoib_multicast.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/infiniband/ulp/ipoib/ipoib_multicast.c 2018-09-05 11:05:07.000000000 +0200
+@@ -898,7 +898,7 @@
ipoib_dbg_mcast(priv, "restarting multicast task\n");
netif_addr_lock(dev);
spin_lock(&priv->lock);
-@@ -984,7 +984,7 @@ void ipoib_mcast_restart_task(struct work_struct *work)
+@@ -980,7 +980,7 @@
spin_unlock(&priv->lock);
netif_addr_unlock(dev);
- local_irq_restore(flags);
+ local_irq_restore_nort(flags);
- /*
- * make sure the in-flight joins have finished before we attempt
-diff --git a/drivers/input/gameport/gameport.c b/drivers/input/gameport/gameport.c
-index 4a2a9e370be7..e970d9afd179 100644
---- a/drivers/input/gameport/gameport.c
-+++ b/drivers/input/gameport/gameport.c
-@@ -91,13 +91,13 @@ static int gameport_measure_speed(struct gameport *gameport)
- tx = ~0;
+ ipoib_mcast_remove_list(&remove_list);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/input/gameport/gameport.c linux-4.14/drivers/input/gameport/gameport.c
+--- linux-4.14.orig/drivers/input/gameport/gameport.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/input/gameport/gameport.c 2018-09-05 11:05:07.000000000 +0200
+@@ -91,13 +91,13 @@
+ tx = ~0;
+
+ for (i = 0; i < 50; i++) {
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ t1 = ktime_get_ns();
+ for (t = 0; t < 50; t++)
+ gameport_read(gameport);
+ t2 = ktime_get_ns();
+ t3 = ktime_get_ns();
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ udelay(i * 10);
+ t = (t2 - t1) - (t3 - t2);
+ if (t < tx)
+@@ -124,12 +124,12 @@
+ tx = 1 << 30;
+
+ for(i = 0; i < 50; i++) {
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ GET_TIME(t1);
+ for (t = 0; t < 50; t++) gameport_read(gameport);
+ GET_TIME(t2);
+ GET_TIME(t3);
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ udelay(i * 10);
+ if ((t = DELTA(t2,t1) - DELTA(t3,t2)) < tx) tx = t;
+ }
+@@ -148,11 +148,11 @@
+ tx = 1 << 30;
+
+ for(i = 0; i < 50; i++) {
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ t1 = rdtsc();
+ for (t = 0; t < 50; t++) gameport_read(gameport);
+ t2 = rdtsc();
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ udelay(i * 10);
+ if (t2 - t1 < tx) tx = t2 - t1;
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/iommu/amd_iommu.c linux-4.14/drivers/iommu/amd_iommu.c
+--- linux-4.14.orig/drivers/iommu/amd_iommu.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/iommu/amd_iommu.c 2018-09-05 11:05:07.000000000 +0200
+@@ -81,11 +81,12 @@
+ */
+ #define AMD_IOMMU_PGSIZES ((~0xFFFUL) & ~(2ULL << 38))
+
+-static DEFINE_RWLOCK(amd_iommu_devtable_lock);
++static DEFINE_SPINLOCK(amd_iommu_devtable_lock);
++static DEFINE_SPINLOCK(pd_bitmap_lock);
++static DEFINE_SPINLOCK(iommu_table_lock);
+
+ /* List of all available dev_data structures */
+-static LIST_HEAD(dev_data_list);
+-static DEFINE_SPINLOCK(dev_data_list_lock);
++static LLIST_HEAD(dev_data_list);
+
+ LIST_HEAD(ioapic_map);
+ LIST_HEAD(hpet_map);
+@@ -204,40 +205,33 @@
+ static struct iommu_dev_data *alloc_dev_data(u16 devid)
+ {
+ struct iommu_dev_data *dev_data;
+- unsigned long flags;
+
+ dev_data = kzalloc(sizeof(*dev_data), GFP_KERNEL);
+ if (!dev_data)
+ return NULL;
+
+ dev_data->devid = devid;
+-
+- spin_lock_irqsave(&dev_data_list_lock, flags);
+- list_add_tail(&dev_data->dev_data_list, &dev_data_list);
+- spin_unlock_irqrestore(&dev_data_list_lock, flags);
+-
+ ratelimit_default_init(&dev_data->rs);
+
++ llist_add(&dev_data->dev_data_list, &dev_data_list);
+ return dev_data;
+ }
+
+ static struct iommu_dev_data *search_dev_data(u16 devid)
+ {
+ struct iommu_dev_data *dev_data;
+- unsigned long flags;
++ struct llist_node *node;
+
+- spin_lock_irqsave(&dev_data_list_lock, flags);
+- list_for_each_entry(dev_data, &dev_data_list, dev_data_list) {
++ if (llist_empty(&dev_data_list))
++ return NULL;
++
++ node = dev_data_list.first;
++ llist_for_each_entry(dev_data, node, dev_data_list) {
+ if (dev_data->devid == devid)
+- goto out_unlock;
++ return dev_data;
+ }
+
+- dev_data = NULL;
+-
+-out_unlock:
+- spin_unlock_irqrestore(&dev_data_list_lock, flags);
+-
+- return dev_data;
++ return NULL;
+ }
+
+ static int __last_alias(struct pci_dev *pdev, u16 alias, void *data)
+@@ -1056,9 +1050,9 @@
+ unsigned long flags;
+ int ret;
+
+- spin_lock_irqsave(&iommu->lock, flags);
++ raw_spin_lock_irqsave(&iommu->lock, flags);
+ ret = __iommu_queue_command_sync(iommu, cmd, sync);
+- spin_unlock_irqrestore(&iommu->lock, flags);
++ raw_spin_unlock_irqrestore(&iommu->lock, flags);
+
+ return ret;
+ }
+@@ -1084,7 +1078,7 @@
+
+ build_completion_wait(&cmd, (u64)&iommu->cmd_sem);
+
+- spin_lock_irqsave(&iommu->lock, flags);
++ raw_spin_lock_irqsave(&iommu->lock, flags);
+
+ iommu->cmd_sem = 0;
+
+@@ -1095,7 +1089,7 @@
+ ret = wait_on_sem(&iommu->cmd_sem);
+
+ out_unlock:
+- spin_unlock_irqrestore(&iommu->lock, flags);
++ raw_spin_unlock_irqrestore(&iommu->lock, flags);
+
+ return ret;
+ }
+@@ -1604,29 +1598,26 @@
+
+ static u16 domain_id_alloc(void)
+ {
+- unsigned long flags;
+ int id;
+
+- write_lock_irqsave(&amd_iommu_devtable_lock, flags);
++ spin_lock(&pd_bitmap_lock);
+ id = find_first_zero_bit(amd_iommu_pd_alloc_bitmap, MAX_DOMAIN_ID);
+ BUG_ON(id == 0);
+ if (id > 0 && id < MAX_DOMAIN_ID)
+ __set_bit(id, amd_iommu_pd_alloc_bitmap);
+ else
+ id = 0;
+- write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
++ spin_unlock(&pd_bitmap_lock);
+
+ return id;
+ }
+
+ static void domain_id_free(int id)
+ {
+- unsigned long flags;
+-
+- write_lock_irqsave(&amd_iommu_devtable_lock, flags);
++ spin_lock(&pd_bitmap_lock);
+ if (id > 0 && id < MAX_DOMAIN_ID)
+ __clear_bit(id, amd_iommu_pd_alloc_bitmap);
+- write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
++ spin_unlock(&pd_bitmap_lock);
+ }
+
+ #define DEFINE_FREE_PT_FN(LVL, FN) \
+@@ -1946,10 +1937,10 @@
+ int ret;
+
+ /*
+- * Must be called with IRQs disabled. Warn here to detect early
+- * when its not.
++	 * Must be called with IRQs disabled on a non-RT kernel. Warn here to
++	 * detect early when it's not.
+ */
+- WARN_ON(!irqs_disabled());
++ WARN_ON_NONRT(!irqs_disabled());
+
+ /* lock domain */
+ spin_lock(&domain->lock);
+@@ -2095,9 +2086,9 @@
+ }
+
+ skip_ats_check:
+- write_lock_irqsave(&amd_iommu_devtable_lock, flags);
++ spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
+ ret = __attach_device(dev_data, domain);
+- write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
++ spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
+
+ /*
+ * We might boot into a crash-kernel here. The crashed kernel
+@@ -2117,10 +2108,10 @@
+ struct protection_domain *domain;
+
+ /*
+- * Must be called with IRQs disabled. Warn here to detect early
+- * when its not.
++	 * Must be called with IRQs disabled on a non-RT kernel. Warn here to
++	 * detect early when it's not.
+ */
+- WARN_ON(!irqs_disabled());
++ WARN_ON_NONRT(!irqs_disabled());
+
+ if (WARN_ON(!dev_data->domain))
+ return;
+@@ -2147,9 +2138,9 @@
+ domain = dev_data->domain;
+
+ /* lock device table */
+- write_lock_irqsave(&amd_iommu_devtable_lock, flags);
++ spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
+ __detach_device(dev_data);
+- write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
++ spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
+
+ if (!dev_is_pci(dev))
+ return;
+@@ -2813,7 +2804,7 @@
+ struct iommu_dev_data *entry;
+ unsigned long flags;
+
+- write_lock_irqsave(&amd_iommu_devtable_lock, flags);
++ spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
+
+ while (!list_empty(&domain->dev_list)) {
+ entry = list_first_entry(&domain->dev_list,
+@@ -2821,7 +2812,7 @@
+ __detach_device(entry);
+ }
+
+- write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
++ spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
+ }
+
+ static void protection_domain_free(struct protection_domain *domain)
+@@ -3588,14 +3579,62 @@
+ amd_iommu_dev_table[devid].data[2] = dte;
+ }
+
+-static struct irq_remap_table *get_irq_table(u16 devid, bool ioapic)
++static struct irq_remap_table *get_irq_table(u16 devid)
++{
++ struct irq_remap_table *table;
++
++ if (WARN_ONCE(!amd_iommu_rlookup_table[devid],
++ "%s: no iommu for devid %x\n", __func__, devid))
++ return NULL;
++
++ table = irq_lookup_table[devid];
++ if (WARN_ONCE(!table, "%s: no table for devid %x\n", __func__, devid))
++ return NULL;
++
++ return table;
++}
++
++static struct irq_remap_table *__alloc_irq_table(void)
++{
++ struct irq_remap_table *table;
++
++ table = kzalloc(sizeof(*table), GFP_KERNEL);
++ if (!table)
++ return NULL;
++
++ table->table = kmem_cache_alloc(amd_iommu_irq_cache, GFP_KERNEL);
++ if (!table->table) {
++ kfree(table);
++ return NULL;
++ }
++ raw_spin_lock_init(&table->lock);
++
++ if (!AMD_IOMMU_GUEST_IR_GA(amd_iommu_guest_ir))
++ memset(table->table, 0,
++ MAX_IRQS_PER_TABLE * sizeof(u32));
++ else
++ memset(table->table, 0,
++ (MAX_IRQS_PER_TABLE * (sizeof(u64) * 2)));
++ return table;
++}
++
++static void set_remap_table_entry(struct amd_iommu *iommu, u16 devid,
++ struct irq_remap_table *table)
++{
++ irq_lookup_table[devid] = table;
++ set_dte_irq_entry(devid, table);
++ iommu_flush_dte(iommu, devid);
++}
++
++static struct irq_remap_table *alloc_irq_table(u16 devid)
+ {
+ struct irq_remap_table *table = NULL;
++ struct irq_remap_table *new_table = NULL;
+ struct amd_iommu *iommu;
+ unsigned long flags;
+ u16 alias;
+
+- write_lock_irqsave(&amd_iommu_devtable_lock, flags);
++ spin_lock_irqsave(&iommu_table_lock, flags);
+
+ iommu = amd_iommu_rlookup_table[devid];
+ if (!iommu)
+@@ -3608,60 +3647,45 @@
+ alias = amd_iommu_alias_table[devid];
+ table = irq_lookup_table[alias];
+ if (table) {
+- irq_lookup_table[devid] = table;
+- set_dte_irq_entry(devid, table);
+- iommu_flush_dte(iommu, devid);
+- goto out;
++ set_remap_table_entry(iommu, devid, table);
++ goto out_wait;
+ }
++ spin_unlock_irqrestore(&iommu_table_lock, flags);
+
+ /* Nothing there yet, allocate new irq remapping table */
+- table = kzalloc(sizeof(*table), GFP_ATOMIC);
+- if (!table)
+- goto out_unlock;
+-
+- /* Initialize table spin-lock */
+- spin_lock_init(&table->lock);
++ new_table = __alloc_irq_table();
++ if (!new_table)
++ return NULL;
+
+- if (ioapic)
+- /* Keep the first 32 indexes free for IOAPIC interrupts */
+- table->min_index = 32;
++ spin_lock_irqsave(&iommu_table_lock, flags);
+
+- table->table = kmem_cache_alloc(amd_iommu_irq_cache, GFP_ATOMIC);
+- if (!table->table) {
+- kfree(table);
+- table = NULL;
++ table = irq_lookup_table[devid];
++ if (table)
+ goto out_unlock;
+- }
+-
+- if (!AMD_IOMMU_GUEST_IR_GA(amd_iommu_guest_ir))
+- memset(table->table, 0,
+- MAX_IRQS_PER_TABLE * sizeof(u32));
+- else
+- memset(table->table, 0,
+- (MAX_IRQS_PER_TABLE * (sizeof(u64) * 2)));
+-
+- if (ioapic) {
+- int i;
+
+- for (i = 0; i < 32; ++i)
+- iommu->irte_ops->set_allocated(table, i);
++ table = irq_lookup_table[alias];
++ if (table) {
++ set_remap_table_entry(iommu, devid, table);
++ goto out_wait;
+ }
+
+- irq_lookup_table[devid] = table;
+- set_dte_irq_entry(devid, table);
+- iommu_flush_dte(iommu, devid);
+- if (devid != alias) {
+- irq_lookup_table[alias] = table;
+- set_dte_irq_entry(alias, table);
+- iommu_flush_dte(iommu, alias);
+- }
++ table = new_table;
++ new_table = NULL;
+
+-out:
++ set_remap_table_entry(iommu, devid, table);
++ if (devid != alias)
++ set_remap_table_entry(iommu, alias, table);
++
++out_wait:
+ iommu_completion_wait(iommu);
+
+ out_unlock:
+- write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
++ spin_unlock_irqrestore(&iommu_table_lock, flags);
+
++ if (new_table) {
++ kmem_cache_free(amd_iommu_irq_cache, new_table->table);
++ kfree(new_table);
++ }
+ return table;
+ }
+
+@@ -3675,11 +3699,11 @@
+ if (!iommu)
+ return -ENODEV;
+
+- table = get_irq_table(devid, false);
++ table = alloc_irq_table(devid);
+ if (!table)
+ return -ENODEV;
+
+- spin_lock_irqsave(&table->lock, flags);
++ raw_spin_lock_irqsave(&table->lock, flags);
+
+ /* Scan table for free entries */
+ for (c = 0, index = table->min_index;
+@@ -3702,7 +3726,7 @@
+ index = -ENOSPC;
+
+ out:
+- spin_unlock_irqrestore(&table->lock, flags);
++ raw_spin_unlock_irqrestore(&table->lock, flags);
+
+ return index;
+ }
+@@ -3719,11 +3743,11 @@
+ if (iommu == NULL)
+ return -EINVAL;
+
+- table = get_irq_table(devid, false);
++ table = get_irq_table(devid);
+ if (!table)
+ return -ENOMEM;
+
+- spin_lock_irqsave(&table->lock, flags);
++ raw_spin_lock_irqsave(&table->lock, flags);
+
+ entry = (struct irte_ga *)table->table;
+ entry = &entry[index];
+@@ -3734,7 +3758,7 @@
+ if (data)
+ data->ref = entry;
+
+- spin_unlock_irqrestore(&table->lock, flags);
++ raw_spin_unlock_irqrestore(&table->lock, flags);
+
+ iommu_flush_irt(iommu, devid);
+ iommu_completion_wait(iommu);
+@@ -3752,13 +3776,13 @@
+ if (iommu == NULL)
+ return -EINVAL;
+
+- table = get_irq_table(devid, false);
++ table = get_irq_table(devid);
+ if (!table)
+ return -ENOMEM;
+
+- spin_lock_irqsave(&table->lock, flags);
++ raw_spin_lock_irqsave(&table->lock, flags);
+ table->table[index] = irte->val;
+- spin_unlock_irqrestore(&table->lock, flags);
++ raw_spin_unlock_irqrestore(&table->lock, flags);
+
+ iommu_flush_irt(iommu, devid);
+ iommu_completion_wait(iommu);
+@@ -3776,13 +3800,13 @@
+ if (iommu == NULL)
+ return;
+
+- table = get_irq_table(devid, false);
++ table = get_irq_table(devid);
+ if (!table)
+ return;
+
+- spin_lock_irqsave(&table->lock, flags);
++ raw_spin_lock_irqsave(&table->lock, flags);
+ iommu->irte_ops->clear_allocated(table, index);
+- spin_unlock_irqrestore(&table->lock, flags);
++ raw_spin_unlock_irqrestore(&table->lock, flags);
+
+ iommu_flush_irt(iommu, devid);
+ iommu_completion_wait(iommu);
+@@ -3863,10 +3887,8 @@
+ u8 vector, u32 dest_apicid)
+ {
+ struct irte_ga *irte = (struct irte_ga *) entry;
+- struct iommu_dev_data *dev_data = search_dev_data(devid);
+
+- if (!dev_data || !dev_data->use_vapic ||
+- !irte->lo.fields_remap.guest_mode) {
++ if (!irte->lo.fields_remap.guest_mode) {
+ irte->hi.fields.vector = vector;
+ irte->lo.fields_remap.destination = dest_apicid;
+ modify_irte_ga(devid, index, irte, NULL);
+@@ -4072,7 +4094,7 @@
+ struct amd_ir_data *data = NULL;
+ struct irq_cfg *cfg;
+ int i, ret, devid;
+- int index = -1;
++ int index;
+
+ if (!info)
+ return -EINVAL;
+@@ -4096,10 +4118,26 @@
+ return ret;
+
+ if (info->type == X86_IRQ_ALLOC_TYPE_IOAPIC) {
+- if (get_irq_table(devid, true))
++ struct irq_remap_table *table;
++ struct amd_iommu *iommu;
++
++ table = alloc_irq_table(devid);
++ if (table) {
++ if (!table->min_index) {
++ /*
++ * Keep the first 32 indexes free for IOAPIC
++ * interrupts.
++ */
++ table->min_index = 32;
++ iommu = amd_iommu_rlookup_table[devid];
++ for (i = 0; i < 32; ++i)
++ iommu->irte_ops->set_allocated(table, i);
++ }
++ WARN_ON(table->min_index != 32);
+ index = info->ioapic_pin;
+- else
+- ret = -ENOMEM;
++ } else {
++ index = -ENOMEM;
++ }
+ } else {
+ index = alloc_irq_index(devid, nr_irqs);
+ }
+@@ -4343,7 +4381,7 @@
+ {
+ unsigned long flags;
+ struct amd_iommu *iommu;
+- struct irq_remap_table *irt;
++ struct irq_remap_table *table;
+ struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
+ int devid = ir_data->irq_2_irte.devid;
+ struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
+@@ -4357,11 +4395,11 @@
+ if (!iommu)
+ return -ENODEV;
+
+- irt = get_irq_table(devid, false);
+- if (!irt)
++ table = get_irq_table(devid);
++ if (!table)
+ return -ENODEV;
+
+- spin_lock_irqsave(&irt->lock, flags);
++ raw_spin_lock_irqsave(&table->lock, flags);
+
+ if (ref->lo.fields_vapic.guest_mode) {
+ if (cpu >= 0)
+@@ -4370,7 +4408,7 @@
+ barrier();
+ }
+
+- spin_unlock_irqrestore(&irt->lock, flags);
++ raw_spin_unlock_irqrestore(&table->lock, flags);
+
+ iommu_flush_irt(iommu, devid);
+ iommu_completion_wait(iommu);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/iommu/amd_iommu_init.c linux-4.14/drivers/iommu/amd_iommu_init.c
+--- linux-4.14.orig/drivers/iommu/amd_iommu_init.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/iommu/amd_iommu_init.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1474,7 +1474,7 @@
+ {
+ int ret;
+
+- spin_lock_init(&iommu->lock);
++ raw_spin_lock_init(&iommu->lock);
+
+ /* Add IOMMU to internal data structures */
+ list_add_tail(&iommu->list, &amd_iommu_list);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/iommu/amd_iommu_types.h linux-4.14/drivers/iommu/amd_iommu_types.h
+--- linux-4.14.orig/drivers/iommu/amd_iommu_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/iommu/amd_iommu_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -406,7 +406,7 @@
+ #define IRQ_TABLE_ALIGNMENT 128
+
+ struct irq_remap_table {
+- spinlock_t lock;
++ raw_spinlock_t lock;
+ unsigned min_index;
+ u32 *table;
+ };
+@@ -488,7 +488,7 @@
+ int index;
+
+ /* locks the accesses to the hardware */
+- spinlock_t lock;
++ raw_spinlock_t lock;
+
+ /* Pointer to PCI device of this IOMMU */
+ struct pci_dev *dev;
+@@ -625,7 +625,7 @@
+ */
+ struct iommu_dev_data {
+ struct list_head list; /* For domain->dev_list */
+- struct list_head dev_data_list; /* For global dev_data_list */
++ struct llist_node dev_data_list; /* For global dev_data_list */
+ struct protection_domain *domain; /* Domain the device is bound to */
+ u16 devid; /* PCI Device ID */
+ u16 alias; /* Alias Device ID */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/iommu/iova.c linux-4.14/drivers/iommu/iova.c
+--- linux-4.14.orig/drivers/iommu/iova.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/iommu/iova.c 2018-09-05 11:05:07.000000000 +0200
+@@ -570,7 +570,7 @@
+ unsigned long pfn, unsigned long pages,
+ unsigned long data)
+ {
+- struct iova_fq *fq = get_cpu_ptr(iovad->fq);
++ struct iova_fq *fq = raw_cpu_ptr(iovad->fq);
+ unsigned long flags;
+ unsigned idx;
+
+@@ -600,8 +600,6 @@
+ if (atomic_cmpxchg(&iovad->fq_timer_on, 0, 1) == 0)
+ mod_timer(&iovad->fq_timer,
+ jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
+-
+- put_cpu_ptr(iovad->fq);
+ }
+ EXPORT_SYMBOL_GPL(queue_iova);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/leds/trigger/Kconfig linux-4.14/drivers/leds/trigger/Kconfig
+--- linux-4.14.orig/drivers/leds/trigger/Kconfig 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/leds/trigger/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -69,7 +69,7 @@
+
+ config LEDS_TRIGGER_CPU
+ bool "LED CPU Trigger"
+- depends on LEDS_TRIGGERS
++ depends on LEDS_TRIGGERS && !PREEMPT_RT_BASE
+ help
+ This allows LEDs to be controlled by active CPUs. This shows
+ the active CPUs across an array of LEDs so you can see which
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/md/bcache/Kconfig linux-4.14/drivers/md/bcache/Kconfig
+--- linux-4.14.orig/drivers/md/bcache/Kconfig 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/md/bcache/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -1,6 +1,7 @@
+
+ config BCACHE
+ tristate "Block device as cache"
++ depends on !PREEMPT_RT_FULL
+ ---help---
+ Allows a block device to be used as cache for other devices; uses
+ a btree for indexing and the layout is optimized for SSDs.
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/md/dm-rq.c linux-4.14/drivers/md/dm-rq.c
+--- linux-4.14.orig/drivers/md/dm-rq.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/md/dm-rq.c 2018-09-05 11:05:07.000000000 +0200
+@@ -671,7 +671,7 @@
+ /* Establish tio->ti before queuing work (map_tio_request) */
+ tio->ti = ti;
+ kthread_queue_work(&md->kworker, &tio->work);
+- BUG_ON(!irqs_disabled());
++ BUG_ON_NONRT(!irqs_disabled());
+ }
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/md/raid5.c linux-4.14/drivers/md/raid5.c
+--- linux-4.14.orig/drivers/md/raid5.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/md/raid5.c 2018-09-05 11:05:07.000000000 +0200
+@@ -410,7 +410,7 @@
+ md_wakeup_thread(conf->mddev->thread);
+ return;
+ slow_path:
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ /* we are ok here if STRIPE_ON_RELEASE_LIST is set or not */
+ if (atomic_dec_and_lock(&sh->count, &conf->device_lock)) {
+ INIT_LIST_HEAD(&list);
+@@ -419,7 +419,7 @@
+ spin_unlock(&conf->device_lock);
+ release_inactive_stripe_list(conf, &list, hash);
+ }
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ }
+
+ static inline void remove_hash(struct stripe_head *sh)
+@@ -2067,8 +2067,9 @@
+ struct raid5_percpu *percpu;
+ unsigned long cpu;
+
+- cpu = get_cpu();
++ cpu = get_cpu_light();
+ percpu = per_cpu_ptr(conf->percpu, cpu);
++ spin_lock(&percpu->lock);
+ if (test_bit(STRIPE_OP_BIOFILL, &ops_request)) {
+ ops_run_biofill(sh);
+ overlap_clear++;
+@@ -2127,7 +2128,8 @@
+ if (test_and_clear_bit(R5_Overlap, &dev->flags))
+ wake_up(&sh->raid_conf->wait_for_overlap);
+ }
+- put_cpu();
++ spin_unlock(&percpu->lock);
++ put_cpu_light();
+ }
+
+ static void free_stripe(struct kmem_cache *sc, struct stripe_head *sh)
+@@ -6775,6 +6777,7 @@
+ __func__, cpu);
+ return -ENOMEM;
+ }
++ spin_lock_init(&per_cpu_ptr(conf->percpu, cpu)->lock);
+ return 0;
+ }
+
+@@ -6785,7 +6788,6 @@
+ conf->percpu = alloc_percpu(struct raid5_percpu);
+ if (!conf->percpu)
+ return -ENOMEM;
+-
+ err = cpuhp_state_add_instance(CPUHP_MD_RAID5_PREPARE, &conf->node);
+ if (!err) {
+ conf->scribble_disks = max(conf->raid_disks,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/md/raid5.h linux-4.14/drivers/md/raid5.h
+--- linux-4.14.orig/drivers/md/raid5.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/md/raid5.h 2018-09-05 11:05:07.000000000 +0200
+@@ -624,6 +624,7 @@
+ int recovery_disabled;
+ /* per cpu variables */
+ struct raid5_percpu {
++ spinlock_t lock; /* Protection for -RT */
+ struct page *spare_page; /* Used when checking P/Q in raid6 */
+ struct flex_array *scribble; /* space for constructing buffer
+ * lists and performing address
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/mfd/atmel-smc.c linux-4.14/drivers/mfd/atmel-smc.c
+--- linux-4.14.orig/drivers/mfd/atmel-smc.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/mfd/atmel-smc.c 2018-09-05 11:05:07.000000000 +0200
+@@ -12,6 +12,7 @@
+ */
+
+ #include <linux/mfd/syscon/atmel-smc.h>
++#include <linux/string.h>
+
+ /**
+ * atmel_smc_cs_conf_init - initialize a SMC CS conf
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/misc/Kconfig linux-4.14/drivers/misc/Kconfig
+--- linux-4.14.orig/drivers/misc/Kconfig 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/misc/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -54,6 +54,7 @@
+ config ATMEL_TCLIB
+ bool "Atmel AT32/AT91 Timer/Counter Library"
+ depends on (AVR32 || ARCH_AT91)
++ default y if PREEMPT_RT_FULL
+ help
+ Select this if you want a library to allocate the Timer/Counter
+ blocks found on many Atmel processors. This facilitates using
+@@ -69,8 +70,7 @@
+ are combined to make a single 32-bit timer.
+
+ When GENERIC_CLOCKEVENTS is defined, the third timer channel
+- may be used as a clock event device supporting oneshot mode
+- (delays of up to two seconds) based on the 32 KiHz clock.
++ may be used as a clock event device supporting oneshot mode.
+
+ config ATMEL_TCB_CLKSRC_BLOCK
+ int
+@@ -84,6 +84,15 @@
+ TC can be used for other purposes, such as PWM generation and
+ interval timing.
+
++config ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK
++ bool "TC Block use 32 KiHz clock"
++ depends on ATMEL_TCB_CLKSRC
++ default y if !PREEMPT_RT_FULL
++ help
++	  Select this to use the 32 KiHz base clock rate as the TC block
++	  clock source for clock events.
++
++
+ config DUMMY_IRQ
+ tristate "Dummy IRQ handler"
+ default n
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/mmc/host/mmci.c linux-4.14/drivers/mmc/host/mmci.c
+--- linux-4.14.orig/drivers/mmc/host/mmci.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/mmc/host/mmci.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1200,15 +1200,12 @@
+ struct sg_mapping_iter *sg_miter = &host->sg_miter;
+ struct variant_data *variant = host->variant;
+ void __iomem *base = host->base;
+- unsigned long flags;
+ u32 status;
+
+ status = readl(base + MMCISTATUS);
+
+ dev_dbg(mmc_dev(host->mmc), "irq1 (pio) %08x\n", status);
+
+- local_irq_save(flags);
+-
+ do {
+ unsigned int remain, len;
+ char *buffer;
+@@ -1248,8 +1245,6 @@
+
+ sg_miter_stop(sg_miter);
+
+- local_irq_restore(flags);
+-
+ /*
+ * If we have less than the fifo 'half-full' threshold to transfer,
+ * trigger a PIO interrupt as soon as any data is available.
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/net/ethernet/3com/3c59x.c linux-4.14/drivers/net/ethernet/3com/3c59x.c
+--- linux-4.14.orig/drivers/net/ethernet/3com/3c59x.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/net/ethernet/3com/3c59x.c 2018-09-05 11:05:07.000000000 +0200
+@@ -842,9 +842,9 @@
+ {
+ struct vortex_private *vp = netdev_priv(dev);
+ unsigned long flags;
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ (vp->full_bus_master_rx ? boomerang_interrupt:vortex_interrupt)(dev->irq,dev);
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ }
+ #endif
+
+@@ -1908,12 +1908,12 @@
+ * Block interrupts because vortex_interrupt does a bare spin_lock()
+ */
+ unsigned long flags;
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ if (vp->full_bus_master_tx)
+ boomerang_interrupt(dev->irq, dev);
+ else
+ vortex_interrupt(dev->irq, dev);
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ }
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/net/ethernet/marvell/mvpp2.c linux-4.14/drivers/net/ethernet/marvell/mvpp2.c
+--- linux-4.14.orig/drivers/net/ethernet/marvell/mvpp2.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/net/ethernet/marvell/mvpp2.c 2018-09-05 11:05:07.000000000 +0200
+@@ -830,9 +830,8 @@
+ /* Per-CPU port control */
+ struct mvpp2_port_pcpu {
+ struct hrtimer tx_done_timer;
++ struct net_device *dev;
+ bool timer_scheduled;
+- /* Tasklet for egress finalization */
+- struct tasklet_struct tx_done_tasklet;
+ };
+
+ struct mvpp2_queue_vector {
+@@ -5954,46 +5953,34 @@
+ }
+ }
+
+-static void mvpp2_timer_set(struct mvpp2_port_pcpu *port_pcpu)
+-{
+- ktime_t interval;
+-
+- if (!port_pcpu->timer_scheduled) {
+- port_pcpu->timer_scheduled = true;
+- interval = MVPP2_TXDONE_HRTIMER_PERIOD_NS;
+- hrtimer_start(&port_pcpu->tx_done_timer, interval,
+- HRTIMER_MODE_REL_PINNED);
+- }
+-}
+-
+-static void mvpp2_tx_proc_cb(unsigned long data)
++static enum hrtimer_restart mvpp2_hr_timer_cb(struct hrtimer *timer)
+ {
+- struct net_device *dev = (struct net_device *)data;
+- struct mvpp2_port *port = netdev_priv(dev);
+- struct mvpp2_port_pcpu *port_pcpu = this_cpu_ptr(port->pcpu);
++ struct net_device *dev;
++ struct mvpp2_port *port;
++ struct mvpp2_port_pcpu *port_pcpu;
+ unsigned int tx_todo, cause;
+
++ port_pcpu = container_of(timer, struct mvpp2_port_pcpu, tx_done_timer);
++ dev = port_pcpu->dev;
++
+ if (!netif_running(dev))
+- return;
++ return HRTIMER_NORESTART;
++
+ port_pcpu->timer_scheduled = false;
++ port = netdev_priv(dev);
+
+ /* Process all the Tx queues */
+ cause = (1 << port->ntxqs) - 1;
+ tx_todo = mvpp2_tx_done(port, cause, smp_processor_id());
+
+ /* Set the timer in case not all the packets were processed */
+- if (tx_todo)
+- mvpp2_timer_set(port_pcpu);
+-}
+-
+-static enum hrtimer_restart mvpp2_hr_timer_cb(struct hrtimer *timer)
+-{
+- struct mvpp2_port_pcpu *port_pcpu = container_of(timer,
+- struct mvpp2_port_pcpu,
+- tx_done_timer);
+-
+- tasklet_schedule(&port_pcpu->tx_done_tasklet);
++ if (tx_todo && !port_pcpu->timer_scheduled) {
++ port_pcpu->timer_scheduled = true;
++ hrtimer_forward_now(&port_pcpu->tx_done_timer,
++ MVPP2_TXDONE_HRTIMER_PERIOD_NS);
+
++ return HRTIMER_RESTART;
++ }
+ return HRTIMER_NORESTART;
+ }
+
+@@ -6482,7 +6469,12 @@
+ txq_pcpu->count > 0) {
+ struct mvpp2_port_pcpu *port_pcpu = this_cpu_ptr(port->pcpu);
+
+- mvpp2_timer_set(port_pcpu);
++ if (!port_pcpu->timer_scheduled) {
++ port_pcpu->timer_scheduled = true;
++ hrtimer_start(&port_pcpu->tx_done_timer,
++ MVPP2_TXDONE_HRTIMER_PERIOD_NS,
++ HRTIMER_MODE_REL_PINNED_SOFT);
++ }
+ }
+
+ return NETDEV_TX_OK;
+@@ -6871,7 +6863,6 @@
+
+ hrtimer_cancel(&port_pcpu->tx_done_timer);
+ port_pcpu->timer_scheduled = false;
+- tasklet_kill(&port_pcpu->tx_done_tasklet);
+ }
+ }
+ mvpp2_cleanup_rxqs(port);
+@@ -7644,13 +7635,10 @@
+ port_pcpu = per_cpu_ptr(port->pcpu, cpu);
+
+ hrtimer_init(&port_pcpu->tx_done_timer, CLOCK_MONOTONIC,
+- HRTIMER_MODE_REL_PINNED);
++ HRTIMER_MODE_REL_PINNED_SOFT);
+ port_pcpu->tx_done_timer.function = mvpp2_hr_timer_cb;
+ port_pcpu->timer_scheduled = false;
+-
+- tasklet_init(&port_pcpu->tx_done_tasklet,
+- mvpp2_tx_proc_cb,
+- (unsigned long)dev);
++ port_pcpu->dev = dev;
+ }
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/net/wireless/intersil/orinoco/orinoco_usb.c linux-4.14/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
+--- linux-4.14.orig/drivers/net/wireless/intersil/orinoco/orinoco_usb.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/net/wireless/intersil/orinoco/orinoco_usb.c 2018-09-05 11:05:07.000000000 +0200
+@@ -697,7 +697,7 @@
+ while (!ctx->done.done && msecs--)
+ udelay(1000);
+ } else {
+- wait_event_interruptible(ctx->done.wait,
++ swait_event_interruptible(ctx->done.wait,
+ ctx->done.done);
+ }
+ break;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/net/wireless/mac80211_hwsim.c linux-4.14/drivers/net/wireless/mac80211_hwsim.c
+--- linux-4.14.orig/drivers/net/wireless/mac80211_hwsim.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/net/wireless/mac80211_hwsim.c 2018-09-05 11:05:07.000000000 +0200
+@@ -537,7 +537,7 @@
+ unsigned int rx_filter;
+ bool started, idle, scanning;
+ struct mutex mutex;
+- struct tasklet_hrtimer beacon_timer;
++ struct hrtimer beacon_timer;
+ enum ps_mode {
+ PS_DISABLED, PS_ENABLED, PS_AUTO_POLL, PS_MANUAL_POLL
+ } ps;
+@@ -1423,7 +1423,7 @@
+ {
+ struct mac80211_hwsim_data *data = hw->priv;
+ data->started = false;
+- tasklet_hrtimer_cancel(&data->beacon_timer);
++ hrtimer_cancel(&data->beacon_timer);
+ wiphy_debug(hw->wiphy, "%s\n", __func__);
+ }
+
+@@ -1546,14 +1546,12 @@
+ mac80211_hwsim_beacon(struct hrtimer *timer)
+ {
+ struct mac80211_hwsim_data *data =
+- container_of(timer, struct mac80211_hwsim_data,
+- beacon_timer.timer);
++ container_of(timer, struct mac80211_hwsim_data, beacon_timer);
+ struct ieee80211_hw *hw = data->hw;
+ u64 bcn_int = data->beacon_int;
+- ktime_t next_bcn;
+
+ if (!data->started)
+- goto out;
++ return HRTIMER_NORESTART;
+
+ ieee80211_iterate_active_interfaces_atomic(
+ hw, IEEE80211_IFACE_ITER_NORMAL,
+@@ -1565,11 +1563,9 @@
+ data->bcn_delta = 0;
+ }
+
+- next_bcn = ktime_add(hrtimer_get_expires(timer),
+- ns_to_ktime(bcn_int * 1000));
+- tasklet_hrtimer_start(&data->beacon_timer, next_bcn, HRTIMER_MODE_ABS);
+-out:
+- return HRTIMER_NORESTART;
++ hrtimer_forward(&data->beacon_timer, hrtimer_get_expires(timer),
++ ns_to_ktime(bcn_int * NSEC_PER_USEC));
++ return HRTIMER_RESTART;
+ }
+
+ static const char * const hwsim_chanwidths[] = {
+@@ -1643,15 +1639,15 @@
+ mutex_unlock(&data->mutex);
+
+ if (!data->started || !data->beacon_int)
+- tasklet_hrtimer_cancel(&data->beacon_timer);
+- else if (!hrtimer_is_queued(&data->beacon_timer.timer)) {
++ hrtimer_cancel(&data->beacon_timer);
++ else if (!hrtimer_is_queued(&data->beacon_timer)) {
+ u64 tsf = mac80211_hwsim_get_tsf(hw, NULL);
+ u32 bcn_int = data->beacon_int;
+ u64 until_tbtt = bcn_int - do_div(tsf, bcn_int);
+
+- tasklet_hrtimer_start(&data->beacon_timer,
+- ns_to_ktime(until_tbtt * 1000),
+- HRTIMER_MODE_REL);
++ hrtimer_start(&data->beacon_timer,
++ ns_to_ktime(until_tbtt * 1000),
++ HRTIMER_MODE_REL_SOFT);
+ }
+
+ return 0;
+@@ -1714,7 +1710,7 @@
+ info->enable_beacon, info->beacon_int);
+ vp->bcn_en = info->enable_beacon;
+ if (data->started &&
+- !hrtimer_is_queued(&data->beacon_timer.timer) &&
++ !hrtimer_is_queued(&data->beacon_timer) &&
+ info->enable_beacon) {
+ u64 tsf, until_tbtt;
+ u32 bcn_int;
+@@ -1722,9 +1718,9 @@
+ tsf = mac80211_hwsim_get_tsf(hw, vif);
+ bcn_int = data->beacon_int;
+ until_tbtt = bcn_int - do_div(tsf, bcn_int);
+- tasklet_hrtimer_start(&data->beacon_timer,
+- ns_to_ktime(until_tbtt * 1000),
+- HRTIMER_MODE_REL);
++ hrtimer_start(&data->beacon_timer,
++ ns_to_ktime(until_tbtt * 1000),
++ HRTIMER_MODE_REL_SOFT);
+ } else if (!info->enable_beacon) {
+ unsigned int count = 0;
+ ieee80211_iterate_active_interfaces_atomic(
+@@ -1733,7 +1729,7 @@
+ wiphy_debug(hw->wiphy, " beaconing vifs remaining: %u",
+ count);
+ if (count == 0) {
+- tasklet_hrtimer_cancel(&data->beacon_timer);
++ hrtimer_cancel(&data->beacon_timer);
+ data->beacon_int = 0;
+ }
+ }
+@@ -2725,9 +2721,9 @@
+ data->debugfs,
+ data, &hwsim_simulate_radar);
+
+- tasklet_hrtimer_init(&data->beacon_timer,
+- mac80211_hwsim_beacon,
+- CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
++ hrtimer_init(&data->beacon_timer, CLOCK_MONOTONIC,
++ HRTIMER_MODE_ABS_SOFT);
++ data->beacon_timer.function = mac80211_hwsim_beacon;
+
+ spin_lock_bh(&hwsim_radio_lock);
+ list_add_tail(&data->list, &hwsim_radios);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/pci/switch/switchtec.c linux-4.14/drivers/pci/switch/switchtec.c
+--- linux-4.14.orig/drivers/pci/switch/switchtec.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/pci/switch/switchtec.c 2018-09-05 11:05:07.000000000 +0200
+@@ -306,10 +306,11 @@
+
+ enum mrpc_state state;
+
+- struct completion comp;
++ wait_queue_head_t cmd_comp;
+ struct kref kref;
+ struct list_head list;
+
++ bool cmd_done;
+ u32 cmd;
+ u32 status;
+ u32 return_code;
+@@ -331,7 +332,7 @@
+ stuser->stdev = stdev;
+ kref_init(&stuser->kref);
+ INIT_LIST_HEAD(&stuser->list);
+- init_completion(&stuser->comp);
++ init_waitqueue_head(&stuser->cmd_comp);
+ stuser->event_cnt = atomic_read(&stdev->event_cnt);
+
+ dev_dbg(&stdev->dev, "%s: %p\n", __func__, stuser);
+@@ -414,7 +415,7 @@
+ kref_get(&stuser->kref);
+ stuser->read_len = sizeof(stuser->data);
+ stuser_set_state(stuser, MRPC_QUEUED);
+- init_completion(&stuser->comp);
++ stuser->cmd_done = false;
+ list_add_tail(&stuser->list, &stdev->mrpc_queue);
+
+ mrpc_cmd_submit(stdev);
+@@ -451,7 +452,8 @@
+ stuser->read_len);
+
+ out:
+- complete_all(&stuser->comp);
++ stuser->cmd_done = true;
++ wake_up_interruptible(&stuser->cmd_comp);
+ list_del_init(&stuser->list);
+ stuser_put(stuser);
+ stdev->mrpc_busy = 0;
+@@ -721,10 +723,11 @@
+ mutex_unlock(&stdev->mrpc_mutex);
+
+ if (filp->f_flags & O_NONBLOCK) {
+- if (!try_wait_for_completion(&stuser->comp))
++ if (!READ_ONCE(stuser->cmd_done))
+ return -EAGAIN;
+ } else {
+- rc = wait_for_completion_interruptible(&stuser->comp);
++ rc = wait_event_interruptible(stuser->cmd_comp,
++ stuser->cmd_done);
+ if (rc < 0)
+ return rc;
+ }
+@@ -772,7 +775,7 @@
+ struct switchtec_dev *stdev = stuser->stdev;
+ int ret = 0;
+
+- poll_wait(filp, &stuser->comp.wait, wait);
++ poll_wait(filp, &stuser->cmd_comp, wait);
+ poll_wait(filp, &stdev->event_wq, wait);
+
+ if (lock_mutex_and_test_alive(stdev))
+@@ -780,7 +783,7 @@
+
+ mutex_unlock(&stdev->mrpc_mutex);
+
+- if (try_wait_for_completion(&stuser->comp))
++ if (READ_ONCE(stuser->cmd_done))
+ ret |= POLLIN | POLLRDNORM;
+
+ if (stuser->event_cnt != atomic_read(&stdev->event_cnt))
+@@ -1255,7 +1258,8 @@
+
+ /* Wake up and kill any users waiting on an MRPC request */
+ list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) {
+- complete_all(&stuser->comp);
++ stuser->cmd_done = true;
++ wake_up_interruptible(&stuser->cmd_comp);
+ list_del_init(&stuser->list);
+ stuser_put(stuser);
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/scsi/fcoe/fcoe.c linux-4.14/drivers/scsi/fcoe/fcoe.c
+--- linux-4.14.orig/drivers/scsi/fcoe/fcoe.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/scsi/fcoe/fcoe.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1464,11 +1464,11 @@
+ static int fcoe_alloc_paged_crc_eof(struct sk_buff *skb, int tlen)
+ {
+ struct fcoe_percpu_s *fps;
+- int rc;
++ int rc, cpu = get_cpu_light();
+
+- fps = &get_cpu_var(fcoe_percpu);
++ fps = &per_cpu(fcoe_percpu, cpu);
+ rc = fcoe_get_paged_crc_eof(skb, tlen, fps);
+- put_cpu_var(fcoe_percpu);
++ put_cpu_light();
+
+ return rc;
+ }
+@@ -1655,11 +1655,11 @@
+ return 0;
+ }
+
+- stats = per_cpu_ptr(lport->stats, get_cpu());
++ stats = per_cpu_ptr(lport->stats, get_cpu_light());
+ stats->InvalidCRCCount++;
+ if (stats->InvalidCRCCount < 5)
+ printk(KERN_WARNING "fcoe: dropping frame with CRC error\n");
+- put_cpu();
++ put_cpu_light();
+ return -EINVAL;
+ }
+
+@@ -1702,7 +1702,7 @@
+ */
+ hp = (struct fcoe_hdr *) skb_network_header(skb);
+
+- stats = per_cpu_ptr(lport->stats, get_cpu());
++ stats = per_cpu_ptr(lport->stats, get_cpu_light());
+ if (unlikely(FC_FCOE_DECAPS_VER(hp) != FC_FCOE_VER)) {
+ if (stats->ErrorFrames < 5)
+ printk(KERN_WARNING "fcoe: FCoE version "
+@@ -1734,13 +1734,13 @@
+ goto drop;
+
+ if (!fcoe_filter_frames(lport, fp)) {
+- put_cpu();
++ put_cpu_light();
+ fc_exch_recv(lport, fp);
+ return;
+ }
+ drop:
+ stats->ErrorFrames++;
+- put_cpu();
++ put_cpu_light();
+ kfree_skb(skb);
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/scsi/fcoe/fcoe_ctlr.c linux-4.14/drivers/scsi/fcoe/fcoe_ctlr.c
+--- linux-4.14.orig/drivers/scsi/fcoe/fcoe_ctlr.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/scsi/fcoe/fcoe_ctlr.c 2018-09-05 11:05:07.000000000 +0200
+@@ -835,7 +835,7 @@
+
+ INIT_LIST_HEAD(&del_list);
+
+- stats = per_cpu_ptr(fip->lp->stats, get_cpu());
++ stats = per_cpu_ptr(fip->lp->stats, get_cpu_light());
+
+ list_for_each_entry_safe(fcf, next, &fip->fcfs, list) {
+ deadline = fcf->time + fcf->fka_period + fcf->fka_period / 2;
+@@ -871,7 +871,7 @@
+ sel_time = fcf->time;
+ }
+ }
+- put_cpu();
++ put_cpu_light();
+
+ list_for_each_entry_safe(fcf, next, &del_list, list) {
+ /* Removes fcf from current list */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/scsi/libfc/fc_exch.c linux-4.14/drivers/scsi/libfc/fc_exch.c
+--- linux-4.14.orig/drivers/scsi/libfc/fc_exch.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/scsi/libfc/fc_exch.c 2018-09-05 11:05:07.000000000 +0200
+@@ -833,10 +833,10 @@
+ }
+ memset(ep, 0, sizeof(*ep));
+
+- cpu = get_cpu();
++ cpu = get_cpu_light();
+ pool = per_cpu_ptr(mp->pool, cpu);
+ spin_lock_bh(&pool->lock);
+- put_cpu();
++ put_cpu_light();
+
+ /* peek cache of free slot */
+ if (pool->left != FC_XID_UNKNOWN) {
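The FCoE and libfc hunks above replace get_cpu()/put_cpu() (and get_cpu_var()/put_cpu_var()) with get_cpu_light()/put_cpu_light(), which on PREEMPT_RT are meant to only disable migration so the per-CPU statistics update stays preemptible. A hedged sketch of the caller pattern; the !RT fallback defines below are an assumption for illustration only, the real definitions come from elsewhere in this patch set:

    #include <linux/percpu.h>
    #include <linux/smp.h>

    #ifndef CONFIG_PREEMPT_RT_FULL
    /* assumed fallback for illustration: behave like the classic helpers */
    # define get_cpu_light()        get_cpu()
    # define put_cpu_light()        put_cpu()
    #endif

    struct demo_stats {
            unsigned long events;
    };

    static void demo_count_event(struct demo_stats __percpu *stats)
    {
            int cpu = get_cpu_light();      /* stay on this CPU, preemptible on RT */

            per_cpu_ptr(stats, cpu)->events++;
            put_cpu_light();
    }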
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/scsi/libsas/sas_ata.c linux-4.14/drivers/scsi/libsas/sas_ata.c
+--- linux-4.14.orig/drivers/scsi/libsas/sas_ata.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/scsi/libsas/sas_ata.c 2018-09-05 11:05:07.000000000 +0200
+@@ -190,7 +190,7 @@
+ /* TODO: audit callers to ensure they are ready for qc_issue to
+ * unconditionally re-enable interrupts
+ */
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ spin_unlock(ap->lock);
+
+ /* If the device fell off, no sense in issuing commands */
+@@ -252,7 +252,7 @@
+
+ out:
+ spin_lock(ap->lock);
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ return ret;
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/scsi/qla2xxx/qla_inline.h linux-4.14/drivers/scsi/qla2xxx/qla_inline.h
+--- linux-4.14.orig/drivers/scsi/qla2xxx/qla_inline.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/scsi/qla2xxx/qla_inline.h 2018-09-05 11:05:07.000000000 +0200
+@@ -59,12 +59,12 @@
+ {
+ unsigned long flags;
+ struct qla_hw_data *ha = rsp->hw;
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ if (IS_P3P_TYPE(ha))
+ qla82xx_poll(0, rsp);
+ else
+ ha->isp_ops->intr_handler(0, rsp);
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ }
+
+ static inline uint8_t *
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/staging/greybus/audio_manager.c linux-4.14/drivers/staging/greybus/audio_manager.c
+--- linux-4.14.orig/drivers/staging/greybus/audio_manager.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/staging/greybus/audio_manager.c 2018-09-05 11:05:07.000000000 +0200
+@@ -10,7 +10,7 @@
+ #include <linux/sysfs.h>
+ #include <linux/module.h>
+ #include <linux/init.h>
+-#include <linux/rwlock.h>
++#include <linux/spinlock.h>
+ #include <linux/idr.h>
+
+ #include "audio_manager.h"
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/target/target_core_tmr.c linux-4.14/drivers/target/target_core_tmr.c
+--- linux-4.14.orig/drivers/target/target_core_tmr.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/target/target_core_tmr.c 2018-09-05 11:05:07.000000000 +0200
+@@ -114,8 +114,6 @@
+ {
+ struct se_session *sess = se_cmd->se_sess;
+
+- assert_spin_locked(&sess->sess_cmd_lock);
+- WARN_ON_ONCE(!irqs_disabled());
+ /*
+ * If command already reached CMD_T_COMPLETE state within
+ * target_complete_cmd() or CMD_T_FABRIC_STOP due to shutdown,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/target/target_core_transport.c linux-4.14/drivers/target/target_core_transport.c
+--- linux-4.14.orig/drivers/target/target_core_transport.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/target/target_core_transport.c 2018-09-05 11:05:07.000000000 +0200
+@@ -2966,9 +2966,6 @@
+ __acquires(&cmd->t_state_lock)
+ {
+
+- assert_spin_locked(&cmd->t_state_lock);
+- WARN_ON_ONCE(!irqs_disabled());
+-
+ if (fabric_stop)
+ cmd->transport_state |= CMD_T_FABRIC_STOP;
+
+@@ -3238,9 +3235,6 @@
+ {
+ int ret;
+
+- assert_spin_locked(&cmd->t_state_lock);
+- WARN_ON_ONCE(!irqs_disabled());
+-
+ if (!(cmd->transport_state & CMD_T_ABORTED))
+ return 0;
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/thermal/x86_pkg_temp_thermal.c linux-4.14/drivers/thermal/x86_pkg_temp_thermal.c
+--- linux-4.14.orig/drivers/thermal/x86_pkg_temp_thermal.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/thermal/x86_pkg_temp_thermal.c 2018-09-05 11:05:07.000000000 +0200
+@@ -29,6 +29,7 @@
+ #include <linux/pm.h>
+ #include <linux/thermal.h>
+ #include <linux/debugfs.h>
++#include <linux/swork.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/mce.h>
+
+@@ -329,7 +330,7 @@
+ schedule_delayed_work_on(cpu, work, ms);
+ }
+
+-static int pkg_thermal_notify(u64 msr_val)
++static void pkg_thermal_notify_work(struct swork_event *event)
+ {
+ int cpu = smp_processor_id();
+ struct pkg_device *pkgdev;
+@@ -348,8 +349,46 @@
+ }
+
+ spin_unlock_irqrestore(&pkg_temp_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_RT_FULL
++static struct swork_event notify_work;
++
++static int pkg_thermal_notify_work_init(void)
++{
++ int err;
++
++ err = swork_get();
++ if (err)
++ return err;
++
++ INIT_SWORK(&notify_work, pkg_thermal_notify_work);
++ return 0;
++}
++
++static void pkg_thermal_notify_work_cleanup(void)
++{
++ swork_put();
++}
++
++static int pkg_thermal_notify(u64 msr_val)
++{
++ swork_queue(&notify_work);
++ return 0;
++}
++
++#else /* !CONFIG_PREEMPT_RT_FULL */
++
++static int pkg_thermal_notify_work_init(void) { return 0; }
++
++static void pkg_thermal_notify_work_cleanup(void) { }
++
++static int pkg_thermal_notify(u64 msr_val)
++{
++ pkg_thermal_notify_work(NULL);
+ return 0;
+ }
++#endif /* CONFIG_PREEMPT_RT_FULL */
+
+ static int pkg_temp_thermal_device_add(unsigned int cpu)
+ {
+@@ -515,10 +554,15 @@
+ if (!x86_match_cpu(pkg_temp_thermal_ids))
+ return -ENODEV;
+
++ if (pkg_thermal_notify_work_init())
++ return -ENODEV;
++
+ max_packages = topology_max_packages();
+ packages = kzalloc(max_packages * sizeof(struct pkg_device *), GFP_KERNEL);
+- if (!packages)
+- return -ENOMEM;
++ if (!packages) {
++ ret = -ENOMEM;
++ goto err;
++ }
+
+ ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "thermal/x86_pkg:online",
+ pkg_thermal_cpu_online, pkg_thermal_cpu_offline);
+@@ -536,6 +580,7 @@
+ return 0;
+
+ err:
++ pkg_thermal_notify_work_cleanup();
+ kfree(packages);
+ return ret;
+ }
+@@ -549,6 +594,7 @@
+ cpuhp_remove_state(pkg_thermal_hp_state);
+ debugfs_remove_recursive(debugfs);
+ kfree(packages);
++ pkg_thermal_notify_work_cleanup();
+ }
+ module_exit(pkg_temp_thermal_exit)
+
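The thermal hunk above moves the MSR threshold notification out of hard interrupt context by queueing a simple-work (swork) item when PREEMPT_RT_FULL is enabled. A condensed sketch of the same deferral pattern using only the swork calls visible in the hunk (swork_get/INIT_SWORK/swork_queue/swork_put); the demo_ names are illustrative:

    #include <linux/swork.h>

    static struct swork_event demo_event;

    static void demo_work_fn(struct swork_event *event)
    {
            /* runs later in the swork worker thread, not in hard irq context */
    }

    static int demo_init(void)
    {
            int err = swork_get();          /* bring up the swork infrastructure */

            if (err)
                    return err;
            INIT_SWORK(&demo_event, demo_work_fn);
            return 0;
    }

    /* may be called from the original (hard irq) notification path */
    static void demo_notify(void)
    {
            swork_queue(&demo_event);
    }

    static void demo_exit(void)
    {
            swork_put();                    /* drop our reference on the worker */
    }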
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/tty/serial/8250/8250_core.c linux-4.14/drivers/tty/serial/8250/8250_core.c
+--- linux-4.14.orig/drivers/tty/serial/8250/8250_core.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/tty/serial/8250/8250_core.c 2018-09-05 11:05:07.000000000 +0200
+@@ -58,7 +58,16 @@
+
+ static unsigned int skip_txen_test; /* force skip of txen test at init time */
+
+-#define PASS_LIMIT 512
++/*
++ * On -rt we can have a more delays, and legitimately
++ * so - so don't drop work spuriously and spam the
++ * syslog:
++ */
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define PASS_LIMIT 1000000
++#else
++# define PASS_LIMIT 512
++#endif
+
+ #include <asm/serial.h>
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/tty/serial/8250/8250_port.c linux-4.14/drivers/tty/serial/8250/8250_port.c
+--- linux-4.14.orig/drivers/tty/serial/8250/8250_port.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/tty/serial/8250/8250_port.c 2018-09-05 11:05:07.000000000 +0200
+@@ -35,6 +35,7 @@
+ #include <linux/nmi.h>
+ #include <linux/mutex.h>
+ #include <linux/slab.h>
++#include <linux/kdb.h>
+ #include <linux/uaccess.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/ktime.h>
+@@ -3224,9 +3225,9 @@
+
+ serial8250_rpm_get(up);
+
+- if (port->sysrq)
++ if (port->sysrq || oops_in_progress)
+ locked = 0;
+- else if (oops_in_progress)
++ else if (in_kdb_printk())
+ locked = spin_trylock_irqsave(&port->lock, flags);
+ else
+ spin_lock_irqsave(&port->lock, flags);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/tty/serial/amba-pl011.c linux-4.14/drivers/tty/serial/amba-pl011.c
+--- linux-4.14.orig/drivers/tty/serial/amba-pl011.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/tty/serial/amba-pl011.c 2018-09-05 11:05:07.000000000 +0200
+@@ -2236,13 +2236,19 @@
+
+ clk_enable(uap->clk);
+
+- local_irq_save(flags);
++ /*
++ * local_irq_save(flags);
++ *
++ * This local_irq_save() is nonsense. If we come in via sysrq
++ * handling then interrupts are already disabled. Aside of
++ * that the port.sysrq check is racy on SMP regardless.
++ */
+ if (uap->port.sysrq)
+ locked = 0;
+ else if (oops_in_progress)
+- locked = spin_trylock(&uap->port.lock);
++ locked = spin_trylock_irqsave(&uap->port.lock, flags);
+ else
+- spin_lock(&uap->port.lock);
++ spin_lock_irqsave(&uap->port.lock, flags);
+
+ /*
+ * First save the CR then disable the interrupts
+@@ -2268,8 +2274,7 @@
+ pl011_write(old_cr, uap, REG_CR);
+
+ if (locked)
+- spin_unlock(&uap->port.lock);
+- local_irq_restore(flags);
++ spin_unlock_irqrestore(&uap->port.lock, flags);
+
+ clk_disable(uap->clk);
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/tty/serial/omap-serial.c linux-4.14/drivers/tty/serial/omap-serial.c
+--- linux-4.14.orig/drivers/tty/serial/omap-serial.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/tty/serial/omap-serial.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1311,13 +1311,10 @@
+
+ pm_runtime_get_sync(up->dev);
+
+- local_irq_save(flags);
+- if (up->port.sysrq)
+- locked = 0;
+- else if (oops_in_progress)
+- locked = spin_trylock(&up->port.lock);
++ if (up->port.sysrq || oops_in_progress)
++ locked = spin_trylock_irqsave(&up->port.lock, flags);
+ else
+- spin_lock(&up->port.lock);
++ spin_lock_irqsave(&up->port.lock, flags);
+
+ /*
+ * First save the IER then disable the interrupts
+@@ -1346,8 +1343,7 @@
+ pm_runtime_mark_last_busy(up->dev);
+ pm_runtime_put_autosuspend(up->dev);
+ if (locked)
+- spin_unlock(&up->port.lock);
+- local_irq_restore(flags);
++ spin_unlock_irqrestore(&up->port.lock, flags);
+ }
+
+ static int __init
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/usb/core/hcd.c linux-4.14/drivers/usb/core/hcd.c
+--- linux-4.14.orig/drivers/usb/core/hcd.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/usb/core/hcd.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1775,9 +1775,9 @@
+ * and no one may trigger the above deadlock situation when
+ * running complete() in tasklet.
+ */
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ urb->complete(urb);
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+
+ usb_anchor_resume_wakeups(anchor);
+ atomic_dec(&urb->use_count);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/usb/gadget/function/f_fs.c linux-4.14/drivers/usb/gadget/function/f_fs.c
+--- linux-4.14.orig/drivers/usb/gadget/function/f_fs.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/drivers/usb/gadget/function/f_fs.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1623,7 +1623,7 @@
+ pr_info("%s(): freeing\n", __func__);
+ ffs_data_clear(ffs);
+ BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
+- waitqueue_active(&ffs->ep0req_completion.wait) ||
++ swait_active(&ffs->ep0req_completion.wait) ||
+ waitqueue_active(&ffs->wait));
+ destroy_workqueue(ffs->io_completion_wq);
+ kfree(ffs->dev_name);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/usb/gadget/function/f_ncm.c linux-4.14/drivers/usb/gadget/function/f_ncm.c
+--- linux-4.14.orig/drivers/usb/gadget/function/f_ncm.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/usb/gadget/function/f_ncm.c 2018-09-05 11:05:07.000000000 +0200
+@@ -77,9 +77,7 @@
+ struct sk_buff *skb_tx_ndp;
+ u16 ndp_dgram_count;
+ bool timer_force_tx;
+- struct tasklet_struct tx_tasklet;
+ struct hrtimer task_timer;
+-
+ bool timer_stopping;
+ };
+
+@@ -1108,7 +1106,7 @@
+
+ /* Delay the timer. */
+ hrtimer_start(&ncm->task_timer, TX_TIMEOUT_NSECS,
+- HRTIMER_MODE_REL);
++ HRTIMER_MODE_REL_SOFT);
+
+ /* Add the datagram position entries */
+ ntb_ndp = skb_put_zero(ncm->skb_tx_ndp, dgram_idx_len);
+@@ -1152,17 +1150,15 @@
+ }
+
+ /*
+- * This transmits the NTB if there are frames waiting.
++ * The transmit should only be run if no skb data has been sent
++ * for a certain duration.
+ */
+-static void ncm_tx_tasklet(unsigned long data)
++static enum hrtimer_restart ncm_tx_timeout(struct hrtimer *data)
+ {
+- struct f_ncm *ncm = (void *)data;
+-
+- if (ncm->timer_stopping)
+- return;
++ struct f_ncm *ncm = container_of(data, struct f_ncm, task_timer);
+
+ /* Only send if data is available. */
+- if (ncm->skb_tx_data) {
++ if (!ncm->timer_stopping && ncm->skb_tx_data) {
+ ncm->timer_force_tx = true;
+
+ /* XXX This allowance of a NULL skb argument to ndo_start_xmit
+@@ -1175,16 +1171,6 @@
+
+ ncm->timer_force_tx = false;
+ }
+-}
+-
+-/*
+- * The transmit should only be run if no skb data has been sent
+- * for a certain duration.
+- */
+-static enum hrtimer_restart ncm_tx_timeout(struct hrtimer *data)
+-{
+- struct f_ncm *ncm = container_of(data, struct f_ncm, task_timer);
+- tasklet_schedule(&ncm->tx_tasklet);
+ return HRTIMER_NORESTART;
+ }
+
+@@ -1517,8 +1503,7 @@
+ ncm->port.open = ncm_open;
+ ncm->port.close = ncm_close;
+
+- tasklet_init(&ncm->tx_tasklet, ncm_tx_tasklet, (unsigned long) ncm);
+- hrtimer_init(&ncm->task_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&ncm->task_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
+ ncm->task_timer.function = ncm_tx_timeout;
+
+ DBG(cdev, "CDC Network: %s speed IN/%s OUT/%s NOTIFY/%s\n",
+@@ -1627,7 +1612,6 @@
+ DBG(c->cdev, "ncm unbind\n");
+
+ hrtimer_cancel(&ncm->task_timer);
+- tasklet_kill(&ncm->tx_tasklet);
+
+ ncm_string_defs[0].id = 0;
+ usb_free_all_descriptors(f);
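With the tasklet gone, the NCM TX timeout work now runs straight from the hrtimer callback, which is why the timer is initialized and armed with HRTIMER_MODE_REL_SOFT: the callback then executes from the hrtimer softirq rather than in hard interrupt context. A small stand-alone sketch of that arming pattern (timer name, callback and timeout value are illustrative):

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static struct hrtimer demo_timer;

    static enum hrtimer_restart demo_timeout(struct hrtimer *t)
    {
            /* soft-mode callback: invoked from the hrtimer softirq */
            return HRTIMER_NORESTART;
    }

    static void demo_timer_arm(void)
    {
            hrtimer_init(&demo_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
            demo_timer.function = demo_timeout;
            hrtimer_start(&demo_timer, ms_to_ktime(1), HRTIMER_MODE_REL_SOFT);
    }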
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/drivers/usb/gadget/legacy/inode.c linux-4.14/drivers/usb/gadget/legacy/inode.c
+--- linux-4.14.orig/drivers/usb/gadget/legacy/inode.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/drivers/usb/gadget/legacy/inode.c 2018-09-05 11:05:07.000000000 +0200
+@@ -347,7 +347,7 @@
+ spin_unlock_irq (&epdata->dev->lock);
+
+ if (likely (value == 0)) {
+- value = wait_event_interruptible (done.wait, done.done);
++ value = swait_event_interruptible (done.wait, done.done);
+ if (value != 0) {
+ spin_lock_irq (&epdata->dev->lock);
+ if (likely (epdata->ep != NULL)) {
+@@ -356,7 +356,7 @@
+ usb_ep_dequeue (epdata->ep, epdata->req);
+ spin_unlock_irq (&epdata->dev->lock);
+
+- wait_event (done.wait, done.done);
++ swait_event (done.wait, done.done);
+ if (epdata->status == -ECONNRESET)
+ epdata->status = -EINTR;
+ } else {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/aio.c linux-4.14/fs/aio.c
+--- linux-4.14.orig/fs/aio.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/aio.c 2018-09-05 11:05:07.000000000 +0200
+@@ -40,6 +40,7 @@
+ #include <linux/ramfs.h>
+ #include <linux/percpu-refcount.h>
+ #include <linux/mount.h>
++#include <linux/swork.h>
+
+ #include <asm/kmap_types.h>
+ #include <linux/uaccess.h>
+@@ -117,6 +118,7 @@
+
+ struct rcu_head free_rcu;
+ struct work_struct free_work; /* see free_ioctx() */
++ struct swork_event free_swork; /* see free_ioctx() */
+
+ /*
+ * signals when all in-flight requests are done
+@@ -259,6 +261,7 @@
+ .mount = aio_mount,
+ .kill_sb = kill_anon_super,
+ };
++ BUG_ON(swork_get());
+ aio_mnt = kern_mount(&aio_fs);
+ if (IS_ERR(aio_mnt))
+ panic("Failed to create aio fs mount.");
+@@ -633,9 +636,9 @@
+ * and ctx->users has dropped to 0, so we know no more kiocbs can be submitted -
+ * now it's safe to cancel any that need to be.
+ */
+-static void free_ioctx_users(struct percpu_ref *ref)
++static void free_ioctx_users_work(struct swork_event *sev)
+ {
+- struct kioctx *ctx = container_of(ref, struct kioctx, users);
++ struct kioctx *ctx = container_of(sev, struct kioctx, free_swork);
+ struct aio_kiocb *req;
+
+ spin_lock_irq(&ctx->ctx_lock);
+@@ -653,6 +656,14 @@
+ percpu_ref_put(&ctx->reqs);
+ }
+
++static void free_ioctx_users(struct percpu_ref *ref)
++{
++ struct kioctx *ctx = container_of(ref, struct kioctx, users);
++
++ INIT_SWORK(&ctx->free_swork, free_ioctx_users_work);
++ swork_queue(&ctx->free_swork);
++}
++
+ static int ioctx_add_table(struct kioctx *ctx, struct mm_struct *mm)
+ {
+ unsigned i, new_nr;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/autofs4/autofs_i.h linux-4.14/fs/autofs4/autofs_i.h
+--- linux-4.14.orig/fs/autofs4/autofs_i.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/autofs4/autofs_i.h 2018-09-05 11:05:07.000000000 +0200
+@@ -20,6 +20,7 @@
+ #include <linux/sched.h>
+ #include <linux/mount.h>
+ #include <linux/namei.h>
++#include <linux/delay.h>
+ #include <linux/uaccess.h>
+ #include <linux/mutex.h>
+ #include <linux/spinlock.h>
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/autofs4/expire.c linux-4.14/fs/autofs4/expire.c
+--- linux-4.14.orig/fs/autofs4/expire.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/autofs4/expire.c 2018-09-05 11:05:07.000000000 +0200
+@@ -148,7 +148,7 @@
+ parent = p->d_parent;
+ if (!spin_trylock(&parent->d_lock)) {
+ spin_unlock(&p->d_lock);
+- cpu_relax();
++ cpu_chill();
+ goto relock;
+ }
+ spin_unlock(&p->d_lock);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/buffer.c linux-4.14/fs/buffer.c
+--- linux-4.14.orig/fs/buffer.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/buffer.c 2018-09-05 11:05:07.000000000 +0200
+@@ -302,8 +302,7 @@
+ * decide that the page is now completely done.
+ */
+ first = page_buffers(page);
+- local_irq_save(flags);
+- bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
++ flags = bh_uptodate_lock_irqsave(first);
+ clear_buffer_async_read(bh);
+ unlock_buffer(bh);
+ tmp = bh;
+@@ -316,8 +315,7 @@
+ }
+ tmp = tmp->b_this_page;
+ } while (tmp != bh);
+- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+- local_irq_restore(flags);
++ bh_uptodate_unlock_irqrestore(first, flags);
+
+ /*
+ * If none of the buffers had errors and they are all
+@@ -329,9 +327,7 @@
+ return;
+
+ still_busy:
+- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+- local_irq_restore(flags);
+- return;
++ bh_uptodate_unlock_irqrestore(first, flags);
+ }
+
+ /*
+@@ -358,8 +354,7 @@
+ }
+
+ first = page_buffers(page);
+- local_irq_save(flags);
+- bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
++ flags = bh_uptodate_lock_irqsave(first);
+
+ clear_buffer_async_write(bh);
+ unlock_buffer(bh);
+@@ -371,15 +366,12 @@
+ }
+ tmp = tmp->b_this_page;
+ }
+- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+- local_irq_restore(flags);
++ bh_uptodate_unlock_irqrestore(first, flags);
+ end_page_writeback(page);
+ return;
+
+ still_busy:
+- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+- local_irq_restore(flags);
+- return;
++ bh_uptodate_unlock_irqrestore(first, flags);
+ }
+ EXPORT_SYMBOL(end_buffer_async_write);
+
+@@ -3417,6 +3409,7 @@
+ struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags);
+ if (ret) {
+ INIT_LIST_HEAD(&ret->b_assoc_buffers);
++ buffer_head_init_locks(ret);
+ preempt_disable();
+ __this_cpu_inc(bh_accounting.nr);
+ recalc_bh_state();
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/cifs/readdir.c linux-4.14/fs/cifs/readdir.c
+--- linux-4.14.orig/fs/cifs/readdir.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/cifs/readdir.c 2018-09-05 11:05:07.000000000 +0200
+@@ -80,7 +80,7 @@
+ struct inode *inode;
+ struct super_block *sb = parent->d_sb;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
++ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+
+ cifs_dbg(FYI, "%s: for %s\n", __func__, name->name);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/dcache.c linux-4.14/fs/dcache.c
+--- linux-4.14.orig/fs/dcache.c 2018-09-05 11:03:29.000000000 +0200
++++ linux-4.14/fs/dcache.c 2018-09-05 11:05:07.000000000 +0200
+@@ -19,6 +19,7 @@
+ #include <linux/mm.h>
+ #include <linux/fs.h>
+ #include <linux/fsnotify.h>
++#include <linux/delay.h>
+ #include <linux/slab.h>
+ #include <linux/init.h>
+ #include <linux/hash.h>
+@@ -793,6 +794,8 @@
+ */
+ void dput(struct dentry *dentry)
+ {
++ struct dentry *parent;
++
+ if (unlikely(!dentry))
+ return;
+
+@@ -829,9 +832,18 @@
+ return;
+
+ kill_it:
+- dentry = dentry_kill(dentry);
+- if (dentry) {
+- cond_resched();
++ parent = dentry_kill(dentry);
++ if (parent) {
++ int r;
++
++ if (parent == dentry) {
++ /* the task with the highest priority won't schedule */
++ r = cond_resched();
++ if (!r)
++ cpu_chill();
++ } else {
++ dentry = parent;
++ }
+ goto repeat;
+ }
+ }
+@@ -2394,7 +2406,7 @@
+ if (dentry->d_lockref.count == 1) {
+ if (!spin_trylock(&inode->i_lock)) {
+ spin_unlock(&dentry->d_lock);
+- cpu_relax();
++ cpu_chill();
+ goto again;
+ }
+ dentry->d_flags &= ~DCACHE_CANT_MOUNT;
+@@ -2439,9 +2451,10 @@
+ static inline unsigned start_dir_add(struct inode *dir)
+ {
+
++ preempt_disable_rt();
+ for (;;) {
+- unsigned n = dir->i_dir_seq;
+- if (!(n & 1) && cmpxchg(&dir->i_dir_seq, n, n + 1) == n)
++ unsigned n = dir->__i_dir_seq;
++ if (!(n & 1) && cmpxchg(&dir->__i_dir_seq, n, n + 1) == n)
+ return n;
+ cpu_relax();
+ }
+@@ -2449,26 +2462,30 @@
+
+ static inline void end_dir_add(struct inode *dir, unsigned n)
+ {
+- smp_store_release(&dir->i_dir_seq, n + 2);
++ smp_store_release(&dir->__i_dir_seq, n + 2);
++ preempt_enable_rt();
+ }
+
+ static void d_wait_lookup(struct dentry *dentry)
+ {
+- if (d_in_lookup(dentry)) {
+- DECLARE_WAITQUEUE(wait, current);
+- add_wait_queue(dentry->d_wait, &wait);
+- do {
+- set_current_state(TASK_UNINTERRUPTIBLE);
+- spin_unlock(&dentry->d_lock);
+- schedule();
+- spin_lock(&dentry->d_lock);
+- } while (d_in_lookup(dentry));
+- }
++ struct swait_queue __wait;
++
++ if (!d_in_lookup(dentry))
++ return;
++
++ INIT_LIST_HEAD(&__wait.task_list);
++ do {
++ prepare_to_swait(dentry->d_wait, &__wait, TASK_UNINTERRUPTIBLE);
++ spin_unlock(&dentry->d_lock);
++ schedule();
++ spin_lock(&dentry->d_lock);
++ } while (d_in_lookup(dentry));
++ finish_swait(dentry->d_wait, &__wait);
+ }
+
+ struct dentry *d_alloc_parallel(struct dentry *parent,
+ const struct qstr *name,
+- wait_queue_head_t *wq)
++ struct swait_queue_head *wq)
+ {
+ unsigned int hash = name->hash;
+ struct hlist_bl_head *b = in_lookup_hash(parent, hash);
+@@ -2482,7 +2499,7 @@
+
+ retry:
+ rcu_read_lock();
+- seq = smp_load_acquire(&parent->d_inode->i_dir_seq);
++ seq = smp_load_acquire(&parent->d_inode->__i_dir_seq);
+ r_seq = read_seqbegin(&rename_lock);
+ dentry = __d_lookup_rcu(parent, name, &d_seq);
+ if (unlikely(dentry)) {
+@@ -2510,7 +2527,7 @@
+ }
+
+ hlist_bl_lock(b);
+- if (unlikely(READ_ONCE(parent->d_inode->i_dir_seq) != seq)) {
++ if (unlikely(READ_ONCE(parent->d_inode->__i_dir_seq) != seq)) {
+ hlist_bl_unlock(b);
+ rcu_read_unlock();
+ goto retry;
+@@ -2583,7 +2600,7 @@
+ hlist_bl_lock(b);
+ dentry->d_flags &= ~DCACHE_PAR_LOOKUP;
+ __hlist_bl_del(&dentry->d_u.d_in_lookup_hash);
+- wake_up_all(dentry->d_wait);
++ swake_up_all(dentry->d_wait);
+ dentry->d_wait = NULL;
+ hlist_bl_unlock(b);
+ INIT_HLIST_NODE(&dentry->d_u.d_alias);
+@@ -3619,6 +3636,8 @@
+
+ static void __init dcache_init_early(void)
+ {
++ unsigned int loop;
++
+ /* If hashes are distributed across NUMA nodes, defer
+ * hash allocation until vmalloc space is available.
+ */
+@@ -3635,10 +3654,14 @@
+ &d_hash_mask,
+ 0,
+ 0);
++
++ for (loop = 0; loop < (1U << d_hash_shift); loop++)
++ INIT_HLIST_BL_HEAD(dentry_hashtable + loop);
+ }
+
+ static void __init dcache_init(void)
+ {
++ unsigned int loop;
+ /*
+ * A constructor could be added for stable state like the lists,
+ * but it is probably not worth it because of the cache nature
+@@ -3661,6 +3684,10 @@
+ &d_hash_mask,
+ 0,
+ 0);
++
++ for (loop = 0; loop < (1U << d_hash_shift); loop++)
++ INIT_HLIST_BL_HEAD(dentry_hashtable + loop);
++
+ }
+
+ /* SLAB cache for __getname() consumers */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/eventpoll.c linux-4.14/fs/eventpoll.c
+--- linux-4.14.orig/fs/eventpoll.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/eventpoll.c 2018-09-05 11:05:07.000000000 +0200
+@@ -587,12 +587,12 @@
+ */
+ static void ep_poll_safewake(wait_queue_head_t *wq)
+ {
+- int this_cpu = get_cpu();
++ int this_cpu = get_cpu_light();
+
+ ep_call_nested(&poll_safewake_ncalls, EP_MAX_NESTS,
+ ep_poll_wakeup_proc, NULL, wq, (void *) (long) this_cpu);
+
+- put_cpu();
++ put_cpu_light();
+ }
+
+ static void ep_remove_wait_queue(struct eppoll_entry *pwq)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/exec.c linux-4.14/fs/exec.c
+--- linux-4.14.orig/fs/exec.c 2018-09-05 11:03:29.000000000 +0200
++++ linux-4.14/fs/exec.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1025,12 +1025,14 @@
+ }
+ }
+ task_lock(tsk);
++ preempt_disable_rt();
+ active_mm = tsk->active_mm;
+ tsk->mm = mm;
+ tsk->active_mm = mm;
+ activate_mm(active_mm, mm);
+ tsk->mm->vmacache_seqnum = 0;
+ vmacache_flush(tsk);
++ preempt_enable_rt();
+ task_unlock(tsk);
+ if (old_mm) {
+ up_read(&old_mm->mmap_sem);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/ext4/page-io.c linux-4.14/fs/ext4/page-io.c
+--- linux-4.14.orig/fs/ext4/page-io.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/ext4/page-io.c 2018-09-05 11:05:07.000000000 +0200
+@@ -95,8 +95,7 @@
+ * We check all buffers in the page under BH_Uptodate_Lock
+ * to avoid races with other end io clearing async_write flags
+ */
+- local_irq_save(flags);
+- bit_spin_lock(BH_Uptodate_Lock, &head->b_state);
++ flags = bh_uptodate_lock_irqsave(head);
+ do {
+ if (bh_offset(bh) < bio_start ||
+ bh_offset(bh) + bh->b_size > bio_end) {
+@@ -108,8 +107,7 @@
+ if (bio->bi_status)
+ buffer_io_error(bh);
+ } while ((bh = bh->b_this_page) != head);
+- bit_spin_unlock(BH_Uptodate_Lock, &head->b_state);
+- local_irq_restore(flags);
++ bh_uptodate_unlock_irqrestore(head, flags);
+ if (!under_io) {
+ #ifdef CONFIG_EXT4_FS_ENCRYPTION
+ if (data_page)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/fuse/dir.c linux-4.14/fs/fuse/dir.c
+--- linux-4.14.orig/fs/fuse/dir.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/fuse/dir.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1187,7 +1187,7 @@
+ struct inode *dir = d_inode(parent);
+ struct fuse_conn *fc;
+ struct inode *inode;
+- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
++ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+
+ if (!o->nodeid) {
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/inode.c linux-4.14/fs/inode.c
+--- linux-4.14.orig/fs/inode.c 2018-09-05 11:03:29.000000000 +0200
++++ linux-4.14/fs/inode.c 2018-09-05 11:05:07.000000000 +0200
+@@ -154,7 +154,7 @@
+ inode->i_bdev = NULL;
+ inode->i_cdev = NULL;
+ inode->i_link = NULL;
+- inode->i_dir_seq = 0;
++ inode->__i_dir_seq = 0;
+ inode->i_rdev = 0;
+ inode->dirtied_when = 0;
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/libfs.c linux-4.14/fs/libfs.c
+--- linux-4.14.orig/fs/libfs.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/libfs.c 2018-09-05 11:05:07.000000000 +0200
+@@ -90,7 +90,7 @@
+ struct list_head *from,
+ int count)
+ {
+- unsigned *seq = &parent->d_inode->i_dir_seq, n;
++ unsigned *seq = &parent->d_inode->__i_dir_seq, n;
+ struct dentry *res;
+ struct list_head *p;
+ bool skipped;
+@@ -123,8 +123,9 @@
+ static void move_cursor(struct dentry *cursor, struct list_head *after)
+ {
+ struct dentry *parent = cursor->d_parent;
+- unsigned n, *seq = &parent->d_inode->i_dir_seq;
++ unsigned n, *seq = &parent->d_inode->__i_dir_seq;
+ spin_lock(&parent->d_lock);
++ preempt_disable_rt();
+ for (;;) {
+ n = *seq;
+ if (!(n & 1) && cmpxchg(seq, n, n + 1) == n)
+@@ -137,6 +138,7 @@
+ else
+ list_add_tail(&cursor->d_child, &parent->d_subdirs);
+ smp_store_release(seq, n + 2);
++ preempt_enable_rt();
+ spin_unlock(&parent->d_lock);
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/locks.c linux-4.14/fs/locks.c
+--- linux-4.14.orig/fs/locks.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/locks.c 2018-09-05 11:05:07.000000000 +0200
+@@ -945,7 +945,7 @@
+ return -ENOMEM;
+ }
+
+- percpu_down_read_preempt_disable(&file_rwsem);
++ percpu_down_read(&file_rwsem);
+ spin_lock(&ctx->flc_lock);
+ if (request->fl_flags & FL_ACCESS)
+ goto find_conflict;
+@@ -986,7 +986,7 @@
+
+ out:
+ spin_unlock(&ctx->flc_lock);
+- percpu_up_read_preempt_enable(&file_rwsem);
++ percpu_up_read(&file_rwsem);
+ if (new_fl)
+ locks_free_lock(new_fl);
+ locks_dispose_list(&dispose);
+@@ -1023,7 +1023,7 @@
+ new_fl2 = locks_alloc_lock();
+ }
+
+- percpu_down_read_preempt_disable(&file_rwsem);
++ percpu_down_read(&file_rwsem);
+ spin_lock(&ctx->flc_lock);
+ /*
+ * New lock request. Walk all POSIX locks and look for conflicts. If
+@@ -1195,7 +1195,7 @@
+ }
+ out:
+ spin_unlock(&ctx->flc_lock);
+- percpu_up_read_preempt_enable(&file_rwsem);
++ percpu_up_read(&file_rwsem);
+ /*
+ * Free any unused locks.
+ */
+@@ -1470,7 +1470,7 @@
+ return error;
+ }
+
+- percpu_down_read_preempt_disable(&file_rwsem);
++ percpu_down_read(&file_rwsem);
+ spin_lock(&ctx->flc_lock);
+
+ time_out_leases(inode, &dispose);
+@@ -1522,13 +1522,13 @@
+ locks_insert_block(fl, new_fl);
+ trace_break_lease_block(inode, new_fl);
+ spin_unlock(&ctx->flc_lock);
+- percpu_up_read_preempt_enable(&file_rwsem);
++ percpu_up_read(&file_rwsem);
+
+ locks_dispose_list(&dispose);
+ error = wait_event_interruptible_timeout(new_fl->fl_wait,
+ !new_fl->fl_next, break_time);
+
+- percpu_down_read_preempt_disable(&file_rwsem);
++ percpu_down_read(&file_rwsem);
+ spin_lock(&ctx->flc_lock);
+ trace_break_lease_unblock(inode, new_fl);
+ locks_delete_block(new_fl);
+@@ -1545,7 +1545,7 @@
+ }
+ out:
+ spin_unlock(&ctx->flc_lock);
+- percpu_up_read_preempt_enable(&file_rwsem);
++ percpu_up_read(&file_rwsem);
+ locks_dispose_list(&dispose);
+ locks_free_lock(new_fl);
+ return error;
+@@ -1619,7 +1619,7 @@
+
+ ctx = smp_load_acquire(&inode->i_flctx);
+ if (ctx && !list_empty_careful(&ctx->flc_lease)) {
+- percpu_down_read_preempt_disable(&file_rwsem);
++ percpu_down_read(&file_rwsem);
+ spin_lock(&ctx->flc_lock);
+ time_out_leases(inode, &dispose);
+ list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
+@@ -1629,7 +1629,7 @@
+ break;
+ }
+ spin_unlock(&ctx->flc_lock);
+- percpu_up_read_preempt_enable(&file_rwsem);
++ percpu_up_read(&file_rwsem);
+
+ locks_dispose_list(&dispose);
+ }
+@@ -1704,7 +1704,7 @@
+ return -EINVAL;
+ }
+
+- percpu_down_read_preempt_disable(&file_rwsem);
++ percpu_down_read(&file_rwsem);
+ spin_lock(&ctx->flc_lock);
+ time_out_leases(inode, &dispose);
+ error = check_conflicting_open(dentry, arg, lease->fl_flags);
+@@ -1775,7 +1775,7 @@
+ lease->fl_lmops->lm_setup(lease, priv);
+ out:
+ spin_unlock(&ctx->flc_lock);
+- percpu_up_read_preempt_enable(&file_rwsem);
++ percpu_up_read(&file_rwsem);
+ locks_dispose_list(&dispose);
+ if (is_deleg)
+ inode_unlock(inode);
+@@ -1798,7 +1798,7 @@
+ return error;
+ }
+
+- percpu_down_read_preempt_disable(&file_rwsem);
++ percpu_down_read(&file_rwsem);
+ spin_lock(&ctx->flc_lock);
+ list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
+ if (fl->fl_file == filp &&
+@@ -1811,7 +1811,7 @@
+ if (victim)
+ error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);
+ spin_unlock(&ctx->flc_lock);
+- percpu_up_read_preempt_enable(&file_rwsem);
++ percpu_up_read(&file_rwsem);
+ locks_dispose_list(&dispose);
+ return error;
+ }
+@@ -2535,13 +2535,13 @@
+ if (list_empty(&ctx->flc_lease))
+ return;
+
+- percpu_down_read_preempt_disable(&file_rwsem);
++ percpu_down_read(&file_rwsem);
+ spin_lock(&ctx->flc_lock);
+ list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list)
+ if (filp == fl->fl_file)
+ lease_modify(fl, F_UNLCK, &dispose);
+ spin_unlock(&ctx->flc_lock);
+- percpu_up_read_preempt_enable(&file_rwsem);
++ percpu_up_read(&file_rwsem);
+
+ locks_dispose_list(&dispose);
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/namei.c linux-4.14/fs/namei.c
+--- linux-4.14.orig/fs/namei.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/namei.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1627,7 +1627,7 @@
+ {
+ struct dentry *dentry = ERR_PTR(-ENOENT), *old;
+ struct inode *inode = dir->d_inode;
+- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
++ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+
+ inode_lock_shared(inode);
+ /* Don't go there if it's already dead */
+@@ -3100,7 +3100,7 @@
+ struct dentry *dentry;
+ int error, create_error = 0;
+ umode_t mode = op->mode;
+- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
++ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+
+ if (unlikely(IS_DEADDIR(dir_inode)))
+ return -ENOENT;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/namespace.c linux-4.14/fs/namespace.c
+--- linux-4.14.orig/fs/namespace.c 2018-09-05 11:03:29.000000000 +0200
++++ linux-4.14/fs/namespace.c 2018-09-05 11:05:07.000000000 +0200
+@@ -14,6 +14,7 @@
+ #include <linux/mnt_namespace.h>
+ #include <linux/user_namespace.h>
+ #include <linux/namei.h>
++#include <linux/delay.h>
+ #include <linux/security.h>
+ #include <linux/cred.h>
+ #include <linux/idr.h>
+@@ -353,8 +354,11 @@
+ * incremented count after it has set MNT_WRITE_HOLD.
+ */
+ smp_mb();
+- while (ACCESS_ONCE(mnt->mnt.mnt_flags) & MNT_WRITE_HOLD)
+- cpu_relax();
++ while (ACCESS_ONCE(mnt->mnt.mnt_flags) & MNT_WRITE_HOLD) {
++ preempt_enable();
++ cpu_chill();
++ preempt_disable();
++ }
+ /*
+ * After the slowpath clears MNT_WRITE_HOLD, mnt_is_readonly will
+ * be set to match its requirements. So we must not load that until
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/nfs/delegation.c linux-4.14/fs/nfs/delegation.c
+--- linux-4.14.orig/fs/nfs/delegation.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/nfs/delegation.c 2018-09-05 11:05:07.000000000 +0200
+@@ -150,11 +150,11 @@
+ sp = state->owner;
+ /* Block nfs4_proc_unlck */
+ mutex_lock(&sp->so_delegreturn_mutex);
+- seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
++ seq = read_seqbegin(&sp->so_reclaim_seqlock);
+ err = nfs4_open_delegation_recall(ctx, state, stateid, type);
+ if (!err)
+ err = nfs_delegation_claim_locks(ctx, state, stateid);
+- if (!err && read_seqcount_retry(&sp->so_reclaim_seqcount, seq))
++ if (!err && read_seqretry(&sp->so_reclaim_seqlock, seq))
+ err = -EAGAIN;
+ mutex_unlock(&sp->so_delegreturn_mutex);
+ put_nfs_open_context(ctx);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/nfs/dir.c linux-4.14/fs/nfs/dir.c
+--- linux-4.14.orig/fs/nfs/dir.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/nfs/dir.c 2018-09-05 11:05:07.000000000 +0200
+@@ -452,7 +452,7 @@
+ void nfs_prime_dcache(struct dentry *parent, struct nfs_entry *entry)
+ {
+ struct qstr filename = QSTR_INIT(entry->name, entry->len);
+- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
++ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+ struct dentry *dentry;
+ struct dentry *alias;
+ struct inode *dir = d_inode(parent);
+@@ -1443,7 +1443,7 @@
+ struct file *file, unsigned open_flags,
+ umode_t mode, int *opened)
+ {
+- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
++ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+ struct nfs_open_context *ctx;
+ struct dentry *res;
+ struct iattr attr = { .ia_valid = ATTR_OPEN };
+@@ -1763,7 +1763,11 @@
+
+ trace_nfs_rmdir_enter(dir, dentry);
+ if (d_really_is_positive(dentry)) {
++#ifdef CONFIG_PREEMPT_RT_BASE
++ down(&NFS_I(d_inode(dentry))->rmdir_sem);
++#else
+ down_write(&NFS_I(d_inode(dentry))->rmdir_sem);
++#endif
+ error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name);
+ /* Ensure the VFS deletes this inode */
+ switch (error) {
+@@ -1773,7 +1777,11 @@
+ case -ENOENT:
+ nfs_dentry_handle_enoent(dentry);
+ }
++#ifdef CONFIG_PREEMPT_RT_BASE
++ up(&NFS_I(d_inode(dentry))->rmdir_sem);
++#else
+ up_write(&NFS_I(d_inode(dentry))->rmdir_sem);
++#endif
+ } else
+ error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name);
+ trace_nfs_rmdir_exit(dir, dentry, error);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/nfs/inode.c linux-4.14/fs/nfs/inode.c
+--- linux-4.14.orig/fs/nfs/inode.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/nfs/inode.c 2018-09-05 11:05:07.000000000 +0200
+@@ -2014,7 +2014,11 @@
+ atomic_long_set(&nfsi->nrequests, 0);
+ atomic_long_set(&nfsi->commit_info.ncommit, 0);
+ atomic_set(&nfsi->commit_info.rpcs_out, 0);
++#ifdef CONFIG_PREEMPT_RT_BASE
++ sema_init(&nfsi->rmdir_sem, 1);
++#else
+ init_rwsem(&nfsi->rmdir_sem);
++#endif
+ mutex_init(&nfsi->commit_mutex);
+ nfs4_init_once(nfsi);
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/nfs/nfs4_fs.h linux-4.14/fs/nfs/nfs4_fs.h
+--- linux-4.14.orig/fs/nfs/nfs4_fs.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/nfs/nfs4_fs.h 2018-09-05 11:05:07.000000000 +0200
+@@ -112,7 +112,7 @@
+ unsigned long so_flags;
+ struct list_head so_states;
+ struct nfs_seqid_counter so_seqid;
+- seqcount_t so_reclaim_seqcount;
++ seqlock_t so_reclaim_seqlock;
+ struct mutex so_delegreturn_mutex;
+ };
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/nfs/nfs4proc.c linux-4.14/fs/nfs/nfs4proc.c
+--- linux-4.14.orig/fs/nfs/nfs4proc.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/nfs/nfs4proc.c 2018-09-05 11:05:07.000000000 +0200
+@@ -2689,7 +2689,7 @@
+ unsigned int seq;
+ int ret;
+
+- seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
++ seq = raw_seqcount_begin(&sp->so_reclaim_seqlock.seqcount);
+
+ ret = _nfs4_proc_open(opendata);
+ if (ret != 0)
+@@ -2727,7 +2727,7 @@
+
+ if (d_inode(dentry) == state->inode) {
+ nfs_inode_attach_open_context(ctx);
+- if (read_seqcount_retry(&sp->so_reclaim_seqcount, seq))
++ if (read_seqretry(&sp->so_reclaim_seqlock, seq))
+ nfs4_schedule_stateid_recovery(server, state);
+ }
+ out:
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/nfs/nfs4state.c linux-4.14/fs/nfs/nfs4state.c
+--- linux-4.14.orig/fs/nfs/nfs4state.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/nfs/nfs4state.c 2018-09-05 11:05:07.000000000 +0200
+@@ -494,7 +494,7 @@
+ nfs4_init_seqid_counter(&sp->so_seqid);
+ atomic_set(&sp->so_count, 1);
+ INIT_LIST_HEAD(&sp->so_lru);
+- seqcount_init(&sp->so_reclaim_seqcount);
++ seqlock_init(&sp->so_reclaim_seqlock);
+ mutex_init(&sp->so_delegreturn_mutex);
+ return sp;
+ }
+@@ -1519,8 +1519,12 @@
+ * recovering after a network partition or a reboot from a
+ * server that doesn't support a grace period.
+ */
++#ifdef CONFIG_PREEMPT_RT_FULL
++ write_seqlock(&sp->so_reclaim_seqlock);
++#else
++ write_seqcount_begin(&sp->so_reclaim_seqlock.seqcount);
++#endif
+ spin_lock(&sp->so_lock);
+- raw_write_seqcount_begin(&sp->so_reclaim_seqcount);
+ restart:
+ list_for_each_entry(state, &sp->so_states, open_states) {
+ if (!test_and_clear_bit(ops->state_flag_bit, &state->flags))
+@@ -1589,14 +1593,20 @@
+ spin_lock(&sp->so_lock);
+ goto restart;
+ }
+- raw_write_seqcount_end(&sp->so_reclaim_seqcount);
+ spin_unlock(&sp->so_lock);
++#ifdef CONFIG_PREEMPT_RT_FULL
++ write_sequnlock(&sp->so_reclaim_seqlock);
++#else
++ write_seqcount_end(&sp->so_reclaim_seqlock.seqcount);
++#endif
+ return 0;
+ out_err:
+ nfs4_put_open_state(state);
+- spin_lock(&sp->so_lock);
+- raw_write_seqcount_end(&sp->so_reclaim_seqcount);
+- spin_unlock(&sp->so_lock);
++#ifdef CONFIG_PREEMPT_RT_FULL
++ write_sequnlock(&sp->so_reclaim_seqlock);
++#else
++ write_seqcount_end(&sp->so_reclaim_seqlock.seqcount);
++#endif
+ return status;
+ }
+
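The NFS hunks above widen so_reclaim_seqcount into a full seqlock_t so the state-reclaim writer can take a real lock (and, on RT, sleep) while readers keep the usual begin/retry loop. A small reader/writer sketch of that seqlock pattern with illustrative names:

    #include <linux/seqlock.h>

    static seqlock_t demo_seqlock = __SEQLOCK_UNLOCKED(demo_seqlock);
    static unsigned long demo_value;

    static unsigned long demo_read(void)
    {
            unsigned long v;
            unsigned int seq;

            do {
                    seq = read_seqbegin(&demo_seqlock);     /* snapshot sequence */
                    v = demo_value;
            } while (read_seqretry(&demo_seqlock, seq));    /* retry if a writer ran */

            return v;
    }

    static void demo_write(unsigned long v)
    {
            write_seqlock(&demo_seqlock);   /* serializes writers, bumps sequence */
            demo_value = v;
            write_sequnlock(&demo_seqlock);
    }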
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/nfs/unlink.c linux-4.14/fs/nfs/unlink.c
+--- linux-4.14.orig/fs/nfs/unlink.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/nfs/unlink.c 2018-09-05 11:05:07.000000000 +0200
+@@ -13,7 +13,7 @@
+ #include <linux/sunrpc/clnt.h>
+ #include <linux/nfs_fs.h>
+ #include <linux/sched.h>
+-#include <linux/wait.h>
++#include <linux/swait.h>
+ #include <linux/namei.h>
+ #include <linux/fsnotify.h>
+
+@@ -52,6 +52,29 @@
+ rpc_restart_call_prepare(task);
+ }
+
++#ifdef CONFIG_PREEMPT_RT_BASE
++static void nfs_down_anon(struct semaphore *sema)
++{
++ down(sema);
++}
++
++static void nfs_up_anon(struct semaphore *sema)
++{
++ up(sema);
++}
++
++#else
++static void nfs_down_anon(struct rw_semaphore *rwsem)
++{
++ down_read_non_owner(rwsem);
++}
++
++static void nfs_up_anon(struct rw_semaphore *rwsem)
++{
++ up_read_non_owner(rwsem);
++}
++#endif
++
+ /**
+ * nfs_async_unlink_release - Release the sillydelete data.
+ * @task: rpc_task of the sillydelete
+@@ -65,7 +88,7 @@
+ struct dentry *dentry = data->dentry;
+ struct super_block *sb = dentry->d_sb;
+
+- up_read_non_owner(&NFS_I(d_inode(dentry->d_parent))->rmdir_sem);
++ nfs_up_anon(&NFS_I(d_inode(dentry->d_parent))->rmdir_sem);
+ d_lookup_done(dentry);
+ nfs_free_unlinkdata(data);
+ dput(dentry);
+@@ -118,10 +141,10 @@
+ struct inode *dir = d_inode(dentry->d_parent);
+ struct dentry *alias;
+
+- down_read_non_owner(&NFS_I(dir)->rmdir_sem);
++ nfs_down_anon(&NFS_I(dir)->rmdir_sem);
+ alias = d_alloc_parallel(dentry->d_parent, &data->args.name, &data->wq);
+ if (IS_ERR(alias)) {
+- up_read_non_owner(&NFS_I(dir)->rmdir_sem);
++ nfs_up_anon(&NFS_I(dir)->rmdir_sem);
+ return 0;
+ }
+ if (!d_in_lookup(alias)) {
+@@ -143,7 +166,7 @@
+ ret = 0;
+ spin_unlock(&alias->d_lock);
+ dput(alias);
+- up_read_non_owner(&NFS_I(dir)->rmdir_sem);
++ nfs_up_anon(&NFS_I(dir)->rmdir_sem);
+ /*
+ * If we'd displaced old cached devname, free it. At that
+ * point dentry is definitely not a root, so we won't need
+@@ -183,7 +206,7 @@
+ goto out_free_name;
+ }
+ data->res.dir_attr = &data->dir_attr;
+- init_waitqueue_head(&data->wq);
++ init_swait_queue_head(&data->wq);
+
+ status = -EBUSY;
+ spin_lock(&dentry->d_lock);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/ntfs/aops.c linux-4.14/fs/ntfs/aops.c
+--- linux-4.14.orig/fs/ntfs/aops.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/ntfs/aops.c 2018-09-05 11:05:07.000000000 +0200
+@@ -93,13 +93,13 @@
+ ofs = 0;
+ if (file_ofs < init_size)
+ ofs = init_size - file_ofs;
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ kaddr = kmap_atomic(page);
+ memset(kaddr + bh_offset(bh) + ofs, 0,
+ bh->b_size - ofs);
+ flush_dcache_page(page);
+ kunmap_atomic(kaddr);
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ }
+ } else {
+ clear_buffer_uptodate(bh);
+@@ -108,8 +108,7 @@
+ "0x%llx.", (unsigned long long)bh->b_blocknr);
+ }
+ first = page_buffers(page);
+- local_irq_save(flags);
+- bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
++ flags = bh_uptodate_lock_irqsave(first);
+ clear_buffer_async_read(bh);
+ unlock_buffer(bh);
+ tmp = bh;
+@@ -124,8 +123,7 @@
+ }
+ tmp = tmp->b_this_page;
+ } while (tmp != bh);
+- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+- local_irq_restore(flags);
++ bh_uptodate_unlock_irqrestore(first, flags);
+ /*
+ * If none of the buffers had errors then we can set the page uptodate,
+ * but we first have to perform the post read mst fixups, if the
+@@ -146,13 +144,13 @@
+ recs = PAGE_SIZE / rec_size;
+ /* Should have been verified before we got here... */
+ BUG_ON(!recs);
+- local_irq_save(flags);
++ local_irq_save_nort(flags);
+ kaddr = kmap_atomic(page);
+ for (i = 0; i < recs; i++)
+ post_read_mst_fixup((NTFS_RECORD*)(kaddr +
+ i * rec_size), rec_size);
+ kunmap_atomic(kaddr);
+- local_irq_restore(flags);
++ local_irq_restore_nort(flags);
+ flush_dcache_page(page);
+ if (likely(page_uptodate && !PageError(page)))
+ SetPageUptodate(page);
+@@ -160,9 +158,7 @@
+ unlock_page(page);
+ return;
+ still_busy:
+- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
+- local_irq_restore(flags);
+- return;
++ bh_uptodate_unlock_irqrestore(first, flags);
+ }
+
+ /**
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/proc/array.c linux-4.14/fs/proc/array.c
+--- linux-4.14.orig/fs/proc/array.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/proc/array.c 2018-09-05 11:05:07.000000000 +0200
+@@ -386,9 +386,9 @@
+ static void task_cpus_allowed(struct seq_file *m, struct task_struct *task)
+ {
+ seq_printf(m, "Cpus_allowed:\t%*pb\n",
+- cpumask_pr_args(&task->cpus_allowed));
++ cpumask_pr_args(task->cpus_ptr));
+ seq_printf(m, "Cpus_allowed_list:\t%*pbl\n",
+- cpumask_pr_args(&task->cpus_allowed));
++ cpumask_pr_args(task->cpus_ptr));
+ }
+
+ int proc_pid_status(struct seq_file *m, struct pid_namespace *ns,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/proc/base.c linux-4.14/fs/proc/base.c
+--- linux-4.14.orig/fs/proc/base.c 2018-09-05 11:03:28.000000000 +0200
++++ linux-4.14/fs/proc/base.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1886,7 +1886,7 @@
+
+ child = d_hash_and_lookup(dir, &qname);
+ if (!child) {
+- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
++ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+ child = d_alloc_parallel(dir, &qname, &wq);
+ if (IS_ERR(child))
+ goto end_instantiate;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/proc/proc_sysctl.c linux-4.14/fs/proc/proc_sysctl.c
+--- linux-4.14.orig/fs/proc/proc_sysctl.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/proc/proc_sysctl.c 2018-09-05 11:05:07.000000000 +0200
+@@ -679,7 +679,7 @@
+
+ child = d_lookup(dir, &qname);
+ if (!child) {
+- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
++ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+ child = d_alloc_parallel(dir, &qname, &wq);
+ if (IS_ERR(child))
+ return false;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/timerfd.c linux-4.14/fs/timerfd.c
+--- linux-4.14.orig/fs/timerfd.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/fs/timerfd.c 2018-09-05 11:05:07.000000000 +0200
+@@ -471,7 +471,10 @@
+ break;
+ }
+ spin_unlock_irq(&ctx->wqh.lock);
+- cpu_relax();
++ if (isalarm(ctx))
++ hrtimer_wait_for_timer(&ctx->t.alarm.timer);
++ else
++ hrtimer_wait_for_timer(&ctx->t.tmr);
+ }
+
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/fs/xfs/xfs_aops.c linux-4.14/fs/xfs/xfs_aops.c
+--- linux-4.14.orig/fs/xfs/xfs_aops.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/fs/xfs/xfs_aops.c 2018-09-05 11:05:07.000000000 +0200
+@@ -120,8 +120,7 @@
+ ASSERT(bvec->bv_offset + bvec->bv_len <= PAGE_SIZE);
+ ASSERT((bvec->bv_len & (i_blocksize(inode) - 1)) == 0);
+
+- local_irq_save(flags);
+- bit_spin_lock(BH_Uptodate_Lock, &head->b_state);
++ flags = bh_uptodate_lock_irqsave(head);
+ do {
+ if (off >= bvec->bv_offset &&
+ off < bvec->bv_offset + bvec->bv_len) {
+@@ -143,8 +142,7 @@
+ }
+ off += bh->b_size;
+ } while ((bh = bh->b_this_page) != head);
+- bit_spin_unlock(BH_Uptodate_Lock, &head->b_state);
+- local_irq_restore(flags);
++ bh_uptodate_unlock_irqrestore(head, flags);
+
+ if (!busy)
+ end_page_writeback(bvec->bv_page);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/acpi/platform/aclinux.h linux-4.14/include/acpi/platform/aclinux.h
+--- linux-4.14.orig/include/acpi/platform/aclinux.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/acpi/platform/aclinux.h 2018-09-05 11:05:07.000000000 +0200
+@@ -134,6 +134,7 @@
+
+ #define acpi_cache_t struct kmem_cache
+ #define acpi_spinlock spinlock_t *
++#define acpi_raw_spinlock raw_spinlock_t *
+ #define acpi_cpu_flags unsigned long
+
+ /* Use native linux version of acpi_os_allocate_zeroed */
+@@ -152,6 +153,20 @@
+ #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_thread_id
+ #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_create_lock
+
++#define acpi_os_create_raw_lock(__handle) \
++({ \
++ raw_spinlock_t *lock = ACPI_ALLOCATE(sizeof(*lock)); \
++ \
++ if (lock) { \
++ *(__handle) = lock; \
++ raw_spin_lock_init(*(__handle)); \
++ } \
++ lock ? AE_OK : AE_NO_MEMORY; \
++ })
++
++#define acpi_os_delete_raw_lock(__handle) kfree(__handle)
++
++
+ /*
+ * OSL interfaces used by debugger/disassembler
+ */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/asm-generic/bug.h linux-4.14/include/asm-generic/bug.h
+--- linux-4.14.orig/include/asm-generic/bug.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/asm-generic/bug.h 2018-09-05 11:05:07.000000000 +0200
+@@ -234,6 +234,20 @@
+ # define WARN_ON_SMP(x) ({0;})
+ #endif
+
++#ifdef CONFIG_PREEMPT_RT_BASE
++# define BUG_ON_RT(c) BUG_ON(c)
++# define BUG_ON_NONRT(c) do { } while (0)
++# define WARN_ON_RT(condition) WARN_ON(condition)
++# define WARN_ON_NONRT(condition) do { } while (0)
++# define WARN_ON_ONCE_NONRT(condition) do { } while (0)
++#else
++# define BUG_ON_RT(c) do { } while (0)
++# define BUG_ON_NONRT(c) BUG_ON(c)
++# define WARN_ON_RT(condition) do { } while (0)
++# define WARN_ON_NONRT(condition) WARN_ON(condition)
++# define WARN_ON_ONCE_NONRT(condition) WARN_ON_ONCE(condition)
++#endif
++
+ #endif /* __ASSEMBLY__ */
+
+ #endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/blkdev.h linux-4.14/include/linux/blkdev.h
+--- linux-4.14.orig/include/linux/blkdev.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/blkdev.h 2018-09-05 11:05:07.000000000 +0200
+@@ -27,6 +27,7 @@
+ #include <linux/percpu-refcount.h>
+ #include <linux/scatterlist.h>
+ #include <linux/blkzoned.h>
++#include <linux/swork.h>
+
+ struct module;
+ struct scsi_ioctl_command;
+@@ -134,6 +135,9 @@
+ */
+ struct request {
+ struct list_head queuelist;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ struct work_struct work;
++#endif
+ union {
+ struct __call_single_data csd;
+ u64 fifo_time;
+@@ -596,6 +600,7 @@
+ #endif
+ struct rcu_head rcu_head;
+ wait_queue_head_t mq_freeze_wq;
++ struct swork_event mq_pcpu_wake;
+ struct percpu_ref q_usage_counter;
+ struct list_head all_q_node;
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/blk-mq.h linux-4.14/include/linux/blk-mq.h
+--- linux-4.14.orig/include/linux/blk-mq.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/blk-mq.h 2018-09-05 11:05:07.000000000 +0200
+@@ -226,7 +226,7 @@
+ return unique_tag & BLK_MQ_UNIQUE_TAG_MASK;
+ }
+
+-
++void __blk_mq_complete_request_remote_work(struct work_struct *work);
+ int blk_mq_request_started(struct request *rq);
+ void blk_mq_start_request(struct request *rq);
+ void blk_mq_end_request(struct request *rq, blk_status_t error);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/bottom_half.h linux-4.14/include/linux/bottom_half.h
+--- linux-4.14.orig/include/linux/bottom_half.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/bottom_half.h 2018-09-05 11:05:07.000000000 +0200
+@@ -4,6 +4,39 @@
+
+ #include <linux/preempt.h>
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++
++extern void __local_bh_disable(void);
++extern void _local_bh_enable(void);
++extern void __local_bh_enable(void);
++
++static inline void local_bh_disable(void)
++{
++ __local_bh_disable();
++}
++
++static inline void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
++{
++ __local_bh_disable();
++}
++
++static inline void local_bh_enable(void)
++{
++ __local_bh_enable();
++}
++
++static inline void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
++{
++ __local_bh_enable();
++}
++
++static inline void local_bh_enable_ip(unsigned long ip)
++{
++ __local_bh_enable();
++}
++
++#else
++
+ #ifdef CONFIG_TRACE_IRQFLAGS
+ extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
+ #else
+@@ -31,5 +64,6 @@
+ {
+ __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
+ }
++#endif
+
+ #endif /* _LINUX_BH_H */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/buffer_head.h linux-4.14/include/linux/buffer_head.h
+--- linux-4.14.orig/include/linux/buffer_head.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/buffer_head.h 2018-09-05 11:05:07.000000000 +0200
+@@ -76,8 +76,50 @@
+ struct address_space *b_assoc_map; /* mapping this buffer is
+ associated with */
+ atomic_t b_count; /* users using this buffer_head */
++#ifdef CONFIG_PREEMPT_RT_BASE
++ spinlock_t b_uptodate_lock;
++#if IS_ENABLED(CONFIG_JBD2)
++ spinlock_t b_state_lock;
++ spinlock_t b_journal_head_lock;
++#endif
++#endif
+ };
+
++static inline unsigned long bh_uptodate_lock_irqsave(struct buffer_head *bh)
++{
++ unsigned long flags;
++
++#ifndef CONFIG_PREEMPT_RT_BASE
++ local_irq_save(flags);
++ bit_spin_lock(BH_Uptodate_Lock, &bh->b_state);
++#else
++ spin_lock_irqsave(&bh->b_uptodate_lock, flags);
++#endif
++ return flags;
++}
++
++static inline void
++bh_uptodate_unlock_irqrestore(struct buffer_head *bh, unsigned long flags)
++{
++#ifndef CONFIG_PREEMPT_RT_BASE
++ bit_spin_unlock(BH_Uptodate_Lock, &bh->b_state);
++ local_irq_restore(flags);
++#else
++ spin_unlock_irqrestore(&bh->b_uptodate_lock, flags);
++#endif
++}
++
++static inline void buffer_head_init_locks(struct buffer_head *bh)
++{
++#ifdef CONFIG_PREEMPT_RT_BASE
++ spin_lock_init(&bh->b_uptodate_lock);
++#if IS_ENABLED(CONFIG_JBD2)
++ spin_lock_init(&bh->b_state_lock);
++ spin_lock_init(&bh->b_journal_head_lock);
++#endif
++#endif
++}
++
+ /*
+ * macro tricks to expand the set_buffer_foo(), clear_buffer_foo()
+ * and buffer_foo() functions.
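The two helpers added above collapse the open-coded local_irq_save() + bit_spin_lock(BH_Uptodate_Lock, ...) sequences used by the buffer end_io paths into a single primitive that becomes a real, sleepable spinlock under PREEMPT_RT_BASE. The fs/buffer.c, ext4, ntfs and xfs hunks earlier in this patch all convert to the same shape, sketched here with an illustrative function name:

    #include <linux/buffer_head.h>

    static void demo_end_io(struct buffer_head *first)
    {
            unsigned long flags;

            /* replaces local_irq_save() + bit_spin_lock(BH_Uptodate_Lock, ...) */
            flags = bh_uptodate_lock_irqsave(first);
            /* ... walk the buffers on the page under the lock ... */
            bh_uptodate_unlock_irqrestore(first, flags);
    }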
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/cgroup-defs.h linux-4.14/include/linux/cgroup-defs.h
+--- linux-4.14.orig/include/linux/cgroup-defs.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/cgroup-defs.h 2018-09-05 11:05:07.000000000 +0200
+@@ -19,6 +19,7 @@
+ #include <linux/percpu-rwsem.h>
+ #include <linux/workqueue.h>
+ #include <linux/bpf-cgroup.h>
++#include <linux/swork.h>
+
+ #ifdef CONFIG_CGROUPS
+
+@@ -152,6 +153,7 @@
+ /* percpu_ref killing and RCU release */
+ struct rcu_head rcu_head;
+ struct work_struct destroy_work;
++ struct swork_event destroy_swork;
+
+ /*
+ * PI: the parent css. Placed here for cache proximity to following
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/completion.h linux-4.14/include/linux/completion.h
+--- linux-4.14.orig/include/linux/completion.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/completion.h 2018-09-05 11:05:07.000000000 +0200
+@@ -9,7 +9,7 @@
+ * See kernel/sched/completion.c for details.
+ */
+
+-#include <linux/wait.h>
++#include <linux/swait.h>
+ #ifdef CONFIG_LOCKDEP_COMPLETIONS
+ #include <linux/lockdep.h>
+ #endif
+@@ -28,7 +28,7 @@
+ */
+ struct completion {
+ unsigned int done;
+- wait_queue_head_t wait;
++ struct swait_queue_head wait;
+ #ifdef CONFIG_LOCKDEP_COMPLETIONS
+ struct lockdep_map_cross map;
+ #endif
+@@ -67,11 +67,11 @@
+
+ #ifdef CONFIG_LOCKDEP_COMPLETIONS
+ #define COMPLETION_INITIALIZER(work) \
+- { 0, __WAIT_QUEUE_HEAD_INITIALIZER((work).wait), \
++ { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait), \
+ STATIC_CROSS_LOCKDEP_MAP_INIT("(complete)" #work, &(work)) }
+ #else
+ #define COMPLETION_INITIALIZER(work) \
+- { 0, __WAIT_QUEUE_HEAD_INITIALIZER((work).wait) }
++ { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait) }
+ #endif
+
+ #define COMPLETION_INITIALIZER_ONSTACK(work) \
+@@ -117,7 +117,7 @@
+ static inline void __init_completion(struct completion *x)
+ {
+ x->done = 0;
+- init_waitqueue_head(&x->wait);
++ init_swait_queue_head(&x->wait);
+ }
+
+ /**
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/cpu.h linux-4.14/include/linux/cpu.h
+--- linux-4.14.orig/include/linux/cpu.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/cpu.h 2018-09-05 11:05:07.000000000 +0200
+@@ -120,6 +120,8 @@
+ extern void cpu_hotplug_enable(void);
+ void clear_tasks_mm_cpumask(int cpu);
+ int cpu_down(unsigned int cpu);
++extern void pin_current_cpu(void);
++extern void unpin_current_cpu(void);
+
+ #else /* CONFIG_HOTPLUG_CPU */
+
+@@ -130,6 +132,9 @@
+ static inline void lockdep_assert_cpus_held(void) { }
+ static inline void cpu_hotplug_disable(void) { }
+ static inline void cpu_hotplug_enable(void) { }
++static inline void pin_current_cpu(void) { }
++static inline void unpin_current_cpu(void) { }
++
+ #endif /* !CONFIG_HOTPLUG_CPU */
+
+ /* Wrappers which go away once all code is converted */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/dcache.h linux-4.14/include/linux/dcache.h
+--- linux-4.14.orig/include/linux/dcache.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/dcache.h 2018-09-05 11:05:07.000000000 +0200
+@@ -107,7 +107,7 @@
+
+ union {
+ struct list_head d_lru; /* LRU list */
+- wait_queue_head_t *d_wait; /* in-lookup ones only */
++ struct swait_queue_head *d_wait; /* in-lookup ones only */
+ };
+ struct list_head d_child; /* child of parent list */
+ struct list_head d_subdirs; /* our children */
+@@ -238,7 +238,7 @@
+ extern struct dentry * d_alloc(struct dentry *, const struct qstr *);
+ extern struct dentry * d_alloc_pseudo(struct super_block *, const struct qstr *);
+ extern struct dentry * d_alloc_parallel(struct dentry *, const struct qstr *,
+- wait_queue_head_t *);
++ struct swait_queue_head *);
+ extern struct dentry * d_splice_alias(struct inode *, struct dentry *);
+ extern struct dentry * d_add_ci(struct dentry *, struct inode *, struct qstr *);
+ extern struct dentry * d_exact_alias(struct dentry *, struct inode *);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/delay.h linux-4.14/include/linux/delay.h
+--- linux-4.14.orig/include/linux/delay.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/delay.h 2018-09-05 11:05:07.000000000 +0200
+@@ -64,4 +64,10 @@
+ msleep(seconds * 1000);
+ }
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++extern void cpu_chill(void);
++#else
++# define cpu_chill() cpu_relax()
++#endif
++
+ #endif /* defined(_LINUX_DELAY_H) */
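A hedged sketch of where cpu_chill() is meant to be used: retry loops that
would otherwise spin with cpu_relax() while the lock or flag holder is a
preempted task on PREEMPT_RT_FULL (names are hypothetical):

  #include <linux/delay.h>
  #include <linux/bitops.h>

  static void demo_wait_for_bit(unsigned long *word, int bit)
  {
  	while (test_bit(bit, word))
  		cpu_chill();	/* cpu_relax() on !RT, a short sleep on RT */
  }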
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/fs.h linux-4.14/include/linux/fs.h
+--- linux-4.14.orig/include/linux/fs.h 2018-09-05 11:03:29.000000000 +0200
++++ linux-4.14/include/linux/fs.h 2018-09-05 11:05:07.000000000 +0200
+@@ -655,7 +655,7 @@
+ struct block_device *i_bdev;
+ struct cdev *i_cdev;
+ char *i_link;
+- unsigned i_dir_seq;
++ unsigned __i_dir_seq;
+ };
+
+ __u32 i_generation;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/highmem.h linux-4.14/include/linux/highmem.h
+--- linux-4.14.orig/include/linux/highmem.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/highmem.h 2018-09-05 11:05:07.000000000 +0200
+@@ -8,6 +8,7 @@
+ #include <linux/mm.h>
+ #include <linux/uaccess.h>
+ #include <linux/hardirq.h>
++#include <linux/sched.h>
+
+ #include <asm/cacheflush.h>
+
+@@ -66,7 +67,7 @@
+
+ static inline void *kmap_atomic(struct page *page)
+ {
+- preempt_disable();
++ preempt_disable_nort();
+ pagefault_disable();
+ return page_address(page);
+ }
+@@ -75,7 +76,7 @@
+ static inline void __kunmap_atomic(void *addr)
+ {
+ pagefault_enable();
+- preempt_enable();
++ preempt_enable_nort();
+ }
+
+ #define kmap_atomic_pfn(pfn) kmap_atomic(pfn_to_page(pfn))
+@@ -87,32 +88,51 @@
+
+ #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
+
++#ifndef CONFIG_PREEMPT_RT_FULL
+ DECLARE_PER_CPU(int, __kmap_atomic_idx);
++#endif
+
+ static inline int kmap_atomic_idx_push(void)
+ {
++#ifndef CONFIG_PREEMPT_RT_FULL
+ int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
+
+-#ifdef CONFIG_DEBUG_HIGHMEM
++# ifdef CONFIG_DEBUG_HIGHMEM
+ WARN_ON_ONCE(in_irq() && !irqs_disabled());
+ BUG_ON(idx >= KM_TYPE_NR);
+-#endif
++# endif
+ return idx;
++#else
++ current->kmap_idx++;
++ BUG_ON(current->kmap_idx > KM_TYPE_NR);
++ return current->kmap_idx - 1;
++#endif
+ }
+
+ static inline int kmap_atomic_idx(void)
+ {
++#ifndef CONFIG_PREEMPT_RT_FULL
+ return __this_cpu_read(__kmap_atomic_idx) - 1;
++#else
++ return current->kmap_idx - 1;
++#endif
+ }
+
+ static inline void kmap_atomic_idx_pop(void)
+ {
+-#ifdef CONFIG_DEBUG_HIGHMEM
++#ifndef CONFIG_PREEMPT_RT_FULL
++# ifdef CONFIG_DEBUG_HIGHMEM
+ int idx = __this_cpu_dec_return(__kmap_atomic_idx);
+
+ BUG_ON(idx < 0);
+-#else
++# else
+ __this_cpu_dec(__kmap_atomic_idx);
++# endif
++#else
++ current->kmap_idx--;
++# ifdef CONFIG_DEBUG_HIGHMEM
++ BUG_ON(current->kmap_idx < 0);
++# endif
+ #endif
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/hrtimer.h linux-4.14/include/linux/hrtimer.h
+--- linux-4.14.orig/include/linux/hrtimer.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/hrtimer.h 2018-09-05 11:05:07.000000000 +0200
+@@ -22,19 +22,42 @@
+ #include <linux/percpu.h>
+ #include <linux/timer.h>
+ #include <linux/timerqueue.h>
++#include <linux/wait.h>
+
+ struct hrtimer_clock_base;
+ struct hrtimer_cpu_base;
+
+ /*
+ * Mode arguments of xxx_hrtimer functions:
++ *
++ * HRTIMER_MODE_ABS - Time value is absolute
++ * HRTIMER_MODE_REL - Time value is relative to now
++ * HRTIMER_MODE_PINNED - Timer is bound to CPU (is only considered
++ * when starting the timer)
++ * HRTIMER_MODE_SOFT - Timer callback function will be executed in
++ * soft irq context
+ */
+ enum hrtimer_mode {
+- HRTIMER_MODE_ABS = 0x0, /* Time value is absolute */
+- HRTIMER_MODE_REL = 0x1, /* Time value is relative to now */
+- HRTIMER_MODE_PINNED = 0x02, /* Timer is bound to CPU */
+- HRTIMER_MODE_ABS_PINNED = 0x02,
+- HRTIMER_MODE_REL_PINNED = 0x03,
++ HRTIMER_MODE_ABS = 0x00,
++ HRTIMER_MODE_REL = 0x01,
++ HRTIMER_MODE_PINNED = 0x02,
++ HRTIMER_MODE_SOFT = 0x04,
++ HRTIMER_MODE_HARD = 0x08,
++
++ HRTIMER_MODE_ABS_PINNED = HRTIMER_MODE_ABS | HRTIMER_MODE_PINNED,
++ HRTIMER_MODE_REL_PINNED = HRTIMER_MODE_REL | HRTIMER_MODE_PINNED,
++
++ HRTIMER_MODE_ABS_SOFT = HRTIMER_MODE_ABS | HRTIMER_MODE_SOFT,
++ HRTIMER_MODE_REL_SOFT = HRTIMER_MODE_REL | HRTIMER_MODE_SOFT,
++
++ HRTIMER_MODE_ABS_PINNED_SOFT = HRTIMER_MODE_ABS_PINNED | HRTIMER_MODE_SOFT,
++ HRTIMER_MODE_REL_PINNED_SOFT = HRTIMER_MODE_REL_PINNED | HRTIMER_MODE_SOFT,
++
++ HRTIMER_MODE_ABS_HARD = HRTIMER_MODE_ABS | HRTIMER_MODE_HARD,
++ HRTIMER_MODE_REL_HARD = HRTIMER_MODE_REL | HRTIMER_MODE_HARD,
++
++ HRTIMER_MODE_ABS_PINNED_HARD = HRTIMER_MODE_ABS_PINNED | HRTIMER_MODE_HARD,
++ HRTIMER_MODE_REL_PINNED_HARD = HRTIMER_MODE_REL_PINNED | HRTIMER_MODE_HARD,
+ };
+
+ /*
+@@ -87,6 +110,7 @@
+ * @base: pointer to the timer base (per cpu and per clock)
+ * @state: state information (See bit values above)
+ * @is_rel: Set if the timer was armed relative
++ * @is_soft: Set if hrtimer will be expired in soft interrupt context.
+ *
+ * The hrtimer structure must be initialized by hrtimer_init()
+ */
+@@ -97,6 +121,7 @@
+ struct hrtimer_clock_base *base;
+ u8 state;
+ u8 is_rel;
++ u8 is_soft;
+ };
+
+ /**
+@@ -112,9 +137,9 @@
+ };
+
+ #ifdef CONFIG_64BIT
+-# define HRTIMER_CLOCK_BASE_ALIGN 64
++# define __hrtimer_clock_base_align ____cacheline_aligned
+ #else
+-# define HRTIMER_CLOCK_BASE_ALIGN 32
++# define __hrtimer_clock_base_align
+ #endif
+
+ /**
+@@ -123,48 +148,57 @@
+ * @index: clock type index for per_cpu support when moving a
+ * timer to a base on another cpu.
+ * @clockid: clock id for per_cpu support
++ * @seq: seqcount around __run_hrtimer
++ * @running: pointer to the currently running hrtimer
+ * @active: red black tree root node for the active timers
+ * @get_time: function to retrieve the current time of the clock
+ * @offset: offset of this clock to the monotonic base
+ */
+ struct hrtimer_clock_base {
+ struct hrtimer_cpu_base *cpu_base;
+- int index;
++ unsigned int index;
+ clockid_t clockid;
++ seqcount_t seq;
++ struct hrtimer *running;
+ struct timerqueue_head active;
+ ktime_t (*get_time)(void);
+ ktime_t offset;
+-} __attribute__((__aligned__(HRTIMER_CLOCK_BASE_ALIGN)));
++} __hrtimer_clock_base_align;
+
+ enum hrtimer_base_type {
+ HRTIMER_BASE_MONOTONIC,
+ HRTIMER_BASE_REALTIME,
+ HRTIMER_BASE_BOOTTIME,
+ HRTIMER_BASE_TAI,
++ HRTIMER_BASE_MONOTONIC_SOFT,
++ HRTIMER_BASE_REALTIME_SOFT,
++ HRTIMER_BASE_BOOTTIME_SOFT,
++ HRTIMER_BASE_TAI_SOFT,
+ HRTIMER_MAX_CLOCK_BASES,
+ };
+
+-/*
++/**
+ * struct hrtimer_cpu_base - the per cpu clock bases
+ * @lock: lock protecting the base and associated clock bases
+ * and timers
+- * @seq: seqcount around __run_hrtimer
+- * @running: pointer to the currently running hrtimer
+ * @cpu: cpu number
+ * @active_bases: Bitfield to mark bases with active timers
+ * @clock_was_set_seq: Sequence counter of clock was set events
+- * @migration_enabled: The migration of hrtimers to other cpus is enabled
+- * @nohz_active: The nohz functionality is enabled
+- * @expires_next: absolute time of the next event which was scheduled
+- * via clock_set_next_event()
+- * @next_timer: Pointer to the first expiring timer
+- * @in_hrtirq: hrtimer_interrupt() is currently executing
+ * @hres_active: State of high resolution mode
++ * @in_hrtirq: hrtimer_interrupt() is currently executing
+ * @hang_detected: The last hrtimer interrupt detected a hang
+ * @softirq_activated: indicates whether the softirq was raised; if so, no
+ * update of softirq related settings is required.
+ * @nr_events: Total number of hrtimer interrupt events
+ * @nr_retries: Total number of hrtimer interrupt retries
+ * @nr_hangs: Total number of hrtimer interrupt hangs
+ * @max_hang_time: Maximum time spent in hrtimer_interrupt
++ * @expires_next: absolute time of the next event, is required for remote
++ * hrtimer enqueue; it is the total first expiry time (hard
++ * and soft hrtimer are taken into account)
++ * @next_timer: Pointer to the first expiring timer
+ * @softirq_expires_next: Time at which to check whether the soft queues also need to be expired
++ * @softirq_next_timer: Pointer to the first expiring softirq based timer
+ * @clock_base: array of clock bases for this cpu
+ *
+ * Note: next_timer is just an optimization for __remove_hrtimer().
+@@ -173,31 +207,31 @@
+ */
+ struct hrtimer_cpu_base {
+ raw_spinlock_t lock;
+- seqcount_t seq;
+- struct hrtimer *running;
+ unsigned int cpu;
+ unsigned int active_bases;
+ unsigned int clock_was_set_seq;
+- bool migration_enabled;
+- bool nohz_active;
++ unsigned int hres_active : 1,
++ in_hrtirq : 1,
++ hang_detected : 1,
++ softirq_activated : 1;
+ #ifdef CONFIG_HIGH_RES_TIMERS
+- unsigned int in_hrtirq : 1,
+- hres_active : 1,
+- hang_detected : 1;
+- ktime_t expires_next;
+- struct hrtimer *next_timer;
+ unsigned int nr_events;
+- unsigned int nr_retries;
+- unsigned int nr_hangs;
++ unsigned short nr_retries;
++ unsigned short nr_hangs;
+ unsigned int max_hang_time;
+ #endif
++ ktime_t expires_next;
++ struct hrtimer *next_timer;
++ ktime_t softirq_expires_next;
++#ifdef CONFIG_PREEMPT_RT_BASE
++ wait_queue_head_t wait;
++#endif
++ struct hrtimer *softirq_next_timer;
+ struct hrtimer_clock_base clock_base[HRTIMER_MAX_CLOCK_BASES];
+ } ____cacheline_aligned;
+
+ static inline void hrtimer_set_expires(struct hrtimer *timer, ktime_t time)
+ {
+- BUILD_BUG_ON(sizeof(struct hrtimer_clock_base) > HRTIMER_CLOCK_BASE_ALIGN);
+-
+ timer->node.expires = time;
+ timer->_softexpires = time;
+ }
+@@ -266,16 +300,17 @@
+ return timer->base->get_time();
+ }
+
++static inline int hrtimer_is_hres_active(struct hrtimer *timer)
++{
++ return IS_ENABLED(CONFIG_HIGH_RES_TIMERS) ?
++ timer->base->cpu_base->hres_active : 0;
++}
++
+ #ifdef CONFIG_HIGH_RES_TIMERS
+ struct clock_event_device;
+
+ extern void hrtimer_interrupt(struct clock_event_device *dev);
+
+-static inline int hrtimer_is_hres_active(struct hrtimer *timer)
+-{
+- return timer->base->cpu_base->hres_active;
+-}
+-
+ /*
+ * The resolution of the clocks. The resolution value is returned in
+ * the clock_getres() system call to give application programmers an
+@@ -298,11 +333,6 @@
+
+ #define hrtimer_resolution (unsigned int)LOW_RES_NSEC
+
+-static inline int hrtimer_is_hres_active(struct hrtimer *timer)
+-{
+- return 0;
+-}
+-
+ static inline void clock_was_set_delayed(void) { }
+
+ #endif
+@@ -344,10 +374,17 @@
+ /* Initialize timers: */
+ extern void hrtimer_init(struct hrtimer *timer, clockid_t which_clock,
+ enum hrtimer_mode mode);
++extern void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, clockid_t clock_id,
++ enum hrtimer_mode mode,
++ struct task_struct *task);
+
+ #ifdef CONFIG_DEBUG_OBJECTS_TIMERS
+ extern void hrtimer_init_on_stack(struct hrtimer *timer, clockid_t which_clock,
+ enum hrtimer_mode mode);
++extern void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
++ clockid_t clock_id,
++ enum hrtimer_mode mode,
++ struct task_struct *task);
+
+ extern void destroy_hrtimer_on_stack(struct hrtimer *timer);
+ #else
+@@ -357,6 +394,15 @@
+ {
+ hrtimer_init(timer, which_clock, mode);
+ }
++
++static inline void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
++ clockid_t clock_id,
++ enum hrtimer_mode mode,
++ struct task_struct *task)
++{
++ hrtimer_init_sleeper(sl, clock_id, mode, task);
++}
++
+ static inline void destroy_hrtimer_on_stack(struct hrtimer *timer) { }
+ #endif
+
+@@ -365,11 +411,12 @@
+ u64 range_ns, const enum hrtimer_mode mode);
+
+ /**
+- * hrtimer_start - (re)start an hrtimer on the current CPU
++ * hrtimer_start - (re)start an hrtimer
+ * @timer: the timer to be added
+ * @tim: expiry time
+- * @mode: expiry mode: absolute (HRTIMER_MODE_ABS) or
+- * relative (HRTIMER_MODE_REL)
++ * @mode: timer mode: absolute (HRTIMER_MODE_ABS) or
++ * relative (HRTIMER_MODE_REL), and pinned (HRTIMER_MODE_PINNED);
++ * softirq based mode is considered for debug purpose only!
+ */
+ static inline void hrtimer_start(struct hrtimer *timer, ktime_t tim,
+ const enum hrtimer_mode mode)
+@@ -396,6 +443,13 @@
+ hrtimer_start_expires(timer, HRTIMER_MODE_ABS);
+ }
+
++/* Softirq preemption could deadlock timer removal */
++#ifdef CONFIG_PREEMPT_RT_BASE
++ extern void hrtimer_wait_for_timer(const struct hrtimer *timer);
++#else
++# define hrtimer_wait_for_timer(timer) do { cpu_relax(); } while (0)
++#endif
++
+ /* Query timers: */
+ extern ktime_t __hrtimer_get_remaining(const struct hrtimer *timer, bool adjust);
+
+@@ -420,9 +474,9 @@
+ * Helper function to check, whether the timer is running the callback
+ * function
+ */
+-static inline int hrtimer_callback_running(struct hrtimer *timer)
++static inline int hrtimer_callback_running(const struct hrtimer *timer)
+ {
+- return timer->base->cpu_base->running == timer;
++ return timer->base->running == timer;
+ }
+
+ /* Forward a hrtimer so it expires after now: */
+@@ -458,15 +512,12 @@
+ const enum hrtimer_mode mode,
+ const clockid_t clockid);
+
+-extern void hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
+- struct task_struct *tsk);
+-
+ extern int schedule_hrtimeout_range(ktime_t *expires, u64 delta,
+ const enum hrtimer_mode mode);
+ extern int schedule_hrtimeout_range_clock(ktime_t *expires,
+ u64 delta,
+ const enum hrtimer_mode mode,
+- int clock);
++ clockid_t clock_id);
+ extern int schedule_hrtimeout(ktime_t *expires, const enum hrtimer_mode mode);
+
+ /* Soft interrupt function to run the hrtimer queues: */
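A short sketch of how the composed mode flags above are used: arming a
relative timer whose callback runs from softirq context. The timer and
callback names are hypothetical:

  #include <linux/hrtimer.h>
  #include <linux/ktime.h>

  static struct hrtimer demo_timer;

  static enum hrtimer_restart demo_timer_fn(struct hrtimer *t)
  {
  	/* expires in softirq context because of HRTIMER_MODE_REL_SOFT */
  	return HRTIMER_NORESTART;
  }

  static void demo_arm_timer(void)
  {
  	hrtimer_init(&demo_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
  	demo_timer.function = demo_timer_fn;
  	hrtimer_start(&demo_timer, ms_to_ktime(10), HRTIMER_MODE_REL_SOFT);
  }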
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/idr.h linux-4.14/include/linux/idr.h
+--- linux-4.14.orig/include/linux/idr.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/idr.h 2018-09-05 11:05:07.000000000 +0200
+@@ -167,10 +167,7 @@
+ * Each idr_preload() should be matched with an invocation of this
+ * function. See idr_preload() for details.
+ */
+-static inline void idr_preload_end(void)
+-{
+- preempt_enable();
+-}
++void idr_preload_end(void);
+
+ /**
+ * idr_find - return pointer for given id
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/init_task.h linux-4.14/include/linux/init_task.h
+--- linux-4.14.orig/include/linux/init_task.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/init_task.h 2018-09-05 11:05:07.000000000 +0200
+@@ -163,6 +163,12 @@
+ # define INIT_PERF_EVENTS(tsk)
+ #endif
+
++#if defined(CONFIG_POSIX_TIMERS) && defined(CONFIG_PREEMPT_RT_BASE)
++# define INIT_TIMER_LIST .posix_timer_list = NULL,
++#else
++# define INIT_TIMER_LIST
++#endif
++
+ #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN
+ # define INIT_VTIME(tsk) \
+ .vtime.seqcount = SEQCNT_ZERO(tsk.vtime.seqcount), \
+@@ -234,7 +240,8 @@
+ .static_prio = MAX_PRIO-20, \
+ .normal_prio = MAX_PRIO-20, \
+ .policy = SCHED_NORMAL, \
+- .cpus_allowed = CPU_MASK_ALL, \
++ .cpus_ptr = &tsk.cpus_mask, \
++ .cpus_mask = CPU_MASK_ALL, \
+ .nr_cpus_allowed= NR_CPUS, \
+ .mm = NULL, \
+ .active_mm = &init_mm, \
+@@ -276,6 +283,7 @@
+ INIT_CPU_TIMERS(tsk) \
+ .pi_lock = __RAW_SPIN_LOCK_UNLOCKED(tsk.pi_lock), \
+ .timer_slack_ns = 50000, /* 50 usec default slack */ \
++ INIT_TIMER_LIST \
+ .pids = { \
+ [PIDTYPE_PID] = INIT_PID_LINK(PIDTYPE_PID), \
+ [PIDTYPE_PGID] = INIT_PID_LINK(PIDTYPE_PGID), \
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/interrupt.h linux-4.14/include/linux/interrupt.h
+--- linux-4.14.orig/include/linux/interrupt.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/interrupt.h 2018-09-05 11:05:07.000000000 +0200
+@@ -15,6 +15,7 @@
+ #include <linux/hrtimer.h>
+ #include <linux/kref.h>
+ #include <linux/workqueue.h>
++#include <linux/swork.h>
+
+ #include <linux/atomic.h>
+ #include <asm/ptrace.h>
+@@ -63,6 +64,7 @@
+ * interrupt handler after suspending interrupts. For system
+ * wakeup devices users need to implement wakeup detection in
+ * their interrupt handlers.
++ * IRQF_NO_SOFTIRQ_CALL - Do not process softirqs in the irq thread context (RT)
+ */
+ #define IRQF_SHARED 0x00000080
+ #define IRQF_PROBE_SHARED 0x00000100
+@@ -76,6 +78,7 @@
+ #define IRQF_NO_THREAD 0x00010000
+ #define IRQF_EARLY_RESUME 0x00020000
+ #define IRQF_COND_SUSPEND 0x00040000
++#define IRQF_NO_SOFTIRQ_CALL 0x00080000
+
+ #define IRQF_TIMER (__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
+
+@@ -207,7 +210,7 @@
+ #ifdef CONFIG_LOCKDEP
+ # define local_irq_enable_in_hardirq() do { } while (0)
+ #else
+-# define local_irq_enable_in_hardirq() local_irq_enable()
++# define local_irq_enable_in_hardirq() local_irq_enable_nort()
+ #endif
+
+ extern void disable_irq_nosync(unsigned int irq);
+@@ -227,6 +230,7 @@
+ * struct irq_affinity_notify - context for notification of IRQ affinity changes
+ * @irq: Interrupt to which notification applies
+ * @kref: Reference count, for internal use
++ * @swork: Swork item, for internal use
+ * @work: Work item, for internal use
+ * @notify: Function to be called on change. This will be
+ * called in process context.
+@@ -238,7 +242,11 @@
+ struct irq_affinity_notify {
+ unsigned int irq;
+ struct kref kref;
++#ifdef CONFIG_PREEMPT_RT_BASE
++ struct swork_event swork;
++#else
+ struct work_struct work;
++#endif
+ void (*notify)(struct irq_affinity_notify *, const cpumask_t *mask);
+ void (*release)(struct kref *ref);
+ };
+@@ -429,9 +437,13 @@
+ bool state);
+
+ #ifdef CONFIG_IRQ_FORCED_THREADING
++# ifndef CONFIG_PREEMPT_RT_BASE
+ extern bool force_irqthreads;
++# else
++# define force_irqthreads (true)
++# endif
+ #else
+-#define force_irqthreads (0)
++#define force_irqthreads (false)
+ #endif
+
+ #ifndef __ARCH_SET_SOFTIRQ_PENDING
+@@ -488,9 +500,10 @@
+ void (*action)(struct softirq_action *);
+ };
+
++#ifndef CONFIG_PREEMPT_RT_FULL
+ asmlinkage void do_softirq(void);
+ asmlinkage void __do_softirq(void);
+-
++static inline void thread_do_softirq(void) { do_softirq(); }
+ #ifdef __ARCH_HAS_DO_SOFTIRQ
+ void do_softirq_own_stack(void);
+ #else
+@@ -499,13 +512,25 @@
+ __do_softirq();
+ }
+ #endif
++#else
++extern void thread_do_softirq(void);
++#endif
+
+ extern void open_softirq(int nr, void (*action)(struct softirq_action *));
+ extern void softirq_init(void);
+ extern void __raise_softirq_irqoff(unsigned int nr);
++#ifdef CONFIG_PREEMPT_RT_FULL
++extern void __raise_softirq_irqoff_ksoft(unsigned int nr);
++#else
++static inline void __raise_softirq_irqoff_ksoft(unsigned int nr)
++{
++ __raise_softirq_irqoff(nr);
++}
++#endif
+
+ extern void raise_softirq_irqoff(unsigned int nr);
+ extern void raise_softirq(unsigned int nr);
++extern void softirq_check_pending_idle(void);
+
+ DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
+
+@@ -527,8 +552,9 @@
+ to be executed on some cpu at least once after this.
+ * If the tasklet is already scheduled, but its execution is still not
+ started, it will be executed only once.
+- * If this tasklet is already running on another CPU (or schedule is called
+- from tasklet itself), it is rescheduled for later.
++ * If this tasklet is already running on another CPU, it is rescheduled
++ for later.
++ * Schedule must not be called from the tasklet itself (a lockup occurs)
+ * Tasklet is strictly serialized wrt itself, but not
+ wrt another tasklets. If client needs some intertask synchronization,
+ he makes it with spinlocks.
+@@ -553,27 +579,36 @@
+ enum
+ {
+ TASKLET_STATE_SCHED, /* Tasklet is scheduled for execution */
+- TASKLET_STATE_RUN /* Tasklet is running (SMP only) */
++ TASKLET_STATE_RUN, /* Tasklet is running (SMP only) */
++ TASKLET_STATE_PENDING /* Tasklet is pending */
+ };
+
+-#ifdef CONFIG_SMP
++#define TASKLET_STATEF_SCHED (1 << TASKLET_STATE_SCHED)
++#define TASKLET_STATEF_RUN (1 << TASKLET_STATE_RUN)
++#define TASKLET_STATEF_PENDING (1 << TASKLET_STATE_PENDING)
++
++#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
+ static inline int tasklet_trylock(struct tasklet_struct *t)
+ {
+ return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
+ }
+
++static inline int tasklet_tryunlock(struct tasklet_struct *t)
++{
++ return cmpxchg(&t->state, TASKLET_STATEF_RUN, 0) == TASKLET_STATEF_RUN;
++}
++
+ static inline void tasklet_unlock(struct tasklet_struct *t)
+ {
+ smp_mb__before_atomic();
+ clear_bit(TASKLET_STATE_RUN, &(t)->state);
+ }
+
+-static inline void tasklet_unlock_wait(struct tasklet_struct *t)
+-{
+- while (test_bit(TASKLET_STATE_RUN, &(t)->state)) { barrier(); }
+-}
++extern void tasklet_unlock_wait(struct tasklet_struct *t);
++
+ #else
+ #define tasklet_trylock(t) 1
++#define tasklet_tryunlock(t) 1
+ #define tasklet_unlock_wait(t) do { } while (0)
+ #define tasklet_unlock(t) do { } while (0)
+ #endif
+@@ -607,41 +642,17 @@
+ smp_mb();
+ }
+
+-static inline void tasklet_enable(struct tasklet_struct *t)
+-{
+- smp_mb__before_atomic();
+- atomic_dec(&t->count);
+-}
+-
++extern void tasklet_enable(struct tasklet_struct *t);
+ extern void tasklet_kill(struct tasklet_struct *t);
+ extern void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu);
+ extern void tasklet_init(struct tasklet_struct *t,
+ void (*func)(unsigned long), unsigned long data);
+
+-struct tasklet_hrtimer {
+- struct hrtimer timer;
+- struct tasklet_struct tasklet;
+- enum hrtimer_restart (*function)(struct hrtimer *);
+-};
+-
+-extern void
+-tasklet_hrtimer_init(struct tasklet_hrtimer *ttimer,
+- enum hrtimer_restart (*function)(struct hrtimer *),
+- clockid_t which_clock, enum hrtimer_mode mode);
+-
+-static inline
+-void tasklet_hrtimer_start(struct tasklet_hrtimer *ttimer, ktime_t time,
+- const enum hrtimer_mode mode)
+-{
+- hrtimer_start(&ttimer->timer, time, mode);
+-}
+-
+-static inline
+-void tasklet_hrtimer_cancel(struct tasklet_hrtimer *ttimer)
+-{
+- hrtimer_cancel(&ttimer->timer);
+- tasklet_kill(&ttimer->tasklet);
+-}
++#ifdef CONFIG_PREEMPT_RT_FULL
++extern void softirq_early_init(void);
++#else
++static inline void softirq_early_init(void) { }
++#endif
+
+ /*
+ * Autoprobing for irqs:
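A hypothetical request_irq() call using the new IRQF_NO_SOFTIRQ_CALL flag,
which keeps the forced irq thread from running pending softirqs after the
handler returns (RT only); the device name and handler are made up:

  #include <linux/interrupt.h>

  static irqreturn_t demo_handler(int irq, void *dev_id)
  {
  	return IRQ_HANDLED;
  }

  static int demo_setup_irq(unsigned int irq, void *dev)
  {
  	return request_irq(irq, demo_handler, IRQF_NO_SOFTIRQ_CALL,
  			   "demo-dev", dev);
  }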
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/irqdesc.h linux-4.14/include/linux/irqdesc.h
+--- linux-4.14.orig/include/linux/irqdesc.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/irqdesc.h 2018-09-05 11:05:07.000000000 +0200
+@@ -70,6 +70,7 @@
+ unsigned int irqs_unhandled;
+ atomic_t threads_handled;
+ int threads_handled_last;
++ u64 random_ip;
+ raw_spinlock_t lock;
+ struct cpumask *percpu_enabled;
+ const struct cpumask *percpu_affinity;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/irqflags.h linux-4.14/include/linux/irqflags.h
+--- linux-4.14.orig/include/linux/irqflags.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/irqflags.h 2018-09-05 11:05:07.000000000 +0200
+@@ -34,16 +34,6 @@
+ current->hardirq_context--; \
+ crossrelease_hist_end(XHLOCK_HARD); \
+ } while (0)
+-# define lockdep_softirq_enter() \
+-do { \
+- current->softirq_context++; \
+- crossrelease_hist_start(XHLOCK_SOFT); \
+-} while (0)
+-# define lockdep_softirq_exit() \
+-do { \
+- current->softirq_context--; \
+- crossrelease_hist_end(XHLOCK_SOFT); \
+-} while (0)
+ # define INIT_TRACE_IRQFLAGS .softirqs_enabled = 1,
+ #else
+ # define trace_hardirqs_on() do { } while (0)
+@@ -56,9 +46,23 @@
+ # define trace_softirqs_enabled(p) 0
+ # define trace_hardirq_enter() do { } while (0)
+ # define trace_hardirq_exit() do { } while (0)
++# define INIT_TRACE_IRQFLAGS
++#endif
++
++#if defined(CONFIG_TRACE_IRQFLAGS) && !defined(CONFIG_PREEMPT_RT_FULL)
++# define lockdep_softirq_enter() \
++do { \
++ current->softirq_context++; \
++ crossrelease_hist_start(XHLOCK_SOFT); \
++} while (0)
++# define lockdep_softirq_exit() \
++do { \
++ current->softirq_context--; \
++ crossrelease_hist_end(XHLOCK_SOFT); \
++} while (0)
++#else
+ # define lockdep_softirq_enter() do { } while (0)
+ # define lockdep_softirq_exit() do { } while (0)
+-# define INIT_TRACE_IRQFLAGS
+ #endif
+
+ #if defined(CONFIG_IRQSOFF_TRACER) || \
+@@ -165,4 +169,23 @@
+
+ #define irqs_disabled_flags(flags) raw_irqs_disabled_flags(flags)
+
++/*
++ * local_irq* variants depending on RT/!RT
++ */
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define local_irq_disable_nort() do { } while (0)
++# define local_irq_enable_nort() do { } while (0)
++# define local_irq_save_nort(flags) local_save_flags(flags)
++# define local_irq_restore_nort(flags) (void)(flags)
++# define local_irq_disable_rt() local_irq_disable()
++# define local_irq_enable_rt() local_irq_enable()
++#else
++# define local_irq_disable_nort() local_irq_disable()
++# define local_irq_enable_nort() local_irq_enable()
++# define local_irq_save_nort(flags) local_irq_save(flags)
++# define local_irq_restore_nort(flags) local_irq_restore(flags)
++# define local_irq_disable_rt() do { } while (0)
++# define local_irq_enable_rt() do { } while (0)
++#endif
++
+ #endif
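The _nort variants above are used like their plain counterparts; on
PREEMPT_RT_FULL they compile to no-ops so the section stays preemptible.
A minimal sketch (hypothetical function):

  #include <linux/irqflags.h>

  static void demo_timed_sequence(void)
  {
  	unsigned long flags;

  	local_irq_save_nort(flags);
  	/* short, latency-sensitive hardware access; irqs off only on !RT */
  	local_irq_restore_nort(flags);
  }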
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/irq.h linux-4.14/include/linux/irq.h
+--- linux-4.14.orig/include/linux/irq.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/irq.h 2018-09-05 11:05:07.000000000 +0200
+@@ -74,6 +74,7 @@
+ * IRQ_IS_POLLED - Always polled by another interrupt. Exclude
+ * it from the spurious interrupt detection
+ * mechanism and from core side polling.
++ * IRQ_NO_SOFTIRQ_CALL - No softirq processing in the irq thread context (RT)
+ * IRQ_DISABLE_UNLAZY - Disable lazy irq disable
+ */
+ enum {
+@@ -101,13 +102,14 @@
+ IRQ_PER_CPU_DEVID = (1 << 17),
+ IRQ_IS_POLLED = (1 << 18),
+ IRQ_DISABLE_UNLAZY = (1 << 19),
++ IRQ_NO_SOFTIRQ_CALL = (1 << 20),
+ };
+
+ #define IRQF_MODIFY_MASK \
+ (IRQ_TYPE_SENSE_MASK | IRQ_NOPROBE | IRQ_NOREQUEST | \
+ IRQ_NOAUTOEN | IRQ_MOVE_PCNTXT | IRQ_LEVEL | IRQ_NO_BALANCING | \
+ IRQ_PER_CPU | IRQ_NESTED_THREAD | IRQ_NOTHREAD | IRQ_PER_CPU_DEVID | \
+- IRQ_IS_POLLED | IRQ_DISABLE_UNLAZY)
++ IRQ_IS_POLLED | IRQ_DISABLE_UNLAZY | IRQ_NO_SOFTIRQ_CALL)
+
+ #define IRQ_NO_BALANCING_MASK (IRQ_PER_CPU | IRQ_NO_BALANCING)
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/irq_work.h linux-4.14/include/linux/irq_work.h
+--- linux-4.14.orig/include/linux/irq_work.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/irq_work.h 2018-09-05 11:05:07.000000000 +0200
+@@ -17,6 +17,7 @@
+ #define IRQ_WORK_BUSY 2UL
+ #define IRQ_WORK_FLAGS 3UL
+ #define IRQ_WORK_LAZY 4UL /* Doesn't want IPI, wait for tick */
++#define IRQ_WORK_HARD_IRQ 8UL /* Run hard IRQ context, even on RT */
+
+ struct irq_work {
+ unsigned long flags;
+@@ -52,4 +53,10 @@
+ static inline void irq_work_run(void) { }
+ #endif
+
++#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
++void irq_work_tick_soft(void);
++#else
++static inline void irq_work_tick_soft(void) { }
++#endif
++
+ #endif /* _LINUX_IRQ_WORK_H */
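A hypothetical irq_work user of the new IRQ_WORK_HARD_IRQ flag, which asks
RT to execute the item from hard interrupt context instead of deferring it
to the softirq-driven tick path:

  #include <linux/irq_work.h>

  static void demo_irq_work_fn(struct irq_work *work)
  {
  	/* runs in hard irq context; keep it short */
  }

  static struct irq_work demo_work = {
  	.flags	= IRQ_WORK_HARD_IRQ,
  	.func	= demo_irq_work_fn,
  };

  static void demo_kick(void)
  {
  	irq_work_queue(&demo_work);
  }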
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/jbd2.h linux-4.14/include/linux/jbd2.h
+--- linux-4.14.orig/include/linux/jbd2.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/jbd2.h 2018-09-05 11:05:07.000000000 +0200
+@@ -347,32 +347,56 @@
+
+ static inline void jbd_lock_bh_state(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ bit_spin_lock(BH_State, &bh->b_state);
++#else
++ spin_lock(&bh->b_state_lock);
++#endif
+ }
+
+ static inline int jbd_trylock_bh_state(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ return bit_spin_trylock(BH_State, &bh->b_state);
++#else
++ return spin_trylock(&bh->b_state_lock);
++#endif
+ }
+
+ static inline int jbd_is_locked_bh_state(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ return bit_spin_is_locked(BH_State, &bh->b_state);
++#else
++ return spin_is_locked(&bh->b_state_lock);
++#endif
+ }
+
+ static inline void jbd_unlock_bh_state(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ bit_spin_unlock(BH_State, &bh->b_state);
++#else
++ spin_unlock(&bh->b_state_lock);
++#endif
+ }
+
+ static inline void jbd_lock_bh_journal_head(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ bit_spin_lock(BH_JournalHead, &bh->b_state);
++#else
++ spin_lock(&bh->b_journal_head_lock);
++#endif
+ }
+
+ static inline void jbd_unlock_bh_journal_head(struct buffer_head *bh)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ bit_spin_unlock(BH_JournalHead, &bh->b_state);
++#else
++ spin_unlock(&bh->b_journal_head_lock);
++#endif
+ }
+
+ #define J_ASSERT(assert) BUG_ON(!(assert))
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/kdb.h linux-4.14/include/linux/kdb.h
+--- linux-4.14.orig/include/linux/kdb.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/kdb.h 2018-09-05 11:05:07.000000000 +0200
+@@ -167,6 +167,7 @@
+ extern __printf(1, 2) int kdb_printf(const char *, ...);
+ typedef __printf(1, 2) int (*kdb_printf_t)(const char *, ...);
+
++#define in_kdb_printk() (kdb_trap_printk)
+ extern void kdb_init(int level);
+
+ /* Access to kdb specific polling devices */
+@@ -201,6 +202,7 @@
+ extern int kdb_unregister(char *);
+ #else /* ! CONFIG_KGDB_KDB */
+ static inline __printf(1, 2) int kdb_printf(const char *fmt, ...) { return 0; }
++#define in_kdb_printk() (0)
+ static inline void kdb_init(int level) {}
+ static inline int kdb_register(char *cmd, kdb_func_t func, char *usage,
+ char *help, short minlen) { return 0; }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/kernel.h linux-4.14/include/linux/kernel.h
+--- linux-4.14.orig/include/linux/kernel.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/kernel.h 2018-09-05 11:05:07.000000000 +0200
+@@ -225,6 +225,9 @@
+ */
+ # define might_sleep() \
+ do { __might_sleep(__FILE__, __LINE__, 0); might_resched(); } while (0)
++
++# define might_sleep_no_state_check() \
++ do { ___might_sleep(__FILE__, __LINE__, 0); might_resched(); } while (0)
+ # define sched_annotate_sleep() (current->task_state_change = 0)
+ #else
+ static inline void ___might_sleep(const char *file, int line,
+@@ -232,6 +235,7 @@
+ static inline void __might_sleep(const char *file, int line,
+ int preempt_offset) { }
+ # define might_sleep() do { might_resched(); } while (0)
++# define might_sleep_no_state_check() do { might_resched(); } while (0)
+ # define sched_annotate_sleep() do { } while (0)
+ #endif
+
+@@ -531,6 +535,7 @@
+ SYSTEM_HALT,
+ SYSTEM_POWER_OFF,
+ SYSTEM_RESTART,
++ SYSTEM_SUSPEND,
+ } system_state;
+
+ #define TAINT_PROPRIETARY_MODULE 0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/list_bl.h linux-4.14/include/linux/list_bl.h
+--- linux-4.14.orig/include/linux/list_bl.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/list_bl.h 2018-09-05 11:05:07.000000000 +0200
+@@ -3,6 +3,7 @@
+ #define _LINUX_LIST_BL_H
+
+ #include <linux/list.h>
++#include <linux/spinlock.h>
+ #include <linux/bit_spinlock.h>
+
+ /*
+@@ -33,13 +34,24 @@
+
+ struct hlist_bl_head {
+ struct hlist_bl_node *first;
++#ifdef CONFIG_PREEMPT_RT_BASE
++ raw_spinlock_t lock;
++#endif
+ };
+
+ struct hlist_bl_node {
+ struct hlist_bl_node *next, **pprev;
+ };
+-#define INIT_HLIST_BL_HEAD(ptr) \
+- ((ptr)->first = NULL)
++
++#ifdef CONFIG_PREEMPT_RT_BASE
++#define INIT_HLIST_BL_HEAD(h) \
++do { \
++ (h)->first = NULL; \
++ raw_spin_lock_init(&(h)->lock); \
++} while (0)
++#else
++#define INIT_HLIST_BL_HEAD(h) (h)->first = NULL
++#endif
+
+ static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)
+ {
+@@ -119,12 +131,26 @@
+
+ static inline void hlist_bl_lock(struct hlist_bl_head *b)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ bit_spin_lock(0, (unsigned long *)b);
++#else
++ raw_spin_lock(&b->lock);
++#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
++ __set_bit(0, (unsigned long *)b);
++#endif
++#endif
+ }
+
+ static inline void hlist_bl_unlock(struct hlist_bl_head *b)
+ {
++#ifndef CONFIG_PREEMPT_RT_BASE
+ __bit_spin_unlock(0, (unsigned long *)b);
++#else
++#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
++ __clear_bit(0, (unsigned long *)b);
++#endif
++ raw_spin_unlock(&b->lock);
++#endif
+ }
+
+ static inline bool hlist_bl_is_locked(struct hlist_bl_head *b)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/locallock.h linux-4.14/include/linux/locallock.h
+--- linux-4.14.orig/include/linux/locallock.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/locallock.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,271 @@
++#ifndef _LINUX_LOCALLOCK_H
++#define _LINUX_LOCALLOCK_H
++
++#include <linux/percpu.h>
++#include <linux/spinlock.h>
++
++#ifdef CONFIG_PREEMPT_RT_BASE
++
++#ifdef CONFIG_DEBUG_SPINLOCK
++# define LL_WARN(cond) WARN_ON(cond)
++#else
++# define LL_WARN(cond) do { } while (0)
++#endif
++
++/*
++ * per cpu lock based substitute for local_irq_*()
++ */
++struct local_irq_lock {
++ spinlock_t lock;
++ struct task_struct *owner;
++ int nestcnt;
++ unsigned long flags;
++};
++
++#define DEFINE_LOCAL_IRQ_LOCK(lvar) \
++ DEFINE_PER_CPU(struct local_irq_lock, lvar) = { \
++ .lock = __SPIN_LOCK_UNLOCKED((lvar).lock) }
++
++#define DECLARE_LOCAL_IRQ_LOCK(lvar) \
++ DECLARE_PER_CPU(struct local_irq_lock, lvar)
++
++#define local_irq_lock_init(lvar) \
++ do { \
++ int __cpu; \
++ for_each_possible_cpu(__cpu) \
++ spin_lock_init(&per_cpu(lvar, __cpu).lock); \
++ } while (0)
++
++static inline void __local_lock(struct local_irq_lock *lv)
++{
++ if (lv->owner != current) {
++ spin_lock(&lv->lock);
++ LL_WARN(lv->owner);
++ LL_WARN(lv->nestcnt);
++ lv->owner = current;
++ }
++ lv->nestcnt++;
++}
++
++#define local_lock(lvar) \
++ do { __local_lock(&get_local_var(lvar)); } while (0)
++
++#define local_lock_on(lvar, cpu) \
++ do { __local_lock(&per_cpu(lvar, cpu)); } while (0)
++
++static inline int __local_trylock(struct local_irq_lock *lv)
++{
++ if (lv->owner != current && spin_trylock(&lv->lock)) {
++ LL_WARN(lv->owner);
++ LL_WARN(lv->nestcnt);
++ lv->owner = current;
++ lv->nestcnt = 1;
++ return 1;
++ } else if (lv->owner == current) {
++ lv->nestcnt++;
++ return 1;
++ }
++ return 0;
++}
++
++#define local_trylock(lvar) \
++ ({ \
++ int __locked; \
++ __locked = __local_trylock(&get_local_var(lvar)); \
++ if (!__locked) \
++ put_local_var(lvar); \
++ __locked; \
++ })
++
++static inline void __local_unlock(struct local_irq_lock *lv)
++{
++ LL_WARN(lv->nestcnt == 0);
++ LL_WARN(lv->owner != current);
++ if (--lv->nestcnt)
++ return;
++
++ lv->owner = NULL;
++ spin_unlock(&lv->lock);
++}
++
++#define local_unlock(lvar) \
++ do { \
++ __local_unlock(this_cpu_ptr(&lvar)); \
++ put_local_var(lvar); \
++ } while (0)
++
++#define local_unlock_on(lvar, cpu) \
++ do { __local_unlock(&per_cpu(lvar, cpu)); } while (0)
++
++static inline void __local_lock_irq(struct local_irq_lock *lv)
++{
++ spin_lock_irqsave(&lv->lock, lv->flags);
++ LL_WARN(lv->owner);
++ LL_WARN(lv->nestcnt);
++ lv->owner = current;
++ lv->nestcnt = 1;
++}
++
++#define local_lock_irq(lvar) \
++ do { __local_lock_irq(&get_local_var(lvar)); } while (0)
++
++#define local_lock_irq_on(lvar, cpu) \
++ do { __local_lock_irq(&per_cpu(lvar, cpu)); } while (0)
++
++static inline void __local_unlock_irq(struct local_irq_lock *lv)
++{
++ LL_WARN(!lv->nestcnt);
++ LL_WARN(lv->owner != current);
++ lv->owner = NULL;
++ lv->nestcnt = 0;
++ spin_unlock_irq(&lv->lock);
++}
++
++#define local_unlock_irq(lvar) \
++ do { \
++ __local_unlock_irq(this_cpu_ptr(&lvar)); \
++ put_local_var(lvar); \
++ } while (0)
++
++#define local_unlock_irq_on(lvar, cpu) \
++ do { \
++ __local_unlock_irq(&per_cpu(lvar, cpu)); \
++ } while (0)
++
++static inline int __local_lock_irqsave(struct local_irq_lock *lv)
++{
++ if (lv->owner != current) {
++ __local_lock_irq(lv);
++ return 0;
++ } else {
++ lv->nestcnt++;
++ return 1;
++ }
++}
++
++#define local_lock_irqsave(lvar, _flags) \
++ do { \
++ if (__local_lock_irqsave(&get_local_var(lvar))) \
++ put_local_var(lvar); \
++ _flags = __this_cpu_read(lvar.flags); \
++ } while (0)
++
++#define local_lock_irqsave_on(lvar, _flags, cpu) \
++ do { \
++ __local_lock_irqsave(&per_cpu(lvar, cpu)); \
++ _flags = per_cpu(lvar, cpu).flags; \
++ } while (0)
++
++static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
++ unsigned long flags)
++{
++ LL_WARN(!lv->nestcnt);
++ LL_WARN(lv->owner != current);
++ if (--lv->nestcnt)
++ return 0;
++
++ lv->owner = NULL;
++ spin_unlock_irqrestore(&lv->lock, lv->flags);
++ return 1;
++}
++
++#define local_unlock_irqrestore(lvar, flags) \
++ do { \
++ if (__local_unlock_irqrestore(this_cpu_ptr(&lvar), flags)) \
++ put_local_var(lvar); \
++ } while (0)
++
++#define local_unlock_irqrestore_on(lvar, flags, cpu) \
++ do { \
++ __local_unlock_irqrestore(&per_cpu(lvar, cpu), flags); \
++ } while (0)
++
++#define local_spin_trylock_irq(lvar, lock) \
++ ({ \
++ int __locked; \
++ local_lock_irq(lvar); \
++ __locked = spin_trylock(lock); \
++ if (!__locked) \
++ local_unlock_irq(lvar); \
++ __locked; \
++ })
++
++#define local_spin_lock_irq(lvar, lock) \
++ do { \
++ local_lock_irq(lvar); \
++ spin_lock(lock); \
++ } while (0)
++
++#define local_spin_unlock_irq(lvar, lock) \
++ do { \
++ spin_unlock(lock); \
++ local_unlock_irq(lvar); \
++ } while (0)
++
++#define local_spin_lock_irqsave(lvar, lock, flags) \
++ do { \
++ local_lock_irqsave(lvar, flags); \
++ spin_lock(lock); \
++ } while (0)
++
++#define local_spin_unlock_irqrestore(lvar, lock, flags) \
++ do { \
++ spin_unlock(lock); \
++ local_unlock_irqrestore(lvar, flags); \
++ } while (0)
++
++#define get_locked_var(lvar, var) \
++ (*({ \
++ local_lock(lvar); \
++ this_cpu_ptr(&var); \
++ }))
++
++#define put_locked_var(lvar, var) local_unlock(lvar);
++
++#define local_lock_cpu(lvar) \
++ ({ \
++ local_lock(lvar); \
++ smp_processor_id(); \
++ })
++
++#define local_unlock_cpu(lvar) local_unlock(lvar)
++
++#else /* PREEMPT_RT_BASE */
++
++#define DEFINE_LOCAL_IRQ_LOCK(lvar) __typeof__(const int) lvar
++#define DECLARE_LOCAL_IRQ_LOCK(lvar) extern __typeof__(const int) lvar
++
++static inline void local_irq_lock_init(int lvar) { }
++
++#define local_trylock(lvar) \
++ ({ \
++ preempt_disable(); \
++ 1; \
++ })
++
++#define local_lock(lvar) preempt_disable()
++#define local_unlock(lvar) preempt_enable()
++#define local_lock_irq(lvar) local_irq_disable()
++#define local_lock_irq_on(lvar, cpu) local_irq_disable()
++#define local_unlock_irq(lvar) local_irq_enable()
++#define local_unlock_irq_on(lvar, cpu) local_irq_enable()
++#define local_lock_irqsave(lvar, flags) local_irq_save(flags)
++#define local_unlock_irqrestore(lvar, flags) local_irq_restore(flags)
++
++#define local_spin_trylock_irq(lvar, lock) spin_trylock_irq(lock)
++#define local_spin_lock_irq(lvar, lock) spin_lock_irq(lock)
++#define local_spin_unlock_irq(lvar, lock) spin_unlock_irq(lock)
++#define local_spin_lock_irqsave(lvar, lock, flags) \
++ spin_lock_irqsave(lock, flags)
++#define local_spin_unlock_irqrestore(lvar, lock, flags) \
++ spin_unlock_irqrestore(lock, flags)
++
++#define get_locked_var(lvar, var) get_cpu_var(var)
++#define put_locked_var(lvar, var) put_cpu_var(var)
++
++#define local_lock_cpu(lvar) get_cpu()
++#define local_unlock_cpu(lvar) put_cpu()
++
++#endif
++
++#endif
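A sketch of the intended usage of the local-lock primitives defined above,
protecting per-CPU data (all names hypothetical): on !RT the macros collapse
to preempt_disable()/preempt_enable(), on RT they take a per-CPU spinlock so
the section remains preemptible.

  #include <linux/locallock.h>
  #include <linux/percpu.h>

  struct demo_stats {
  	unsigned long events;
  };

  static DEFINE_PER_CPU(struct demo_stats, demo_stats);
  static DEFINE_LOCAL_IRQ_LOCK(demo_stats_lock);

  static void demo_count_event(void)
  {
  	struct demo_stats *s;

  	s = &get_locked_var(demo_stats_lock, demo_stats);
  	s->events++;
  	put_locked_var(demo_stats_lock, demo_stats);
  }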
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/mm_types.h linux-4.14/include/linux/mm_types.h
+--- linux-4.14.orig/include/linux/mm_types.h 2018-09-05 11:03:28.000000000 +0200
++++ linux-4.14/include/linux/mm_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -12,6 +12,7 @@
+ #include <linux/completion.h>
+ #include <linux/cpumask.h>
+ #include <linux/uprobes.h>
++#include <linux/rcupdate.h>
+ #include <linux/page-flags-layout.h>
+ #include <linux/workqueue.h>
+
+@@ -498,6 +499,9 @@
+ bool tlb_flush_batched;
+ #endif
+ struct uprobes_state uprobes_state;
++#ifdef CONFIG_PREEMPT_RT_BASE
++ struct rcu_head delayed_drop;
++#endif
+ #ifdef CONFIG_HUGETLB_PAGE
+ atomic_long_t hugetlb_usage;
+ #endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/mutex.h linux-4.14/include/linux/mutex.h
+--- linux-4.14.orig/include/linux/mutex.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/mutex.h 2018-09-05 11:05:07.000000000 +0200
+@@ -23,6 +23,17 @@
- for (i = 0; i < 50; i++) {
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- t1 = ktime_get_ns();
- for (t = 0; t < 50; t++)
- gameport_read(gameport);
- t2 = ktime_get_ns();
- t3 = ktime_get_ns();
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- udelay(i * 10);
- t = (t2 - t1) - (t3 - t2);
- if (t < tx)
-@@ -124,12 +124,12 @@ static int old_gameport_measure_speed(struct gameport *gameport)
- tx = 1 << 30;
+ struct ww_acquire_ctx;
- for(i = 0; i < 50; i++) {
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- GET_TIME(t1);
- for (t = 0; t < 50; t++) gameport_read(gameport);
- GET_TIME(t2);
- GET_TIME(t3);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- udelay(i * 10);
- if ((t = DELTA(t2,t1) - DELTA(t3,t2)) < tx) tx = t;
- }
-@@ -148,11 +148,11 @@ static int old_gameport_measure_speed(struct gameport *gameport)
- tx = 1 << 30;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
++ , .dep_map = { .name = #lockname }
++#else
++# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
++#endif
++
++#ifdef CONFIG_PREEMPT_RT_FULL
++# include <linux/mutex_rt.h>
++#else
++
+ /*
+ * Simple, straightforward mutexes with strict semantics:
+ *
+@@ -114,13 +125,6 @@
+ __mutex_init((mutex), #mutex, &__key); \
+ } while (0)
- for(i = 0; i < 50; i++) {
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- t1 = rdtsc();
- for (t = 0; t < 50; t++) gameport_read(gameport);
- t2 = rdtsc();
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- udelay(i * 10);
- if (t2 - t1 < tx) tx = t2 - t1;
- }
-diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
-index 11a13b5be73a..baaed0ac274b 100644
---- a/drivers/iommu/amd_iommu.c
-+++ b/drivers/iommu/amd_iommu.c
-@@ -1923,10 +1923,10 @@ static int __attach_device(struct iommu_dev_data *dev_data,
- int ret;
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
+- , .dep_map = { .name = #lockname }
+-#else
+-# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
+-#endif
+-
+ #define __MUTEX_INITIALIZER(lockname) \
+ { .owner = ATOMIC_LONG_INIT(0) \
+ , .wait_lock = __SPIN_LOCK_UNLOCKED(lockname.wait_lock) \
+@@ -228,4 +232,6 @@
+ return mutex_trylock(lock);
+ }
- /*
-- * Must be called with IRQs disabled. Warn here to detect early
-- * when its not.
-+ * Must be called with IRQs disabled on a non RT kernel. Warn here to
-+ * detect early when its not.
- */
-- WARN_ON(!irqs_disabled());
-+ WARN_ON_NONRT(!irqs_disabled());
++#endif /* !PREEMPT_RT_FULL */
++
+ #endif /* __LINUX_MUTEX_H */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/mutex_rt.h linux-4.14/include/linux/mutex_rt.h
+--- linux-4.14.orig/include/linux/mutex_rt.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/mutex_rt.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,130 @@
++#ifndef __LINUX_MUTEX_RT_H
++#define __LINUX_MUTEX_RT_H
++
++#ifndef __LINUX_MUTEX_H
++#error "Please include mutex.h"
++#endif
++
++#include <linux/rtmutex.h>
++
++/* FIXME: Just for __lockfunc */
++#include <linux/spinlock.h>
++
++struct mutex {
++ struct rt_mutex lock;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++ struct lockdep_map dep_map;
++#endif
++};
++
++#define __MUTEX_INITIALIZER(mutexname) \
++ { \
++ .lock = __RT_MUTEX_INITIALIZER(mutexname.lock) \
++ __DEP_MAP_MUTEX_INITIALIZER(mutexname) \
++ }
++
++#define DEFINE_MUTEX(mutexname) \
++ struct mutex mutexname = __MUTEX_INITIALIZER(mutexname)
++
++extern void __mutex_do_init(struct mutex *lock, const char *name, struct lock_class_key *key);
++extern void __lockfunc _mutex_lock(struct mutex *lock);
++extern void __lockfunc _mutex_lock_io(struct mutex *lock);
++extern void __lockfunc _mutex_lock_io_nested(struct mutex *lock, int subclass);
++extern int __lockfunc _mutex_lock_interruptible(struct mutex *lock);
++extern int __lockfunc _mutex_lock_killable(struct mutex *lock);
++extern void __lockfunc _mutex_lock_nested(struct mutex *lock, int subclass);
++extern void __lockfunc _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
++extern int __lockfunc _mutex_lock_interruptible_nested(struct mutex *lock, int subclass);
++extern int __lockfunc _mutex_lock_killable_nested(struct mutex *lock, int subclass);
++extern int __lockfunc _mutex_trylock(struct mutex *lock);
++extern void __lockfunc _mutex_unlock(struct mutex *lock);
++
++#define mutex_is_locked(l) rt_mutex_is_locked(&(l)->lock)
++#define mutex_lock(l) _mutex_lock(l)
++#define mutex_lock_interruptible(l) _mutex_lock_interruptible(l)
++#define mutex_lock_killable(l) _mutex_lock_killable(l)
++#define mutex_trylock(l) _mutex_trylock(l)
++#define mutex_unlock(l) _mutex_unlock(l)
++#define mutex_lock_io(l) _mutex_lock_io(l);
++
++#define __mutex_owner(l) ((l)->lock.owner)
++
++#ifdef CONFIG_DEBUG_MUTEXES
++#define mutex_destroy(l) rt_mutex_destroy(&(l)->lock)
++#else
++static inline void mutex_destroy(struct mutex *lock) {}
++#endif
++
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define mutex_lock_nested(l, s) _mutex_lock_nested(l, s)
++# define mutex_lock_interruptible_nested(l, s) \
++ _mutex_lock_interruptible_nested(l, s)
++# define mutex_lock_killable_nested(l, s) \
++ _mutex_lock_killable_nested(l, s)
++# define mutex_lock_io_nested(l, s) _mutex_lock_io_nested(l, s)
++
++# define mutex_lock_nest_lock(lock, nest_lock) \
++do { \
++ typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \
++ _mutex_lock_nest_lock(lock, &(nest_lock)->dep_map); \
++} while (0)
++
++#else
++# define mutex_lock_nested(l, s) _mutex_lock(l)
++# define mutex_lock_interruptible_nested(l, s) \
++ _mutex_lock_interruptible(l)
++# define mutex_lock_killable_nested(l, s) \
++ _mutex_lock_killable(l)
++# define mutex_lock_nest_lock(lock, nest_lock) mutex_lock(lock)
++# define mutex_lock_io_nested(l, s) _mutex_lock_io(l)
++#endif
++
++# define mutex_init(mutex) \
++do { \
++ static struct lock_class_key __key; \
++ \
++ rt_mutex_init(&(mutex)->lock); \
++ __mutex_do_init((mutex), #mutex, &__key); \
++} while (0)
++
++# define __mutex_init(mutex, name, key) \
++do { \
++ rt_mutex_init(&(mutex)->lock); \
++ __mutex_do_init((mutex), name, key); \
++} while (0)
++
++/**
++ * These values are chosen such that FAIL and SUCCESS match the
++ * values of the regular mutex_trylock().
++ */
++enum mutex_trylock_recursive_enum {
++ MUTEX_TRYLOCK_FAILED = 0,
++ MUTEX_TRYLOCK_SUCCESS = 1,
++ MUTEX_TRYLOCK_RECURSIVE,
++};
++/**
++ * mutex_trylock_recursive - trylock variant that allows recursive locking
++ * @lock: mutex to be locked
++ *
++ * This function should not be used, _ever_. It is purely for hysterical GEM
++ * raisins, and once those are gone this will be removed.
++ *
++ * Returns:
++ * MUTEX_TRYLOCK_FAILED - trylock failed,
++ * MUTEX_TRYLOCK_SUCCESS - lock acquired,
++ * MUTEX_TRYLOCK_RECURSIVE - we already owned the lock.
++ */
++int __rt_mutex_owner_current(struct rt_mutex *lock);
++
++static inline /* __deprecated */ __must_check enum mutex_trylock_recursive_enum
++mutex_trylock_recursive(struct mutex *lock)
++{
++ if (unlikely(__rt_mutex_owner_current(&lock->lock)))
++ return MUTEX_TRYLOCK_RECURSIVE;
++
++ return mutex_trylock(lock);
++}
++
++extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
++
++#endif
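For mutex users nothing changes with the RT substitution above; the usual
API remains, but the lock is backed by an rtmutex so priority inheritance
applies while it is held. A minimal, hypothetical example:

  #include <linux/mutex.h>

  static DEFINE_MUTEX(demo_lock);
  static int demo_value;

  static void demo_update(int v)
  {
  	mutex_lock(&demo_lock);
  	demo_value = v;
  	mutex_unlock(&demo_lock);
  }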
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/netdevice.h linux-4.14/include/linux/netdevice.h
+--- linux-4.14.orig/include/linux/netdevice.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/netdevice.h 2018-09-05 11:05:07.000000000 +0200
+@@ -409,7 +409,19 @@
+ typedef rx_handler_result_t rx_handler_func_t(struct sk_buff **pskb);
- /* lock domain */
- spin_lock(&domain->lock);
-@@ -2094,10 +2094,10 @@ static void __detach_device(struct iommu_dev_data *dev_data)
- struct protection_domain *domain;
+ void __napi_schedule(struct napi_struct *n);
++
++/*
++ * When PREEMPT_RT_FULL is defined, all device interrupt handlers
++ * run as threads and can themselves be preempted (without PREEMPT_RT,
++ * interrupt threads cannot be preempted). This means that calling
++ * __napi_schedule_irqoff() from an interrupt handler can be preempted
++ * and can corrupt the napi->poll_list.
++ */
++#ifdef CONFIG_PREEMPT_RT_FULL
++#define __napi_schedule_irqoff(n) __napi_schedule(n)
++#else
+ void __napi_schedule_irqoff(struct napi_struct *n);
++#endif
+ static inline bool napi_disable_pending(struct napi_struct *n)
+ {
+@@ -571,7 +583,11 @@
+ * write-mostly part
+ */
+ spinlock_t _xmit_lock ____cacheline_aligned_in_smp;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ struct task_struct *xmit_lock_owner;
++#else
+ int xmit_lock_owner;
++#endif
/*
-- * Must be called with IRQs disabled. Warn here to detect early
-- * when its not.
-+ * Must be called with IRQs disabled on a non RT kernel. Warn here to
-+ * detect early when its not.
+ * Time (in jiffies) of last Tx
*/
-- WARN_ON(!irqs_disabled());
-+ WARN_ON_NONRT(!irqs_disabled());
+@@ -2433,14 +2449,53 @@
+ void synchronize_net(void);
+ int init_dummy_netdev(struct net_device *dev);
- if (WARN_ON(!dev_data->domain))
- return;
-diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
-index d82637ab09fd..ebe41d30c093 100644
---- a/drivers/iommu/intel-iommu.c
-+++ b/drivers/iommu/intel-iommu.c
-@@ -479,7 +479,7 @@ struct deferred_flush_data {
- struct deferred_flush_table *tables;
- };
+-DECLARE_PER_CPU(int, xmit_recursion);
+ #define XMIT_RECURSION_LIMIT 10
++#ifdef CONFIG_PREEMPT_RT_FULL
++static inline int dev_recursion_level(void)
++{
++ return current->xmit_recursion;
++}
++
++static inline int xmit_rec_read(void)
++{
++ return current->xmit_recursion;
++}
++
++static inline void xmit_rec_inc(void)
++{
++ current->xmit_recursion++;
++}
++
++static inline void xmit_rec_dec(void)
++{
++ current->xmit_recursion--;
++}
++
++#else
++
++DECLARE_PER_CPU(int, xmit_recursion);
--DEFINE_PER_CPU(struct deferred_flush_data, deferred_flush);
-+static DEFINE_PER_CPU(struct deferred_flush_data, deferred_flush);
+ static inline int dev_recursion_level(void)
+ {
+ return this_cpu_read(xmit_recursion);
+ }
- /* bitmap for indexing intel_iommus */
- static int g_num_of_iommus;
-@@ -3715,10 +3715,8 @@ static void add_unmap(struct dmar_domain *dom, unsigned long iova_pfn,
- struct intel_iommu *iommu;
- struct deferred_flush_entry *entry;
- struct deferred_flush_data *flush_data;
-- unsigned int cpuid;
++static inline int xmit_rec_read(void)
++{
++ return __this_cpu_read(xmit_recursion);
++}
++
++static inline void xmit_rec_inc(void)
++{
++ __this_cpu_inc(xmit_recursion);
++}
++
++static inline void xmit_rec_dec(void)
++{
++ __this_cpu_dec(xmit_recursion);
++}
++#endif
++
+ struct net_device *dev_get_by_index(struct net *net, int ifindex);
+ struct net_device *__dev_get_by_index(struct net *net, int ifindex);
+ struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex);
+@@ -2792,6 +2847,7 @@
+ unsigned int dropped;
+ struct sk_buff_head input_pkt_queue;
+ struct napi_struct backlog;
++ struct sk_buff_head tofree_queue;
-- cpuid = get_cpu();
-- flush_data = per_cpu_ptr(&deferred_flush, cpuid);
-+ flush_data = raw_cpu_ptr(&deferred_flush);
+ };
- /* Flush all CPUs' entries to avoid deferring too much. If
- * this becomes a bottleneck, can just flush us, and rely on
-@@ -3751,8 +3749,6 @@ static void add_unmap(struct dmar_domain *dom, unsigned long iova_pfn,
- }
- flush_data->size++;
- spin_unlock_irqrestore(&flush_data->lock, flags);
--
-- put_cpu();
+@@ -3515,10 +3571,48 @@
+ return (1 << debug_value) - 1;
}
- static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
-diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
-index e23001bfcfee..359d5d169ec0 100644
---- a/drivers/iommu/iova.c
-+++ b/drivers/iommu/iova.c
-@@ -22,6 +22,7 @@
- #include <linux/slab.h>
- #include <linux/smp.h>
- #include <linux/bitops.h>
-+#include <linux/cpu.h>
++#ifdef CONFIG_PREEMPT_RT_FULL
++static inline void netdev_queue_set_owner(struct netdev_queue *txq, int cpu)
++{
++ txq->xmit_lock_owner = current;
++}
++
++static inline void netdev_queue_clear_owner(struct netdev_queue *txq)
++{
++ txq->xmit_lock_owner = NULL;
++}
++
++static inline bool netdev_queue_has_owner(struct netdev_queue *txq)
++{
++ if (txq->xmit_lock_owner != NULL)
++ return true;
++ return false;
++}
++
++#else
++
++static inline void netdev_queue_set_owner(struct netdev_queue *txq, int cpu)
++{
++ txq->xmit_lock_owner = cpu;
++}
++
++static inline void netdev_queue_clear_owner(struct netdev_queue *txq)
++{
++ txq->xmit_lock_owner = -1;
++}
++
++static inline bool netdev_queue_has_owner(struct netdev_queue *txq)
++{
++ if (txq->xmit_lock_owner != -1)
++ return true;
++ return false;
++}
++#endif
++
+ static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
+ {
+ spin_lock(&txq->_xmit_lock);
+- txq->xmit_lock_owner = cpu;
++ netdev_queue_set_owner(txq, cpu);
+ }
- static bool iova_rcache_insert(struct iova_domain *iovad,
- unsigned long pfn,
-@@ -420,10 +421,8 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
+ static inline bool __netif_tx_acquire(struct netdev_queue *txq)
+@@ -3535,32 +3629,32 @@
+ static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
+ {
+ spin_lock_bh(&txq->_xmit_lock);
+- txq->xmit_lock_owner = smp_processor_id();
++ netdev_queue_set_owner(txq, smp_processor_id());
+ }
- /* Try replenishing IOVAs by flushing rcache. */
- flushed_rcache = true;
-- preempt_disable();
- for_each_online_cpu(cpu)
- free_cpu_cached_iovas(cpu, iovad);
-- preempt_enable();
- goto retry;
- }
+ static inline bool __netif_tx_trylock(struct netdev_queue *txq)
+ {
+ bool ok = spin_trylock(&txq->_xmit_lock);
+ if (likely(ok))
+- txq->xmit_lock_owner = smp_processor_id();
++ netdev_queue_set_owner(txq, smp_processor_id());
+ return ok;
+ }
-@@ -751,7 +750,7 @@ static bool __iova_rcache_insert(struct iova_domain *iovad,
- bool can_insert = false;
- unsigned long flags;
+ static inline void __netif_tx_unlock(struct netdev_queue *txq)
+ {
+- txq->xmit_lock_owner = -1;
++ netdev_queue_clear_owner(txq);
+ spin_unlock(&txq->_xmit_lock);
+ }
-- cpu_rcache = get_cpu_ptr(rcache->cpu_rcaches);
-+ cpu_rcache = raw_cpu_ptr(rcache->cpu_rcaches);
- spin_lock_irqsave(&cpu_rcache->lock, flags);
+ static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
+ {
+- txq->xmit_lock_owner = -1;
++ netdev_queue_clear_owner(txq);
+ spin_unlock_bh(&txq->_xmit_lock);
+ }
- if (!iova_magazine_full(cpu_rcache->loaded)) {
-@@ -781,7 +780,6 @@ static bool __iova_rcache_insert(struct iova_domain *iovad,
- iova_magazine_push(cpu_rcache->loaded, iova_pfn);
+ static inline void txq_trans_update(struct netdev_queue *txq)
+ {
+- if (txq->xmit_lock_owner != -1)
++ if (netdev_queue_has_owner(txq))
+ txq->trans_start = jiffies;
+ }
- spin_unlock_irqrestore(&cpu_rcache->lock, flags);
-- put_cpu_ptr(rcache->cpu_rcaches);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/netfilter/x_tables.h linux-4.14/include/linux/netfilter/x_tables.h
+--- linux-4.14.orig/include/linux/netfilter/x_tables.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/netfilter/x_tables.h 2018-09-05 11:05:07.000000000 +0200
+@@ -6,6 +6,7 @@
+ #include <linux/netdevice.h>
+ #include <linux/static_key.h>
+ #include <linux/netfilter.h>
++#include <linux/locallock.h>
+ #include <uapi/linux/netfilter/x_tables.h>
- if (mag_to_free) {
- iova_magazine_free_pfns(mag_to_free, iovad);
-@@ -815,7 +813,7 @@ static unsigned long __iova_rcache_get(struct iova_rcache *rcache,
- bool has_pfn = false;
- unsigned long flags;
+ /* Test a struct->invflags and a boolean for inequality */
+@@ -341,6 +342,8 @@
+ */
+ DECLARE_PER_CPU(seqcount_t, xt_recseq);
-- cpu_rcache = get_cpu_ptr(rcache->cpu_rcaches);
-+ cpu_rcache = raw_cpu_ptr(rcache->cpu_rcaches);
- spin_lock_irqsave(&cpu_rcache->lock, flags);
++DECLARE_LOCAL_IRQ_LOCK(xt_write_lock);
++
+ /* xt_tee_enabled - true if x_tables needs to handle reentrancy
+ *
+ * Enabled if current ip(6)tables ruleset has at least one -j TEE rule.
+@@ -361,6 +364,9 @@
+ {
+ unsigned int addend;
- if (!iova_magazine_empty(cpu_rcache->loaded)) {
-@@ -837,7 +835,6 @@ static unsigned long __iova_rcache_get(struct iova_rcache *rcache,
- iova_pfn = iova_magazine_pop(cpu_rcache->loaded, limit_pfn);
++ /* RT protection */
++ local_lock(xt_write_lock);
++
+ /*
+ * Low order bit of sequence is set if we already
+ * called xt_write_recseq_begin().
+@@ -391,6 +397,7 @@
+ /* this is kind of a write_seqcount_end(), but addend is 0 or 1 */
+ smp_wmb();
+ __this_cpu_add(xt_recseq.sequence, addend);
++ local_unlock(xt_write_lock);
+ }
- spin_unlock_irqrestore(&cpu_rcache->lock, flags);
-- put_cpu_ptr(rcache->cpu_rcaches);
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/nfs_fs.h linux-4.14/include/linux/nfs_fs.h
+--- linux-4.14.orig/include/linux/nfs_fs.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/nfs_fs.h 2018-09-05 11:05:07.000000000 +0200
+@@ -162,7 +162,11 @@
- return iova_pfn;
- }
-diff --git a/drivers/leds/trigger/Kconfig b/drivers/leds/trigger/Kconfig
-index 3f9ddb9fafa7..09da5b6b44a1 100644
---- a/drivers/leds/trigger/Kconfig
-+++ b/drivers/leds/trigger/Kconfig
-@@ -69,7 +69,7 @@ config LEDS_TRIGGER_BACKLIGHT
+ /* Readers: in-flight sillydelete RPC calls */
+ /* Writers: rmdir */
++#ifdef CONFIG_PREEMPT_RT_BASE
++ struct semaphore rmdir_sem;
++#else
+ struct rw_semaphore rmdir_sem;
++#endif
+ struct mutex commit_mutex;
- config LEDS_TRIGGER_CPU
- bool "LED CPU Trigger"
-- depends on LEDS_TRIGGERS
-+ depends on LEDS_TRIGGERS && !PREEMPT_RT_BASE
- help
- This allows LEDs to be controlled by active CPUs. This shows
- the active CPUs across an array of LEDs so you can see which
-diff --git a/drivers/md/bcache/Kconfig b/drivers/md/bcache/Kconfig
-index 4d200883c505..98b64ed5cb81 100644
---- a/drivers/md/bcache/Kconfig
-+++ b/drivers/md/bcache/Kconfig
-@@ -1,6 +1,7 @@
+ #if IS_ENABLED(CONFIG_NFS_V4)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/nfs_xdr.h linux-4.14/include/linux/nfs_xdr.h
+--- linux-4.14.orig/include/linux/nfs_xdr.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/nfs_xdr.h 2018-09-05 11:05:07.000000000 +0200
+@@ -1530,7 +1530,7 @@
+ struct nfs_removeargs args;
+ struct nfs_removeres res;
+ struct dentry *dentry;
+- wait_queue_head_t wq;
++ struct swait_queue_head wq;
+ struct rpc_cred *cred;
+ struct nfs_fattr dir_attr;
+ long timeout;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/notifier.h linux-4.14/include/linux/notifier.h
+--- linux-4.14.orig/include/linux/notifier.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/notifier.h 2018-09-05 11:05:07.000000000 +0200
+@@ -7,7 +7,7 @@
+ *
+ * Alan Cox <Alan.Cox@linux.org>
+ */
+-
++
+ #ifndef _LINUX_NOTIFIER_H
+ #define _LINUX_NOTIFIER_H
+ #include <linux/errno.h>
+@@ -43,9 +43,7 @@
+ * in srcu_notifier_call_chain(): no cache bounces and no memory barriers.
+ * As compensation, srcu_notifier_chain_unregister() is rather expensive.
+ * SRCU notifier chains should be used when the chain will be called very
+- * often but notifier_blocks will seldom be removed. Also, SRCU notifier
+- * chains are slightly more difficult to use because they require special
+- * runtime initialization.
++ * often but notifier_blocks will seldom be removed.
+ */
- config BCACHE
- tristate "Block device as cache"
-+ depends on !PREEMPT_RT_FULL
- ---help---
- Allows a block device to be used as cache for other devices; uses
- a btree for indexing and the layout is optimized for SSDs.
-diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
-index 31a89c8832c0..c3a7e8a9f761 100644
---- a/drivers/md/dm-rq.c
-+++ b/drivers/md/dm-rq.c
-@@ -838,7 +838,7 @@ static void dm_old_request_fn(struct request_queue *q)
- /* Establish tio->ti before queuing work (map_tio_request) */
- tio->ti = ti;
- kthread_queue_work(&md->kworker, &tio->work);
-- BUG_ON(!irqs_disabled());
-+ BUG_ON_NONRT(!irqs_disabled());
- }
- }
+ struct notifier_block;
+@@ -91,7 +89,7 @@
+ (name)->head = NULL; \
+ } while (0)
-diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
-index cce6057b9aca..fa2c4de32a64 100644
---- a/drivers/md/raid5.c
-+++ b/drivers/md/raid5.c
-@@ -1928,8 +1928,9 @@ static void raid_run_ops(struct stripe_head *sh, unsigned long ops_request)
- struct raid5_percpu *percpu;
- unsigned long cpu;
+-/* srcu_notifier_heads must be initialized and cleaned up dynamically */
++/* srcu_notifier_heads must be cleaned up dynamically */
+ extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
+ #define srcu_cleanup_notifier_head(name) \
+ cleanup_srcu_struct(&(name)->srcu);
+@@ -104,7 +102,13 @@
+ .head = NULL }
+ #define RAW_NOTIFIER_INIT(name) { \
+ .head = NULL }
+-/* srcu_notifier_heads cannot be initialized statically */
++
++#define SRCU_NOTIFIER_INIT(name, pcpu) \
++ { \
++ .mutex = __MUTEX_INITIALIZER(name.mutex), \
++ .head = NULL, \
++ .srcu = __SRCU_STRUCT_INIT(name.srcu, pcpu), \
++ }
-- cpu = get_cpu();
-+ cpu = get_cpu_light();
- percpu = per_cpu_ptr(conf->percpu, cpu);
-+ spin_lock(&percpu->lock);
- if (test_bit(STRIPE_OP_BIOFILL, &ops_request)) {
- ops_run_biofill(sh);
- overlap_clear++;
-@@ -1985,7 +1986,8 @@ static void raid_run_ops(struct stripe_head *sh, unsigned long ops_request)
- if (test_and_clear_bit(R5_Overlap, &dev->flags))
- wake_up(&sh->raid_conf->wait_for_overlap);
- }
-- put_cpu();
-+ spin_unlock(&percpu->lock);
-+ put_cpu_light();
- }
+ #define ATOMIC_NOTIFIER_HEAD(name) \
+ struct atomic_notifier_head name = \
+@@ -116,6 +120,26 @@
+ struct raw_notifier_head name = \
+ RAW_NOTIFIER_INIT(name)
- static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
-@@ -6391,6 +6393,7 @@ static int raid456_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
- __func__, cpu);
- return -ENOMEM;
- }
-+ spin_lock_init(&per_cpu_ptr(conf->percpu, cpu)->lock);
- return 0;
- }
++#ifdef CONFIG_TREE_SRCU
++#define _SRCU_NOTIFIER_HEAD(name, mod) \
++ static DEFINE_PER_CPU(struct srcu_data, \
++ name##_head_srcu_data); \
++ mod struct srcu_notifier_head name = \
++ SRCU_NOTIFIER_INIT(name, name##_head_srcu_data)
++
++#else
++#define _SRCU_NOTIFIER_HEAD(name, mod) \
++ mod struct srcu_notifier_head name = \
++ SRCU_NOTIFIER_INIT(name, name)
++
++#endif
++
++#define SRCU_NOTIFIER_HEAD(name) \
++ _SRCU_NOTIFIER_HEAD(name, )
++
++#define SRCU_NOTIFIER_HEAD_STATIC(name) \
++ _SRCU_NOTIFIER_HEAD(name, static)
++
+ #ifdef __KERNEL__
-@@ -6401,7 +6404,6 @@ static int raid5_alloc_percpu(struct r5conf *conf)
- conf->percpu = alloc_percpu(struct raid5_percpu);
- if (!conf->percpu)
- return -ENOMEM;
--
- err = cpuhp_state_add_instance(CPUHP_MD_RAID5_PREPARE, &conf->node);
- if (!err) {
- conf->scribble_disks = max(conf->raid_disks,
-diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
-index 57ec49f0839e..0739604990b7 100644
---- a/drivers/md/raid5.h
-+++ b/drivers/md/raid5.h
-@@ -504,6 +504,7 @@ struct r5conf {
- int recovery_disabled;
- /* per cpu variables */
- struct raid5_percpu {
-+ spinlock_t lock; /* Protection for -RT */
- struct page *spare_page; /* Used when checking P/Q in raid6 */
- struct flex_array *scribble; /* space for constructing buffer
- * lists and performing address
-diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
-index 64971baf11fa..215e91e36198 100644
---- a/drivers/misc/Kconfig
-+++ b/drivers/misc/Kconfig
-@@ -54,6 +54,7 @@ config AD525X_DPOT_SPI
- config ATMEL_TCLIB
- bool "Atmel AT32/AT91 Timer/Counter Library"
- depends on (AVR32 || ARCH_AT91)
-+ default y if PREEMPT_RT_FULL
- help
- Select this if you want a library to allocate the Timer/Counter
- blocks found on many Atmel processors. This facilitates using
-@@ -69,8 +70,7 @@ config ATMEL_TCB_CLKSRC
- are combined to make a single 32-bit timer.
+ extern int atomic_notifier_chain_register(struct atomic_notifier_head *nh,
+@@ -185,12 +209,12 @@
- When GENERIC_CLOCKEVENTS is defined, the third timer channel
-- may be used as a clock event device supporting oneshot mode
-- (delays of up to two seconds) based on the 32 KiHz clock.
-+ may be used as a clock event device supporting oneshot mode.
+ /*
+ * Declared notifiers so far. I can imagine quite a few more chains
+- * over time (eg laptop power reset chains, reboot chain (to clean
++ * over time (eg laptop power reset chains, reboot chain (to clean
+ * device units up), device [un]mount chain, module load/unload chain,
+- * low memory chain, screenblank chain (for plug in modular screenblankers)
++ * low memory chain, screenblank chain (for plug in modular screenblankers)
+ * VC switch chains (for loadable kernel svgalib VC switch helpers) etc...
+ */
+-
++
+ /* CPU notfiers are defined in include/linux/cpu.h. */
- config ATMEL_TCB_CLKSRC_BLOCK
- int
-@@ -84,6 +84,15 @@ config ATMEL_TCB_CLKSRC_BLOCK
- TC can be used for other purposes, such as PWM generation and
- interval timing.
+ /* netdevice notifiers are defined in include/linux/netdevice.h */
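
With SRCU_NOTIFIER_INIT() and the _SRCU_NOTIFIER_HEAD() wrappers added above,
an SRCU notifier head can now be declared statically instead of requiring
srcu_init_notifier_head() at runtime, which is what makes it usable from code
that runs before dynamic initialization is convenient. A minimal usage sketch
(the chain, callback and event names are invented for illustration):

    #include <linux/notifier.h>
    #include <linux/init.h>

    SRCU_NOTIFIER_HEAD_STATIC(foo_chain);

    static int foo_event_cb(struct notifier_block *nb,
                            unsigned long action, void *data)
    {
            /* react to the event */
            return NOTIFY_OK;
    }

    static struct notifier_block foo_nb = {
            .notifier_call = foo_event_cb,
    };

    static int __init foo_init(void)
    {
            /* no srcu_init_notifier_head() needed for a static head */
            return srcu_notifier_chain_register(&foo_chain, &foo_nb);
    }

    /* Producers publish events with:
     *      srcu_notifier_call_chain(&foo_chain, action, data);
     */
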
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/percpu.h linux-4.14/include/linux/percpu.h
+--- linux-4.14.orig/include/linux/percpu.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/percpu.h 2018-09-05 11:05:07.000000000 +0200
+@@ -19,6 +19,35 @@
+ #define PERCPU_MODULE_RESERVE 0
+ #endif
-+config ATMEL_TCB_CLKSRC_USE_SLOW_CLOCK
-+ bool "TC Block use 32 KiHz clock"
-+ depends on ATMEL_TCB_CLKSRC
-+ default y if !PREEMPT_RT_FULL
-+ help
-+ Select this to use 32 KiHz base clock rate as TC block clock
-+ source for clock events.
++#ifdef CONFIG_PREEMPT_RT_FULL
++
++#define get_local_var(var) (*({ \
++ migrate_disable(); \
++ this_cpu_ptr(&var); }))
+
++#define put_local_var(var) do { \
++ (void)&(var); \
++ migrate_enable(); \
++} while (0)
+
- config DUMMY_IRQ
- tristate "Dummy IRQ handler"
- default n
-diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
-index df990bb8c873..1a162709a85e 100644
---- a/drivers/mmc/host/mmci.c
-+++ b/drivers/mmc/host/mmci.c
-@@ -1147,15 +1147,12 @@ static irqreturn_t mmci_pio_irq(int irq, void *dev_id)
- struct sg_mapping_iter *sg_miter = &host->sg_miter;
- struct variant_data *variant = host->variant;
- void __iomem *base = host->base;
-- unsigned long flags;
- u32 status;
++# define get_local_ptr(var) ({ \
++ migrate_disable(); \
++ this_cpu_ptr(var); })
++
++# define put_local_ptr(var) do { \
++ (void)(var); \
++ migrate_enable(); \
++} while (0)
++
++#else
++
++#define get_local_var(var) get_cpu_var(var)
++#define put_local_var(var) put_cpu_var(var)
++#define get_local_ptr(var) get_cpu_ptr(var)
++#define put_local_ptr(var) put_cpu_ptr(var)
++
++#endif
++
+ /* minimum unit size, also is the maximum supported allocation size */
+ #define PCPU_MIN_UNIT_SIZE PFN_ALIGN(32 << 10)
- status = readl(base + MMCISTATUS);
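
get_local_var()/put_local_var() (and the _ptr variants) are the RT-friendly
replacement for get_cpu_var()/put_cpu_var(): on PREEMPT_RT they only disable
migration, so the caller keeps a stable CPU-local pointer while remaining
preemptible, and on !RT they fall back to the plain get_cpu_var() behaviour.
An illustrative use with a hypothetical per-CPU scratch buffer:

    #include <linux/percpu.h>
    #include <linux/string.h>

    struct foo_scratch {
            char buf[64];
    };

    static DEFINE_PER_CPU(struct foo_scratch, foo_scratch);

    static void foo_use_scratch(void)
    {
            /* RT: migrate_disable() only; !RT: behaves like get_cpu_var(). */
            struct foo_scratch *s = &get_local_var(foo_scratch);

            /*
             * The buffer is private to this CPU, but since the section is
             * preemptible on RT, other local users still have to serialize
             * against us by some other means (e.g. a local lock).
             */
            memset(s->buf, 0, sizeof(s->buf));

            put_local_var(foo_scratch);
    }
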
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/percpu-rwsem.h linux-4.14/include/linux/percpu-rwsem.h
+--- linux-4.14.orig/include/linux/percpu-rwsem.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/percpu-rwsem.h 2018-09-05 11:05:07.000000000 +0200
+@@ -29,7 +29,7 @@
+ extern int __percpu_down_read(struct percpu_rw_semaphore *, int);
+ extern void __percpu_up_read(struct percpu_rw_semaphore *);
- dev_dbg(mmc_dev(host->mmc), "irq1 (pio) %08x\n", status);
+-static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *sem)
++static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
+ {
+ might_sleep();
-- local_irq_save(flags);
+@@ -47,16 +47,10 @@
+ __this_cpu_inc(*sem->read_count);
+ if (unlikely(!rcu_sync_is_idle(&sem->rss)))
+ __percpu_down_read(sem, false); /* Unconditional memory barrier */
+- barrier();
+ /*
+- * The barrier() prevents the compiler from
++ * The preempt_enable() prevents the compiler from
+ * bleeding the critical section out.
+ */
+-}
-
- do {
- unsigned int remain, len;
- char *buffer;
-@@ -1195,8 +1192,6 @@ static irqreturn_t mmci_pio_irq(int irq, void *dev_id)
+-static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
+-{
+- percpu_down_read_preempt_disable(sem);
+ preempt_enable();
+ }
- sg_miter_stop(sg_miter);
+@@ -83,13 +77,9 @@
+ return ret;
+ }
-- local_irq_restore(flags);
--
- /*
- * If we have less than the fifo 'half-full' threshold to transfer,
- * trigger a PIO interrupt as soon as any data is available.
-diff --git a/drivers/net/ethernet/3com/3c59x.c b/drivers/net/ethernet/3com/3c59x.c
-index 9133e7926da5..63afb921ed40 100644
---- a/drivers/net/ethernet/3com/3c59x.c
-+++ b/drivers/net/ethernet/3com/3c59x.c
-@@ -842,9 +842,9 @@ static void poll_vortex(struct net_device *dev)
+-static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem)
++static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
{
- struct vortex_private *vp = netdev_priv(dev);
- unsigned long flags;
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- (vp->full_bus_master_rx ? boomerang_interrupt:vortex_interrupt)(dev->irq,dev);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
+- /*
+- * The barrier() prevents the compiler from
+- * bleeding the critical section out.
+- */
+- barrier();
++ preempt_disable();
+ /*
+ * Same as in percpu_down_read().
+ */
+@@ -102,12 +92,6 @@
+ rwsem_release(&sem->rw_sem.dep_map, 1, _RET_IP_);
}
- #endif
-
-@@ -1910,12 +1910,12 @@ static void vortex_tx_timeout(struct net_device *dev)
- * Block interrupts because vortex_interrupt does a bare spin_lock()
- */
- unsigned long flags;
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- if (vp->full_bus_master_tx)
- boomerang_interrupt(dev->irq, dev);
- else
- vortex_interrupt(dev->irq, dev);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- }
- }
-
-diff --git a/drivers/net/ethernet/realtek/8139too.c b/drivers/net/ethernet/realtek/8139too.c
-index da4c2d8a4173..1420dfb56bac 100644
---- a/drivers/net/ethernet/realtek/8139too.c
-+++ b/drivers/net/ethernet/realtek/8139too.c
-@@ -2233,7 +2233,7 @@ static void rtl8139_poll_controller(struct net_device *dev)
- struct rtl8139_private *tp = netdev_priv(dev);
- const int irq = tp->pci_dev->irq;
-
-- disable_irq(irq);
-+ disable_irq_nosync(irq);
- rtl8139_interrupt(irq, dev);
- enable_irq(irq);
- }
-diff --git a/drivers/net/wireless/intersil/orinoco/orinoco_usb.c b/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
-index bca6935a94db..d7a35ee34d03 100644
---- a/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
-+++ b/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
-@@ -697,7 +697,7 @@ static void ezusb_req_ctx_wait(struct ezusb_priv *upriv,
- while (!ctx->done.done && msecs--)
- udelay(1000);
- } else {
-- wait_event_interruptible(ctx->done.wait,
-+ swait_event_interruptible(ctx->done.wait,
- ctx->done.done);
- }
- break;
-diff --git a/drivers/pci/access.c b/drivers/pci/access.c
-index d11cdbb8fba3..223bbb9acb03 100644
---- a/drivers/pci/access.c
-+++ b/drivers/pci/access.c
-@@ -672,7 +672,7 @@ void pci_cfg_access_unlock(struct pci_dev *dev)
- WARN_ON(!dev->block_cfg_access);
-
- dev->block_cfg_access = 0;
-- wake_up_all(&pci_cfg_wait);
-+ wake_up_all_locked(&pci_cfg_wait);
- raw_spin_unlock_irqrestore(&pci_lock, flags);
- }
- EXPORT_SYMBOL_GPL(pci_cfg_access_unlock);
-diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
-index 775c88303017..f8e9e1c2b2f6 100644
---- a/drivers/pinctrl/qcom/pinctrl-msm.c
-+++ b/drivers/pinctrl/qcom/pinctrl-msm.c
-@@ -61,7 +61,7 @@ struct msm_pinctrl {
- struct notifier_block restart_nb;
- int irq;
-
-- spinlock_t lock;
-+ raw_spinlock_t lock;
- DECLARE_BITMAP(dual_edge_irqs, MAX_NR_GPIO);
- DECLARE_BITMAP(enabled_irqs, MAX_NR_GPIO);
-@@ -153,14 +153,14 @@ static int msm_pinmux_set_mux(struct pinctrl_dev *pctldev,
- if (WARN_ON(i == g->nfuncs))
- return -EINVAL;
-
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
-
- val = readl(pctrl->regs + g->ctl_reg);
- val &= ~mask;
- val |= i << g->mux_bit;
- writel(val, pctrl->regs + g->ctl_reg);
+-static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
+-{
+- preempt_disable();
+- percpu_up_read_preempt_enable(sem);
+-}
+-
+ extern void percpu_down_write(struct percpu_rw_semaphore *);
+ extern void percpu_up_write(struct percpu_rw_semaphore *);
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
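
The percpu-rwsem change folds the preempt_disable()/preempt_enable() pair back
into percpu_down_read()/percpu_up_read() and drops the *_preempt_disable /
*_preempt_enable variants, so the read side no longer hands a preempt-disabled
region back to the caller. Call sites keep the ordinary form; a minimal sketch,
with foo_sem standing in for any percpu_rw_semaphore:

    #include <linux/percpu-rwsem.h>

    static DEFINE_STATIC_PERCPU_RWSEM(foo_sem);

    static void foo_reader(void)
    {
            percpu_down_read(&foo_sem);
            /* read-side section; may sleep */
            percpu_up_read(&foo_sem);
    }

    static void foo_writer(void)
    {
            percpu_down_write(&foo_sem);
            /* exclusive section */
            percpu_up_write(&foo_sem);
    }
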
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/pid.h linux-4.14/include/linux/pid.h
+--- linux-4.14.orig/include/linux/pid.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/pid.h 2018-09-05 11:05:07.000000000 +0200
+@@ -3,6 +3,7 @@
+ #define _LINUX_PID_H
- return 0;
- }
-@@ -323,14 +323,14 @@ static int msm_config_group_set(struct pinctrl_dev *pctldev,
- break;
- case PIN_CONFIG_OUTPUT:
- /* set output value */
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
- val = readl(pctrl->regs + g->io_reg);
- if (arg)
- val |= BIT(g->out_bit);
- else
- val &= ~BIT(g->out_bit);
- writel(val, pctrl->regs + g->io_reg);
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
-
- /* enable output */
- arg = 1;
-@@ -351,12 +351,12 @@ static int msm_config_group_set(struct pinctrl_dev *pctldev,
- return -EINVAL;
- }
+ #include <linux/rculist.h>
++#include <linux/atomic.h>
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
- val = readl(pctrl->regs + g->ctl_reg);
- val &= ~(mask << bit);
- val |= arg << bit;
- writel(val, pctrl->regs + g->ctl_reg);
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
- }
+ enum pid_type
+ {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/posix-timers.h linux-4.14/include/linux/posix-timers.h
+--- linux-4.14.orig/include/linux/posix-timers.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/posix-timers.h 2018-09-05 11:05:07.000000000 +0200
+@@ -101,8 +101,8 @@
+ struct {
+ struct alarm alarmtimer;
+ } alarm;
+- struct rcu_head rcu;
+ } it;
++ struct rcu_head rcu;
+ };
- return 0;
-@@ -384,13 +384,13 @@ static int msm_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
+ void run_posix_cpu_timers(struct task_struct *task);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/preempt.h linux-4.14/include/linux/preempt.h
+--- linux-4.14.orig/include/linux/preempt.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/preempt.h 2018-09-05 11:05:07.000000000 +0200
+@@ -51,7 +51,11 @@
+ #define HARDIRQ_OFFSET (1UL << HARDIRQ_SHIFT)
+ #define NMI_OFFSET (1UL << NMI_SHIFT)
- g = &pctrl->soc->groups[offset];
+-#define SOFTIRQ_DISABLE_OFFSET (2 * SOFTIRQ_OFFSET)
++#ifndef CONFIG_PREEMPT_RT_FULL
++# define SOFTIRQ_DISABLE_OFFSET (2 * SOFTIRQ_OFFSET)
++#else
++# define SOFTIRQ_DISABLE_OFFSET (0)
++#endif
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
+ /* We use the MSB mostly because its available */
+ #define PREEMPT_NEED_RESCHED 0x80000000
+@@ -81,9 +85,15 @@
+ #include <asm/preempt.h>
- val = readl(pctrl->regs + g->ctl_reg);
- val &= ~BIT(g->oe_bit);
- writel(val, pctrl->regs + g->ctl_reg);
+ #define hardirq_count() (preempt_count() & HARDIRQ_MASK)
+-#define softirq_count() (preempt_count() & SOFTIRQ_MASK)
+ #define irq_count() (preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
+ | NMI_MASK))
++#ifndef CONFIG_PREEMPT_RT_FULL
++# define softirq_count() (preempt_count() & SOFTIRQ_MASK)
++# define in_serving_softirq() (softirq_count() & SOFTIRQ_OFFSET)
++#else
++# define softirq_count() (0UL)
++extern int in_serving_softirq(void);
++#endif
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+ /*
+ * Are we doing bottom half or hardware interrupt processing?
+@@ -101,7 +111,6 @@
+ #define in_irq() (hardirq_count())
+ #define in_softirq() (softirq_count())
+ #define in_interrupt() (irq_count())
+-#define in_serving_softirq() (softirq_count() & SOFTIRQ_OFFSET)
+ #define in_nmi() (preempt_count() & NMI_MASK)
+ #define in_task() (!(preempt_count() & \
+ (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
+@@ -118,7 +127,11 @@
+ /*
+ * The preempt_count offset after spin_lock()
+ */
++#if !defined(CONFIG_PREEMPT_RT_FULL)
+ #define PREEMPT_LOCK_OFFSET PREEMPT_DISABLE_OFFSET
++#else
++#define PREEMPT_LOCK_OFFSET 0
++#endif
- return 0;
- }
-@@ -404,7 +404,7 @@ static int msm_gpio_direction_output(struct gpio_chip *chip, unsigned offset, in
+ /*
+ * The preempt_count offset needed for things like:
+@@ -167,6 +180,20 @@
+ #define preempt_count_inc() preempt_count_add(1)
+ #define preempt_count_dec() preempt_count_sub(1)
- g = &pctrl->soc->groups[offset];
++#ifdef CONFIG_PREEMPT_LAZY
++#define add_preempt_lazy_count(val) do { preempt_lazy_count() += (val); } while (0)
++#define sub_preempt_lazy_count(val) do { preempt_lazy_count() -= (val); } while (0)
++#define inc_preempt_lazy_count() add_preempt_lazy_count(1)
++#define dec_preempt_lazy_count() sub_preempt_lazy_count(1)
++#define preempt_lazy_count() (current_thread_info()->preempt_lazy_count)
++#else
++#define add_preempt_lazy_count(val) do { } while (0)
++#define sub_preempt_lazy_count(val) do { } while (0)
++#define inc_preempt_lazy_count() do { } while (0)
++#define dec_preempt_lazy_count() do { } while (0)
++#define preempt_lazy_count() (0)
++#endif
++
+ #ifdef CONFIG_PREEMPT_COUNT
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
+ #define preempt_disable() \
+@@ -175,16 +202,53 @@
+ barrier(); \
+ } while (0)
- val = readl(pctrl->regs + g->io_reg);
- if (value)
-@@ -417,7 +417,7 @@ static int msm_gpio_direction_output(struct gpio_chip *chip, unsigned offset, in
- val |= BIT(g->oe_bit);
- writel(val, pctrl->regs + g->ctl_reg);
++#define preempt_lazy_disable() \
++do { \
++ inc_preempt_lazy_count(); \
++ barrier(); \
++} while (0)
++
+ #define sched_preempt_enable_no_resched() \
+ do { \
+ barrier(); \
+ preempt_count_dec(); \
+ } while (0)
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+-#define preempt_enable_no_resched() sched_preempt_enable_no_resched()
++#ifdef CONFIG_PREEMPT_RT_BASE
++# define preempt_enable_no_resched() sched_preempt_enable_no_resched()
++# define preempt_check_resched_rt() preempt_check_resched()
++#else
++# define preempt_enable_no_resched() preempt_enable()
++# define preempt_check_resched_rt() barrier();
++#endif
- return 0;
- }
-@@ -443,7 +443,7 @@ static void msm_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
+ #define preemptible() (preempt_count() == 0 && !irqs_disabled())
- g = &pctrl->soc->groups[offset];
++#ifdef CONFIG_SMP
++
++extern void migrate_disable(void);
++extern void migrate_enable(void);
++
++int __migrate_disabled(struct task_struct *p);
++
++#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
++
++extern void migrate_disable(void);
++extern void migrate_enable(void);
++static inline int __migrate_disabled(struct task_struct *p)
++{
++ return 0;
++}
++
++#else
++#define migrate_disable() barrier()
++#define migrate_enable() barrier()
++static inline int __migrate_disabled(struct task_struct *p)
++{
++ return 0;
++}
++#endif
++
+ #ifdef CONFIG_PREEMPT
+ #define preempt_enable() \
+ do { \
+@@ -206,6 +270,13 @@
+ __preempt_schedule(); \
+ } while (0)
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
++#define preempt_lazy_enable() \
++do { \
++ dec_preempt_lazy_count(); \
++ barrier(); \
++ preempt_check_resched(); \
++} while (0)
++
+ #else /* !CONFIG_PREEMPT */
+ #define preempt_enable() \
+ do { \
+@@ -213,6 +284,12 @@
+ preempt_count_dec(); \
+ } while (0)
- val = readl(pctrl->regs + g->io_reg);
- if (value)
-@@ -452,7 +452,7 @@ static void msm_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
- val &= ~BIT(g->out_bit);
- writel(val, pctrl->regs + g->io_reg);
++#define preempt_lazy_enable() \
++do { \
++ dec_preempt_lazy_count(); \
++ barrier(); \
++} while (0)
++
+ #define preempt_enable_notrace() \
+ do { \
+ barrier(); \
+@@ -251,8 +328,16 @@
+ #define preempt_disable_notrace() barrier()
+ #define preempt_enable_no_resched_notrace() barrier()
+ #define preempt_enable_notrace() barrier()
++#define preempt_check_resched_rt() barrier()
+ #define preemptible() 0
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
- }
++#define migrate_disable() barrier()
++#define migrate_enable() barrier()
++
++static inline int __migrate_disabled(struct task_struct *p)
++{
++ return 0;
++}
+ #endif /* CONFIG_PREEMPT_COUNT */
- #ifdef CONFIG_DEBUG_FS
-@@ -571,7 +571,7 @@ static void msm_gpio_irq_mask(struct irq_data *d)
+ #ifdef MODULE
+@@ -271,10 +356,22 @@
+ } while (0)
+ #define preempt_fold_need_resched() \
+ do { \
+- if (tif_need_resched()) \
++ if (tif_need_resched_now()) \
+ set_preempt_need_resched(); \
+ } while (0)
- g = &pctrl->soc->groups[d->hwirq];
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define preempt_disable_rt() preempt_disable()
++# define preempt_enable_rt() preempt_enable()
++# define preempt_disable_nort() barrier()
++# define preempt_enable_nort() barrier()
++#else
++# define preempt_disable_rt() barrier()
++# define preempt_enable_rt() barrier()
++# define preempt_disable_nort() preempt_disable()
++# define preempt_enable_nort() preempt_enable()
++#endif
++
+ #ifdef CONFIG_PREEMPT_NOTIFIERS
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
+ struct preempt_notifier;
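
Two of the additions above are worth calling out: migrate_disable()/
migrate_enable() pin the current task to its CPU without disabling preemption,
which is the RT substitute for preempt_disable() around per-CPU data touched
only from task context, while preempt_disable_nort()/preempt_enable_nort()
compile to barriers on RT and keep their usual meaning otherwise. A short
sketch of the migrate_disable() pattern (names invented for illustration):

    #include <linux/preempt.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, foo_count);

    static void foo_touch_percpu(void)
    {
            /*
             * Stay on this CPU but remain preemptible; sufficient when the
             * per-CPU data is only ever accessed from task context.
             */
            migrate_disable();
            this_cpu_inc(foo_count);
            migrate_enable();
    }
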
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/printk.h linux-4.14/include/linux/printk.h
+--- linux-4.14.orig/include/linux/printk.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/printk.h 2018-09-05 11:05:07.000000000 +0200
+@@ -142,9 +142,11 @@
+ #ifdef CONFIG_EARLY_PRINTK
+ extern asmlinkage __printf(1, 2)
+ void early_printk(const char *fmt, ...);
++extern void printk_kill(void);
+ #else
+ static inline __printf(1, 2) __cold
+ void early_printk(const char *s, ...) { }
++static inline void printk_kill(void) { }
+ #endif
- val = readl(pctrl->regs + g->intr_cfg_reg);
- val &= ~BIT(g->intr_enable_bit);
-@@ -579,7 +579,7 @@ static void msm_gpio_irq_mask(struct irq_data *d)
+ #ifdef CONFIG_PRINTK_NMI
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/radix-tree.h linux-4.14/include/linux/radix-tree.h
+--- linux-4.14.orig/include/linux/radix-tree.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/radix-tree.h 2018-09-05 11:05:07.000000000 +0200
+@@ -328,6 +328,8 @@
+ int radix_tree_preload(gfp_t gfp_mask);
+ int radix_tree_maybe_preload(gfp_t gfp_mask);
+ int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order);
++void radix_tree_preload_end(void);
++
+ void radix_tree_init(void);
+ void *radix_tree_tag_set(struct radix_tree_root *,
+ unsigned long index, unsigned int tag);
+@@ -347,11 +349,6 @@
+ unsigned int max_items, unsigned int tag);
+ int radix_tree_tagged(const struct radix_tree_root *, unsigned int tag);
- clear_bit(d->hwirq, pctrl->enabled_irqs);
+-static inline void radix_tree_preload_end(void)
+-{
+- preempt_enable();
+-}
+-
+ int radix_tree_split_preload(unsigned old_order, unsigned new_order, gfp_t);
+ int radix_tree_split(struct radix_tree_root *, unsigned long index,
+ unsigned new_order);
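
radix_tree_preload_end() stops being a static inline that merely did
preempt_enable() and becomes an out-of-line function, so the RT implementation
can end the preload section with its own locking scheme. The calling convention
is unchanged; the usual preload/insert pattern still looks like this (tree and
lock names are made up for illustration):

    #include <linux/radix-tree.h>
    #include <linux/spinlock.h>
    #include <linux/gfp.h>

    static RADIX_TREE(foo_tree, GFP_ATOMIC);
    static DEFINE_SPINLOCK(foo_tree_lock);

    static int foo_store(unsigned long index, void *item)
    {
            int err;

            err = radix_tree_preload(GFP_KERNEL);   /* may sleep */
            if (err)
                    return err;

            spin_lock(&foo_tree_lock);
            err = radix_tree_insert(&foo_tree, index, item);
            spin_unlock(&foo_tree_lock);

            radix_tree_preload_end();               /* pairs with the preload */
            return err;
    }
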
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/random.h linux-4.14/include/linux/random.h
+--- linux-4.14.orig/include/linux/random.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/random.h 2018-09-05 11:05:07.000000000 +0200
+@@ -32,7 +32,7 @@
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
- }
+ extern void add_input_randomness(unsigned int type, unsigned int code,
+ unsigned int value) __latent_entropy;
+-extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy;
++extern void add_interrupt_randomness(int irq, int irq_flags, __u64 ip) __latent_entropy;
- static void msm_gpio_irq_unmask(struct irq_data *d)
-@@ -592,7 +592,7 @@ static void msm_gpio_irq_unmask(struct irq_data *d)
+ extern void get_random_bytes(void *buf, int nbytes);
+ extern int wait_for_random_bytes(void);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rbtree_augmented.h linux-4.14/include/linux/rbtree_augmented.h
+--- linux-4.14.orig/include/linux/rbtree_augmented.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/rbtree_augmented.h 2018-09-05 11:05:07.000000000 +0200
+@@ -26,6 +26,7 @@
- g = &pctrl->soc->groups[d->hwirq];
+ #include <linux/compiler.h>
+ #include <linux/rbtree.h>
++#include <linux/rcupdate.h>
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
+ /*
+ * Please note - only struct rb_augment_callbacks and the prototypes for
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rbtree.h linux-4.14/include/linux/rbtree.h
+--- linux-4.14.orig/include/linux/rbtree.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/rbtree.h 2018-09-05 11:05:07.000000000 +0200
+@@ -31,7 +31,7 @@
- val = readl(pctrl->regs + g->intr_status_reg);
- val &= ~BIT(g->intr_status_bit);
-@@ -604,7 +604,7 @@ static void msm_gpio_irq_unmask(struct irq_data *d)
+ #include <linux/kernel.h>
+ #include <linux/stddef.h>
+-#include <linux/rcupdate.h>
++#include <linux/rcu_assign_pointer.h>
- set_bit(d->hwirq, pctrl->enabled_irqs);
+ struct rb_node {
+ unsigned long __rb_parent_color;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rbtree_latch.h linux-4.14/include/linux/rbtree_latch.h
+--- linux-4.14.orig/include/linux/rbtree_latch.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/rbtree_latch.h 2018-09-05 11:05:07.000000000 +0200
+@@ -35,6 +35,7 @@
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
- }
+ #include <linux/rbtree.h>
+ #include <linux/seqlock.h>
++#include <linux/rcupdate.h>
- static void msm_gpio_irq_ack(struct irq_data *d)
-@@ -617,7 +617,7 @@ static void msm_gpio_irq_ack(struct irq_data *d)
+ struct latch_tree_node {
+ struct rb_node node[2];
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rcu_assign_pointer.h linux-4.14/include/linux/rcu_assign_pointer.h
+--- linux-4.14.orig/include/linux/rcu_assign_pointer.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/rcu_assign_pointer.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,54 @@
++#ifndef __LINUX_RCU_ASSIGN_POINTER_H__
++#define __LINUX_RCU_ASSIGN_POINTER_H__
++#include <linux/compiler.h>
++#include <asm/barrier.h>
++
++/**
++ * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
++ * @v: The value to statically initialize with.
++ */
++#define RCU_INITIALIZER(v) (typeof(*(v)) __force __rcu *)(v)
++
++/**
++ * rcu_assign_pointer() - assign to RCU-protected pointer
++ * @p: pointer to assign to
++ * @v: value to assign (publish)
++ *
++ * Assigns the specified value to the specified RCU-protected
++ * pointer, ensuring that any concurrent RCU readers will see
++ * any prior initialization.
++ *
++ * Inserts memory barriers on architectures that require them
++ * (which is most of them), and also prevents the compiler from
++ * reordering the code that initializes the structure after the pointer
++ * assignment. More importantly, this call documents which pointers
++ * will be dereferenced by RCU read-side code.
++ *
++ * In some special cases, you may use RCU_INIT_POINTER() instead
++ * of rcu_assign_pointer(). RCU_INIT_POINTER() is a bit faster due
++ * to the fact that it does not constrain either the CPU or the compiler.
++ * That said, using RCU_INIT_POINTER() when you should have used
++ * rcu_assign_pointer() is a very bad thing that results in
++ * impossible-to-diagnose memory corruption. So please be careful.
++ * See the RCU_INIT_POINTER() comment header for details.
++ *
++ * Note that rcu_assign_pointer() evaluates each of its arguments only
++ * once, appearances notwithstanding. One of the "extra" evaluations
++ * is in typeof() and the other visible only to sparse (__CHECKER__),
++ * neither of which actually execute the argument. As with most cpp
++ * macros, this execute-arguments-only-once property is important, so
++ * please be careful when making changes to rcu_assign_pointer() and the
++ * other macros that it invokes.
++ */
++#define rcu_assign_pointer(p, v) \
++({ \
++ uintptr_t _r_a_p__v = (uintptr_t)(v); \
++ \
++ if (__builtin_constant_p(v) && (_r_a_p__v) == (uintptr_t)NULL) \
++ WRITE_ONCE((p), (typeof(p))(_r_a_p__v)); \
++ else \
++ smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
++ _r_a_p__v; \
++})
++
++#endif
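
The new header only relocates RCU_INITIALIZER() and rcu_assign_pointer() so
that rbtree.h can pick them up without dragging in all of rcupdate.h (see the
rbtree.h hunk above and the rcupdate.h hunk that follows); the macro itself is
unchanged. For reference, the canonical publish/read pattern it supports,
with structure and function names invented for illustration and updates
assumed to be serialized by the caller:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct foo_cfg {
            int threshold;
    };

    static struct foo_cfg __rcu *foo_cfg_p;

    /* Publisher: fully initialize, then publish with rcu_assign_pointer(). */
    static int foo_set_threshold(int value)
    {
            struct foo_cfg *new, *old;

            new = kmalloc(sizeof(*new), GFP_KERNEL);
            if (!new)
                    return -ENOMEM;
            new->threshold = value;

            old = rcu_dereference_protected(foo_cfg_p, 1);
            rcu_assign_pointer(foo_cfg_p, new);     /* readers now see *new */

            if (old) {
                    synchronize_rcu();              /* wait out current readers */
                    kfree(old);
            }
            return 0;
    }

    /* Reader: dereference only under rcu_read_lock(). */
    static int foo_get_threshold(void)
    {
            struct foo_cfg *cfg;
            int val = 0;

            rcu_read_lock();
            cfg = rcu_dereference(foo_cfg_p);
            if (cfg)
                    val = cfg->threshold;
            rcu_read_unlock();
            return val;
    }
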
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rcupdate.h linux-4.14/include/linux/rcupdate.h
+--- linux-4.14.orig/include/linux/rcupdate.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/rcupdate.h 2018-09-05 11:05:07.000000000 +0200
+@@ -42,6 +42,7 @@
+ #include <linux/lockdep.h>
+ #include <asm/processor.h>
+ #include <linux/cpumask.h>
++#include <linux/rcu_assign_pointer.h>
- g = &pctrl->soc->groups[d->hwirq];
+ #define ULONG_CMP_GE(a, b) (ULONG_MAX / 2 >= (a) - (b))
+ #define ULONG_CMP_LT(a, b) (ULONG_MAX / 2 < (a) - (b))
+@@ -55,7 +56,11 @@
+ #define call_rcu call_rcu_sched
+ #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
++#ifdef CONFIG_PREEMPT_RT_FULL
++#define call_rcu_bh call_rcu
++#else
+ void call_rcu_bh(struct rcu_head *head, rcu_callback_t func);
++#endif
+ void call_rcu_sched(struct rcu_head *head, rcu_callback_t func);
+ void synchronize_sched(void);
+ void rcu_barrier_tasks(void);
+@@ -74,6 +79,11 @@
+ * types of kernel builds, the rcu_read_lock() nesting depth is unknowable.
+ */
+ #define rcu_preempt_depth() (current->rcu_read_lock_nesting)
++#ifndef CONFIG_PREEMPT_RT_FULL
++#define sched_rcu_preempt_depth() rcu_preempt_depth()
++#else
++static inline int sched_rcu_preempt_depth(void) { return 0; }
++#endif
- val = readl(pctrl->regs + g->intr_status_reg);
- if (g->intr_ack_high)
-@@ -629,7 +629,7 @@ static void msm_gpio_irq_ack(struct irq_data *d)
- if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
- msm_gpio_update_dual_edge_pos(pctrl, g, d);
+ #else /* #ifdef CONFIG_PREEMPT_RCU */
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+@@ -99,6 +109,8 @@
+ return 0;
}
- static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
-@@ -642,7 +642,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
-
- g = &pctrl->soc->groups[d->hwirq];
-
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
-
- /*
- * For hw without possibility of detecting both edges
-@@ -716,7 +716,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
- if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
- msm_gpio_update_dual_edge_pos(pctrl, g, d);
-
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
-
- if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
- irq_set_handler_locked(d, handle_level_irq);
-@@ -732,11 +732,11 @@ static int msm_gpio_irq_set_wake(struct irq_data *d, unsigned int on)
- struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
- unsigned long flags;
-
-- spin_lock_irqsave(&pctrl->lock, flags);
-+ raw_spin_lock_irqsave(&pctrl->lock, flags);
++#define sched_rcu_preempt_depth() rcu_preempt_depth()
++
+ #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
- irq_set_irq_wake(pctrl->irq, on);
+ /* Internal to kernel */
+@@ -255,7 +267,14 @@
+ extern struct lockdep_map rcu_callback_map;
+ int debug_lockdep_rcu_enabled(void);
+ int rcu_read_lock_held(void);
++#ifdef CONFIG_PREEMPT_RT_FULL
++static inline int rcu_read_lock_bh_held(void)
++{
++ return rcu_read_lock_held();
++}
++#else
+ int rcu_read_lock_bh_held(void);
++#endif
+ int rcu_read_lock_sched_held(void);
-- spin_unlock_irqrestore(&pctrl->lock, flags);
-+ raw_spin_unlock_irqrestore(&pctrl->lock, flags);
+ #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
+@@ -365,54 +384,6 @@
+ })
- return 0;
+ /**
+- * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
+- * @v: The value to statically initialize with.
+- */
+-#define RCU_INITIALIZER(v) (typeof(*(v)) __force __rcu *)(v)
+-
+-/**
+- * rcu_assign_pointer() - assign to RCU-protected pointer
+- * @p: pointer to assign to
+- * @v: value to assign (publish)
+- *
+- * Assigns the specified value to the specified RCU-protected
+- * pointer, ensuring that any concurrent RCU readers will see
+- * any prior initialization.
+- *
+- * Inserts memory barriers on architectures that require them
+- * (which is most of them), and also prevents the compiler from
+- * reordering the code that initializes the structure after the pointer
+- * assignment. More importantly, this call documents which pointers
+- * will be dereferenced by RCU read-side code.
+- *
+- * In some special cases, you may use RCU_INIT_POINTER() instead
+- * of rcu_assign_pointer(). RCU_INIT_POINTER() is a bit faster due
+- * to the fact that it does not constrain either the CPU or the compiler.
+- * That said, using RCU_INIT_POINTER() when you should have used
+- * rcu_assign_pointer() is a very bad thing that results in
+- * impossible-to-diagnose memory corruption. So please be careful.
+- * See the RCU_INIT_POINTER() comment header for details.
+- *
+- * Note that rcu_assign_pointer() evaluates each of its arguments only
+- * once, appearances notwithstanding. One of the "extra" evaluations
+- * is in typeof() and the other visible only to sparse (__CHECKER__),
+- * neither of which actually execute the argument. As with most cpp
+- * macros, this execute-arguments-only-once property is important, so
+- * please be careful when making changes to rcu_assign_pointer() and the
+- * other macros that it invokes.
+- */
+-#define rcu_assign_pointer(p, v) \
+-({ \
+- uintptr_t _r_a_p__v = (uintptr_t)(v); \
+- \
+- if (__builtin_constant_p(v) && (_r_a_p__v) == (uintptr_t)NULL) \
+- WRITE_ONCE((p), (typeof(p))(_r_a_p__v)); \
+- else \
+- smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
+- _r_a_p__v; \
+-})
+-
+-/**
+ * rcu_swap_protected() - swap an RCU and a regular pointer
+ * @rcu_ptr: RCU pointer
+ * @ptr: regular pointer
+@@ -707,10 +678,14 @@
+ static inline void rcu_read_lock_bh(void)
+ {
+ local_bh_disable();
++#ifdef CONFIG_PREEMPT_RT_FULL
++ rcu_read_lock();
++#else
+ __acquire(RCU_BH);
+ rcu_lock_acquire(&rcu_bh_lock_map);
+ RCU_LOCKDEP_WARN(!rcu_is_watching(),
+ "rcu_read_lock_bh() used illegally while idle");
++#endif
}
-@@ -882,7 +882,7 @@ int msm_pinctrl_probe(struct platform_device *pdev,
- pctrl->soc = soc_data;
- pctrl->chip = msm_gpio_template;
-
-- spin_lock_init(&pctrl->lock);
-+ raw_spin_lock_init(&pctrl->lock);
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- pctrl->regs = devm_ioremap_resource(&pdev->dev, res);
-diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
-index 9bd41a35a78a..8e2d436c2e3f 100644
---- a/drivers/scsi/fcoe/fcoe.c
-+++ b/drivers/scsi/fcoe/fcoe.c
-@@ -1455,11 +1455,11 @@ static int fcoe_rcv(struct sk_buff *skb, struct net_device *netdev,
- static int fcoe_alloc_paged_crc_eof(struct sk_buff *skb, int tlen)
+ /*
+@@ -720,10 +695,14 @@
+ */
+ static inline void rcu_read_unlock_bh(void)
{
- struct fcoe_percpu_s *fps;
-- int rc;
-+ int rc, cpu = get_cpu_light();
-
-- fps = &get_cpu_var(fcoe_percpu);
-+ fps = &per_cpu(fcoe_percpu, cpu);
- rc = fcoe_get_paged_crc_eof(skb, tlen, fps);
-- put_cpu_var(fcoe_percpu);
-+ put_cpu_light();
-
- return rc;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ rcu_read_unlock();
++#else
+ RCU_LOCKDEP_WARN(!rcu_is_watching(),
+ "rcu_read_unlock_bh() used illegally while idle");
+ rcu_lock_release(&rcu_bh_lock_map);
+ __release(RCU_BH);
++#endif
+ local_bh_enable();
}
-@@ -1646,11 +1646,11 @@ static inline int fcoe_filter_frames(struct fc_lport *lport,
- return 0;
- }
-- stats = per_cpu_ptr(lport->stats, get_cpu());
-+ stats = per_cpu_ptr(lport->stats, get_cpu_light());
- stats->InvalidCRCCount++;
- if (stats->InvalidCRCCount < 5)
- printk(KERN_WARNING "fcoe: dropping frame with CRC error\n");
-- put_cpu();
-+ put_cpu_light();
- return -EINVAL;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rcutree.h linux-4.14/include/linux/rcutree.h
+--- linux-4.14.orig/include/linux/rcutree.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/rcutree.h 2018-09-05 11:05:07.000000000 +0200
+@@ -44,7 +44,11 @@
+ rcu_note_context_switch(false);
}
-@@ -1693,7 +1693,7 @@ static void fcoe_recv_frame(struct sk_buff *skb)
- */
- hp = (struct fcoe_hdr *) skb_network_header(skb);
-
-- stats = per_cpu_ptr(lport->stats, get_cpu());
-+ stats = per_cpu_ptr(lport->stats, get_cpu_light());
- if (unlikely(FC_FCOE_DECAPS_VER(hp) != FC_FCOE_VER)) {
- if (stats->ErrorFrames < 5)
- printk(KERN_WARNING "fcoe: FCoE version "
-@@ -1725,13 +1725,13 @@ static void fcoe_recv_frame(struct sk_buff *skb)
- goto drop;
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define synchronize_rcu_bh synchronize_rcu
++#else
+ void synchronize_rcu_bh(void);
++#endif
+ void synchronize_sched_expedited(void);
+ void synchronize_rcu_expedited(void);
- if (!fcoe_filter_frames(lport, fp)) {
-- put_cpu();
-+ put_cpu_light();
- fc_exch_recv(lport, fp);
- return;
- }
- drop:
- stats->ErrorFrames++;
-- put_cpu();
-+ put_cpu_light();
- kfree_skb(skb);
+@@ -72,7 +76,11 @@
}
-diff --git a/drivers/scsi/fcoe/fcoe_ctlr.c b/drivers/scsi/fcoe/fcoe_ctlr.c
-index dcf36537a767..1a1f2e46452c 100644
---- a/drivers/scsi/fcoe/fcoe_ctlr.c
-+++ b/drivers/scsi/fcoe/fcoe_ctlr.c
-@@ -834,7 +834,7 @@ static unsigned long fcoe_ctlr_age_fcfs(struct fcoe_ctlr *fip)
+ void rcu_barrier(void);
++#ifdef CONFIG_PREEMPT_RT_FULL
++# define rcu_barrier_bh rcu_barrier
++#else
+ void rcu_barrier_bh(void);
++#endif
+ void rcu_barrier_sched(void);
+ unsigned long get_state_synchronize_rcu(void);
+ void cond_synchronize_rcu(unsigned long oldstate);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/ring_buffer.h linux-4.14/include/linux/ring_buffer.h
+--- linux-4.14.orig/include/linux/ring_buffer.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/ring_buffer.h 2018-09-05 11:05:07.000000000 +0200
+@@ -34,10 +34,12 @@
+ * array[0] = time delta (28 .. 59)
+ * size = 8 bytes
+ *
+- * @RINGBUF_TYPE_TIME_STAMP: Sync time stamp with external clock
+- * array[0] = tv_nsec
+- * array[1..2] = tv_sec
+- * size = 16 bytes
++ * @RINGBUF_TYPE_TIME_STAMP: Absolute timestamp
++ * Same format as TIME_EXTEND except that the
++ * value is an absolute timestamp, not a delta
++ * event.time_delta contains bottom 27 bits
++ * array[0] = top (28 .. 59) bits
++ * size = 8 bytes
+ *
+ * <= @RINGBUF_TYPE_DATA_TYPE_LEN_MAX:
+ * Data record
+@@ -54,12 +56,12 @@
+ RINGBUF_TYPE_DATA_TYPE_LEN_MAX = 28,
+ RINGBUF_TYPE_PADDING,
+ RINGBUF_TYPE_TIME_EXTEND,
+- /* FIXME: RINGBUF_TYPE_TIME_STAMP not implemented */
+ RINGBUF_TYPE_TIME_STAMP,
+ };
- INIT_LIST_HEAD(&del_list);
+ unsigned ring_buffer_event_length(struct ring_buffer_event *event);
+ void *ring_buffer_event_data(struct ring_buffer_event *event);
++u64 ring_buffer_event_time_stamp(struct ring_buffer_event *event);
-- stats = per_cpu_ptr(fip->lp->stats, get_cpu());
-+ stats = per_cpu_ptr(fip->lp->stats, get_cpu_light());
+ /*
+ * ring_buffer_discard_commit will remove an event that has not
+@@ -115,6 +117,9 @@
+ int ring_buffer_write(struct ring_buffer *buffer,
+ unsigned long length, void *data);
+
++void ring_buffer_nest_start(struct ring_buffer *buffer);
++void ring_buffer_nest_end(struct ring_buffer *buffer);
++
+ struct ring_buffer_event *
+ ring_buffer_peek(struct ring_buffer *buffer, int cpu, u64 *ts,
+ unsigned long *lost_events);
+@@ -179,6 +184,8 @@
+ int cpu, u64 *ts);
+ void ring_buffer_set_clock(struct ring_buffer *buffer,
+ u64 (*clock)(void));
++void ring_buffer_set_time_stamp_abs(struct ring_buffer *buffer, bool abs);
++bool ring_buffer_time_stamp_abs(struct ring_buffer *buffer);
+
+ size_t ring_buffer_page_len(void *page);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rtmutex.h linux-4.14/include/linux/rtmutex.h
+--- linux-4.14.orig/include/linux/rtmutex.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/rtmutex.h 2018-09-05 11:05:07.000000000 +0200
+@@ -14,11 +14,15 @@
+ #define __LINUX_RT_MUTEX_H
- list_for_each_entry_safe(fcf, next, &fip->fcfs, list) {
- deadline = fcf->time + fcf->fka_period + fcf->fka_period / 2;
-@@ -870,7 +870,7 @@ static unsigned long fcoe_ctlr_age_fcfs(struct fcoe_ctlr *fip)
- sel_time = fcf->time;
- }
- }
-- put_cpu();
-+ put_cpu_light();
+ #include <linux/linkage.h>
++#include <linux/spinlock_types_raw.h>
+ #include <linux/rbtree.h>
+-#include <linux/spinlock_types.h>
- list_for_each_entry_safe(fcf, next, &del_list, list) {
- /* Removes fcf from current list */
-diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c
-index 16ca31ad5ec0..c3987347e762 100644
---- a/drivers/scsi/libfc/fc_exch.c
-+++ b/drivers/scsi/libfc/fc_exch.c
-@@ -814,10 +814,10 @@ static struct fc_exch *fc_exch_em_alloc(struct fc_lport *lport,
- }
- memset(ep, 0, sizeof(*ep));
+ extern int max_lock_depth; /* for sysctl */
-- cpu = get_cpu();
-+ cpu = get_cpu_light();
- pool = per_cpu_ptr(mp->pool, cpu);
- spin_lock_bh(&pool->lock);
-- put_cpu();
-+ put_cpu_light();
++#ifdef CONFIG_DEBUG_MUTEXES
++#include <linux/debug_locks.h>
++#endif
++
+ /**
+ * The rt_mutex structure
+ *
+@@ -31,8 +35,8 @@
+ raw_spinlock_t wait_lock;
+ struct rb_root_cached waiters;
+ struct task_struct *owner;
+-#ifdef CONFIG_DEBUG_RT_MUTEXES
+ int save_state;
++#ifdef CONFIG_DEBUG_RT_MUTEXES
+ const char *name, *file;
+ int line;
+ void *magic;
+@@ -82,16 +86,23 @@
+ #define __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)
+ #endif
- /* peek cache of free slot */
- if (pool->left != FC_XID_UNKNOWN) {
-diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
-index 763f012fdeca..d0f61b595470 100644
---- a/drivers/scsi/libsas/sas_ata.c
-+++ b/drivers/scsi/libsas/sas_ata.c
-@@ -190,7 +190,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
- /* TODO: audit callers to ensure they are ready for qc_issue to
- * unconditionally re-enable interrupts
- */
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- spin_unlock(ap->lock);
+-#define __RT_MUTEX_INITIALIZER(mutexname) \
+- { .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
++#define __RT_MUTEX_INITIALIZER_PLAIN(mutexname) \
++ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
+ , .waiters = RB_ROOT_CACHED \
+ , .owner = NULL \
+ __DEBUG_RT_MUTEX_INITIALIZER(mutexname) \
+- __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)}
++ __DEP_MAP_RT_MUTEX_INITIALIZER(mutexname)
++
++#define __RT_MUTEX_INITIALIZER(mutexname) \
++ { __RT_MUTEX_INITIALIZER_PLAIN(mutexname) }
- /* If the device fell off, no sense in issuing commands */
-@@ -252,7 +252,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+ #define DEFINE_RT_MUTEX(mutexname) \
+ struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname)
- out:
- spin_lock(ap->lock);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- return ret;
- }
++#define __RT_MUTEX_INITIALIZER_SAVE_STATE(mutexname) \
++ { __RT_MUTEX_INITIALIZER_PLAIN(mutexname) \
++ , .save_state = 1 }
++
+ /**
+ * rt_mutex_is_locked - is the mutex locked
+ * @lock: the mutex to be queried
+@@ -108,6 +119,7 @@
-diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
-index edc48f3b8230..ee5c6f9dfb6f 100644
---- a/drivers/scsi/qla2xxx/qla_inline.h
-+++ b/drivers/scsi/qla2xxx/qla_inline.h
-@@ -59,12 +59,12 @@ qla2x00_poll(struct rsp_que *rsp)
- {
- unsigned long flags;
- struct qla_hw_data *ha = rsp->hw;
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- if (IS_P3P_TYPE(ha))
- qla82xx_poll(0, rsp);
- else
- ha->isp_ops->intr_handler(0, rsp);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- }
+ extern void rt_mutex_lock(struct rt_mutex *lock);
+ extern int rt_mutex_lock_interruptible(struct rt_mutex *lock);
++extern int rt_mutex_lock_killable(struct rt_mutex *lock);
+ extern int rt_mutex_timed_lock(struct rt_mutex *lock,
+ struct hrtimer_sleeper *timeout);
- static inline uint8_t *
-diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
-index 068c4e47fac9..a2090f640397 100644
---- a/drivers/scsi/qla2xxx/qla_isr.c
-+++ b/drivers/scsi/qla2xxx/qla_isr.c
-@@ -3125,7 +3125,11 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
- * kref_put().
- */
- kref_get(&qentry->irq_notify.kref);
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ swork_queue(&qentry->irq_notify.swork);
-+#else
- schedule_work(&qentry->irq_notify.work);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rwlock_rt.h linux-4.14/include/linux/rwlock_rt.h
+--- linux-4.14.orig/include/linux/rwlock_rt.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/rwlock_rt.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,119 @@
++#ifndef __LINUX_RWLOCK_RT_H
++#define __LINUX_RWLOCK_RT_H
++
++#ifndef __LINUX_SPINLOCK_H
++#error Do not include directly. Use spinlock.h
+#endif
- }
-
- /*
-diff --git a/drivers/thermal/x86_pkg_temp_thermal.c b/drivers/thermal/x86_pkg_temp_thermal.c
-index 95f4c1bcdb4c..0be934799bff 100644
---- a/drivers/thermal/x86_pkg_temp_thermal.c
-+++ b/drivers/thermal/x86_pkg_temp_thermal.c
-@@ -29,6 +29,7 @@
- #include <linux/pm.h>
- #include <linux/thermal.h>
- #include <linux/debugfs.h>
-+#include <linux/swork.h>
- #include <asm/cpu_device_id.h>
- #include <asm/mce.h>
-
-@@ -353,7 +354,7 @@ static void pkg_temp_thermal_threshold_work_fn(struct work_struct *work)
- }
- }
-
--static int pkg_temp_thermal_platform_thermal_notify(__u64 msr_val)
-+static void platform_thermal_notify_work(struct swork_event *event)
- {
- unsigned long flags;
- int cpu = smp_processor_id();
-@@ -370,7 +371,7 @@ static int pkg_temp_thermal_platform_thermal_notify(__u64 msr_val)
- pkg_work_scheduled[phy_id]) {
- disable_pkg_thres_interrupt();
- spin_unlock_irqrestore(&pkg_work_lock, flags);
-- return -EINVAL;
-+ return;
- }
- pkg_work_scheduled[phy_id] = 1;
- spin_unlock_irqrestore(&pkg_work_lock, flags);
-@@ -379,9 +380,48 @@ static int pkg_temp_thermal_platform_thermal_notify(__u64 msr_val)
- schedule_delayed_work_on(cpu,
- &per_cpu(pkg_temp_thermal_threshold_work, cpu),
- msecs_to_jiffies(notify_delay_ms));
++
++extern void __lockfunc rt_write_lock(rwlock_t *rwlock);
++extern void __lockfunc rt_read_lock(rwlock_t *rwlock);
++extern int __lockfunc rt_write_trylock(rwlock_t *rwlock);
++extern int __lockfunc rt_read_trylock(rwlock_t *rwlock);
++extern void __lockfunc rt_write_unlock(rwlock_t *rwlock);
++extern void __lockfunc rt_read_unlock(rwlock_t *rwlock);
++extern int __lockfunc rt_read_can_lock(rwlock_t *rwlock);
++extern int __lockfunc rt_write_can_lock(rwlock_t *rwlock);
++extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key *key);
++
++#define read_can_lock(rwlock) rt_read_can_lock(rwlock)
++#define write_can_lock(rwlock) rt_write_can_lock(rwlock)
++
++#define read_trylock(lock) __cond_lock(lock, rt_read_trylock(lock))
++#define write_trylock(lock) __cond_lock(lock, rt_write_trylock(lock))
++
++static inline int __write_trylock_rt_irqsave(rwlock_t *lock, unsigned long *flags)
++{
++ /* XXX ARCH_IRQ_ENABLED */
++ *flags = 0;
++ return rt_write_trylock(lock);
+}
+
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+static struct swork_event notify_work;
++#define write_trylock_irqsave(lock, flags) \
++ __cond_lock(lock, __write_trylock_rt_irqsave(lock, &(flags)))
++
++#define read_lock_irqsave(lock, flags) \
++ do { \
++ typecheck(unsigned long, flags); \
++ rt_read_lock(lock); \
++ flags = 0; \
++ } while (0)
++
++#define write_lock_irqsave(lock, flags) \
++ do { \
++ typecheck(unsigned long, flags); \
++ rt_write_lock(lock); \
++ flags = 0; \
++ } while (0)
+
-+static int thermal_notify_work_init(void)
-+{
-+ int err;
++#define read_lock(lock) rt_read_lock(lock)
+
-+ err = swork_get();
-+ if (err)
-+ return err;
++#define read_lock_bh(lock) \
++ do { \
++ local_bh_disable(); \
++ rt_read_lock(lock); \
++ } while (0)
+
-+ INIT_SWORK(¬ify_work, platform_thermal_notify_work);
- return 0;
- }
-
-+static void thermal_notify_work_cleanup(void)
-+{
-+ swork_put();
-+}
++#define read_lock_irq(lock) read_lock(lock)
+
-+static int pkg_temp_thermal_platform_thermal_notify(__u64 msr_val)
-+{
-+ swork_queue(¬ify_work);
-+ return 0;
-+}
++#define write_lock(lock) rt_write_lock(lock)
+
-+#else /* !CONFIG_PREEMPT_RT_FULL */
++#define write_lock_bh(lock) \
++ do { \
++ local_bh_disable(); \
++ rt_write_lock(lock); \
++ } while (0)
+
-+static int thermal_notify_work_init(void) { return 0; }
++#define write_lock_irq(lock) write_lock(lock)
+
-+static void thermal_notify_work_cleanup(void) { }
++#define read_unlock(lock) rt_read_unlock(lock)
+
-+static int pkg_temp_thermal_platform_thermal_notify(__u64 msr_val)
-+{
-+ platform_thermal_notify_work(NULL);
++#define read_unlock_bh(lock) \
++ do { \
++ rt_read_unlock(lock); \
++ local_bh_enable(); \
++ } while (0)
+
-+ return 0;
-+}
-+#endif /* CONFIG_PREEMPT_RT_FULL */
++#define read_unlock_irq(lock) read_unlock(lock)
+
- static int find_siblings_cpu(int cpu)
- {
- int i;
-@@ -585,6 +625,9 @@ static int __init pkg_temp_thermal_init(void)
- if (!x86_match_cpu(pkg_temp_thermal_ids))
- return -ENODEV;
-
-+ if (!thermal_notify_work_init())
-+ return -ENODEV;
++#define write_unlock(lock) rt_write_unlock(lock)
++
++#define write_unlock_bh(lock) \
++ do { \
++ rt_write_unlock(lock); \
++ local_bh_enable(); \
++ } while (0)
++
++#define write_unlock_irq(lock) write_unlock(lock)
++
++#define read_unlock_irqrestore(lock, flags) \
++ do { \
++ typecheck(unsigned long, flags); \
++ (void) flags; \
++ rt_read_unlock(lock); \
++ } while (0)
++
++#define write_unlock_irqrestore(lock, flags) \
++ do { \
++ typecheck(unsigned long, flags); \
++ (void) flags; \
++ rt_write_unlock(lock); \
++ } while (0)
++
++#define rwlock_init(rwl) \
++do { \
++ static struct lock_class_key __key; \
++ \
++ __rt_rwlock_init(rwl, #rwl, &__key); \
++} while (0)
+
- spin_lock_init(&pkg_work_lock);
- platform_thermal_package_notify =
- pkg_temp_thermal_platform_thermal_notify;
-@@ -609,7 +652,7 @@ static int __init pkg_temp_thermal_init(void)
- kfree(pkg_work_scheduled);
- platform_thermal_package_notify = NULL;
- platform_thermal_package_rate_control = NULL;
--
-+ thermal_notify_work_cleanup();
- return -ENODEV;
- }
-
-@@ -634,6 +677,7 @@ static void __exit pkg_temp_thermal_exit(void)
- mutex_unlock(&phy_dev_list_mutex);
- platform_thermal_package_notify = NULL;
- platform_thermal_package_rate_control = NULL;
-+ thermal_notify_work_cleanup();
- for_each_online_cpu(i)
- cancel_delayed_work_sync(
- &per_cpu(pkg_temp_thermal_threshold_work, i));
-diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
-index e8819aa20415..dd7f9bf45d6c 100644
---- a/drivers/tty/serial/8250/8250_core.c
-+++ b/drivers/tty/serial/8250/8250_core.c
-@@ -58,7 +58,16 @@ static struct uart_driver serial8250_reg;
-
- static unsigned int skip_txen_test; /* force skip of txen test at init time */
-
--#define PASS_LIMIT 512
+/*
-+ * On -rt we can have a more delays, and legitimately
-+ * so - so don't drop work spuriously and spam the
-+ * syslog:
++ * Internal functions made global for CPU pinning
+ */
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# define PASS_LIMIT 1000000
-+#else
-+# define PASS_LIMIT 512
++void __read_rt_lock(struct rt_rw_lock *lock);
++int __read_rt_trylock(struct rt_rw_lock *lock);
++void __write_rt_lock(struct rt_rw_lock *lock);
++int __write_rt_trylock(struct rt_rw_lock *lock);
++void __read_rt_unlock(struct rt_rw_lock *lock);
++void __write_rt_unlock(struct rt_rw_lock *lock);
++
+#endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rwlock_types.h linux-4.14/include/linux/rwlock_types.h
+--- linux-4.14.orig/include/linux/rwlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/rwlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -1,6 +1,10 @@
+ #ifndef __LINUX_RWLOCK_TYPES_H
+ #define __LINUX_RWLOCK_TYPES_H
- #include <asm/serial.h>
++#if !defined(__LINUX_SPINLOCK_TYPES_H)
++# error "Do not include directly, include spinlock_types.h"
++#endif
++
/*
-diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
-index 080d5a59d0a7..eecc4f111473 100644
---- a/drivers/tty/serial/8250/8250_port.c
-+++ b/drivers/tty/serial/8250/8250_port.c
-@@ -35,6 +35,7 @@
- #include <linux/nmi.h>
- #include <linux/mutex.h>
- #include <linux/slab.h>
-+#include <linux/kdb.h>
- #include <linux/uaccess.h>
- #include <linux/pm_runtime.h>
- #include <linux/timer.h>
-@@ -3144,9 +3145,9 @@ void serial8250_console_write(struct uart_8250_port *up, const char *s,
-
- serial8250_rpm_get(up);
-
-- if (port->sysrq)
-+ if (port->sysrq || oops_in_progress)
- locked = 0;
-- else if (oops_in_progress)
-+ else if (in_kdb_printk())
- locked = spin_trylock_irqsave(&port->lock, flags);
- else
- spin_lock_irqsave(&port->lock, flags);
-diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
-index e2c33b9528d8..53af53c43e8c 100644
---- a/drivers/tty/serial/amba-pl011.c
-+++ b/drivers/tty/serial/amba-pl011.c
-@@ -2194,13 +2194,19 @@ pl011_console_write(struct console *co, const char *s, unsigned int count)
-
- clk_enable(uap->clk);
-
-- local_irq_save(flags);
-+ /*
-+ * local_irq_save(flags);
-+ *
-+ * This local_irq_save() is nonsense. If we come in via sysrq
-+ * handling then interrupts are already disabled. Aside of
-+ * that the port.sysrq check is racy on SMP regardless.
-+ */
- if (uap->port.sysrq)
- locked = 0;
- else if (oops_in_progress)
-- locked = spin_trylock(&uap->port.lock);
-+ locked = spin_trylock_irqsave(&uap->port.lock, flags);
- else
-- spin_lock(&uap->port.lock);
-+ spin_lock_irqsave(&uap->port.lock, flags);
-
- /*
- * First save the CR then disable the interrupts
-@@ -2224,8 +2230,7 @@ pl011_console_write(struct console *co, const char *s, unsigned int count)
- pl011_write(old_cr, uap, REG_CR);
-
- if (locked)
-- spin_unlock(&uap->port.lock);
-- local_irq_restore(flags);
-+ spin_unlock_irqrestore(&uap->port.lock, flags);
-
- clk_disable(uap->clk);
- }
-diff --git a/drivers/tty/serial/omap-serial.c b/drivers/tty/serial/omap-serial.c
-index a2a529994ba5..0ee7c4c518df 100644
---- a/drivers/tty/serial/omap-serial.c
-+++ b/drivers/tty/serial/omap-serial.c
-@@ -1257,13 +1257,10 @@ serial_omap_console_write(struct console *co, const char *s,
-
- pm_runtime_get_sync(up->dev);
-
-- local_irq_save(flags);
-- if (up->port.sysrq)
-- locked = 0;
-- else if (oops_in_progress)
-- locked = spin_trylock(&up->port.lock);
-+ if (up->port.sysrq || oops_in_progress)
-+ locked = spin_trylock_irqsave(&up->port.lock, flags);
- else
-- spin_lock(&up->port.lock);
-+ spin_lock_irqsave(&up->port.lock, flags);
-
- /*
- * First save the IER then disable the interrupts
-@@ -1292,8 +1289,7 @@ serial_omap_console_write(struct console *co, const char *s,
- pm_runtime_mark_last_busy(up->dev);
- pm_runtime_put_autosuspend(up->dev);
- if (locked)
-- spin_unlock(&up->port.lock);
-- local_irq_restore(flags);
-+ spin_unlock_irqrestore(&up->port.lock, flags);
- }
-
- static int __init
-diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
-index 479e223f9cff..3418a54b4131 100644
---- a/drivers/usb/core/hcd.c
-+++ b/drivers/usb/core/hcd.c
-@@ -1761,9 +1761,9 @@ static void __usb_hcd_giveback_urb(struct urb *urb)
- * and no one may trigger the above deadlock situation when
- * running complete() in tasklet.
- */
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- urb->complete(urb);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
-
- usb_anchor_resume_wakeups(anchor);
- atomic_dec(&urb->use_count);
-diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
-index 17989b72cdae..88c6574b5992 100644
---- a/drivers/usb/gadget/function/f_fs.c
-+++ b/drivers/usb/gadget/function/f_fs.c
-@@ -1593,7 +1593,7 @@ static void ffs_data_put(struct ffs_data *ffs)
- pr_info("%s(): freeing\n", __func__);
- ffs_data_clear(ffs);
- BUG_ON(waitqueue_active(&ffs->ev.waitq) ||
-- waitqueue_active(&ffs->ep0req_completion.wait));
-+ swait_active(&ffs->ep0req_completion.wait));
- kfree(ffs->dev_name);
- kfree(ffs);
- }
-diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
-index 1468d8f085a3..6aae3ae25c18 100644
---- a/drivers/usb/gadget/legacy/inode.c
-+++ b/drivers/usb/gadget/legacy/inode.c
-@@ -346,7 +346,7 @@ ep_io (struct ep_data *epdata, void *buf, unsigned len)
- spin_unlock_irq (&epdata->dev->lock);
-
- if (likely (value == 0)) {
-- value = wait_event_interruptible (done.wait, done.done);
-+ value = swait_event_interruptible (done.wait, done.done);
- if (value != 0) {
- spin_lock_irq (&epdata->dev->lock);
- if (likely (epdata->ep != NULL)) {
-@@ -355,7 +355,7 @@ ep_io (struct ep_data *epdata, void *buf, unsigned len)
- usb_ep_dequeue (epdata->ep, epdata->req);
- spin_unlock_irq (&epdata->dev->lock);
-
-- wait_event (done.wait, done.done);
-+ swait_event (done.wait, done.done);
- if (epdata->status == -ECONNRESET)
- epdata->status = -EINTR;
- } else {
-diff --git a/fs/aio.c b/fs/aio.c
-index 428484f2f841..2b02e2eb2158 100644
---- a/fs/aio.c
-+++ b/fs/aio.c
-@@ -40,6 +40,7 @@
- #include <linux/ramfs.h>
- #include <linux/percpu-refcount.h>
- #include <linux/mount.h>
-+#include <linux/swork.h>
-
- #include <asm/kmap_types.h>
- #include <asm/uaccess.h>
-@@ -115,7 +116,7 @@ struct kioctx {
- struct page **ring_pages;
- long nr_pages;
-
-- struct work_struct free_work;
-+ struct swork_event free_work;
-
- /*
- * signals when all in-flight requests are done
-@@ -258,6 +259,7 @@ static int __init aio_setup(void)
- .mount = aio_mount,
- .kill_sb = kill_anon_super,
- };
-+ BUG_ON(swork_get());
- aio_mnt = kern_mount(&aio_fs);
- if (IS_ERR(aio_mnt))
- panic("Failed to create aio fs mount.");
-@@ -581,9 +583,9 @@ static int kiocb_cancel(struct aio_kiocb *kiocb)
- return cancel(&kiocb->common);
- }
-
--static void free_ioctx(struct work_struct *work)
-+static void free_ioctx(struct swork_event *sev)
- {
-- struct kioctx *ctx = container_of(work, struct kioctx, free_work);
-+ struct kioctx *ctx = container_of(sev, struct kioctx, free_work);
-
- pr_debug("freeing %p\n", ctx);
+ * include/linux/rwlock_types.h - generic rwlock type definitions
+ * and initializers
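
Editor's note, for orientation only (not part of the patch): the hunk above enforces that rwlock_types.h is only reached through spinlock_types.h. A minimal, hypothetical illustration of the same direct-include guard pattern, with made-up names:

/* foo_types.h - hypothetical header, illustrating the guard added above */
#ifndef _FOO_TYPES_H
#define _FOO_TYPES_H

#if !defined(_FOO_H)
# error "Do not include directly, include foo.h"
#endif

/* only type definitions live here; foo.h includes this at the right point */
struct foo_lock {
	int state;
};

#endif /* _FOO_TYPES_H */
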
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rwlock_types_rt.h linux-4.14/include/linux/rwlock_types_rt.h
+--- linux-4.14.orig/include/linux/rwlock_types_rt.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/rwlock_types_rt.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,55 @@
++#ifndef __LINUX_RWLOCK_TYPES_RT_H
++#define __LINUX_RWLOCK_TYPES_RT_H
++
++#ifndef __LINUX_SPINLOCK_TYPES_H
++#error "Do not include directly. Include spinlock_types.h instead"
++#endif
++
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define RW_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
++#else
++# define RW_DEP_MAP_INIT(lockname)
++#endif
++
++typedef struct rt_rw_lock rwlock_t;
++
++#define __RW_LOCK_UNLOCKED(name) __RWLOCK_RT_INITIALIZER(name)
++
++#define DEFINE_RWLOCK(name) \
++ rwlock_t name = __RW_LOCK_UNLOCKED(name)
++
++/*
++ * A reader biased implementation primarily for CPU pinning.
++ *
++ * Can be selected as a general replacement for the single reader RT rwlock
++ * variant.
++ */
++struct rt_rw_lock {
++ struct rt_mutex rtmutex;
++ atomic_t readers;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++ struct lockdep_map dep_map;
++#endif
++};
++
++#define READER_BIAS (1U << 31)
++#define WRITER_BIAS (1U << 30)
++
++#define __RWLOCK_RT_INITIALIZER(name) \
++{ \
++ .readers = ATOMIC_INIT(READER_BIAS), \
++ .rtmutex = __RT_MUTEX_INITIALIZER_SAVE_STATE(name.rtmutex), \
++ RW_DEP_MAP_INIT(name) \
++}
++
++void __rwlock_biased_rt_init(struct rt_rw_lock *lock, const char *name,
++ struct lock_class_key *key);
++
++#define rwlock_biased_rt_init(rwlock) \
++ do { \
++ static struct lock_class_key __key; \
++ \
++ __rwlock_biased_rt_init((rwlock), #rwlock, &__key); \
++ } while (0)
++
++#endif
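
Editor's aside (not part of the patch): with rwlock_types_rt.h in place, an RT kernel backs rwlock_t with an rtmutex plus an atomic reader count seeded with READER_BIAS, while the driver-facing API stays the usual read_lock()/write_lock(). A minimal sketch under that assumption; the lock and the counter below are hypothetical:

#include <linux/spinlock.h>	/* pulls in the rwlock definitions above on RT */

static DEFINE_RWLOCK(stats_lock);	/* __RWLOCK_RT_INITIALIZER on RT builds */
static unsigned long stats_value;

/* Readers may run concurrently; on RT they sleep instead of spin on contention. */
static unsigned long stats_read(void)
{
	unsigned long v;

	read_lock(&stats_lock);
	v = stats_value;
	read_unlock(&stats_lock);
	return v;
}

/* The write side serializes against readers through the embedded rtmutex. */
static void stats_write(unsigned long v)
{
	write_lock(&stats_lock);
	stats_value = v;
	write_unlock(&stats_lock);
}
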
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rwsem.h linux-4.14/include/linux/rwsem.h
+--- linux-4.14.orig/include/linux/rwsem.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/rwsem.h 2018-09-05 11:05:07.000000000 +0200
+@@ -20,6 +20,10 @@
+ #include <linux/osq_lock.h>
+ #endif
-@@ -602,8 +604,8 @@ static void free_ioctx_reqs(struct percpu_ref *ref)
- if (ctx->rq_wait && atomic_dec_and_test(&ctx->rq_wait->count))
- complete(&ctx->rq_wait->comp);
++#ifdef CONFIG_PREEMPT_RT_FULL
++#include <linux/rwsem_rt.h>
++#else /* PREEMPT_RT_FULL */
++
+ struct rw_semaphore;
-- INIT_WORK(&ctx->free_work, free_ioctx);
-- schedule_work(&ctx->free_work);
-+ INIT_SWORK(&ctx->free_work, free_ioctx);
-+ swork_queue(&ctx->free_work);
+ #ifdef CONFIG_RWSEM_GENERIC_SPINLOCK
+@@ -114,6 +118,13 @@
+ return !list_empty(&sem->wait_list);
}
++#endif /* !PREEMPT_RT_FULL */
++
++/*
++ * The functions below are the same for all rwsem implementations including
++ * the RT specific variant.
++ */
++
/*
-@@ -611,9 +613,9 @@ static void free_ioctx_reqs(struct percpu_ref *ref)
- * and ctx->users has dropped to 0, so we know no more kiocbs can be submitted -
- * now it's safe to cancel any that need to be.
+ * lock for reading
*/
--static void free_ioctx_users(struct percpu_ref *ref)
-+static void free_ioctx_users_work(struct swork_event *sev)
- {
-- struct kioctx *ctx = container_of(ref, struct kioctx, users);
-+ struct kioctx *ctx = container_of(sev, struct kioctx, free_work);
- struct aio_kiocb *req;
-
- spin_lock_irq(&ctx->ctx_lock);
-@@ -632,6 +634,14 @@ static void free_ioctx_users(struct percpu_ref *ref)
- percpu_ref_put(&ctx->reqs);
- }
-
-+static void free_ioctx_users(struct percpu_ref *ref)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/rwsem_rt.h linux-4.14/include/linux/rwsem_rt.h
+--- linux-4.14.orig/include/linux/rwsem_rt.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/rwsem_rt.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,67 @@
++#ifndef _LINUX_RWSEM_RT_H
++#define _LINUX_RWSEM_RT_H
++
++#ifndef _LINUX_RWSEM_H
++#error "Include rwsem.h"
++#endif
++
++#include <linux/rtmutex.h>
++#include <linux/swait.h>
++
++#define READER_BIAS (1U << 31)
++#define WRITER_BIAS (1U << 30)
++
++struct rw_semaphore {
++ atomic_t readers;
++ struct rt_mutex rtmutex;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++ struct lockdep_map dep_map;
++#endif
++};
++
++#define __RWSEM_INITIALIZER(name) \
++{ \
++ .readers = ATOMIC_INIT(READER_BIAS), \
++ .rtmutex = __RT_MUTEX_INITIALIZER(name.rtmutex), \
++ RW_DEP_MAP_INIT(name) \
++}
++
++#define DECLARE_RWSEM(lockname) \
++ struct rw_semaphore lockname = __RWSEM_INITIALIZER(lockname)
++
++extern void __rwsem_init(struct rw_semaphore *rwsem, const char *name,
++ struct lock_class_key *key);
++
++#define __init_rwsem(sem, name, key) \
++do { \
++ rt_mutex_init(&(sem)->rtmutex); \
++ __rwsem_init((sem), (name), (key)); \
++} while (0)
++
++#define init_rwsem(sem) \
++do { \
++ static struct lock_class_key __key; \
++ \
++ __init_rwsem((sem), #sem, &__key); \
++} while (0)
++
++static inline int rwsem_is_locked(struct rw_semaphore *sem)
+{
-+ struct kioctx *ctx = container_of(ref, struct kioctx, users);
++ return atomic_read(&sem->readers) != READER_BIAS;
++}
+
-+ INIT_SWORK(&ctx->free_work, free_ioctx_users_work);
-+ swork_queue(&ctx->free_work);
++static inline int rwsem_is_contended(struct rw_semaphore *sem)
++{
++ return atomic_read(&sem->readers) > 0;
+}
+
- static int ioctx_add_table(struct kioctx *ctx, struct mm_struct *mm)
- {
- unsigned i, new_nr;
-diff --git a/fs/autofs4/autofs_i.h b/fs/autofs4/autofs_i.h
-index a1fba4285277..3796769b4cd1 100644
---- a/fs/autofs4/autofs_i.h
-+++ b/fs/autofs4/autofs_i.h
-@@ -31,6 +31,7 @@
- #include <linux/sched.h>
- #include <linux/mount.h>
- #include <linux/namei.h>
-+#include <linux/delay.h>
- #include <asm/current.h>
- #include <linux/uaccess.h>
-
-diff --git a/fs/autofs4/expire.c b/fs/autofs4/expire.c
-index d8e6d421c27f..2e689ab1306b 100644
---- a/fs/autofs4/expire.c
-+++ b/fs/autofs4/expire.c
-@@ -148,7 +148,7 @@ static struct dentry *get_next_positive_dentry(struct dentry *prev,
- parent = p->d_parent;
- if (!spin_trylock(&parent->d_lock)) {
- spin_unlock(&p->d_lock);
-- cpu_relax();
-+ cpu_chill();
- goto relock;
- }
- spin_unlock(&p->d_lock);
-diff --git a/fs/buffer.c b/fs/buffer.c
-index b205a629001d..5646afc022ba 100644
---- a/fs/buffer.c
-+++ b/fs/buffer.c
-@@ -301,8 +301,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
- * decide that the page is now completely done.
- */
- first = page_buffers(page);
-- local_irq_save(flags);
-- bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
-+ flags = bh_uptodate_lock_irqsave(first);
- clear_buffer_async_read(bh);
- unlock_buffer(bh);
- tmp = bh;
-@@ -315,8 +314,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
- }
- tmp = tmp->b_this_page;
- } while (tmp != bh);
-- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-- local_irq_restore(flags);
-+ bh_uptodate_unlock_irqrestore(first, flags);
-
- /*
- * If none of the buffers had errors and they are all
-@@ -328,9 +326,7 @@ static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
- return;
-
- still_busy:
-- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-- local_irq_restore(flags);
-- return;
-+ bh_uptodate_unlock_irqrestore(first, flags);
- }
-
- /*
-@@ -358,8 +354,7 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate)
- }
-
- first = page_buffers(page);
-- local_irq_save(flags);
-- bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
-+ flags = bh_uptodate_lock_irqsave(first);
-
- clear_buffer_async_write(bh);
- unlock_buffer(bh);
-@@ -371,15 +366,12 @@ void end_buffer_async_write(struct buffer_head *bh, int uptodate)
- }
- tmp = tmp->b_this_page;
- }
-- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-- local_irq_restore(flags);
-+ bh_uptodate_unlock_irqrestore(first, flags);
- end_page_writeback(page);
- return;
-
- still_busy:
-- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-- local_irq_restore(flags);
-- return;
-+ bh_uptodate_unlock_irqrestore(first, flags);
++extern void __down_read(struct rw_semaphore *sem);
++extern int __down_read_trylock(struct rw_semaphore *sem);
++extern void __down_write(struct rw_semaphore *sem);
++extern int __must_check __down_write_killable(struct rw_semaphore *sem);
++extern int __down_write_trylock(struct rw_semaphore *sem);
++extern void __up_read(struct rw_semaphore *sem);
++extern void __up_write(struct rw_semaphore *sem);
++extern void __downgrade_write(struct rw_semaphore *sem);
++
++#endif
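
Editor's aside (hedged, not part of the patch): the RT rw_semaphore keeps the normal down_read()/down_write() entry points; only the __down_*()/__up_*() internals declared above change. The one property the header itself documents is that an unlocked semaphore has ->readers == READER_BIAS, which is exactly what rwsem_is_locked() tests. A small, hypothetical usage sketch:

#include <linux/rwsem.h>

static DECLARE_RWSEM(cfg_sem);		/* uses __RWSEM_INITIALIZER from above on RT */
static int cfg_value;

static int cfg_get(void)
{
	int v;

	down_read(&cfg_sem);		/* ends up in __down_read() on RT */
	v = cfg_value;
	up_read(&cfg_sem);
	return v;
}

static void cfg_set(int v)
{
	down_write(&cfg_sem);		/* ends up in __down_write() on RT */
	WARN_ON(!rwsem_is_locked(&cfg_sem));	/* ->readers != READER_BIAS here */
	cfg_value = v;
	up_write(&cfg_sem);
}
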
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/sched/mm.h linux-4.14/include/linux/sched/mm.h
+--- linux-4.14.orig/include/linux/sched/mm.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/sched/mm.h 2018-09-05 11:05:07.000000000 +0200
+@@ -43,6 +43,17 @@
+ __mmdrop(mm);
}
- EXPORT_SYMBOL(end_buffer_async_write);
-
-@@ -3383,6 +3375,7 @@ struct buffer_head *alloc_buffer_head(gfp_t gfp_flags)
- struct buffer_head *ret = kmem_cache_zalloc(bh_cachep, gfp_flags);
- if (ret) {
- INIT_LIST_HEAD(&ret->b_assoc_buffers);
-+ buffer_head_init_locks(ret);
- preempt_disable();
- __this_cpu_inc(bh_accounting.nr);
- recalc_bh_state();
-diff --git a/fs/cifs/readdir.c b/fs/cifs/readdir.c
-index 8f6a2a5863b9..4217828d0b68 100644
---- a/fs/cifs/readdir.c
-+++ b/fs/cifs/readdir.c
-@@ -80,7 +80,7 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
- struct inode *inode;
- struct super_block *sb = parent->d_sb;
- struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
-- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
-+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
-
- cifs_dbg(FYI, "%s: for %s\n", __func__, name->name);
-diff --git a/fs/dcache.c b/fs/dcache.c
-index 4485a48f4091..691039a6a872 100644
---- a/fs/dcache.c
-+++ b/fs/dcache.c
-@@ -19,6 +19,7 @@
- #include <linux/mm.h>
- #include <linux/fs.h>
- #include <linux/fsnotify.h>
-+#include <linux/delay.h>
- #include <linux/slab.h>
- #include <linux/init.h>
- #include <linux/hash.h>
-@@ -750,6 +751,8 @@ static inline bool fast_dput(struct dentry *dentry)
- */
- void dput(struct dentry *dentry)
- {
-+ struct dentry *parent;
++#ifdef CONFIG_PREEMPT_RT_BASE
++extern void __mmdrop_delayed(struct rcu_head *rhp);
++static inline void mmdrop_delayed(struct mm_struct *mm)
++{
++ if (atomic_dec_and_test(&mm->mm_count))
++ call_rcu(&mm->delayed_drop, __mmdrop_delayed);
++}
++#else
++# define mmdrop_delayed(mm) mmdrop(mm)
++#endif
+
- if (unlikely(!dentry))
- return;
-
-@@ -788,9 +791,18 @@ void dput(struct dentry *dentry)
- return;
+ static inline void mmdrop_async_fn(struct work_struct *work)
+ {
+ struct mm_struct *mm = container_of(work, struct mm_struct, async_put_work);
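
Editor's note on the sched/mm.h hunk above: on PREEMPT_RT_BASE the final mmdrop is not performed synchronously; mmdrop_delayed() hands the mm to RCU via __mmdrop_delayed(), while on !RT it falls back to plain mmdrop(). A hedged sketch of a caller; the function name is hypothetical:

#include <linux/sched/mm.h>

/* Hypothetical teardown path that may run where freeing the mm right away
 * is undesirable on RT. */
static void example_release_mm(struct mm_struct *mm)
{
	if (!mm)
		return;
	/* Drops mm_count; the actual __mmdrop() runs from an RCU callback
	 * on RT and synchronously otherwise. */
	mmdrop_delayed(mm);
}
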
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/sched/task.h linux-4.14/include/linux/sched/task.h
+--- linux-4.14.orig/include/linux/sched/task.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/sched/task.h 2018-09-05 11:05:07.000000000 +0200
+@@ -88,6 +88,15 @@
- kill_it:
-- dentry = dentry_kill(dentry);
-- if (dentry) {
-- cond_resched();
-+ parent = dentry_kill(dentry);
-+ if (parent) {
-+ int r;
-+
-+ if (parent == dentry) {
-+ /* the task with the highest priority won't schedule */
-+ r = cond_resched();
-+ if (!r)
-+ cpu_chill();
-+ } else {
-+ dentry = parent;
-+ }
- goto repeat;
- }
- }
-@@ -2324,7 +2336,7 @@ void d_delete(struct dentry * dentry)
- if (dentry->d_lockref.count == 1) {
- if (!spin_trylock(&inode->i_lock)) {
- spin_unlock(&dentry->d_lock);
-- cpu_relax();
-+ cpu_chill();
- goto again;
- }
- dentry->d_flags &= ~DCACHE_CANT_MOUNT;
-@@ -2384,21 +2396,24 @@ static inline void end_dir_add(struct inode *dir, unsigned n)
+ #define get_task_struct(tsk) do { atomic_inc(&(tsk)->usage); } while(0)
- static void d_wait_lookup(struct dentry *dentry)
- {
-- if (d_in_lookup(dentry)) {
-- DECLARE_WAITQUEUE(wait, current);
-- add_wait_queue(dentry->d_wait, &wait);
-- do {
-- set_current_state(TASK_UNINTERRUPTIBLE);
-- spin_unlock(&dentry->d_lock);
-- schedule();
-- spin_lock(&dentry->d_lock);
-- } while (d_in_lookup(dentry));
-- }
-+ struct swait_queue __wait;
-+
-+ if (!d_in_lookup(dentry))
-+ return;
++#ifdef CONFIG_PREEMPT_RT_BASE
++extern void __put_task_struct_cb(struct rcu_head *rhp);
+
-+ INIT_LIST_HEAD(&__wait.task_list);
-+ do {
-+ prepare_to_swait(dentry->d_wait, &__wait, TASK_UNINTERRUPTIBLE);
-+ spin_unlock(&dentry->d_lock);
-+ schedule();
-+ spin_lock(&dentry->d_lock);
-+ } while (d_in_lookup(dentry));
-+ finish_swait(dentry->d_wait, &__wait);
++static inline void put_task_struct(struct task_struct *t)
++{
++ if (atomic_dec_and_test(&t->usage))
++ call_rcu(&t->put_rcu, __put_task_struct_cb);
++}
++#else
+ extern void __put_task_struct(struct task_struct *t);
+
+ static inline void put_task_struct(struct task_struct *t)
+@@ -95,7 +104,7 @@
+ if (atomic_dec_and_test(&t->usage))
+ __put_task_struct(t);
}
+-
++#endif
+ struct task_struct *task_rcu_dereference(struct task_struct **ptask);
- struct dentry *d_alloc_parallel(struct dentry *parent,
- const struct qstr *name,
-- wait_queue_head_t *wq)
-+ struct swait_queue_head *wq)
- {
- unsigned int hash = name->hash;
- struct hlist_bl_head *b = in_lookup_hash(parent, hash);
-@@ -2507,7 +2522,7 @@ void __d_lookup_done(struct dentry *dentry)
- hlist_bl_lock(b);
- dentry->d_flags &= ~DCACHE_PAR_LOOKUP;
- __hlist_bl_del(&dentry->d_u.d_in_lookup_hash);
-- wake_up_all(dentry->d_wait);
-+ swake_up_all(dentry->d_wait);
- dentry->d_wait = NULL;
- hlist_bl_unlock(b);
- INIT_HLIST_NODE(&dentry->d_u.d_alias);
-@@ -3604,6 +3619,11 @@ EXPORT_SYMBOL(d_genocide);
+ #ifdef CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/sched/wake_q.h linux-4.14/include/linux/sched/wake_q.h
+--- linux-4.14.orig/include/linux/sched/wake_q.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/sched/wake_q.h 2018-09-05 11:05:07.000000000 +0200
+@@ -47,8 +47,29 @@
+ head->lastp = &head->first;
+ }
- void __init vfs_caches_init_early(void)
- {
-+ int i;
+-extern void wake_q_add(struct wake_q_head *head,
+- struct task_struct *task);
+-extern void wake_up_q(struct wake_q_head *head);
++extern void __wake_q_add(struct wake_q_head *head,
++ struct task_struct *task, bool sleeper);
++static inline void wake_q_add(struct wake_q_head *head,
++ struct task_struct *task)
++{
++ __wake_q_add(head, task, false);
++}
++
++static inline void wake_q_add_sleeper(struct wake_q_head *head,
++ struct task_struct *task)
++{
++ __wake_q_add(head, task, true);
++}
+
-+ for (i = 0; i < ARRAY_SIZE(in_lookup_hashtable); i++)
-+ INIT_HLIST_BL_HEAD(&in_lookup_hashtable[i]);
++extern void __wake_up_q(struct wake_q_head *head, bool sleeper);
++static inline void wake_up_q(struct wake_q_head *head)
++{
++ __wake_up_q(head, false);
++}
+
- dcache_init_early();
- inode_init_early();
- }
-diff --git a/fs/eventpoll.c b/fs/eventpoll.c
-index 10db91218933..42af0a06f657 100644
---- a/fs/eventpoll.c
-+++ b/fs/eventpoll.c
-@@ -510,12 +510,12 @@ static int ep_poll_wakeup_proc(void *priv, void *cookie, int call_nests)
- */
- static void ep_poll_safewake(wait_queue_head_t *wq)
- {
-- int this_cpu = get_cpu();
-+ int this_cpu = get_cpu_light();
++static inline void wake_up_q_sleeper(struct wake_q_head *head)
++{
++ __wake_up_q(head, true);
++}
- ep_call_nested(&poll_safewake_ncalls, EP_MAX_NESTS,
- ep_poll_wakeup_proc, NULL, wq, (void *) (long) this_cpu);
+ #endif /* _LINUX_SCHED_WAKE_Q_H */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/sched.h linux-4.14/include/linux/sched.h
+--- linux-4.14.orig/include/linux/sched.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/sched.h 2018-09-05 11:05:07.000000000 +0200
+@@ -27,6 +27,7 @@
+ #include <linux/signal_types.h>
+ #include <linux/mm_types_task.h>
+ #include <linux/task_io_accounting.h>
++#include <asm/kmap_types.h>
-- put_cpu();
-+ put_cpu_light();
- }
+ /* task_struct member predeclarations (sorted alphabetically): */
+ struct audit_context;
+@@ -93,7 +94,6 @@
- static void ep_remove_wait_queue(struct eppoll_entry *pwq)
-diff --git a/fs/exec.c b/fs/exec.c
-index 67e86571685a..fe14cdd84016 100644
---- a/fs/exec.c
-+++ b/fs/exec.c
-@@ -1017,12 +1017,14 @@ static int exec_mmap(struct mm_struct *mm)
- }
- }
- task_lock(tsk);
-+ preempt_disable_rt();
- active_mm = tsk->active_mm;
- tsk->mm = mm;
- tsk->active_mm = mm;
- activate_mm(active_mm, mm);
- tsk->mm->vmacache_seqnum = 0;
- vmacache_flush(tsk);
-+ preempt_enable_rt();
- task_unlock(tsk);
- if (old_mm) {
- up_read(&old_mm->mmap_sem);
-diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
-index 642c57b8de7b..8494b9308333 100644
---- a/fs/fuse/dir.c
-+++ b/fs/fuse/dir.c
-@@ -1191,7 +1191,7 @@ static int fuse_direntplus_link(struct file *file,
- struct inode *dir = d_inode(parent);
- struct fuse_conn *fc;
- struct inode *inode;
-- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
-+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+ /* Convenience macros for the sake of wake_up(): */
+ #define TASK_NORMAL (TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE)
+-#define TASK_ALL (TASK_NORMAL | __TASK_STOPPED | __TASK_TRACED)
- if (!o->nodeid) {
- /*
-diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
-index 684996c8a3a4..6e18a06aaabe 100644
---- a/fs/jbd2/checkpoint.c
-+++ b/fs/jbd2/checkpoint.c
-@@ -116,6 +116,8 @@ void __jbd2_log_wait_for_space(journal_t *journal)
- nblocks = jbd2_space_needed(journal);
- while (jbd2_log_space_left(journal) < nblocks) {
- write_unlock(&journal->j_state_lock);
-+ if (current->plug)
-+ io_schedule();
- mutex_lock(&journal->j_checkpoint_mutex);
+ /* get_task_state(): */
+ #define TASK_REPORT (TASK_RUNNING | TASK_INTERRUPTIBLE | \
+@@ -101,12 +101,8 @@
+ __TASK_TRACED | EXIT_DEAD | EXIT_ZOMBIE | \
+ TASK_PARKED)
- /*
-diff --git a/fs/locks.c b/fs/locks.c
-index 22c5b4aa4961..269c6a44449a 100644
---- a/fs/locks.c
-+++ b/fs/locks.c
-@@ -935,7 +935,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
- return -ENOMEM;
- }
+-#define task_is_traced(task) ((task->state & __TASK_TRACED) != 0)
+-
+ #define task_is_stopped(task) ((task->state & __TASK_STOPPED) != 0)
-- percpu_down_read_preempt_disable(&file_rwsem);
-+ percpu_down_read(&file_rwsem);
- spin_lock(&ctx->flc_lock);
- if (request->fl_flags & FL_ACCESS)
- goto find_conflict;
-@@ -976,7 +976,7 @@ static int flock_lock_inode(struct inode *inode, struct file_lock *request)
+-#define task_is_stopped_or_traced(task) ((task->state & (__TASK_STOPPED | __TASK_TRACED)) != 0)
+-
+ #define task_contributes_to_load(task) ((task->state & TASK_UNINTERRUPTIBLE) != 0 && \
+ (task->flags & PF_FROZEN) == 0 && \
+ (task->state & TASK_NOLOAD) == 0)
+@@ -134,6 +130,11 @@
+ smp_store_mb(current->state, (state_value)); \
+ } while (0)
- out:
- spin_unlock(&ctx->flc_lock);
-- percpu_up_read_preempt_enable(&file_rwsem);
-+ percpu_up_read(&file_rwsem);
- if (new_fl)
- locks_free_lock(new_fl);
- locks_dispose_list(&dispose);
-@@ -1013,7 +1013,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
- new_fl2 = locks_alloc_lock();
- }
++#define __set_current_state_no_track(state_value) \
++ current->state = (state_value);
++#define set_current_state_no_track(state_value) \
++ smp_store_mb(current->state, (state_value));
++
+ #define set_special_state(state_value) \
+ do { \
+ unsigned long flags; /* may shadow */ \
+@@ -187,6 +188,9 @@
+ #define set_current_state(state_value) \
+ smp_store_mb(current->state, (state_value))
+
++#define __set_current_state_no_track(state_value) __set_current_state(state_value)
++#define set_current_state_no_track(state_value) set_current_state(state_value)
++
+ /*
+ * set_special_state() should be used for those states when the blocking task
+ * can not use the regular condition based wait-loop. In that case we must
+@@ -566,6 +570,8 @@
+ #endif
+ /* -1 unrunnable, 0 runnable, >0 stopped: */
+ volatile long state;
++ /* saved state for "spinlock sleepers" */
++ volatile long saved_state;
-- percpu_down_read_preempt_disable(&file_rwsem);
-+ percpu_down_read(&file_rwsem);
- spin_lock(&ctx->flc_lock);
- /*
- * New lock request. Walk all POSIX locks and look for conflicts. If
-@@ -1185,7 +1185,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
- }
- out:
- spin_unlock(&ctx->flc_lock);
-- percpu_up_read_preempt_enable(&file_rwsem);
-+ percpu_up_read(&file_rwsem);
/*
- * Free any unused locks.
- */
-@@ -1460,7 +1460,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
- return error;
- }
+ * This begins the randomizable portion of task_struct. Only
+@@ -618,7 +624,25 @@
+
+ unsigned int policy;
+ int nr_cpus_allowed;
+- cpumask_t cpus_allowed;
++ const cpumask_t *cpus_ptr;
++ cpumask_t cpus_mask;
++#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++ int migrate_disable;
++ int migrate_disable_update;
++ int pinned_on_cpu;
++# ifdef CONFIG_SCHED_DEBUG
++ int migrate_disable_atomic;
++# endif
++
++#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
++ int migrate_disable;
++# ifdef CONFIG_SCHED_DEBUG
++ int migrate_disable_atomic;
++# endif
++#endif
++#ifdef CONFIG_PREEMPT_RT_FULL
++ int sleeping_lock;
++#endif
-- percpu_down_read_preempt_disable(&file_rwsem);
-+ percpu_down_read(&file_rwsem);
- spin_lock(&ctx->flc_lock);
+ #ifdef CONFIG_PREEMPT_RCU
+ int rcu_read_lock_nesting;
+@@ -777,6 +801,9 @@
+ #ifdef CONFIG_POSIX_TIMERS
+ struct task_cputime cputime_expires;
+ struct list_head cpu_timers[3];
++#ifdef CONFIG_PREEMPT_RT_BASE
++ struct task_struct *posix_timer_list;
++#endif
+ #endif
- time_out_leases(inode, &dispose);
-@@ -1512,13 +1512,13 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
- locks_insert_block(fl, new_fl);
- trace_break_lease_block(inode, new_fl);
- spin_unlock(&ctx->flc_lock);
-- percpu_up_read_preempt_enable(&file_rwsem);
-+ percpu_up_read(&file_rwsem);
+ /* Process credentials: */
+@@ -820,11 +847,17 @@
+ /* Signal handlers: */
+ struct signal_struct *signal;
+ struct sighand_struct *sighand;
++ struct sigqueue *sigqueue_cache;
++
+ sigset_t blocked;
+ sigset_t real_blocked;
+ /* Restored if set_restore_sigmask() was used: */
+ sigset_t saved_sigmask;
+ struct sigpending pending;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ /* TODO: move me into ->restart_block ? */
++ struct siginfo forced_info;
++#endif
+ unsigned long sas_ss_sp;
+ size_t sas_ss_size;
+ unsigned int sas_ss_flags;
+@@ -849,6 +882,7 @@
+ raw_spinlock_t pi_lock;
+
+ struct wake_q_node wake_q;
++ struct wake_q_node wake_q_sleeper;
+
+ #ifdef CONFIG_RT_MUTEXES
+ /* PI waiters blocked on a rt_mutex held by this task: */
+@@ -1116,9 +1150,23 @@
+ unsigned int sequential_io;
+ unsigned int sequential_io_avg;
+ #endif
++#ifdef CONFIG_PREEMPT_RT_BASE
++ struct rcu_head put_rcu;
++ int softirq_nestcnt;
++ unsigned int softirqs_raised;
++#endif
++#ifdef CONFIG_PREEMPT_RT_FULL
++# if defined CONFIG_HIGHMEM || defined CONFIG_X86_32
++ int kmap_idx;
++ pte_t kmap_pte[KM_TYPE_NR];
++# endif
++#endif
+ #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
+ unsigned long task_state_change;
+ #endif
++#ifdef CONFIG_PREEMPT_RT_FULL
++ int xmit_recursion;
++#endif
+ int pagefault_disabled;
+ #ifdef CONFIG_MMU
+ struct task_struct *oom_reaper_list;
+@@ -1332,6 +1380,7 @@
+ /*
+ * Per process flags
+ */
++#define PF_IN_SOFTIRQ 0x00000001 /* Task is serving softirq */
+ #define PF_IDLE 0x00000002 /* I am an IDLE thread */
+ #define PF_EXITING 0x00000004 /* Getting shut down */
+ #define PF_EXITPIDONE 0x00000008 /* PI exit done on shut down */
+@@ -1355,7 +1404,7 @@
+ #define PF_KTHREAD 0x00200000 /* I am a kernel thread */
+ #define PF_RANDOMIZE 0x00400000 /* Randomize virtual address space */
+ #define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */
+-#define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_allowed */
++#define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_mask */
+ #define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */
+ #define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */
+ #define PF_FREEZER_SKIP 0x40000000 /* Freezer should not count it as freezable */
+@@ -1535,6 +1584,7 @@
+
+ extern int wake_up_state(struct task_struct *tsk, unsigned int state);
+ extern int wake_up_process(struct task_struct *tsk);
++extern int wake_up_lock_sleeper(struct task_struct *tsk);
+ extern void wake_up_new_task(struct task_struct *tsk);
+
+ #ifdef CONFIG_SMP
+@@ -1611,6 +1661,89 @@
+ return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED));
+ }
+
++#ifdef CONFIG_PREEMPT_LAZY
++static inline void set_tsk_need_resched_lazy(struct task_struct *tsk)
++{
++ set_tsk_thread_flag(tsk,TIF_NEED_RESCHED_LAZY);
++}
++
++static inline void clear_tsk_need_resched_lazy(struct task_struct *tsk)
++{
++ clear_tsk_thread_flag(tsk,TIF_NEED_RESCHED_LAZY);
++}
++
++static inline int test_tsk_need_resched_lazy(struct task_struct *tsk)
++{
++ return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED_LAZY));
++}
++
++static inline int need_resched_lazy(void)
++{
++ return test_thread_flag(TIF_NEED_RESCHED_LAZY);
++}
++
++static inline int need_resched_now(void)
++{
++ return test_thread_flag(TIF_NEED_RESCHED);
++}
++
++#else
++static inline void clear_tsk_need_resched_lazy(struct task_struct *tsk) { }
++static inline int need_resched_lazy(void) { return 0; }
++
++static inline int need_resched_now(void)
++{
++ return test_thread_flag(TIF_NEED_RESCHED);
++}
++
++#endif
++
++
++static inline bool __task_is_stopped_or_traced(struct task_struct *task)
++{
++ if (task->state & (__TASK_STOPPED | __TASK_TRACED))
++ return true;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ if (task->saved_state & (__TASK_STOPPED | __TASK_TRACED))
++ return true;
++#endif
++ return false;
++}
++
++static inline bool task_is_stopped_or_traced(struct task_struct *task)
++{
++ bool traced_stopped;
++
++#ifdef CONFIG_PREEMPT_RT_FULL
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&task->pi_lock, flags);
++ traced_stopped = __task_is_stopped_or_traced(task);
++ raw_spin_unlock_irqrestore(&task->pi_lock, flags);
++#else
++ traced_stopped = __task_is_stopped_or_traced(task);
++#endif
++ return traced_stopped;
++}
++
++static inline bool task_is_traced(struct task_struct *task)
++{
++ bool traced = false;
++
++ if (task->state & __TASK_TRACED)
++ return true;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ /* in case the task is sleeping on tasklist_lock */
++ raw_spin_lock_irq(&task->pi_lock);
++ if (task->state & __TASK_TRACED)
++ traced = true;
++ else if (task->saved_state & __TASK_TRACED)
++ traced = true;
++ raw_spin_unlock_irq(&task->pi_lock);
++#endif
++ return traced;
++}
++
+ /*
+ * cond_resched() and cond_resched_lock(): latency reduction via
+ * explicit rescheduling in places that are safe. The return
+@@ -1636,12 +1769,16 @@
+ __cond_resched_lock(lock); \
+ })
+
++#ifndef CONFIG_PREEMPT_RT_FULL
+ extern int __cond_resched_softirq(void);
+
+ #define cond_resched_softirq() ({ \
+ ___might_sleep(__FILE__, __LINE__, SOFTIRQ_DISABLE_OFFSET); \
+ __cond_resched_softirq(); \
+ })
++#else
++# define cond_resched_softirq() cond_resched()
++#endif
- locks_dispose_list(&dispose);
- error = wait_event_interruptible_timeout(new_fl->fl_wait,
- !new_fl->fl_next, break_time);
+ static inline void cond_resched_rcu(void)
+ {
+@@ -1671,6 +1808,23 @@
+ return unlikely(tif_need_resched());
+ }
-- percpu_down_read_preempt_disable(&file_rwsem);
-+ percpu_down_read(&file_rwsem);
- spin_lock(&ctx->flc_lock);
- trace_break_lease_unblock(inode, new_fl);
- locks_delete_block(new_fl);
-@@ -1535,7 +1535,7 @@ int __break_lease(struct inode *inode, unsigned int mode, unsigned int type)
- }
- out:
- spin_unlock(&ctx->flc_lock);
-- percpu_up_read_preempt_enable(&file_rwsem);
-+ percpu_up_read(&file_rwsem);
- locks_dispose_list(&dispose);
- locks_free_lock(new_fl);
- return error;
-@@ -1609,7 +1609,7 @@ int fcntl_getlease(struct file *filp)
++#ifdef CONFIG_PREEMPT_RT_FULL
++static inline void sleeping_lock_inc(void)
++{
++ current->sleeping_lock++;
++}
++
++static inline void sleeping_lock_dec(void)
++{
++ current->sleeping_lock--;
++}
++
++#else
++
++static inline void sleeping_lock_inc(void) { }
++static inline void sleeping_lock_dec(void) { }
++#endif
++
+ /*
+ * Wrappers for p->thread_info->cpu access. No-op on UP.
+ */
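
Editor's aside on the task_struct changes above: code that used to read tsk->cpus_allowed now goes through the tsk->cpus_ptr indirection, which normally points at the task's own ->cpus_mask. A minimal, hypothetical helper written under that assumption:

#include <linux/sched.h>
#include <linux/cpumask.h>

/* Hypothetical helper: how many CPUs may this task currently run on? */
static inline unsigned int example_allowed_cpus(struct task_struct *p)
{
	/* p->cpus_ptr usually points at p->cpus_mask; core code may repoint
	 * it while the task is pinned, which is why readers use the pointer. */
	return cpumask_weight(p->cpus_ptr);
}
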
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/seqlock.h linux-4.14/include/linux/seqlock.h
+--- linux-4.14.orig/include/linux/seqlock.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/seqlock.h 2018-09-05 11:05:07.000000000 +0200
+@@ -221,20 +221,30 @@
+ return __read_seqcount_retry(s, start);
+ }
- ctx = smp_load_acquire(&inode->i_flctx);
- if (ctx && !list_empty_careful(&ctx->flc_lease)) {
-- percpu_down_read_preempt_disable(&file_rwsem);
-+ percpu_down_read(&file_rwsem);
- spin_lock(&ctx->flc_lock);
- time_out_leases(inode, &dispose);
- list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
-@@ -1619,7 +1619,7 @@ int fcntl_getlease(struct file *filp)
- break;
- }
- spin_unlock(&ctx->flc_lock);
-- percpu_up_read_preempt_enable(&file_rwsem);
-+ percpu_up_read(&file_rwsem);
+-
+-
+-static inline void raw_write_seqcount_begin(seqcount_t *s)
++static inline void __raw_write_seqcount_begin(seqcount_t *s)
+ {
+ s->sequence++;
+ smp_wmb();
+ }
- locks_dispose_list(&dispose);
- }
-@@ -1694,7 +1694,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
- return -EINVAL;
- }
+-static inline void raw_write_seqcount_end(seqcount_t *s)
++static inline void raw_write_seqcount_begin(seqcount_t *s)
++{
++ preempt_disable_rt();
++ __raw_write_seqcount_begin(s);
++}
++
++static inline void __raw_write_seqcount_end(seqcount_t *s)
+ {
+ smp_wmb();
+ s->sequence++;
+ }
-- percpu_down_read_preempt_disable(&file_rwsem);
-+ percpu_down_read(&file_rwsem);
- spin_lock(&ctx->flc_lock);
- time_out_leases(inode, &dispose);
- error = check_conflicting_open(dentry, arg, lease->fl_flags);
-@@ -1765,7 +1765,7 @@ generic_add_lease(struct file *filp, long arg, struct file_lock **flp, void **pr
- lease->fl_lmops->lm_setup(lease, priv);
- out:
- spin_unlock(&ctx->flc_lock);
-- percpu_up_read_preempt_enable(&file_rwsem);
-+ percpu_up_read(&file_rwsem);
- locks_dispose_list(&dispose);
- if (is_deleg)
- inode_unlock(inode);
-@@ -1788,7 +1788,7 @@ static int generic_delete_lease(struct file *filp, void *owner)
- return error;
- }
++static inline void raw_write_seqcount_end(seqcount_t *s)
++{
++ __raw_write_seqcount_end(s);
++ preempt_enable_rt();
++}
++
+ /**
+ * raw_write_seqcount_barrier - do a seq write barrier
+ * @s: pointer to seqcount_t
+@@ -429,10 +439,32 @@
+ /*
+ * Read side functions for starting and finalizing a read side section.
+ */
++#ifndef CONFIG_PREEMPT_RT_FULL
+ static inline unsigned read_seqbegin(const seqlock_t *sl)
+ {
+ return read_seqcount_begin(&sl->seqcount);
+ }
++#else
++/*
++ * Starvation safe read side for RT
++ */
++static inline unsigned read_seqbegin(seqlock_t *sl)
++{
++ unsigned ret;
++
++repeat:
++ ret = ACCESS_ONCE(sl->seqcount.sequence);
++ if (unlikely(ret & 1)) {
++ /*
++ * Take the lock and let the writer proceed (i.e. possibly
++ * boost it), otherwise we could loop here forever.
++ */
++ spin_unlock_wait(&sl->lock);
++ goto repeat;
++ }
++ return ret;
++}
++#endif
-- percpu_down_read_preempt_disable(&file_rwsem);
-+ percpu_down_read(&file_rwsem);
- spin_lock(&ctx->flc_lock);
- list_for_each_entry(fl, &ctx->flc_lease, fl_list) {
- if (fl->fl_file == filp &&
-@@ -1801,7 +1801,7 @@ static int generic_delete_lease(struct file *filp, void *owner)
- if (victim)
- error = fl->fl_lmops->lm_change(victim, F_UNLCK, &dispose);
- spin_unlock(&ctx->flc_lock);
-- percpu_up_read_preempt_enable(&file_rwsem);
-+ percpu_up_read(&file_rwsem);
- locks_dispose_list(&dispose);
- return error;
+ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+ {
+@@ -447,36 +479,45 @@
+ static inline void write_seqlock(seqlock_t *sl)
+ {
+ spin_lock(&sl->lock);
+- write_seqcount_begin(&sl->seqcount);
++ __raw_write_seqcount_begin(&sl->seqcount);
++}
++
++static inline int try_write_seqlock(seqlock_t *sl)
++{
++ if (spin_trylock(&sl->lock)) {
++ __raw_write_seqcount_begin(&sl->seqcount);
++ return 1;
++ }
++ return 0;
}
-@@ -2532,13 +2532,13 @@ locks_remove_lease(struct file *filp, struct file_lock_context *ctx)
- if (list_empty(&ctx->flc_lease))
- return;
-- percpu_down_read_preempt_disable(&file_rwsem);
-+ percpu_down_read(&file_rwsem);
- spin_lock(&ctx->flc_lock);
- list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list)
- if (filp == fl->fl_file)
- lease_modify(fl, F_UNLCK, &dispose);
- spin_unlock(&ctx->flc_lock);
-- percpu_up_read_preempt_enable(&file_rwsem);
-+ percpu_up_read(&file_rwsem);
+ static inline void write_sequnlock(seqlock_t *sl)
+ {
+- write_seqcount_end(&sl->seqcount);
++ __raw_write_seqcount_end(&sl->seqcount);
+ spin_unlock(&sl->lock);
+ }
- locks_dispose_list(&dispose);
+ static inline void write_seqlock_bh(seqlock_t *sl)
+ {
+ spin_lock_bh(&sl->lock);
+- write_seqcount_begin(&sl->seqcount);
++ __raw_write_seqcount_begin(&sl->seqcount);
}
-diff --git a/fs/namei.c b/fs/namei.c
-index 5b4eed221530..9c8dd3c83a80 100644
---- a/fs/namei.c
-+++ b/fs/namei.c
-@@ -1629,7 +1629,7 @@ static struct dentry *lookup_slow(const struct qstr *name,
+
+ static inline void write_sequnlock_bh(seqlock_t *sl)
{
- struct dentry *dentry = ERR_PTR(-ENOENT), *old;
- struct inode *inode = dir->d_inode;
-- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
-+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+- write_seqcount_end(&sl->seqcount);
++ __raw_write_seqcount_end(&sl->seqcount);
+ spin_unlock_bh(&sl->lock);
+ }
- inode_lock_shared(inode);
- /* Don't go there if it's already dead */
-@@ -3086,7 +3086,7 @@ static int lookup_open(struct nameidata *nd, struct path *path,
- struct dentry *dentry;
- int error, create_error = 0;
- umode_t mode = op->mode;
-- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
-+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
+ static inline void write_seqlock_irq(seqlock_t *sl)
+ {
+ spin_lock_irq(&sl->lock);
+- write_seqcount_begin(&sl->seqcount);
++ __raw_write_seqcount_begin(&sl->seqcount);
+ }
- if (unlikely(IS_DEADDIR(dir_inode)))
- return -ENOENT;
-diff --git a/fs/namespace.c b/fs/namespace.c
-index 7cea503ae06d..cb15f5397991 100644
---- a/fs/namespace.c
-+++ b/fs/namespace.c
-@@ -14,6 +14,7 @@
- #include <linux/mnt_namespace.h>
- #include <linux/user_namespace.h>
- #include <linux/namei.h>
-+#include <linux/delay.h>
- #include <linux/security.h>
- #include <linux/idr.h>
- #include <linux/init.h> /* init_rootfs */
-@@ -356,8 +357,11 @@ int __mnt_want_write(struct vfsmount *m)
- * incremented count after it has set MNT_WRITE_HOLD.
- */
- smp_mb();
-- while (ACCESS_ONCE(mnt->mnt.mnt_flags) & MNT_WRITE_HOLD)
-- cpu_relax();
-+ while (ACCESS_ONCE(mnt->mnt.mnt_flags) & MNT_WRITE_HOLD) {
-+ preempt_enable();
-+ cpu_chill();
-+ preempt_disable();
-+ }
- /*
- * After the slowpath clears MNT_WRITE_HOLD, mnt_is_readonly will
- * be set to match its requirements. So we must not load that until
-diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
-index dff600ae0d74..d726d2e09353 100644
---- a/fs/nfs/delegation.c
-+++ b/fs/nfs/delegation.c
-@@ -150,11 +150,11 @@ static int nfs_delegation_claim_opens(struct inode *inode,
- sp = state->owner;
- /* Block nfs4_proc_unlck */
- mutex_lock(&sp->so_delegreturn_mutex);
-- seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
-+ seq = read_seqbegin(&sp->so_reclaim_seqlock);
- err = nfs4_open_delegation_recall(ctx, state, stateid, type);
- if (!err)
- err = nfs_delegation_claim_locks(ctx, state, stateid);
-- if (!err && read_seqcount_retry(&sp->so_reclaim_seqcount, seq))
-+ if (!err && read_seqretry(&sp->so_reclaim_seqlock, seq))
- err = -EAGAIN;
- mutex_unlock(&sp->so_delegreturn_mutex);
- put_nfs_open_context(ctx);
-diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
-index 53e02b8bd9bd..a66e7d77cfbb 100644
---- a/fs/nfs/dir.c
-+++ b/fs/nfs/dir.c
-@@ -485,7 +485,7 @@ static
- void nfs_prime_dcache(struct dentry *parent, struct nfs_entry *entry)
+ static inline void write_sequnlock_irq(seqlock_t *sl)
{
- struct qstr filename = QSTR_INIT(entry->name, entry->len);
-- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
-+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
- struct dentry *dentry;
- struct dentry *alias;
- struct inode *dir = d_inode(parent);
-@@ -1487,7 +1487,7 @@ int nfs_atomic_open(struct inode *dir, struct dentry *dentry,
- struct file *file, unsigned open_flags,
- umode_t mode, int *opened)
+- write_seqcount_end(&sl->seqcount);
++ __raw_write_seqcount_end(&sl->seqcount);
+ spin_unlock_irq(&sl->lock);
+ }
+
+@@ -485,7 +526,7 @@
+ unsigned long flags;
+
+ spin_lock_irqsave(&sl->lock, flags);
+- write_seqcount_begin(&sl->seqcount);
++ __raw_write_seqcount_begin(&sl->seqcount);
+ return flags;
+ }
+
+@@ -495,7 +536,7 @@
+ static inline void
+ write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
{
-- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
-+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
- struct nfs_open_context *ctx;
- struct dentry *res;
- struct iattr attr = { .ia_valid = ATTR_OPEN };
-@@ -1802,7 +1802,11 @@ int nfs_rmdir(struct inode *dir, struct dentry *dentry)
+- write_seqcount_end(&sl->seqcount);
++ __raw_write_seqcount_end(&sl->seqcount);
+ spin_unlock_irqrestore(&sl->lock, flags);
+ }
- trace_nfs_rmdir_enter(dir, dentry);
- if (d_really_is_positive(dentry)) {
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ down(&NFS_I(d_inode(dentry))->rmdir_sem);
-+#else
- down_write(&NFS_I(d_inode(dentry))->rmdir_sem);
-+#endif
- error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name);
- /* Ensure the VFS deletes this inode */
- switch (error) {
-@@ -1812,7 +1816,11 @@ int nfs_rmdir(struct inode *dir, struct dentry *dentry)
- case -ENOENT:
- nfs_dentry_handle_enoent(dentry);
- }
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ up(&NFS_I(d_inode(dentry))->rmdir_sem);
-+#else
- up_write(&NFS_I(d_inode(dentry))->rmdir_sem);
-+#endif
- } else
- error = NFS_PROTO(dir)->rmdir(dir, &dentry->d_name);
- trace_nfs_rmdir_exit(dir, dentry, error);
-diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
-index bf4ec5ecc97e..36cd5fc9192c 100644
---- a/fs/nfs/inode.c
-+++ b/fs/nfs/inode.c
-@@ -1957,7 +1957,11 @@ static void init_once(void *foo)
- nfsi->nrequests = 0;
- nfsi->commit_info.ncommit = 0;
- atomic_set(&nfsi->commit_info.rpcs_out, 0);
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ sema_init(&nfsi->rmdir_sem, 1);
-+#else
- init_rwsem(&nfsi->rmdir_sem);
-+#endif
- nfs4_init_once(nfsi);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/signal.h linux-4.14/include/linux/signal.h
+--- linux-4.14.orig/include/linux/signal.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/signal.h 2018-09-05 11:05:07.000000000 +0200
+@@ -243,6 +243,7 @@
}
-diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
-index 1452177c822d..f43b01d54c59 100644
---- a/fs/nfs/nfs4_fs.h
-+++ b/fs/nfs/nfs4_fs.h
-@@ -111,7 +111,7 @@ struct nfs4_state_owner {
- unsigned long so_flags;
- struct list_head so_states;
- struct nfs_seqid_counter so_seqid;
-- seqcount_t so_reclaim_seqcount;
-+ seqlock_t so_reclaim_seqlock;
- struct mutex so_delegreturn_mutex;
+ extern void flush_sigqueue(struct sigpending *queue);
++extern void flush_task_sigqueue(struct task_struct *tsk);
+
+ /* Test if 'sig' is valid signal. Use this instead of testing _NSIG directly */
+ static inline int valid_signal(unsigned long sig)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/skbuff.h linux-4.14/include/linux/skbuff.h
+--- linux-4.14.orig/include/linux/skbuff.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/skbuff.h 2018-09-05 11:05:07.000000000 +0200
+@@ -287,6 +287,7 @@
+
+ __u32 qlen;
+ spinlock_t lock;
++ raw_spinlock_t raw_lock;
};
-diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
-index 241da19b7da4..8f9636cc298f 100644
---- a/fs/nfs/nfs4proc.c
-+++ b/fs/nfs/nfs4proc.c
-@@ -2697,7 +2697,7 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
- unsigned int seq;
- int ret;
+ struct sk_buff;
+@@ -1667,6 +1668,12 @@
+ __skb_queue_head_init(list);
+ }
-- seq = raw_seqcount_begin(&sp->so_reclaim_seqcount);
-+ seq = raw_seqcount_begin(&sp->so_reclaim_seqlock.seqcount);
++static inline void skb_queue_head_init_raw(struct sk_buff_head *list)
++{
++ raw_spin_lock_init(&list->raw_lock);
++ __skb_queue_head_init(list);
++}
++
+ static inline void skb_queue_head_init_class(struct sk_buff_head *list,
+ struct lock_class_key *class)
+ {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/smp.h linux-4.14/include/linux/smp.h
+--- linux-4.14.orig/include/linux/smp.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/smp.h 2018-09-05 11:05:07.000000000 +0200
+@@ -202,6 +202,9 @@
+ #define get_cpu() ({ preempt_disable(); smp_processor_id(); })
+ #define put_cpu() preempt_enable()
- ret = _nfs4_proc_open(opendata);
- if (ret != 0)
-@@ -2735,7 +2735,7 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
- ctx->state = state;
- if (d_inode(dentry) == state->inode) {
- nfs_inode_attach_open_context(ctx);
-- if (read_seqcount_retry(&sp->so_reclaim_seqcount, seq))
-+ if (read_seqretry(&sp->so_reclaim_seqlock, seq))
- nfs4_schedule_stateid_recovery(server, state);
- }
- out:
-diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
-index 0959c9661662..dabd834d7686 100644
---- a/fs/nfs/nfs4state.c
-+++ b/fs/nfs/nfs4state.c
-@@ -488,7 +488,7 @@ nfs4_alloc_state_owner(struct nfs_server *server,
- nfs4_init_seqid_counter(&sp->so_seqid);
- atomic_set(&sp->so_count, 1);
- INIT_LIST_HEAD(&sp->so_lru);
-- seqcount_init(&sp->so_reclaim_seqcount);
-+ seqlock_init(&sp->so_reclaim_seqlock);
- mutex_init(&sp->so_delegreturn_mutex);
- return sp;
++#define get_cpu_light() ({ migrate_disable(); smp_processor_id(); })
++#define put_cpu_light() migrate_enable()
++
+ /*
+ * Callback to arch code if there's nosmp or maxcpus=0 on the
+ * boot command line:
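
Editor's aside on get_cpu_light()/put_cpu_light(): they mirror get_cpu()/put_cpu() but only disable migration, not preemption, so the section stays schedulable on RT and may take sleeping spinlocks. A hedged sketch with hypothetical per-CPU data (each lock assumed to be spin_lock_init()ed at boot):

#include <linux/smp.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

struct example_pcpu {
	spinlock_t lock;	/* a sleeping lock on RT */
	unsigned long events;
};
static DEFINE_PER_CPU(struct example_pcpu, example_pcpu);

static void example_count_event(void)
{
	struct example_pcpu *p;
	int cpu;

	cpu = get_cpu_light();	/* migrate_disable() + smp_processor_id() */
	p = per_cpu_ptr(&example_pcpu, cpu);
	spin_lock(&p->lock);	/* not allowed under get_cpu() on RT */
	p->events++;
	spin_unlock(&p->lock);
	put_cpu_light();	/* migrate_enable() */
}
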
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/spinlock_api_smp.h linux-4.14/include/linux/spinlock_api_smp.h
+--- linux-4.14.orig/include/linux/spinlock_api_smp.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/spinlock_api_smp.h 2018-09-05 11:05:07.000000000 +0200
+@@ -187,6 +187,8 @@
+ return 0;
}
-@@ -1497,8 +1497,12 @@ static int nfs4_reclaim_open_state(struct nfs4_state_owner *sp, const struct nfs
- * recovering after a network partition or a reboot from a
- * server that doesn't support a grace period.
- */
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ write_seqlock(&sp->so_reclaim_seqlock);
-+#else
-+ write_seqcount_begin(&sp->so_reclaim_seqlock.seqcount);
-+#endif
- spin_lock(&sp->so_lock);
-- raw_write_seqcount_begin(&sp->so_reclaim_seqcount);
- restart:
- list_for_each_entry(state, &sp->so_states, open_states) {
- if (!test_and_clear_bit(ops->state_flag_bit, &state->flags))
-@@ -1567,14 +1571,20 @@ static int nfs4_reclaim_open_state(struct nfs4_state_owner *sp, const struct nfs
- spin_lock(&sp->so_lock);
- goto restart;
- }
-- raw_write_seqcount_end(&sp->so_reclaim_seqcount);
- spin_unlock(&sp->so_lock);
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ write_sequnlock(&sp->so_reclaim_seqlock);
-+#else
-+ write_seqcount_end(&sp->so_reclaim_seqlock.seqcount);
+
+-#include <linux/rwlock_api_smp.h>
++#ifndef CONFIG_PREEMPT_RT_FULL
++# include <linux/rwlock_api_smp.h>
+#endif
- return 0;
- out_err:
- nfs4_put_open_state(state);
-- spin_lock(&sp->so_lock);
-- raw_write_seqcount_end(&sp->so_reclaim_seqcount);
-- spin_unlock(&sp->so_lock);
+
+ #endif /* __LINUX_SPINLOCK_API_SMP_H */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/spinlock.h linux-4.14/include/linux/spinlock.h
+--- linux-4.14.orig/include/linux/spinlock.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/spinlock.h 2018-09-05 11:05:07.000000000 +0200
+@@ -286,7 +286,11 @@
+ #define raw_spin_can_lock(lock) (!raw_spin_is_locked(lock))
+
+ /* Include rwlock functions */
+-#include <linux/rwlock.h>
+#ifdef CONFIG_PREEMPT_RT_FULL
-+ write_sequnlock(&sp->so_reclaim_seqlock);
++# include <linux/rwlock_rt.h>
+#else
-+ write_seqcount_end(&sp->so_reclaim_seqlock.seqcount);
++# include <linux/rwlock.h>
+#endif
- return status;
- }
-diff --git a/fs/nfs/unlink.c b/fs/nfs/unlink.c
-index 191aa577dd1f..58990c8f52e0 100644
---- a/fs/nfs/unlink.c
-+++ b/fs/nfs/unlink.c
-@@ -12,7 +12,7 @@
- #include <linux/sunrpc/clnt.h>
- #include <linux/nfs_fs.h>
- #include <linux/sched.h>
--#include <linux/wait.h>
-+#include <linux/swait.h>
- #include <linux/namei.h>
- #include <linux/fsnotify.h>
+ /*
+ * Pull the _spin_*()/_read_*()/_write_*() functions/declarations:
+@@ -297,6 +301,10 @@
+ # include <linux/spinlock_api_up.h>
+ #endif
-@@ -51,6 +51,29 @@ static void nfs_async_unlink_done(struct rpc_task *task, void *calldata)
- rpc_restart_call_prepare(task);
- }
++#ifdef CONFIG_PREEMPT_RT_FULL
++# include <linux/spinlock_rt.h>
++#else /* PREEMPT_RT_FULL */
++
+ /*
+ * Map the spin_lock functions to the raw variants for PREEMPT_RT=n
+ */
+@@ -421,4 +429,6 @@
+ #define atomic_dec_and_lock(atomic, lock) \
+ __cond_lock(lock, _atomic_dec_and_lock(atomic, lock))
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+static void nfs_down_anon(struct semaphore *sema)
++#endif /* !PREEMPT_RT_FULL */
++
+ #endif /* __LINUX_SPINLOCK_H */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/spinlock_rt.h linux-4.14/include/linux/spinlock_rt.h
+--- linux-4.14.orig/include/linux/spinlock_rt.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/spinlock_rt.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,159 @@
++#ifndef __LINUX_SPINLOCK_RT_H
++#define __LINUX_SPINLOCK_RT_H
++
++#ifndef __LINUX_SPINLOCK_H
++#error Do not include directly. Use spinlock.h
++#endif
++
++#include <linux/bug.h>
++
++extern void
++__rt_spin_lock_init(spinlock_t *lock, const char *name, struct lock_class_key *key);
++
++#define spin_lock_init(slock) \
++do { \
++ static struct lock_class_key __key; \
++ \
++ rt_mutex_init(&(slock)->lock); \
++ __rt_spin_lock_init(slock, #slock, &__key); \
++} while (0)
++
++extern void __lockfunc rt_spin_lock(spinlock_t *lock);
++extern unsigned long __lockfunc rt_spin_lock_trace_flags(spinlock_t *lock);
++extern void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass);
++extern void __lockfunc rt_spin_unlock(spinlock_t *lock);
++extern void __lockfunc rt_spin_unlock_wait(spinlock_t *lock);
++extern int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags);
++extern int __lockfunc rt_spin_trylock_bh(spinlock_t *lock);
++extern int __lockfunc rt_spin_trylock(spinlock_t *lock);
++extern int atomic_dec_and_spin_lock(atomic_t *atomic, spinlock_t *lock);
++
++/*
++ * lockdep-less calls, for derived types like rwlock:
++ * (for trylock they can use rt_mutex_trylock() directly).
++ * Migrate disable handling must be done at the call site.
++ */
++extern void __lockfunc __rt_spin_lock(struct rt_mutex *lock);
++extern void __lockfunc __rt_spin_trylock(struct rt_mutex *lock);
++extern void __lockfunc __rt_spin_unlock(struct rt_mutex *lock);
++
++#define spin_lock(lock) rt_spin_lock(lock)
++
++#define spin_lock_bh(lock) \
++ do { \
++ local_bh_disable(); \
++ rt_spin_lock(lock); \
++ } while (0)
++
++#define spin_lock_irq(lock) spin_lock(lock)
++
++#define spin_do_trylock(lock) __cond_lock(lock, rt_spin_trylock(lock))
++
++#define spin_trylock(lock) \
++({ \
++ int __locked; \
++ __locked = spin_do_trylock(lock); \
++ __locked; \
++})
++
++#ifdef CONFIG_LOCKDEP
++# define spin_lock_nested(lock, subclass) \
++ do { \
++ rt_spin_lock_nested(lock, subclass); \
++ } while (0)
++
++#define spin_lock_bh_nested(lock, subclass) \
++ do { \
++ local_bh_disable(); \
++ rt_spin_lock_nested(lock, subclass); \
++ } while (0)
++
++# define spin_lock_irqsave_nested(lock, flags, subclass) \
++ do { \
++ typecheck(unsigned long, flags); \
++ flags = 0; \
++ rt_spin_lock_nested(lock, subclass); \
++ } while (0)
++#else
++# define spin_lock_nested(lock, subclass) spin_lock(lock)
++# define spin_lock_bh_nested(lock, subclass) spin_lock_bh(lock)
++
++# define spin_lock_irqsave_nested(lock, flags, subclass) \
++ do { \
++ typecheck(unsigned long, flags); \
++ flags = 0; \
++ spin_lock(lock); \
++ } while (0)
++#endif
++
++#define spin_lock_irqsave(lock, flags) \
++ do { \
++ typecheck(unsigned long, flags); \
++ flags = 0; \
++ spin_lock(lock); \
++ } while (0)
++
++static inline unsigned long spin_lock_trace_flags(spinlock_t *lock)
+{
-+ down(sema);
++ unsigned long flags = 0;
++#ifdef CONFIG_TRACE_IRQFLAGS
++ flags = rt_spin_lock_trace_flags(lock);
++#else
++ spin_lock(lock); /* lock_local */
++#endif
++ return flags;
+}
+
-+static void nfs_up_anon(struct semaphore *sema)
++/* FIXME: we need rt_spin_lock_nest_lock */
++#define spin_lock_nest_lock(lock, nest_lock) spin_lock_nested(lock, 0)
++
++#define spin_unlock(lock) rt_spin_unlock(lock)
++
++#define spin_unlock_bh(lock) \
++ do { \
++ rt_spin_unlock(lock); \
++ local_bh_enable(); \
++ } while (0)
++
++#define spin_unlock_irq(lock) spin_unlock(lock)
++
++#define spin_unlock_irqrestore(lock, flags) \
++ do { \
++ typecheck(unsigned long, flags); \
++ (void) flags; \
++ spin_unlock(lock); \
++ } while (0)
++
++#define spin_trylock_bh(lock) __cond_lock(lock, rt_spin_trylock_bh(lock))
++#define spin_trylock_irq(lock) spin_trylock(lock)
++
++#define spin_trylock_irqsave(lock, flags) \
++ rt_spin_trylock_irqsave(lock, &(flags))
++
++#define spin_unlock_wait(lock) rt_spin_unlock_wait(lock)
++
++#ifdef CONFIG_GENERIC_LOCKBREAK
++# define spin_is_contended(lock) ((lock)->break_lock)
++#else
++# define spin_is_contended(lock) (((void)(lock), 0))
++#endif
++
++static inline int spin_can_lock(spinlock_t *lock)
+{
-+ up(sema);
++ return !rt_mutex_is_locked(&lock->lock);
+}
+
-+#else
-+static void nfs_down_anon(struct rw_semaphore *rwsem)
++static inline int spin_is_locked(spinlock_t *lock)
+{
-+ down_read_non_owner(rwsem);
++ return rt_mutex_is_locked(&lock->lock);
+}
+
-+static void nfs_up_anon(struct rw_semaphore *rwsem)
++static inline void assert_spin_locked(spinlock_t *lock)
+{
-+ up_read_non_owner(rwsem);
++ BUG_ON(!spin_is_locked(lock));
+}
-+#endif
+
- /**
- * nfs_async_unlink_release - Release the sillydelete data.
- * @task: rpc_task of the sillydelete
-@@ -64,7 +87,7 @@ static void nfs_async_unlink_release(void *calldata)
- struct dentry *dentry = data->dentry;
- struct super_block *sb = dentry->d_sb;
-
-- up_read_non_owner(&NFS_I(d_inode(dentry->d_parent))->rmdir_sem);
-+ nfs_up_anon(&NFS_I(d_inode(dentry->d_parent))->rmdir_sem);
- d_lookup_done(dentry);
- nfs_free_unlinkdata(data);
- dput(dentry);
-@@ -117,10 +140,10 @@ static int nfs_call_unlink(struct dentry *dentry, struct nfs_unlinkdata *data)
- struct inode *dir = d_inode(dentry->d_parent);
- struct dentry *alias;
-
-- down_read_non_owner(&NFS_I(dir)->rmdir_sem);
-+ nfs_down_anon(&NFS_I(dir)->rmdir_sem);
- alias = d_alloc_parallel(dentry->d_parent, &data->args.name, &data->wq);
- if (IS_ERR(alias)) {
-- up_read_non_owner(&NFS_I(dir)->rmdir_sem);
-+ nfs_up_anon(&NFS_I(dir)->rmdir_sem);
- return 0;
- }
- if (!d_in_lookup(alias)) {
-@@ -142,7 +165,7 @@ static int nfs_call_unlink(struct dentry *dentry, struct nfs_unlinkdata *data)
- ret = 0;
- spin_unlock(&alias->d_lock);
- dput(alias);
-- up_read_non_owner(&NFS_I(dir)->rmdir_sem);
-+ nfs_up_anon(&NFS_I(dir)->rmdir_sem);
- /*
- * If we'd displaced old cached devname, free it. At that
- * point dentry is definitely not a root, so we won't need
-@@ -182,7 +205,7 @@ nfs_async_unlink(struct dentry *dentry, const struct qstr *name)
- goto out_free_name;
- }
- data->res.dir_attr = &data->dir_attr;
-- init_waitqueue_head(&data->wq);
-+ init_swait_queue_head(&data->wq);
-
- status = -EBUSY;
- spin_lock(&dentry->d_lock);
-diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
-index fe251f187ff8..e89da4fb14c2 100644
---- a/fs/ntfs/aops.c
-+++ b/fs/ntfs/aops.c
-@@ -92,13 +92,13 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
- ofs = 0;
- if (file_ofs < init_size)
- ofs = init_size - file_ofs;
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- kaddr = kmap_atomic(page);
- memset(kaddr + bh_offset(bh) + ofs, 0,
- bh->b_size - ofs);
- flush_dcache_page(page);
- kunmap_atomic(kaddr);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- }
- } else {
- clear_buffer_uptodate(bh);
-@@ -107,8 +107,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
- "0x%llx.", (unsigned long long)bh->b_blocknr);
- }
- first = page_buffers(page);
-- local_irq_save(flags);
-- bit_spin_lock(BH_Uptodate_Lock, &first->b_state);
-+ flags = bh_uptodate_lock_irqsave(first);
- clear_buffer_async_read(bh);
- unlock_buffer(bh);
- tmp = bh;
-@@ -123,8 +122,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
- }
- tmp = tmp->b_this_page;
- } while (tmp != bh);
-- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-- local_irq_restore(flags);
-+ bh_uptodate_unlock_irqrestore(first, flags);
- /*
- * If none of the buffers had errors then we can set the page uptodate,
- * but we first have to perform the post read mst fixups, if the
-@@ -145,13 +143,13 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
- recs = PAGE_SIZE / rec_size;
- /* Should have been verified before we got here... */
- BUG_ON(!recs);
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
- kaddr = kmap_atomic(page);
- for (i = 0; i < recs; i++)
- post_read_mst_fixup((NTFS_RECORD*)(kaddr +
- i * rec_size), rec_size);
- kunmap_atomic(kaddr);
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- flush_dcache_page(page);
- if (likely(page_uptodate && !PageError(page)))
- SetPageUptodate(page);
-@@ -159,9 +157,7 @@ static void ntfs_end_buffer_async_read(struct buffer_head *bh, int uptodate)
- unlock_page(page);
- return;
- still_busy:
-- bit_spin_unlock(BH_Uptodate_Lock, &first->b_state);
-- local_irq_restore(flags);
-- return;
-+ bh_uptodate_unlock_irqrestore(first, flags);
- }
-
- /**
-diff --git a/fs/proc/base.c b/fs/proc/base.c
-index ca651ac00660..41d9dc789285 100644
---- a/fs/proc/base.c
-+++ b/fs/proc/base.c
-@@ -1834,7 +1834,7 @@ bool proc_fill_cache(struct file *file, struct dir_context *ctx,
-
- child = d_hash_and_lookup(dir, &qname);
- if (!child) {
-- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
-+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
- child = d_alloc_parallel(dir, &qname, &wq);
- if (IS_ERR(child))
- goto end_instantiate;
-diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
-index d4e37acd4821..000cea46434a 100644
---- a/fs/proc/proc_sysctl.c
-+++ b/fs/proc/proc_sysctl.c
-@@ -632,7 +632,7 @@ static bool proc_sys_fill_cache(struct file *file,
-
- child = d_lookup(dir, &qname);
- if (!child) {
-- DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
-+ DECLARE_SWAIT_QUEUE_HEAD_ONSTACK(wq);
- child = d_alloc_parallel(dir, &qname, &wq);
- if (IS_ERR(child))
- return false;
-diff --git a/fs/timerfd.c b/fs/timerfd.c
-index 9ae4abb4110b..8644b67c48fd 100644
---- a/fs/timerfd.c
-+++ b/fs/timerfd.c
-@@ -460,7 +460,10 @@ static int do_timerfd_settime(int ufd, int flags,
- break;
- }
- spin_unlock_irq(&ctx->wqh.lock);
-- cpu_relax();
-+ if (isalarm(ctx))
-+ hrtimer_wait_for_timer(&ctx->t.alarm.timer);
-+ else
-+ hrtimer_wait_for_timer(&ctx->t.tmr);
- }
-
- /*
-diff --git a/include/acpi/platform/aclinux.h b/include/acpi/platform/aclinux.h
-index e861a24f06f2..b5c97d3059c7 100644
---- a/include/acpi/platform/aclinux.h
-+++ b/include/acpi/platform/aclinux.h
-@@ -133,6 +133,7 @@
++#define atomic_dec_and_lock(atomic, lock) \
++ atomic_dec_and_spin_lock(atomic, lock)
++
++#endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/spinlock_types.h linux-4.14/include/linux/spinlock_types.h
+--- linux-4.14.orig/include/linux/spinlock_types.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/spinlock_types.h 2018-09-05 11:05:07.000000000 +0200
+@@ -9,80 +9,15 @@
+ * Released under the General Public License (GPL).
+ */
- #define acpi_cache_t struct kmem_cache
- #define acpi_spinlock spinlock_t *
-+#define acpi_raw_spinlock raw_spinlock_t *
- #define acpi_cpu_flags unsigned long
+-#if defined(CONFIG_SMP)
+-# include <asm/spinlock_types.h>
+-#else
+-# include <linux/spinlock_types_up.h>
+-#endif
+-
+-#include <linux/lockdep.h>
+-
+-typedef struct raw_spinlock {
+- arch_spinlock_t raw_lock;
+-#ifdef CONFIG_GENERIC_LOCKBREAK
+- unsigned int break_lock;
+-#endif
+-#ifdef CONFIG_DEBUG_SPINLOCK
+- unsigned int magic, owner_cpu;
+- void *owner;
+-#endif
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+- struct lockdep_map dep_map;
+-#endif
+-} raw_spinlock_t;
+-
+-#define SPINLOCK_MAGIC 0xdead4ead
+-
+-#define SPINLOCK_OWNER_INIT ((void *)-1L)
+-
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define SPIN_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
+-#else
+-# define SPIN_DEP_MAP_INIT(lockname)
+-#endif
++#include <linux/spinlock_types_raw.h>
- /* Use native linux version of acpi_os_allocate_zeroed */
-@@ -151,6 +152,20 @@
- #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_thread_id
- #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_create_lock
+-#ifdef CONFIG_DEBUG_SPINLOCK
+-# define SPIN_DEBUG_INIT(lockname) \
+- .magic = SPINLOCK_MAGIC, \
+- .owner_cpu = -1, \
+- .owner = SPINLOCK_OWNER_INIT,
++#ifndef CONFIG_PREEMPT_RT_FULL
++# include <linux/spinlock_types_nort.h>
++# include <linux/rwlock_types.h>
+ #else
+-# define SPIN_DEBUG_INIT(lockname)
+-#endif
+-
+-#define __RAW_SPIN_LOCK_INITIALIZER(lockname) \
+- { \
+- .raw_lock = __ARCH_SPIN_LOCK_UNLOCKED, \
+- SPIN_DEBUG_INIT(lockname) \
+- SPIN_DEP_MAP_INIT(lockname) }
+-
+-#define __RAW_SPIN_LOCK_UNLOCKED(lockname) \
+- (raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
+-
+-#define DEFINE_RAW_SPINLOCK(x) raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x)
+-
+-typedef struct spinlock {
+- union {
+- struct raw_spinlock rlock;
+-
+-#ifdef CONFIG_DEBUG_LOCK_ALLOC
+-# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
+- struct {
+- u8 __padding[LOCK_PADSIZE];
+- struct lockdep_map dep_map;
+- };
++# include <linux/rtmutex.h>
++# include <linux/spinlock_types_rt.h>
++# include <linux/rwlock_types_rt.h>
+ #endif
+- };
+-} spinlock_t;
+-
+-#define __SPIN_LOCK_INITIALIZER(lockname) \
+- { { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } }
+-
+-#define __SPIN_LOCK_UNLOCKED(lockname) \
+- (spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname)
+-
+-#define DEFINE_SPINLOCK(x) spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+-
+-#include <linux/rwlock_types.h>
-+#define acpi_os_create_raw_lock(__handle) \
-+({ \
-+ raw_spinlock_t *lock = ACPI_ALLOCATE(sizeof(*lock)); \
-+ \
-+ if (lock) { \
-+ *(__handle) = lock; \
-+ raw_spin_lock_init(*(__handle)); \
-+ } \
-+ lock ? AE_OK : AE_NO_MEMORY; \
-+ })
+ #endif /* __LINUX_SPINLOCK_TYPES_H */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/spinlock_types_nort.h linux-4.14/include/linux/spinlock_types_nort.h
+--- linux-4.14.orig/include/linux/spinlock_types_nort.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/spinlock_types_nort.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,33 @@
++#ifndef __LINUX_SPINLOCK_TYPES_NORT_H
++#define __LINUX_SPINLOCK_TYPES_NORT_H
+
-+#define acpi_os_delete_raw_lock(__handle) kfree(__handle)
++#ifndef __LINUX_SPINLOCK_TYPES_H
++#error "Do not include directly. Include spinlock_types.h instead"
++#endif
+
++/*
++ * The non-RT version maps spinlocks to raw_spinlocks
++ */
++typedef struct spinlock {
++ union {
++ struct raw_spinlock rlock;
+
- /*
- * OSL interfaces used by debugger/disassembler
- */
-diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h
-index 6f96247226a4..fa53a21263c2 100644
---- a/include/asm-generic/bug.h
-+++ b/include/asm-generic/bug.h
-@@ -215,6 +215,20 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
- # define WARN_ON_SMP(x) ({0;})
- #endif
-
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+# define BUG_ON_RT(c) BUG_ON(c)
-+# define BUG_ON_NONRT(c) do { } while (0)
-+# define WARN_ON_RT(condition) WARN_ON(condition)
-+# define WARN_ON_NONRT(condition) do { } while (0)
-+# define WARN_ON_ONCE_NONRT(condition) do { } while (0)
-+#else
-+# define BUG_ON_RT(c) do { } while (0)
-+# define BUG_ON_NONRT(c) BUG_ON(c)
-+# define WARN_ON_RT(condition) do { } while (0)
-+# define WARN_ON_NONRT(condition) WARN_ON(condition)
-+# define WARN_ON_ONCE_NONRT(condition) WARN_ON_ONCE(condition)
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
++ struct {
++ u8 __padding[LOCK_PADSIZE];
++ struct lockdep_map dep_map;
++ };
+#endif
++ };
++} spinlock_t;
+
- #endif /* __ASSEMBLY__ */
-
- #endif
-diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
-index 535ab2e13d2e..cfc246899473 100644
---- a/include/linux/blk-mq.h
-+++ b/include/linux/blk-mq.h
-@@ -209,7 +209,7 @@ static inline u16 blk_mq_unique_tag_to_tag(u32 unique_tag)
- return unique_tag & BLK_MQ_UNIQUE_TAG_MASK;
- }
-
--
-+void __blk_mq_complete_request_remote_work(struct work_struct *work);
- int blk_mq_request_started(struct request *rq);
- void blk_mq_start_request(struct request *rq);
- void blk_mq_end_request(struct request *rq, int error);
-diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
-index f6a816129856..ec7a4676f8a8 100644
---- a/include/linux/blkdev.h
-+++ b/include/linux/blkdev.h
-@@ -89,6 +89,7 @@ struct request {
- struct list_head queuelist;
- union {
- struct call_single_data csd;
-+ struct work_struct work;
- u64 fifo_time;
- };
-
-@@ -467,7 +468,7 @@ struct request_queue {
- struct throtl_data *td;
- #endif
- struct rcu_head rcu_head;
-- wait_queue_head_t mq_freeze_wq;
-+ struct swait_queue_head mq_freeze_wq;
- struct percpu_ref q_usage_counter;
- struct list_head all_q_node;
-
-diff --git a/include/linux/bottom_half.h b/include/linux/bottom_half.h
-index 8fdcb783197d..d07dbeec7bc1 100644
---- a/include/linux/bottom_half.h
-+++ b/include/linux/bottom_half.h
-@@ -3,6 +3,39 @@
-
- #include <linux/preempt.h>
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
++#define __SPIN_LOCK_INITIALIZER(lockname) \
++ { { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } }
+
-+extern void __local_bh_disable(void);
-+extern void _local_bh_enable(void);
-+extern void __local_bh_enable(void);
++#define __SPIN_LOCK_UNLOCKED(lockname) \
++ (spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname)
+
-+static inline void local_bh_disable(void)
-+{
-+ __local_bh_disable();
-+}
++#define DEFINE_SPINLOCK(x) spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+
-+static inline void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
-+{
-+ __local_bh_disable();
-+}
++#endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/spinlock_types_raw.h linux-4.14/include/linux/spinlock_types_raw.h
+--- linux-4.14.orig/include/linux/spinlock_types_raw.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/spinlock_types_raw.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,58 @@
++#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
++#define __LINUX_SPINLOCK_TYPES_RAW_H
+
-+static inline void local_bh_enable(void)
-+{
-+ __local_bh_enable();
-+}
++#include <linux/types.h>
+
-+static inline void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
-+{
-+ __local_bh_enable();
-+}
++#if defined(CONFIG_SMP)
++# include <asm/spinlock_types.h>
++#else
++# include <linux/spinlock_types_up.h>
++#endif
+
-+static inline void local_bh_enable_ip(unsigned long ip)
-+{
-+ __local_bh_enable();
-+}
++#include <linux/lockdep.h>
++
++typedef struct raw_spinlock {
++ arch_spinlock_t raw_lock;
++#ifdef CONFIG_GENERIC_LOCKBREAK
++ unsigned int break_lock;
++#endif
++#ifdef CONFIG_DEBUG_SPINLOCK
++ unsigned int magic, owner_cpu;
++ void *owner;
++#endif
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++ struct lockdep_map dep_map;
++#endif
++} raw_spinlock_t;
++
++#define SPINLOCK_MAGIC 0xdead4ead
++
++#define SPINLOCK_OWNER_INIT ((void *)-1L)
+
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++# define SPIN_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
+#else
++# define SPIN_DEP_MAP_INIT(lockname)
++#endif
+
- #ifdef CONFIG_TRACE_IRQFLAGS
- extern void __local_bh_disable_ip(unsigned long ip, unsigned int cnt);
- #else
-@@ -30,5 +63,6 @@ static inline void local_bh_enable(void)
- {
- __local_bh_enable_ip(_THIS_IP_, SOFTIRQ_DISABLE_OFFSET);
- }
++#ifdef CONFIG_DEBUG_SPINLOCK
++# define SPIN_DEBUG_INIT(lockname) \
++ .magic = SPINLOCK_MAGIC, \
++ .owner_cpu = -1, \
++ .owner = SPINLOCK_OWNER_INIT,
++#else
++# define SPIN_DEBUG_INIT(lockname)
+#endif
-
- #endif /* _LINUX_BH_H */
-diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
-index ebbacd14d450..be5e87f6360a 100644
---- a/include/linux/buffer_head.h
-+++ b/include/linux/buffer_head.h
-@@ -75,8 +75,50 @@ struct buffer_head {
- struct address_space *b_assoc_map; /* mapping this buffer is
- associated with */
- atomic_t b_count; /* users using this buffer_head */
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ spinlock_t b_uptodate_lock;
-+#if IS_ENABLED(CONFIG_JBD2)
-+ spinlock_t b_state_lock;
-+ spinlock_t b_journal_head_lock;
++
++#define __RAW_SPIN_LOCK_INITIALIZER(lockname) \
++ { \
++ .raw_lock = __ARCH_SPIN_LOCK_UNLOCKED, \
++ SPIN_DEBUG_INIT(lockname) \
++ SPIN_DEP_MAP_INIT(lockname) }
++
++#define __RAW_SPIN_LOCK_UNLOCKED(lockname) \
++ (raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
++
++#define DEFINE_RAW_SPINLOCK(x) raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x)
++
+#endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/spinlock_types_rt.h linux-4.14/include/linux/spinlock_types_rt.h
+--- linux-4.14.orig/include/linux/spinlock_types_rt.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/spinlock_types_rt.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,48 @@
++#ifndef __LINUX_SPINLOCK_TYPES_RT_H
++#define __LINUX_SPINLOCK_TYPES_RT_H
++
++#ifndef __LINUX_SPINLOCK_TYPES_H
++#error "Do not include directly. Include spinlock_types.h instead"
+#endif
- };
-
-+static inline unsigned long bh_uptodate_lock_irqsave(struct buffer_head *bh)
-+{
-+ unsigned long flags;
+
-+#ifndef CONFIG_PREEMPT_RT_BASE
-+ local_irq_save(flags);
-+ bit_spin_lock(BH_Uptodate_Lock, &bh->b_state);
-+#else
-+ spin_lock_irqsave(&bh->b_uptodate_lock, flags);
++#include <linux/cache.h>
++
++/*
++ * PREEMPT_RT: spinlocks - an RT mutex plus lock-break field:
++ */
++typedef struct spinlock {
++ struct rt_mutex lock;
++ unsigned int break_lock;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++ struct lockdep_map dep_map;
+#endif
-+ return flags;
-+}
++} spinlock_t;
+
-+static inline void
-+bh_uptodate_unlock_irqrestore(struct buffer_head *bh, unsigned long flags)
-+{
-+#ifndef CONFIG_PREEMPT_RT_BASE
-+ bit_spin_unlock(BH_Uptodate_Lock, &bh->b_state);
-+ local_irq_restore(flags);
++#ifdef CONFIG_DEBUG_RT_MUTEXES
++# define __RT_SPIN_INITIALIZER(name) \
++ { \
++ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \
++ .save_state = 1, \
++ .file = __FILE__, \
++ .line = __LINE__ , \
++ }
+#else
-+ spin_unlock_irqrestore(&bh->b_uptodate_lock, flags);
++# define __RT_SPIN_INITIALIZER(name) \
++ { \
++ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \
++ .save_state = 1, \
++ }
+#endif
-+}
+
-+static inline void buffer_head_init_locks(struct buffer_head *bh)
-+{
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ spin_lock_init(&bh->b_uptodate_lock);
-+#if IS_ENABLED(CONFIG_JBD2)
-+ spin_lock_init(&bh->b_state_lock);
-+ spin_lock_init(&bh->b_journal_head_lock);
++/*
++.wait_list = PLIST_HEAD_INIT_RAW((name).lock.wait_list, (name).lock.wait_lock)
++*/
++
++#define __SPIN_LOCK_UNLOCKED(name) \
++ { .lock = __RT_SPIN_INITIALIZER(name.lock), \
++ SPIN_DEP_MAP_INIT(name) }
++
++#define DEFINE_SPINLOCK(name) \
++ spinlock_t name = __SPIN_LOCK_UNLOCKED(name)
++
+#endif
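A minimal usage sketch (not taken from the patch; names are illustrative) of what the PREEMPT_RT_FULL spinlock_t defined above implies for callers: the lock is backed by an rt_mutex, so spin_lock() may sleep and is priority-inheritance aware, while the locking API stays source compatible:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);	/* expands to the RT initializer above */
static int example_counter;

static void example_update(void)
{
	spin_lock(&example_lock);	/* sleeping, PI-aware lock on RT; spins on !RT */
	example_counter++;
	spin_unlock(&example_lock);
}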
-+#endif
-+}
-+
- /*
- * macro tricks to expand the set_buffer_foo(), clear_buffer_foo()
- * and buffer_foo() functions.
-diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
-index 5b17de62c962..56027cc01a56 100644
---- a/include/linux/cgroup-defs.h
-+++ b/include/linux/cgroup-defs.h
-@@ -16,6 +16,7 @@
- #include <linux/percpu-refcount.h>
- #include <linux/percpu-rwsem.h>
- #include <linux/workqueue.h>
-+#include <linux/swork.h>
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/spinlock_types_up.h linux-4.14/include/linux/spinlock_types_up.h
+--- linux-4.14.orig/include/linux/spinlock_types_up.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/spinlock_types_up.h 2018-09-05 11:05:07.000000000 +0200
+@@ -1,10 +1,6 @@
+ #ifndef __LINUX_SPINLOCK_TYPES_UP_H
+ #define __LINUX_SPINLOCK_TYPES_UP_H
- #ifdef CONFIG_CGROUPS
+-#ifndef __LINUX_SPINLOCK_TYPES_H
+-# error "please don't include this file directly"
+-#endif
+-
+ /*
+ * include/linux/spinlock_types_up.h - spinlock type definitions for UP
+ *
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/srcutiny.h linux-4.14/include/linux/srcutiny.h
+--- linux-4.14.orig/include/linux/srcutiny.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/srcutiny.h 2018-09-05 11:05:07.000000000 +0200
+@@ -43,7 +43,7 @@
-@@ -137,6 +138,7 @@ struct cgroup_subsys_state {
- /* percpu_ref killing and RCU release */
- struct rcu_head rcu_head;
- struct work_struct destroy_work;
-+ struct swork_event destroy_swork;
- };
+ void srcu_drive_gp(struct work_struct *wp);
- /*
-diff --git a/include/linux/completion.h b/include/linux/completion.h
-index 5d5aaae3af43..3bca1590e29f 100644
---- a/include/linux/completion.h
-+++ b/include/linux/completion.h
-@@ -7,8 +7,7 @@
- * Atomic wait-for-completion handler data structures.
- * See kernel/sched/completion.c for details.
+-#define __SRCU_STRUCT_INIT(name) \
++#define __SRCU_STRUCT_INIT(name, __ignored) \
+ { \
+ .srcu_wq = __SWAIT_QUEUE_HEAD_INITIALIZER(name.srcu_wq), \
+ .srcu_cb_tail = &name.srcu_cb_head, \
+@@ -56,9 +56,9 @@
+ * Tree SRCU, which needs some per-CPU data.
*/
--
--#include <linux/wait.h>
-+#include <linux/swait.h>
+ #define DEFINE_SRCU(name) \
+- struct srcu_struct name = __SRCU_STRUCT_INIT(name)
++ struct srcu_struct name = __SRCU_STRUCT_INIT(name, name)
+ #define DEFINE_STATIC_SRCU(name) \
+- static struct srcu_struct name = __SRCU_STRUCT_INIT(name)
++ static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name)
+
+ void synchronize_srcu(struct srcu_struct *sp);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/srcutree.h linux-4.14/include/linux/srcutree.h
+--- linux-4.14.orig/include/linux/srcutree.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/srcutree.h 2018-09-05 11:05:07.000000000 +0200
+@@ -40,7 +40,7 @@
+ unsigned long srcu_unlock_count[2]; /* Unlocks per CPU. */
+
+ /* Update-side state. */
+- raw_spinlock_t __private lock ____cacheline_internodealigned_in_smp;
++ spinlock_t __private lock ____cacheline_internodealigned_in_smp;
+ struct rcu_segcblist srcu_cblist; /* List of callbacks.*/
+ unsigned long srcu_gp_seq_needed; /* Furthest future GP needed. */
+ unsigned long srcu_gp_seq_needed_exp; /* Furthest future exp GP. */
+@@ -58,7 +58,7 @@
+ * Node in SRCU combining tree, similar in function to rcu_data.
+ */
+ struct srcu_node {
+- raw_spinlock_t __private lock;
++ spinlock_t __private lock;
+ unsigned long srcu_have_cbs[4]; /* GP seq for children */
+ /* having CBs, but only */
+ /* is > ->srcu_gq_seq. */
+@@ -78,7 +78,7 @@
+ struct srcu_node *level[RCU_NUM_LVLS + 1];
+ /* First node at each level. */
+ struct mutex srcu_cb_mutex; /* Serialize CB preparation. */
+- raw_spinlock_t __private lock; /* Protect counters */
++ spinlock_t __private lock; /* Protect counters */
+ struct mutex srcu_gp_mutex; /* Serialize GP work. */
+ unsigned int srcu_idx; /* Current rdr array element. */
+ unsigned long srcu_gp_seq; /* Grace-period seq #. */
+@@ -104,10 +104,10 @@
+ #define SRCU_STATE_SCAN1 1
+ #define SRCU_STATE_SCAN2 2
- /*
- * struct completion - structure used to maintain state for a "completion"
-@@ -24,11 +23,11 @@
+-#define __SRCU_STRUCT_INIT(name) \
++#define __SRCU_STRUCT_INIT(name, pcpu_name) \
+ { \
+- .sda = &name##_srcu_data, \
+- .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \
++ .sda = &pcpu_name, \
++ .lock = __SPIN_LOCK_UNLOCKED(name.lock), \
+ .srcu_gp_seq_needed = 0 - 1, \
+ __SRCU_DEP_MAP_INIT(name) \
+ }
+@@ -133,7 +133,7 @@
*/
- struct completion {
- unsigned int done;
-- wait_queue_head_t wait;
-+ struct swait_queue_head wait;
+ #define __DEFINE_SRCU(name, is_static) \
+ static DEFINE_PER_CPU(struct srcu_data, name##_srcu_data);\
+- is_static struct srcu_struct name = __SRCU_STRUCT_INIT(name)
++ is_static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name##_srcu_data)
+ #define DEFINE_SRCU(name) __DEFINE_SRCU(name, /* not static */)
+ #define DEFINE_STATIC_SRCU(name) __DEFINE_SRCU(name, static)
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/suspend.h linux-4.14/include/linux/suspend.h
+--- linux-4.14.orig/include/linux/suspend.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/suspend.h 2018-09-05 11:05:07.000000000 +0200
+@@ -196,6 +196,12 @@
+ void (*end)(void);
};
- #define COMPLETION_INITIALIZER(work) \
-- { 0, __WAIT_QUEUE_HEAD_INITIALIZER((work).wait) }
-+ { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait) }
++#if defined(CONFIG_SUSPEND) || defined(CONFIG_HIBERNATION)
++extern bool pm_in_action;
++#else
++# define pm_in_action false
++#endif
++
+ #ifdef CONFIG_SUSPEND
+ extern suspend_state_t mem_sleep_current;
+ extern suspend_state_t mem_sleep_default;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/swait.h linux-4.14/include/linux/swait.h
+--- linux-4.14.orig/include/linux/swait.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/swait.h 2018-09-05 11:05:07.000000000 +0200
+@@ -5,6 +5,7 @@
+ #include <linux/list.h>
+ #include <linux/stddef.h>
+ #include <linux/spinlock.h>
++#include <linux/wait.h>
+ #include <asm/current.h>
- #define COMPLETION_INITIALIZER_ONSTACK(work) \
- ({ init_completion(&work); work; })
-@@ -73,7 +72,7 @@ struct completion {
- static inline void init_completion(struct completion *x)
- {
- x->done = 0;
-- init_waitqueue_head(&x->wait);
-+ init_swait_queue_head(&x->wait);
- }
+ /*
+@@ -147,6 +148,7 @@
+ extern void swake_up(struct swait_queue_head *q);
+ extern void swake_up_all(struct swait_queue_head *q);
+ extern void swake_up_locked(struct swait_queue_head *q);
++extern void swake_up_all_locked(struct swait_queue_head *q);
- /**
-diff --git a/include/linux/cpu.h b/include/linux/cpu.h
-index e571128ad99a..5e52d28c20c1 100644
---- a/include/linux/cpu.h
-+++ b/include/linux/cpu.h
-@@ -182,6 +182,8 @@ extern void get_online_cpus(void);
- extern void put_online_cpus(void);
- extern void cpu_hotplug_disable(void);
- extern void cpu_hotplug_enable(void);
-+extern void pin_current_cpu(void);
-+extern void unpin_current_cpu(void);
- #define hotcpu_notifier(fn, pri) cpu_notifier(fn, pri)
- #define __hotcpu_notifier(fn, pri) __cpu_notifier(fn, pri)
- #define register_hotcpu_notifier(nb) register_cpu_notifier(nb)
-@@ -199,6 +201,8 @@ static inline void cpu_hotplug_done(void) {}
- #define put_online_cpus() do { } while (0)
- #define cpu_hotplug_disable() do { } while (0)
- #define cpu_hotplug_enable() do { } while (0)
-+static inline void pin_current_cpu(void) { }
-+static inline void unpin_current_cpu(void) { }
- #define hotcpu_notifier(fn, pri) do { (void)(fn); } while (0)
- #define __hotcpu_notifier(fn, pri) do { (void)(fn); } while (0)
- /* These aren't inline functions due to a GCC bug. */
-diff --git a/include/linux/dcache.h b/include/linux/dcache.h
-index 5beed7b30561..61cab7ef458e 100644
---- a/include/linux/dcache.h
-+++ b/include/linux/dcache.h
-@@ -11,6 +11,7 @@
- #include <linux/rcupdate.h>
- #include <linux/lockref.h>
- #include <linux/stringhash.h>
-+#include <linux/wait.h>
+ extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
+ extern void prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait, int state);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/swap.h linux-4.14/include/linux/swap.h
+--- linux-4.14.orig/include/linux/swap.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/swap.h 2018-09-05 11:05:07.000000000 +0200
+@@ -12,6 +12,7 @@
+ #include <linux/fs.h>
+ #include <linux/atomic.h>
+ #include <linux/page-flags.h>
++#include <linux/locallock.h>
+ #include <asm/page.h>
+
+ struct notifier_block;
+@@ -297,7 +298,8 @@
+ void *workingset_eviction(struct address_space *mapping, struct page *page);
+ bool workingset_refault(void *shadow);
+ void workingset_activation(struct page *page);
+-void workingset_update_node(struct radix_tree_node *node, void *private);
++void __workingset_update_node(struct radix_tree_node *node, void *private);
++DECLARE_LOCAL_IRQ_LOCK(shadow_nodes_lock);
- struct path;
- struct vfsmount;
-@@ -100,7 +101,7 @@ struct dentry {
+ /* linux/mm/page_alloc.c */
+ extern unsigned long totalram_pages;
+@@ -310,6 +312,7 @@
- union {
- struct list_head d_lru; /* LRU list */
-- wait_queue_head_t *d_wait; /* in-lookup ones only */
-+ struct swait_queue_head *d_wait; /* in-lookup ones only */
- };
- struct list_head d_child; /* child of parent list */
- struct list_head d_subdirs; /* our children */
-@@ -230,7 +231,7 @@ extern void d_set_d_op(struct dentry *dentry, const struct dentry_operations *op
- extern struct dentry * d_alloc(struct dentry *, const struct qstr *);
- extern struct dentry * d_alloc_pseudo(struct super_block *, const struct qstr *);
- extern struct dentry * d_alloc_parallel(struct dentry *, const struct qstr *,
-- wait_queue_head_t *);
-+ struct swait_queue_head *);
- extern struct dentry * d_splice_alias(struct inode *, struct dentry *);
- extern struct dentry * d_add_ci(struct dentry *, struct inode *, struct qstr *);
- extern struct dentry * d_exact_alias(struct dentry *, struct inode *);
-diff --git a/include/linux/delay.h b/include/linux/delay.h
-index a6ecb34cf547..37caab306336 100644
---- a/include/linux/delay.h
-+++ b/include/linux/delay.h
-@@ -52,4 +52,10 @@ static inline void ssleep(unsigned int seconds)
- msleep(seconds * 1000);
- }
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+extern void cpu_chill(void);
+ /* linux/mm/swap.c */
++DECLARE_LOCAL_IRQ_LOCK(swapvec_lock);
+ extern void lru_cache_add(struct page *);
+ extern void lru_cache_add_anon(struct page *page);
+ extern void lru_cache_add_file(struct page *page);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/swork.h linux-4.14/include/linux/swork.h
+--- linux-4.14.orig/include/linux/swork.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/linux/swork.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,24 @@
++#ifndef _LINUX_SWORK_H
++#define _LINUX_SWORK_H
++
++#include <linux/list.h>
++
++struct swork_event {
++ struct list_head item;
++ unsigned long flags;
++ void (*func)(struct swork_event *);
++};
++
++static inline void INIT_SWORK(struct swork_event *event,
++ void (*func)(struct swork_event *))
++{
++ event->flags = 0;
++ event->func = func;
++}
++
++bool swork_queue(struct swork_event *sev);
++
++int swork_get(void);
++void swork_put(void);
++
++#endif /* _LINUX_SWORK_H */
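A minimal caller sketch for the simple work queue ("swork") API declared above; the event, handler and init/exit functions are hypothetical, and swork_get()/swork_put() bracket the lifetime of the worker thread:

#include <linux/swork.h>

static struct swork_event example_event;	/* hypothetical user of the API */

static void example_handler(struct swork_event *sev)
{
	/* runs in the swork kernel thread, i.e. in preemptible context */
}

static int example_init(void)
{
	int ret = swork_get();		/* create/reference the worker thread */

	if (ret)
		return ret;
	INIT_SWORK(&example_event, example_handler);
	return 0;
}

static void example_trigger(void)
{
	swork_queue(&example_event);	/* defer example_handler() to the worker */
}

static void example_exit(void)
{
	swork_put();			/* drop the worker thread reference */
}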
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/thread_info.h linux-4.14/include/linux/thread_info.h
+--- linux-4.14.orig/include/linux/thread_info.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/thread_info.h 2018-09-05 11:05:07.000000000 +0200
+@@ -86,7 +86,17 @@
+ #define test_thread_flag(flag) \
+ test_ti_thread_flag(current_thread_info(), flag)
+
+-#define tif_need_resched() test_thread_flag(TIF_NEED_RESCHED)
++#ifdef CONFIG_PREEMPT_LAZY
++#define tif_need_resched() (test_thread_flag(TIF_NEED_RESCHED) || \
++ test_thread_flag(TIF_NEED_RESCHED_LAZY))
++#define tif_need_resched_now() (test_thread_flag(TIF_NEED_RESCHED))
++#define tif_need_resched_lazy() (test_thread_flag(TIF_NEED_RESCHED_LAZY))
++
+#else
-+# define cpu_chill() cpu_relax()
++#define tif_need_resched() test_thread_flag(TIF_NEED_RESCHED)
++#define tif_need_resched_now() test_thread_flag(TIF_NEED_RESCHED)
++#define tif_need_resched_lazy() 0
+#endif
-+
- #endif /* defined(_LINUX_DELAY_H) */
-diff --git a/include/linux/highmem.h b/include/linux/highmem.h
-index bb3f3297062a..a117a33ef72c 100644
---- a/include/linux/highmem.h
-+++ b/include/linux/highmem.h
-@@ -7,6 +7,7 @@
- #include <linux/mm.h>
- #include <linux/uaccess.h>
- #include <linux/hardirq.h>
-+#include <linux/sched.h>
- #include <asm/cacheflush.h>
+ #ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
+ static inline int arch_within_stack_frames(const void * const stack,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/timer.h linux-4.14/include/linux/timer.h
+--- linux-4.14.orig/include/linux/timer.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/linux/timer.h 2018-09-05 11:05:07.000000000 +0200
+@@ -213,7 +213,7 @@
-@@ -65,7 +66,7 @@ static inline void kunmap(struct page *page)
+ extern int try_to_del_timer_sync(struct timer_list *timer);
- static inline void *kmap_atomic(struct page *page)
- {
-- preempt_disable();
-+ preempt_disable_nort();
- pagefault_disable();
- return page_address(page);
- }
-@@ -74,7 +75,7 @@ static inline void *kmap_atomic(struct page *page)
- static inline void __kunmap_atomic(void *addr)
- {
- pagefault_enable();
-- preempt_enable();
-+ preempt_enable_nort();
- }
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
+ extern int del_timer_sync(struct timer_list *timer);
+ #else
+ # define del_timer_sync(t) del_timer(t)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/trace_events.h linux-4.14/include/linux/trace_events.h
+--- linux-4.14.orig/include/linux/trace_events.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/trace_events.h 2018-09-05 11:05:07.000000000 +0200
+@@ -62,6 +62,9 @@
+ unsigned char flags;
+ unsigned char preempt_count;
+ int pid;
++ unsigned short migrate_disable;
++ unsigned short padding;
++ unsigned char preempt_lazy_count;
+ };
- #define kmap_atomic_pfn(pfn) kmap_atomic(pfn_to_page(pfn))
-@@ -86,32 +87,51 @@ static inline void __kunmap_atomic(void *addr)
+ #define TRACE_EVENT_TYPE_MAX \
+@@ -402,11 +405,13 @@
+
+ extern int filter_match_preds(struct event_filter *filter, void *rec);
+
+-extern enum event_trigger_type event_triggers_call(struct trace_event_file *file,
+- void *rec);
+-extern void event_triggers_post_call(struct trace_event_file *file,
+- enum event_trigger_type tt,
+- void *rec);
++extern enum event_trigger_type
++event_triggers_call(struct trace_event_file *file, void *rec,
++ struct ring_buffer_event *event);
++extern void
++event_triggers_post_call(struct trace_event_file *file,
++ enum event_trigger_type tt,
++ void *rec, struct ring_buffer_event *event);
- #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
+ bool trace_event_ignore_this_pid(struct trace_event_file *trace_file);
-+#ifndef CONFIG_PREEMPT_RT_FULL
- DECLARE_PER_CPU(int, __kmap_atomic_idx);
-+#endif
+@@ -426,7 +431,7 @@
- static inline int kmap_atomic_idx_push(void)
+ if (!(eflags & EVENT_FILE_FL_TRIGGER_COND)) {
+ if (eflags & EVENT_FILE_FL_TRIGGER_MODE)
+- event_triggers_call(file, NULL);
++ event_triggers_call(file, NULL, NULL);
+ if (eflags & EVENT_FILE_FL_SOFT_DISABLED)
+ return true;
+ if (eflags & EVENT_FILE_FL_PID_FILTER)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/uaccess.h linux-4.14/include/linux/uaccess.h
+--- linux-4.14.orig/include/linux/uaccess.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/uaccess.h 2018-09-05 11:05:07.000000000 +0200
+@@ -185,6 +185,7 @@
+ */
+ static inline void pagefault_disable(void)
{
-+#ifndef CONFIG_PREEMPT_RT_FULL
- int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;
++ migrate_disable();
+ pagefault_disabled_inc();
+ /*
+ * make sure to have issued the store before a pagefault
+@@ -201,6 +202,7 @@
+ */
+ barrier();
+ pagefault_disabled_dec();
++ migrate_enable();
+ }
--#ifdef CONFIG_DEBUG_HIGHMEM
-+# ifdef CONFIG_DEBUG_HIGHMEM
- WARN_ON_ONCE(in_irq() && !irqs_disabled());
- BUG_ON(idx >= KM_TYPE_NR);
--#endif
-+# endif
- return idx;
-+#else
-+ current->kmap_idx++;
-+ BUG_ON(current->kmap_idx > KM_TYPE_NR);
-+ return current->kmap_idx - 1;
-+#endif
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/vmstat.h linux-4.14/include/linux/vmstat.h
+--- linux-4.14.orig/include/linux/vmstat.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/vmstat.h 2018-09-05 11:05:07.000000000 +0200
+@@ -33,7 +33,9 @@
+ */
+ static inline void __count_vm_event(enum vm_event_item item)
+ {
++ preempt_disable_rt();
+ raw_cpu_inc(vm_event_states.event[item]);
++ preempt_enable_rt();
}
- static inline int kmap_atomic_idx(void)
+ static inline void count_vm_event(enum vm_event_item item)
+@@ -43,7 +45,9 @@
+
+ static inline void __count_vm_events(enum vm_event_item item, long delta)
{
-+#ifndef CONFIG_PREEMPT_RT_FULL
- return __this_cpu_read(__kmap_atomic_idx) - 1;
-+#else
-+ return current->kmap_idx - 1;
-+#endif
++ preempt_disable_rt();
+ raw_cpu_add(vm_event_states.event[item], delta);
++ preempt_enable_rt();
}
- static inline void kmap_atomic_idx_pop(void)
- {
--#ifdef CONFIG_DEBUG_HIGHMEM
-+#ifndef CONFIG_PREEMPT_RT_FULL
-+# ifdef CONFIG_DEBUG_HIGHMEM
- int idx = __this_cpu_dec_return(__kmap_atomic_idx);
+ static inline void count_vm_events(enum vm_event_item item, long delta)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/linux/wait.h linux-4.14/include/linux/wait.h
+--- linux-4.14.orig/include/linux/wait.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/linux/wait.h 2018-09-05 11:05:07.000000000 +0200
+@@ -10,6 +10,7 @@
+
+ #include <asm/current.h>
+ #include <uapi/linux/wait.h>
++#include <linux/atomic.h>
+
+ typedef struct wait_queue_entry wait_queue_entry_t;
+
+@@ -486,8 +487,8 @@
+ int __ret = 0; \
+ struct hrtimer_sleeper __t; \
+ \
+- hrtimer_init_on_stack(&__t.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL); \
+- hrtimer_init_sleeper(&__t, current); \
++ hrtimer_init_sleeper_on_stack(&__t, CLOCK_MONOTONIC, HRTIMER_MODE_REL, \
++ current); \
+ if ((timeout) != KTIME_MAX) \
+ hrtimer_start_range_ns(&__t.timer, timeout, \
+ current->timer_slack_ns, \
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/net/gen_stats.h linux-4.14/include/net/gen_stats.h
+--- linux-4.14.orig/include/net/gen_stats.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/net/gen_stats.h 2018-09-05 11:05:07.000000000 +0200
+@@ -6,6 +6,7 @@
+ #include <linux/socket.h>
+ #include <linux/rtnetlink.h>
+ #include <linux/pkt_sched.h>
++#include <net/net_seq_lock.h>
+
+ struct gnet_stats_basic_cpu {
+ struct gnet_stats_basic_packed bstats;
+@@ -36,11 +37,11 @@
+ spinlock_t *lock, struct gnet_dump *d,
+ int padattr);
- BUG_ON(idx < 0);
--#else
-+# else
- __this_cpu_dec(__kmap_atomic_idx);
-+# endif
-+#else
-+ current->kmap_idx--;
-+# ifdef CONFIG_DEBUG_HIGHMEM
-+ BUG_ON(current->kmap_idx < 0);
-+# endif
- #endif
+-int gnet_stats_copy_basic(const seqcount_t *running,
++int gnet_stats_copy_basic(net_seqlock_t *running,
+ struct gnet_dump *d,
+ struct gnet_stats_basic_cpu __percpu *cpu,
+ struct gnet_stats_basic_packed *b);
+-void __gnet_stats_copy_basic(const seqcount_t *running,
++void __gnet_stats_copy_basic(net_seqlock_t *running,
+ struct gnet_stats_basic_packed *bstats,
+ struct gnet_stats_basic_cpu __percpu *cpu,
+ struct gnet_stats_basic_packed *b);
+@@ -57,13 +58,13 @@
+ struct gnet_stats_basic_cpu __percpu *cpu_bstats,
+ struct net_rate_estimator __rcu **rate_est,
+ spinlock_t *stats_lock,
+- seqcount_t *running, struct nlattr *opt);
++ net_seqlock_t *running, struct nlattr *opt);
+ void gen_kill_estimator(struct net_rate_estimator __rcu **ptr);
+ int gen_replace_estimator(struct gnet_stats_basic_packed *bstats,
+ struct gnet_stats_basic_cpu __percpu *cpu_bstats,
+ struct net_rate_estimator __rcu **ptr,
+ spinlock_t *stats_lock,
+- seqcount_t *running, struct nlattr *opt);
++ net_seqlock_t *running, struct nlattr *opt);
+ bool gen_estimator_active(struct net_rate_estimator __rcu **ptr);
+ bool gen_estimator_read(struct net_rate_estimator __rcu **ptr,
+ struct gnet_stats_rate_est64 *sample);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/net/neighbour.h linux-4.14/include/net/neighbour.h
+--- linux-4.14.orig/include/net/neighbour.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/include/net/neighbour.h 2018-09-05 11:05:07.000000000 +0200
+@@ -450,7 +450,7 @@
}
+ #endif
-diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
-index 5e00f80b1535..65d0671f20b4 100644
---- a/include/linux/hrtimer.h
-+++ b/include/linux/hrtimer.h
-@@ -87,6 +87,9 @@ enum hrtimer_restart {
- * @function: timer expiry callback function
- * @base: pointer to the timer base (per cpu and per clock)
- * @state: state information (See bit values above)
-+ * @cb_entry: list entry to defer timers from hardirq context
-+ * @irqsafe: timer can run in hardirq context
-+ * @praecox: timer expiry time if expired at the time of programming
- * @is_rel: Set if the timer was armed relative
- * @start_pid: timer statistics field to store the pid of the task which
- * started the timer
-@@ -103,6 +106,11 @@ struct hrtimer {
- enum hrtimer_restart (*function)(struct hrtimer *);
- struct hrtimer_clock_base *base;
- u8 state;
-+ struct list_head cb_entry;
-+ int irqsafe;
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ ktime_t praecox;
-+#endif
- u8 is_rel;
- #ifdef CONFIG_TIMER_STATS
- int start_pid;
-@@ -123,11 +131,7 @@ struct hrtimer_sleeper {
- struct task_struct *task;
- };
+-static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb)
++static inline int neigh_hh_output(struct hh_cache *hh, struct sk_buff *skb)
+ {
+ unsigned int seq;
+ unsigned int hh_len;
+@@ -474,7 +474,7 @@
--#ifdef CONFIG_64BIT
- # define HRTIMER_CLOCK_BASE_ALIGN 64
--#else
--# define HRTIMER_CLOCK_BASE_ALIGN 32
--#endif
+ static inline int neigh_output(struct neighbour *n, struct sk_buff *skb)
+ {
+- const struct hh_cache *hh = &n->hh;
++ struct hh_cache *hh = &n->hh;
- /**
- * struct hrtimer_clock_base - the timer base for a specific clock
-@@ -136,6 +140,7 @@ struct hrtimer_sleeper {
- * timer to a base on another cpu.
- * @clockid: clock id for per_cpu support
- * @active: red black tree root node for the active timers
-+ * @expired: list head for deferred timers.
- * @get_time: function to retrieve the current time of the clock
- * @offset: offset of this clock to the monotonic base
- */
-@@ -144,6 +149,7 @@ struct hrtimer_clock_base {
- int index;
- clockid_t clockid;
- struct timerqueue_head active;
-+ struct list_head expired;
- ktime_t (*get_time)(void);
- ktime_t offset;
- } __attribute__((__aligned__(HRTIMER_CLOCK_BASE_ALIGN)));
-@@ -187,6 +193,7 @@ struct hrtimer_cpu_base {
- raw_spinlock_t lock;
- seqcount_t seq;
- struct hrtimer *running;
-+ struct hrtimer *running_soft;
- unsigned int cpu;
- unsigned int active_bases;
- unsigned int clock_was_set_seq;
-@@ -203,6 +210,9 @@ struct hrtimer_cpu_base {
- unsigned int nr_hangs;
- unsigned int max_hang_time;
- #endif
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ wait_queue_head_t wait;
-+#endif
- struct hrtimer_clock_base clock_base[HRTIMER_MAX_CLOCK_BASES];
- } ____cacheline_aligned;
+ if ((n->nud_state & NUD_CONNECTED) && hh->hh_len)
+ return neigh_hh_output(hh, skb);
+@@ -515,7 +515,7 @@
-@@ -412,6 +422,13 @@ static inline void hrtimer_restart(struct hrtimer *timer)
- hrtimer_start_expires(timer, HRTIMER_MODE_ABS);
- }
+ #define NEIGH_CB(skb) ((struct neighbour_cb *)(skb)->cb)
-+/* Softirq preemption could deadlock timer removal */
+-static inline void neigh_ha_snapshot(char *dst, const struct neighbour *n,
++static inline void neigh_ha_snapshot(char *dst, struct neighbour *n,
+ const struct net_device *dev)
+ {
+ unsigned int seq;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/net/net_seq_lock.h linux-4.14/include/net/net_seq_lock.h
+--- linux-4.14.orig/include/net/net_seq_lock.h 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/include/net/net_seq_lock.h 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,15 @@
++#ifndef __NET_NET_SEQ_LOCK_H__
++#define __NET_NET_SEQ_LOCK_H__
++
+#ifdef CONFIG_PREEMPT_RT_BASE
-+ extern void hrtimer_wait_for_timer(const struct hrtimer *timer);
++# define net_seqlock_t seqlock_t
++# define net_seq_begin(__r) read_seqbegin(__r)
++# define net_seq_retry(__r, __s) read_seqretry(__r, __s)
++
+#else
-+# define hrtimer_wait_for_timer(timer) do { cpu_relax(); } while (0)
++# define net_seqlock_t seqcount_t
++# define net_seq_begin(__r) read_seqcount_begin(__r)
++# define net_seq_retry(__r, __s) read_seqcount_retry(__r, __s)
+#endif
+
- /* Query timers: */
- extern ktime_t __hrtimer_get_remaining(const struct hrtimer *timer, bool adjust);
++#endif
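A read-side sketch of how the net_seq_*() wrappers above are intended to be used (struct example_stats and the reader function are hypothetical): the loop retries the copy if a writer updated the statistics concurrently; on RT the writer side serializes through the seqlock's internal lock, on !RT it is the plain lockless seqcount protocol:

#include <linux/types.h>
#include <net/net_seq_lock.h>

struct example_stats {			/* hypothetical statistics block */
	u64 bytes;
};

static u64 example_read_bytes(const struct example_stats *s,
			      net_seqlock_t *running)
{
	unsigned int seq;
	u64 bytes;

	do {
		seq = net_seq_begin(running);	/* read_seqbegin() on RT, read_seqcount_begin() on !RT */
		bytes = s->bytes;
	} while (net_seq_retry(running, seq));	/* retry if a writer raced */

	return bytes;
}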
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/net/sch_generic.h linux-4.14/include/net/sch_generic.h
+--- linux-4.14.orig/include/net/sch_generic.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/net/sch_generic.h 2018-09-05 11:05:07.000000000 +0200
+@@ -10,6 +10,7 @@
+ #include <linux/percpu.h>
+ #include <linux/dynamic_queue_limits.h>
+ #include <linux/list.h>
++#include <net/net_seq_lock.h>
+ #include <linux/refcount.h>
+ #include <linux/workqueue.h>
+ #include <net/gen_stats.h>
+@@ -90,7 +91,7 @@
+ struct sk_buff *gso_skb ____cacheline_aligned_in_smp;
+ struct qdisc_skb_head q;
+ struct gnet_stats_basic_packed bstats;
+- seqcount_t running;
++ net_seqlock_t running;
+ struct gnet_stats_queue qstats;
+ unsigned long state;
+ struct Qdisc *next_sched;
+@@ -109,13 +110,22 @@
+ refcount_inc(&qdisc->refcnt);
+ }
-@@ -436,7 +453,7 @@ static inline int hrtimer_is_queued(struct hrtimer *timer)
- * Helper function to check, whether the timer is running the callback
- * function
- */
--static inline int hrtimer_callback_running(struct hrtimer *timer)
-+static inline int hrtimer_callback_running(const struct hrtimer *timer)
+-static inline bool qdisc_is_running(const struct Qdisc *qdisc)
++static inline bool qdisc_is_running(struct Qdisc *qdisc)
{
- return timer->base->cpu_base->running == timer;
- }
-diff --git a/include/linux/idr.h b/include/linux/idr.h
-index 083d61e92706..5899796f50cb 100644
---- a/include/linux/idr.h
-+++ b/include/linux/idr.h
-@@ -95,10 +95,14 @@ bool idr_is_empty(struct idr *idp);
- * Each idr_preload() should be matched with an invocation of this
- * function. See idr_preload() for details.
- */
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+void idr_preload_end(void);
++#ifdef CONFIG_PREEMPT_RT_BASE
++ return spin_is_locked(&qdisc->running.lock) ? true : false;
+#else
- static inline void idr_preload_end(void)
- {
- preempt_enable();
- }
+ return (raw_read_seqcount(&qdisc->running) & 1) ? true : false;
+#endif
+ }
- /**
- * idr_find - return pointer for given id
-diff --git a/include/linux/init_task.h b/include/linux/init_task.h
-index 325f649d77ff..8af70bcc799b 100644
---- a/include/linux/init_task.h
-+++ b/include/linux/init_task.h
-@@ -150,6 +150,12 @@ extern struct task_group root_task_group;
- # define INIT_PERF_EVENTS(tsk)
- #endif
-
+ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
+ {
+#ifdef CONFIG_PREEMPT_RT_BASE
-+# define INIT_TIMER_LIST .posix_timer_list = NULL,
++ if (try_write_seqlock(&qdisc->running))
++ return true;
++ return false;
+#else
-+# define INIT_TIMER_LIST
+ if (qdisc_is_running(qdisc))
+ return false;
+ /* Variant of write_seqcount_begin() telling lockdep a trylock
+@@ -124,11 +134,16 @@
+ raw_write_seqcount_begin(&qdisc->running);
+ seqcount_acquire(&qdisc->running.dep_map, 0, 1, _RET_IP_);
+ return true;
+#endif
-+
- #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN
- # define INIT_VTIME(tsk) \
- .vtime_seqcount = SEQCNT_ZERO(tsk.vtime_seqcount), \
-@@ -250,6 +256,7 @@ extern struct task_group root_task_group;
- .cpu_timers = INIT_CPU_TIMERS(tsk.cpu_timers), \
- .pi_lock = __RAW_SPIN_LOCK_UNLOCKED(tsk.pi_lock), \
- .timer_slack_ns = 50000, /* 50 usec default slack */ \
-+ INIT_TIMER_LIST \
- .pids = { \
- [PIDTYPE_PID] = INIT_PID_LINK(PIDTYPE_PID), \
- [PIDTYPE_PGID] = INIT_PID_LINK(PIDTYPE_PGID), \
-diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
-index 72f0721f75e7..480972ae47d3 100644
---- a/include/linux/interrupt.h
-+++ b/include/linux/interrupt.h
-@@ -14,6 +14,7 @@
- #include <linux/hrtimer.h>
- #include <linux/kref.h>
- #include <linux/workqueue.h>
-+#include <linux/swork.h>
-
- #include <linux/atomic.h>
- #include <asm/ptrace.h>
-@@ -61,6 +62,7 @@
- * interrupt handler after suspending interrupts. For system
- * wakeup devices users need to implement wakeup detection in
- * their interrupt handlers.
-+ * IRQF_NO_SOFTIRQ_CALL - Do not process softirqs in the irq thread context (RT)
- */
- #define IRQF_SHARED 0x00000080
- #define IRQF_PROBE_SHARED 0x00000100
-@@ -74,6 +76,7 @@
- #define IRQF_NO_THREAD 0x00010000
- #define IRQF_EARLY_RESUME 0x00020000
- #define IRQF_COND_SUSPEND 0x00040000
-+#define IRQF_NO_SOFTIRQ_CALL 0x00080000
-
- #define IRQF_TIMER (__IRQF_TIMER | IRQF_NO_SUSPEND | IRQF_NO_THREAD)
-
-@@ -196,7 +199,7 @@ extern void devm_free_irq(struct device *dev, unsigned int irq, void *dev_id);
- #ifdef CONFIG_LOCKDEP
- # define local_irq_enable_in_hardirq() do { } while (0)
- #else
--# define local_irq_enable_in_hardirq() local_irq_enable()
-+# define local_irq_enable_in_hardirq() local_irq_enable_nort()
- #endif
+ }
- extern void disable_irq_nosync(unsigned int irq);
-@@ -216,6 +219,7 @@ extern void resume_device_irqs(void);
- * struct irq_affinity_notify - context for notification of IRQ affinity changes
- * @irq: Interrupt to which notification applies
- * @kref: Reference count, for internal use
-+ * @swork: Swork item, for internal use
- * @work: Work item, for internal use
- * @notify: Function to be called on change. This will be
- * called in process context.
-@@ -227,7 +231,11 @@ extern void resume_device_irqs(void);
- struct irq_affinity_notify {
- unsigned int irq;
- struct kref kref;
+ static inline void qdisc_run_end(struct Qdisc *qdisc)
+ {
+#ifdef CONFIG_PREEMPT_RT_BASE
-+ struct swork_event swork;
++ write_sequnlock(&qdisc->running);
+#else
- struct work_struct work;
+ write_seqcount_end(&qdisc->running);
+#endif
- void (*notify)(struct irq_affinity_notify *, const cpumask_t *mask);
- void (*release)(struct kref *ref);
- };
-@@ -406,9 +414,13 @@ extern int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
- bool state);
+ }
- #ifdef CONFIG_IRQ_FORCED_THREADING
-+# ifndef CONFIG_PREEMPT_RT_BASE
- extern bool force_irqthreads;
-+# else
-+# define force_irqthreads (true)
-+# endif
- #else
--#define force_irqthreads (0)
-+#define force_irqthreads (false)
- #endif
+ static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
+@@ -337,7 +352,7 @@
+ return qdisc_lock(root);
+ }
- #ifndef __ARCH_SET_SOFTIRQ_PENDING
-@@ -465,9 +477,10 @@ struct softirq_action
- void (*action)(struct softirq_action *);
- };
+-static inline seqcount_t *qdisc_root_sleeping_running(const struct Qdisc *qdisc)
++static inline net_seqlock_t *qdisc_root_sleeping_running(const struct Qdisc *qdisc)
+ {
+ struct Qdisc *root = qdisc_root_sleeping(qdisc);
-+#ifndef CONFIG_PREEMPT_RT_FULL
- asmlinkage void do_softirq(void);
- asmlinkage void __do_softirq(void);
--
-+static inline void thread_do_softirq(void) { do_softirq(); }
- #ifdef __ARCH_HAS_DO_SOFTIRQ
- void do_softirq_own_stack(void);
- #else
-@@ -476,13 +489,25 @@ static inline void do_softirq_own_stack(void)
- __do_softirq();
- }
- #endif
-+#else
-+extern void thread_do_softirq(void);
-+#endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/net/xfrm.h linux-4.14/include/net/xfrm.h
+--- linux-4.14.orig/include/net/xfrm.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/net/xfrm.h 2018-09-05 11:05:07.000000000 +0200
+@@ -217,7 +217,7 @@
+ struct xfrm_stats stats;
+
+ struct xfrm_lifetime_cur curlft;
+- struct tasklet_hrtimer mtimer;
++ struct hrtimer mtimer;
+
+ struct xfrm_state_offload xso;
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/include/trace/events/timer.h linux-4.14/include/trace/events/timer.h
+--- linux-4.14.orig/include/trace/events/timer.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/include/trace/events/timer.h 2018-09-05 11:05:07.000000000 +0200
+@@ -148,7 +148,11 @@
+ { HRTIMER_MODE_ABS, "ABS" }, \
+ { HRTIMER_MODE_REL, "REL" }, \
+ { HRTIMER_MODE_ABS_PINNED, "ABS|PINNED" }, \
+- { HRTIMER_MODE_REL_PINNED, "REL|PINNED" })
++ { HRTIMER_MODE_REL_PINNED, "REL|PINNED" }, \
++ { HRTIMER_MODE_ABS_SOFT, "ABS|SOFT" }, \
++ { HRTIMER_MODE_REL_SOFT, "REL|SOFT" }, \
++ { HRTIMER_MODE_ABS_PINNED_SOFT, "ABS|PINNED|SOFT" }, \
++ { HRTIMER_MODE_REL_PINNED_SOFT, "REL|PINNED|SOFT" })
- extern void open_softirq(int nr, void (*action)(struct softirq_action *));
- extern void softirq_init(void);
- extern void __raise_softirq_irqoff(unsigned int nr);
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+extern void __raise_softirq_irqoff_ksoft(unsigned int nr);
-+#else
-+static inline void __raise_softirq_irqoff_ksoft(unsigned int nr)
-+{
-+ __raise_softirq_irqoff(nr);
-+}
-+#endif
+ /**
+ * hrtimer_init - called when the hrtimer is initialized
+@@ -186,15 +190,16 @@
+ */
+ TRACE_EVENT(hrtimer_start,
+
+- TP_PROTO(struct hrtimer *hrtimer),
++ TP_PROTO(struct hrtimer *hrtimer, enum hrtimer_mode mode),
+
+- TP_ARGS(hrtimer),
++ TP_ARGS(hrtimer, mode),
+
+ TP_STRUCT__entry(
+ __field( void *, hrtimer )
+ __field( void *, function )
+ __field( s64, expires )
+ __field( s64, softexpires )
++ __field( enum hrtimer_mode, mode )
+ ),
+
+ TP_fast_assign(
+@@ -202,12 +207,14 @@
+ __entry->function = hrtimer->function;
+ __entry->expires = hrtimer_get_expires(hrtimer);
+ __entry->softexpires = hrtimer_get_softexpires(hrtimer);
++ __entry->mode = mode;
+ ),
+
+- TP_printk("hrtimer=%p function=%pf expires=%llu softexpires=%llu",
+- __entry->hrtimer, __entry->function,
++ TP_printk("hrtimer=%p function=%pf expires=%llu softexpires=%llu "
++ "mode=%s", __entry->hrtimer, __entry->function,
+ (unsigned long long) __entry->expires,
+- (unsigned long long) __entry->softexpires)
++ (unsigned long long) __entry->softexpires,
++ decode_hrtimer_mode(__entry->mode))
+ );
- extern void raise_softirq_irqoff(unsigned int nr);
- extern void raise_softirq(unsigned int nr);
-+extern void softirq_check_pending_idle(void);
+ /**
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/init/Kconfig linux-4.14/init/Kconfig
+--- linux-4.14.orig/init/Kconfig 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/init/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -744,6 +744,7 @@
+ config RT_GROUP_SCHED
+ bool "Group scheduling for SCHED_RR/FIFO"
+ depends on CGROUP_SCHED
++ depends on !PREEMPT_RT_FULL
+ default n
+ help
+ This feature lets you explicitly allocate real CPU bandwidth
+@@ -1533,6 +1534,7 @@
- DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
+ config SLAB
+ bool "SLAB"
++ depends on !PREEMPT_RT_FULL
+ select HAVE_HARDENED_USERCOPY_ALLOCATOR
+ help
+ The regular slab allocator that is established and known to work
+@@ -1553,6 +1555,7 @@
+ config SLOB
+ depends on EXPERT
+ bool "SLOB (Simple Allocator)"
++ depends on !PREEMPT_RT_FULL
+ help
+ SLOB replaces the stock allocator with a drastically simpler
+ allocator. SLOB is generally more space efficient but
+@@ -1594,7 +1597,7 @@
-@@ -504,8 +529,9 @@ static inline struct task_struct *this_cpu_ksoftirqd(void)
- to be executed on some cpu at least once after this.
- * If the tasklet is already scheduled, but its execution is still not
- started, it will be executed only once.
-- * If this tasklet is already running on another CPU (or schedule is called
-- from tasklet itself), it is rescheduled for later.
-+ * If this tasklet is already running on another CPU, it is rescheduled
-+ for later.
-+ * Schedule must not be called from the tasklet itself (a lockup occurs)
- * Tasklet is strictly serialized wrt itself, but not
- wrt another tasklets. If client needs some intertask synchronization,
- he makes it with spinlocks.
-@@ -530,27 +556,36 @@ struct tasklet_struct name = { NULL, 0, ATOMIC_INIT(1), func, data }
- enum
- {
- TASKLET_STATE_SCHED, /* Tasklet is scheduled for execution */
-- TASKLET_STATE_RUN /* Tasklet is running (SMP only) */
-+ TASKLET_STATE_RUN, /* Tasklet is running (SMP only) */
-+ TASKLET_STATE_PENDING /* Tasklet is pending */
- };
+ config SLUB_CPU_PARTIAL
+ default y
+- depends on SLUB && SMP
++ depends on SLUB && SMP && !PREEMPT_RT_FULL
+ bool "SLUB per cpu partial cache"
+ help
+ Per cpu partial caches accellerate objects allocation and freeing
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/init/main.c linux-4.14/init/main.c
+--- linux-4.14.orig/init/main.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/init/main.c 2018-09-05 11:05:07.000000000 +0200
+@@ -543,6 +543,7 @@
+ setup_command_line(command_line);
+ setup_nr_cpu_ids();
+ setup_per_cpu_areas();
++ softirq_early_init();
+ smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */
+ boot_cpu_hotplug_init();
--#ifdef CONFIG_SMP
-+#define TASKLET_STATEF_SCHED (1 << TASKLET_STATE_SCHED)
-+#define TASKLET_STATEF_RUN (1 << TASKLET_STATE_RUN)
-+#define TASKLET_STATEF_PENDING (1 << TASKLET_STATE_PENDING)
-+
-+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
- static inline int tasklet_trylock(struct tasklet_struct *t)
- {
- return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/init/Makefile linux-4.14/init/Makefile
+--- linux-4.14.orig/init/Makefile 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/init/Makefile 2018-09-05 11:05:07.000000000 +0200
+@@ -36,4 +36,4 @@
+ include/generated/compile.h: FORCE
+ @$($(quiet)chk_compile.h)
+ $(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkcompile_h $@ \
+- "$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CC) $(KBUILD_CFLAGS)"
++ "$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CONFIG_PREEMPT_RT_FULL)" "$(CC) $(KBUILD_CFLAGS)"
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/cgroup/cgroup.c linux-4.14/kernel/cgroup/cgroup.c
+--- linux-4.14.orig/kernel/cgroup/cgroup.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/cgroup/cgroup.c 2018-09-05 11:05:07.000000000 +0200
+@@ -4508,10 +4508,10 @@
+ queue_work(cgroup_destroy_wq, &css->destroy_work);
}
-+static inline int tasklet_tryunlock(struct tasklet_struct *t)
-+{
-+ return cmpxchg(&t->state, TASKLET_STATEF_RUN, 0) == TASKLET_STATEF_RUN;
-+}
-+
- static inline void tasklet_unlock(struct tasklet_struct *t)
+-static void css_release_work_fn(struct work_struct *work)
++static void css_release_work_fn(struct swork_event *sev)
{
- smp_mb__before_atomic();
- clear_bit(TASKLET_STATE_RUN, &(t)->state);
- }
+ struct cgroup_subsys_state *css =
+- container_of(work, struct cgroup_subsys_state, destroy_work);
++ container_of(sev, struct cgroup_subsys_state, destroy_swork);
+ struct cgroup_subsys *ss = css->ss;
+ struct cgroup *cgrp = css->cgroup;
--static inline void tasklet_unlock_wait(struct tasklet_struct *t)
--{
-- while (test_bit(TASKLET_STATE_RUN, &(t)->state)) { barrier(); }
--}
-+extern void tasklet_unlock_wait(struct tasklet_struct *t);
-+
- #else
- #define tasklet_trylock(t) 1
-+#define tasklet_tryunlock(t) 1
- #define tasklet_unlock_wait(t) do { } while (0)
- #define tasklet_unlock(t) do { } while (0)
- #endif
-@@ -599,12 +634,7 @@ static inline void tasklet_disable(struct tasklet_struct *t)
- smp_mb();
- }
+@@ -4562,8 +4562,8 @@
+ struct cgroup_subsys_state *css =
+ container_of(ref, struct cgroup_subsys_state, refcnt);
--static inline void tasklet_enable(struct tasklet_struct *t)
--{
-- smp_mb__before_atomic();
-- atomic_dec(&t->count);
--}
--
-+extern void tasklet_enable(struct tasklet_struct *t);
- extern void tasklet_kill(struct tasklet_struct *t);
- extern void tasklet_kill_immediate(struct tasklet_struct *t, unsigned int cpu);
- extern void tasklet_init(struct tasklet_struct *t,
-@@ -635,6 +665,12 @@ void tasklet_hrtimer_cancel(struct tasklet_hrtimer *ttimer)
- tasklet_kill(&ttimer->tasklet);
+- INIT_WORK(&css->destroy_work, css_release_work_fn);
+- queue_work(cgroup_destroy_wq, &css->destroy_work);
++ INIT_SWORK(&css->destroy_swork, css_release_work_fn);
++ swork_queue(&css->destroy_swork);
}
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+extern void softirq_early_init(void);
-+#else
-+static inline void softirq_early_init(void) { }
-+#endif
-+
- /*
- * Autoprobing for irqs:
- *
-diff --git a/include/linux/irq.h b/include/linux/irq.h
-index e79875574b39..177cee0c3305 100644
---- a/include/linux/irq.h
-+++ b/include/linux/irq.h
-@@ -72,6 +72,7 @@ enum irqchip_irq_state;
- * IRQ_IS_POLLED - Always polled by another interrupt. Exclude
- * it from the spurious interrupt detection
- * mechanism and from core side polling.
-+ * IRQ_NO_SOFTIRQ_CALL - No softirq processing in the irq thread context (RT)
- * IRQ_DISABLE_UNLAZY - Disable lazy irq disable
+ static void init_and_link_css(struct cgroup_subsys_state *css,
+@@ -5269,6 +5269,7 @@
+ */
+ cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
+ BUG_ON(!cgroup_destroy_wq);
++ BUG_ON(swork_get());
+ return 0;
+ }
+ core_initcall(cgroup_wq_init);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/cgroup/cpuset.c linux-4.14/kernel/cgroup/cpuset.c
+--- linux-4.14.orig/kernel/cgroup/cpuset.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/cgroup/cpuset.c 2018-09-05 11:05:07.000000000 +0200
+@@ -288,7 +288,7 @@
*/
- enum {
-@@ -99,13 +100,14 @@ enum {
- IRQ_PER_CPU_DEVID = (1 << 17),
- IRQ_IS_POLLED = (1 << 18),
- IRQ_DISABLE_UNLAZY = (1 << 19),
-+ IRQ_NO_SOFTIRQ_CALL = (1 << 20),
- };
- #define IRQF_MODIFY_MASK \
- (IRQ_TYPE_SENSE_MASK | IRQ_NOPROBE | IRQ_NOREQUEST | \
- IRQ_NOAUTOEN | IRQ_MOVE_PCNTXT | IRQ_LEVEL | IRQ_NO_BALANCING | \
- IRQ_PER_CPU | IRQ_NESTED_THREAD | IRQ_NOTHREAD | IRQ_PER_CPU_DEVID | \
-- IRQ_IS_POLLED | IRQ_DISABLE_UNLAZY)
-+ IRQ_IS_POLLED | IRQ_DISABLE_UNLAZY | IRQ_NO_SOFTIRQ_CALL)
+ static DEFINE_MUTEX(cpuset_mutex);
+-static DEFINE_SPINLOCK(callback_lock);
++static DEFINE_RAW_SPINLOCK(callback_lock);
- #define IRQ_NO_BALANCING_MASK (IRQ_PER_CPU | IRQ_NO_BALANCING)
+ static struct workqueue_struct *cpuset_migrate_mm_wq;
-diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
-index 47b9ebd4a74f..2543aab05daa 100644
---- a/include/linux/irq_work.h
-+++ b/include/linux/irq_work.h
-@@ -16,6 +16,7 @@
- #define IRQ_WORK_BUSY 2UL
- #define IRQ_WORK_FLAGS 3UL
- #define IRQ_WORK_LAZY 4UL /* Doesn't want IPI, wait for tick */
-+#define IRQ_WORK_HARD_IRQ 8UL /* Run hard IRQ context, even on RT */
+@@ -926,9 +926,9 @@
+ continue;
+ rcu_read_unlock();
- struct irq_work {
- unsigned long flags;
-@@ -51,4 +52,10 @@ static inline bool irq_work_needs_cpu(void) { return false; }
- static inline void irq_work_run(void) { }
- #endif
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ cpumask_copy(cp->effective_cpus, new_cpus);
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
-+#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
-+void irq_work_tick_soft(void);
-+#else
-+static inline void irq_work_tick_soft(void) { }
-+#endif
-+
- #endif /* _LINUX_IRQ_WORK_H */
-diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
-index c9be57931b58..eeeb540971ae 100644
---- a/include/linux/irqdesc.h
-+++ b/include/linux/irqdesc.h
-@@ -66,6 +66,7 @@ struct irq_desc {
- unsigned int irqs_unhandled;
- atomic_t threads_handled;
- int threads_handled_last;
-+ u64 random_ip;
- raw_spinlock_t lock;
- struct cpumask *percpu_enabled;
- const struct cpumask *percpu_affinity;
-diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
-index 5dd1272d1ab2..9b77034f7c5e 100644
---- a/include/linux/irqflags.h
-+++ b/include/linux/irqflags.h
-@@ -25,8 +25,6 @@
- # define trace_softirqs_enabled(p) ((p)->softirqs_enabled)
- # define trace_hardirq_enter() do { current->hardirq_context++; } while (0)
- # define trace_hardirq_exit() do { current->hardirq_context--; } while (0)
--# define lockdep_softirq_enter() do { current->softirq_context++; } while (0)
--# define lockdep_softirq_exit() do { current->softirq_context--; } while (0)
- # define INIT_TRACE_IRQFLAGS .softirqs_enabled = 1,
- #else
- # define trace_hardirqs_on() do { } while (0)
-@@ -39,9 +37,15 @@
- # define trace_softirqs_enabled(p) 0
- # define trace_hardirq_enter() do { } while (0)
- # define trace_hardirq_exit() do { } while (0)
-+# define INIT_TRACE_IRQFLAGS
-+#endif
-+
-+#if defined(CONFIG_TRACE_IRQFLAGS) && !defined(CONFIG_PREEMPT_RT_FULL)
-+# define lockdep_softirq_enter() do { current->softirq_context++; } while (0)
-+# define lockdep_softirq_exit() do { current->softirq_context--; } while (0)
-+#else
- # define lockdep_softirq_enter() do { } while (0)
- # define lockdep_softirq_exit() do { } while (0)
--# define INIT_TRACE_IRQFLAGS
- #endif
+ WARN_ON(!is_in_v2_mode() &&
+ !cpumask_equal(cp->cpus_allowed, cp->effective_cpus));
+@@ -993,9 +993,9 @@
+ if (retval < 0)
+ return retval;
- #if defined(CONFIG_IRQSOFF_TRACER) || \
-@@ -148,4 +152,23 @@
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed);
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+
+ /* use trialcs->cpus_allowed as a temp variable */
+ update_cpumasks_hier(cs, trialcs->cpus_allowed);
+@@ -1179,9 +1179,9 @@
+ continue;
+ rcu_read_unlock();
+
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ cp->effective_mems = *new_mems;
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+
+ WARN_ON(!is_in_v2_mode() &&
+ !nodes_equal(cp->mems_allowed, cp->effective_mems));
+@@ -1249,9 +1249,9 @@
+ if (retval < 0)
+ goto done;
+
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ cs->mems_allowed = trialcs->mems_allowed;
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+
+ /* use trialcs->mems_allowed as a temp variable */
+ update_nodemasks_hier(cs, &trialcs->mems_allowed);
+@@ -1342,9 +1342,9 @@
+ spread_flag_changed = ((is_spread_slab(cs) != is_spread_slab(trialcs))
+ || (is_spread_page(cs) != is_spread_page(trialcs)));
+
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ cs->flags = trialcs->flags;
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+
+ if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed)
+ rebuild_sched_domains_locked();
+@@ -1759,7 +1759,7 @@
+ cpuset_filetype_t type = seq_cft(sf)->private;
+ int ret = 0;
- #define irqs_disabled_flags(flags) raw_irqs_disabled_flags(flags)
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
-+/*
-+ * local_irq* variants depending on RT/!RT
-+ */
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# define local_irq_disable_nort() do { } while (0)
-+# define local_irq_enable_nort() do { } while (0)
-+# define local_irq_save_nort(flags) local_save_flags(flags)
-+# define local_irq_restore_nort(flags) (void)(flags)
-+# define local_irq_disable_rt() local_irq_disable()
-+# define local_irq_enable_rt() local_irq_enable()
-+#else
-+# define local_irq_disable_nort() local_irq_disable()
-+# define local_irq_enable_nort() local_irq_enable()
-+# define local_irq_save_nort(flags) local_irq_save(flags)
-+# define local_irq_restore_nort(flags) local_irq_restore(flags)
-+# define local_irq_disable_rt() do { } while (0)
-+# define local_irq_enable_rt() do { } while (0)
-+#endif
-+
- #endif
-diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
-index dfaa1f4dcb0c..d57dd06544a1 100644
---- a/include/linux/jbd2.h
-+++ b/include/linux/jbd2.h
-@@ -347,32 +347,56 @@ static inline struct journal_head *bh2jh(struct buffer_head *bh)
+ switch (type) {
+ case FILE_CPULIST:
+@@ -1778,7 +1778,7 @@
+ ret = -EINVAL;
+ }
- static inline void jbd_lock_bh_state(struct buffer_head *bh)
- {
-+#ifndef CONFIG_PREEMPT_RT_BASE
- bit_spin_lock(BH_State, &bh->b_state);
-+#else
-+ spin_lock(&bh->b_state_lock);
-+#endif
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+ return ret;
}
- static inline int jbd_trylock_bh_state(struct buffer_head *bh)
- {
-+#ifndef CONFIG_PREEMPT_RT_BASE
- return bit_spin_trylock(BH_State, &bh->b_state);
-+#else
-+ return spin_trylock(&bh->b_state_lock);
-+#endif
- }
+@@ -1993,12 +1993,12 @@
- static inline int jbd_is_locked_bh_state(struct buffer_head *bh)
- {
-+#ifndef CONFIG_PREEMPT_RT_BASE
- return bit_spin_is_locked(BH_State, &bh->b_state);
-+#else
-+ return spin_is_locked(&bh->b_state_lock);
-+#endif
- }
+ cpuset_inc();
- static inline void jbd_unlock_bh_state(struct buffer_head *bh)
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ if (is_in_v2_mode()) {
+ cpumask_copy(cs->effective_cpus, parent->effective_cpus);
+ cs->effective_mems = parent->effective_mems;
+ }
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+
+ if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags))
+ goto out_unlock;
+@@ -2025,12 +2025,12 @@
+ }
+ rcu_read_unlock();
+
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ cs->mems_allowed = parent->mems_allowed;
+ cs->effective_mems = parent->mems_allowed;
+ cpumask_copy(cs->cpus_allowed, parent->cpus_allowed);
+ cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+ out_unlock:
+ mutex_unlock(&cpuset_mutex);
+ return 0;
+@@ -2069,7 +2069,7 @@
+ static void cpuset_bind(struct cgroup_subsys_state *root_css)
{
-+#ifndef CONFIG_PREEMPT_RT_BASE
- bit_spin_unlock(BH_State, &bh->b_state);
-+#else
-+ spin_unlock(&bh->b_state_lock);
-+#endif
+ mutex_lock(&cpuset_mutex);
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+
+ if (is_in_v2_mode()) {
+ cpumask_copy(top_cpuset.cpus_allowed, cpu_possible_mask);
+@@ -2080,7 +2080,7 @@
+ top_cpuset.mems_allowed = top_cpuset.effective_mems;
+ }
+
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+ mutex_unlock(&cpuset_mutex);
}
- static inline void jbd_lock_bh_journal_head(struct buffer_head *bh)
- {
-+#ifndef CONFIG_PREEMPT_RT_BASE
- bit_spin_lock(BH_JournalHead, &bh->b_state);
-+#else
-+ spin_lock(&bh->b_journal_head_lock);
-+#endif
+@@ -2094,7 +2094,7 @@
+ if (task_css_is_root(task, cpuset_cgrp_id))
+ return;
+
+- set_cpus_allowed_ptr(task, &current->cpus_allowed);
++ set_cpus_allowed_ptr(task, current->cpus_ptr);
+ task->mems_allowed = current->mems_allowed;
}
- static inline void jbd_unlock_bh_journal_head(struct buffer_head *bh)
+@@ -2178,12 +2178,12 @@
{
-+#ifndef CONFIG_PREEMPT_RT_BASE
- bit_spin_unlock(BH_JournalHead, &bh->b_state);
-+#else
-+ spin_unlock(&bh->b_journal_head_lock);
-+#endif
- }
+ bool is_empty;
- #define J_ASSERT(assert) BUG_ON(!(assert))
-diff --git a/include/linux/kdb.h b/include/linux/kdb.h
-index 410decacff8f..0861bebfc188 100644
---- a/include/linux/kdb.h
-+++ b/include/linux/kdb.h
-@@ -167,6 +167,7 @@ extern __printf(2, 0) int vkdb_printf(enum kdb_msgsrc src, const char *fmt,
- extern __printf(1, 2) int kdb_printf(const char *, ...);
- typedef __printf(1, 2) int (*kdb_printf_t)(const char *, ...);
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ cpumask_copy(cs->cpus_allowed, new_cpus);
+ cpumask_copy(cs->effective_cpus, new_cpus);
+ cs->mems_allowed = *new_mems;
+ cs->effective_mems = *new_mems;
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
-+#define in_kdb_printk() (kdb_trap_printk)
- extern void kdb_init(int level);
+ /*
+ * Don't call update_tasks_cpumask() if the cpuset becomes empty,
+@@ -2220,10 +2220,10 @@
+ if (nodes_empty(*new_mems))
+ *new_mems = parent_cs(cs)->effective_mems;
- /* Access to kdb specific polling devices */
-@@ -201,6 +202,7 @@ extern int kdb_register_flags(char *, kdb_func_t, char *, char *,
- extern int kdb_unregister(char *);
- #else /* ! CONFIG_KGDB_KDB */
- static inline __printf(1, 2) int kdb_printf(const char *fmt, ...) { return 0; }
-+#define in_kdb_printk() (0)
- static inline void kdb_init(int level) {}
- static inline int kdb_register(char *cmd, kdb_func_t func, char *usage,
- char *help, short minlen) { return 0; }
-diff --git a/include/linux/kernel.h b/include/linux/kernel.h
-index bc6ed52a39b9..7894d55e4998 100644
---- a/include/linux/kernel.h
-+++ b/include/linux/kernel.h
-@@ -194,6 +194,9 @@ extern int _cond_resched(void);
- */
- # define might_sleep() \
- do { __might_sleep(__FILE__, __LINE__, 0); might_resched(); } while (0)
-+
-+# define might_sleep_no_state_check() \
-+ do { ___might_sleep(__FILE__, __LINE__, 0); might_resched(); } while (0)
- # define sched_annotate_sleep() (current->task_state_change = 0)
- #else
- static inline void ___might_sleep(const char *file, int line,
-@@ -201,6 +204,7 @@ extern int _cond_resched(void);
- static inline void __might_sleep(const char *file, int line,
- int preempt_offset) { }
- # define might_sleep() do { might_resched(); } while (0)
-+# define might_sleep_no_state_check() do { might_resched(); } while (0)
- # define sched_annotate_sleep() do { } while (0)
- #endif
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ cpumask_copy(cs->effective_cpus, new_cpus);
+ cs->effective_mems = *new_mems;
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
-@@ -488,6 +492,7 @@ extern enum system_states {
- SYSTEM_HALT,
- SYSTEM_POWER_OFF,
- SYSTEM_RESTART,
-+ SYSTEM_SUSPEND,
- } system_state;
+ if (cpus_updated)
+ update_tasks_cpumask(cs);
+@@ -2316,21 +2316,21 @@
- #define TAINT_PROPRIETARY_MODULE 0
-diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
-index cb483305e1f5..4e5062316bb6 100644
---- a/include/linux/list_bl.h
-+++ b/include/linux/list_bl.h
-@@ -2,6 +2,7 @@
- #define _LINUX_LIST_BL_H
+ /* synchronize cpus_allowed to cpu_active_mask */
+ if (cpus_updated) {
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ if (!on_dfl)
+ cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
+ cpumask_copy(top_cpuset.effective_cpus, &new_cpus);
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+ /* we don't mess with cpumasks of tasks in top_cpuset */
+ }
- #include <linux/list.h>
-+#include <linux/spinlock.h>
- #include <linux/bit_spinlock.h>
+ /* synchronize mems_allowed to N_MEMORY */
+ if (mems_updated) {
+- spin_lock_irq(&callback_lock);
++ raw_spin_lock_irq(&callback_lock);
+ if (!on_dfl)
+ top_cpuset.mems_allowed = new_mems;
+ top_cpuset.effective_mems = new_mems;
+- spin_unlock_irq(&callback_lock);
++ raw_spin_unlock_irq(&callback_lock);
+ update_tasks_nodemask(&top_cpuset);
+ }
- /*
-@@ -32,13 +33,24 @@
+@@ -2429,11 +2429,11 @@
+ {
+ unsigned long flags;
- struct hlist_bl_head {
- struct hlist_bl_node *first;
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ raw_spinlock_t lock;
-+#endif
- };
+- spin_lock_irqsave(&callback_lock, flags);
++ raw_spin_lock_irqsave(&callback_lock, flags);
+ rcu_read_lock();
+ guarantee_online_cpus(task_cs(tsk), pmask);
+ rcu_read_unlock();
+- spin_unlock_irqrestore(&callback_lock, flags);
++ raw_spin_unlock_irqrestore(&callback_lock, flags);
+ }
- struct hlist_bl_node {
- struct hlist_bl_node *next, **pprev;
- };
--#define INIT_HLIST_BL_HEAD(ptr) \
-- ((ptr)->first = NULL)
-+
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+#define INIT_HLIST_BL_HEAD(h) \
-+do { \
-+ (h)->first = NULL; \
-+ raw_spin_lock_init(&(h)->lock); \
-+} while (0)
-+#else
-+#define INIT_HLIST_BL_HEAD(h) (h)->first = NULL
-+#endif
+ void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
+@@ -2481,11 +2481,11 @@
+ nodemask_t mask;
+ unsigned long flags;
- static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)
- {
-@@ -118,12 +130,26 @@ static inline void hlist_bl_del_init(struct hlist_bl_node *n)
+- spin_lock_irqsave(&callback_lock, flags);
++ raw_spin_lock_irqsave(&callback_lock, flags);
+ rcu_read_lock();
+ guarantee_online_mems(task_cs(tsk), &mask);
+ rcu_read_unlock();
+- spin_unlock_irqrestore(&callback_lock, flags);
++ raw_spin_unlock_irqrestore(&callback_lock, flags);
- static inline void hlist_bl_lock(struct hlist_bl_head *b)
- {
-+#ifndef CONFIG_PREEMPT_RT_BASE
- bit_spin_lock(0, (unsigned long *)b);
-+#else
-+ raw_spin_lock(&b->lock);
-+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-+ __set_bit(0, (unsigned long *)b);
-+#endif
-+#endif
+ return mask;
}
+@@ -2577,14 +2577,14 @@
+ return true;
+
+ /* Not hardwall and node outside mems_allowed: scan up cpusets */
+- spin_lock_irqsave(&callback_lock, flags);
++ raw_spin_lock_irqsave(&callback_lock, flags);
+
+ rcu_read_lock();
+ cs = nearest_hardwall_ancestor(task_cs(current));
+ allowed = node_isset(node, cs->mems_allowed);
+ rcu_read_unlock();
- static inline void hlist_bl_unlock(struct hlist_bl_head *b)
- {
-+#ifndef CONFIG_PREEMPT_RT_BASE
- __bit_spin_unlock(0, (unsigned long *)b);
-+#else
-+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
-+ __clear_bit(0, (unsigned long *)b);
-+#endif
-+ raw_spin_unlock(&b->lock);
-+#endif
+- spin_unlock_irqrestore(&callback_lock, flags);
++ raw_spin_unlock_irqrestore(&callback_lock, flags);
+ return allowed;
}
- static inline bool hlist_bl_is_locked(struct hlist_bl_head *b)
-diff --git a/include/linux/locallock.h b/include/linux/locallock.h
-new file mode 100644
-index 000000000000..845c77f1a5ca
---- /dev/null
-+++ b/include/linux/locallock.h
-@@ -0,0 +1,278 @@
-+#ifndef _LINUX_LOCALLOCK_H
-+#define _LINUX_LOCALLOCK_H
-+
-+#include <linux/percpu.h>
-+#include <linux/spinlock.h>
-+
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+
-+#ifdef CONFIG_DEBUG_SPINLOCK
-+# define LL_WARN(cond) WARN_ON(cond)
-+#else
-+# define LL_WARN(cond) do { } while (0)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/cpu.c linux-4.14/kernel/cpu.c
+--- linux-4.14.orig/kernel/cpu.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/cpu.c 2018-09-05 11:05:07.000000000 +0200
+@@ -74,6 +74,11 @@
+ .fail = CPUHP_INVALID,
+ };
+
++#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PREEMPT_RT_FULL)
++static DEFINE_PER_CPU(struct rt_rw_lock, cpuhp_pin_lock) = \
++ __RWLOCK_RT_INITIALIZER(cpuhp_pin_lock);
+#endif
+
-+/*
-+ * per cpu lock based substitute for local_irq_*()
-+ */
-+struct local_irq_lock {
-+ spinlock_t lock;
-+ struct task_struct *owner;
-+ int nestcnt;
-+ unsigned long flags;
-+};
-+
-+#define DEFINE_LOCAL_IRQ_LOCK(lvar) \
-+ DEFINE_PER_CPU(struct local_irq_lock, lvar) = { \
-+ .lock = __SPIN_LOCK_UNLOCKED((lvar).lock) }
-+
-+#define DECLARE_LOCAL_IRQ_LOCK(lvar) \
-+ DECLARE_PER_CPU(struct local_irq_lock, lvar)
-+
-+#define local_irq_lock_init(lvar) \
-+ do { \
-+ int __cpu; \
-+ for_each_possible_cpu(__cpu) \
-+ spin_lock_init(&per_cpu(lvar, __cpu).lock); \
-+ } while (0)
-+
-+/*
-+ * spin_lock|trylock|unlock_local flavour that does not migrate disable
-+ * used for __local_lock|trylock|unlock where get_local_var/put_local_var
-+ * already takes care of the migrate_disable/enable
-+ * for CONFIG_PREEMPT_BASE map to the normal spin_* calls.
+ #if defined(CONFIG_LOCKDEP) && defined(CONFIG_SMP)
+ static struct lockdep_map cpuhp_state_up_map =
+ STATIC_LOCKDEP_MAP_INIT("cpuhp_state-up", &cpuhp_state_up_map);
+@@ -287,6 +292,55 @@
+
+ #ifdef CONFIG_HOTPLUG_CPU
+
++/**
++ * pin_current_cpu - Prevent the current cpu from being unplugged
+ */
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# define spin_lock_local(lock) rt_spin_lock__no_mg(lock)
-+# define spin_trylock_local(lock) rt_spin_trylock__no_mg(lock)
-+# define spin_unlock_local(lock) rt_spin_unlock__no_mg(lock)
-+#else
-+# define spin_lock_local(lock) spin_lock(lock)
-+# define spin_trylock_local(lock) spin_trylock(lock)
-+# define spin_unlock_local(lock) spin_unlock(lock)
-+#endif
-+
-+static inline void __local_lock(struct local_irq_lock *lv)
-+{
-+ if (lv->owner != current) {
-+ spin_lock_local(&lv->lock);
-+ LL_WARN(lv->owner);
-+ LL_WARN(lv->nestcnt);
-+ lv->owner = current;
-+ }
-+ lv->nestcnt++;
-+}
-+
-+#define local_lock(lvar) \
-+ do { __local_lock(&get_local_var(lvar)); } while (0)
-+
-+#define local_lock_on(lvar, cpu) \
-+ do { __local_lock(&per_cpu(lvar, cpu)); } while (0)
-+
-+static inline int __local_trylock(struct local_irq_lock *lv)
-+{
-+ if (lv->owner != current && spin_trylock_local(&lv->lock)) {
-+ LL_WARN(lv->owner);
-+ LL_WARN(lv->nestcnt);
-+ lv->owner = current;
-+ lv->nestcnt = 1;
-+ return 1;
-+ }
-+ return 0;
-+}
-+
-+#define local_trylock(lvar) \
-+ ({ \
-+ int __locked; \
-+ __locked = __local_trylock(&get_local_var(lvar)); \
-+ if (!__locked) \
-+ put_local_var(lvar); \
-+ __locked; \
-+ })
-+
-+static inline void __local_unlock(struct local_irq_lock *lv)
-+{
-+ LL_WARN(lv->nestcnt == 0);
-+ LL_WARN(lv->owner != current);
-+ if (--lv->nestcnt)
-+ return;
-+
-+ lv->owner = NULL;
-+ spin_unlock_local(&lv->lock);
-+}
-+
-+#define local_unlock(lvar) \
-+ do { \
-+ __local_unlock(this_cpu_ptr(&lvar)); \
-+ put_local_var(lvar); \
-+ } while (0)
-+
-+#define local_unlock_on(lvar, cpu) \
-+ do { __local_unlock(&per_cpu(lvar, cpu)); } while (0)
-+
-+static inline void __local_lock_irq(struct local_irq_lock *lv)
-+{
-+ spin_lock_irqsave(&lv->lock, lv->flags);
-+ LL_WARN(lv->owner);
-+ LL_WARN(lv->nestcnt);
-+ lv->owner = current;
-+ lv->nestcnt = 1;
-+}
-+
-+#define local_lock_irq(lvar) \
-+ do { __local_lock_irq(&get_local_var(lvar)); } while (0)
-+
-+#define local_lock_irq_on(lvar, cpu) \
-+ do { __local_lock_irq(&per_cpu(lvar, cpu)); } while (0)
-+
-+static inline void __local_unlock_irq(struct local_irq_lock *lv)
-+{
-+ LL_WARN(!lv->nestcnt);
-+ LL_WARN(lv->owner != current);
-+ lv->owner = NULL;
-+ lv->nestcnt = 0;
-+ spin_unlock_irq(&lv->lock);
-+}
-+
-+#define local_unlock_irq(lvar) \
-+ do { \
-+ __local_unlock_irq(this_cpu_ptr(&lvar)); \
-+ put_local_var(lvar); \
-+ } while (0)
-+
-+#define local_unlock_irq_on(lvar, cpu) \
-+ do { \
-+ __local_unlock_irq(&per_cpu(lvar, cpu)); \
-+ } while (0)
-+
-+static inline int __local_lock_irqsave(struct local_irq_lock *lv)
-+{
-+ if (lv->owner != current) {
-+ __local_lock_irq(lv);
-+ return 0;
-+ } else {
-+ lv->nestcnt++;
-+ return 1;
-+ }
-+}
-+
-+#define local_lock_irqsave(lvar, _flags) \
-+ do { \
-+ if (__local_lock_irqsave(&get_local_var(lvar))) \
-+ put_local_var(lvar); \
-+ _flags = __this_cpu_read(lvar.flags); \
-+ } while (0)
-+
-+#define local_lock_irqsave_on(lvar, _flags, cpu) \
-+ do { \
-+ __local_lock_irqsave(&per_cpu(lvar, cpu)); \
-+ _flags = per_cpu(lvar, cpu).flags; \
-+ } while (0)
-+
-+static inline int __local_unlock_irqrestore(struct local_irq_lock *lv,
-+ unsigned long flags)
++void pin_current_cpu(void)
+{
-+ LL_WARN(!lv->nestcnt);
-+ LL_WARN(lv->owner != current);
-+ if (--lv->nestcnt)
-+ return 0;
-+
-+ lv->owner = NULL;
-+ spin_unlock_irqrestore(&lv->lock, lv->flags);
-+ return 1;
-+}
-+
-+#define local_unlock_irqrestore(lvar, flags) \
-+ do { \
-+ if (__local_unlock_irqrestore(this_cpu_ptr(&lvar), flags)) \
-+ put_local_var(lvar); \
-+ } while (0)
-+
-+#define local_unlock_irqrestore_on(lvar, flags, cpu) \
-+ do { \
-+ __local_unlock_irqrestore(&per_cpu(lvar, cpu), flags); \
-+ } while (0)
-+
-+#define local_spin_trylock_irq(lvar, lock) \
-+ ({ \
-+ int __locked; \
-+ local_lock_irq(lvar); \
-+ __locked = spin_trylock(lock); \
-+ if (!__locked) \
-+ local_unlock_irq(lvar); \
-+ __locked; \
-+ })
-+
-+#define local_spin_lock_irq(lvar, lock) \
-+ do { \
-+ local_lock_irq(lvar); \
-+ spin_lock(lock); \
-+ } while (0)
-+
-+#define local_spin_unlock_irq(lvar, lock) \
-+ do { \
-+ spin_unlock(lock); \
-+ local_unlock_irq(lvar); \
-+ } while (0)
-+
-+#define local_spin_lock_irqsave(lvar, lock, flags) \
-+ do { \
-+ local_lock_irqsave(lvar, flags); \
-+ spin_lock(lock); \
-+ } while (0)
-+
-+#define local_spin_unlock_irqrestore(lvar, lock, flags) \
-+ do { \
-+ spin_unlock(lock); \
-+ local_unlock_irqrestore(lvar, flags); \
-+ } while (0)
-+
-+#define get_locked_var(lvar, var) \
-+ (*({ \
-+ local_lock(lvar); \
-+ this_cpu_ptr(&var); \
-+ }))
-+
-+#define put_locked_var(lvar, var) local_unlock(lvar);
-+
-+#define local_lock_cpu(lvar) \
-+ ({ \
-+ local_lock(lvar); \
-+ smp_processor_id(); \
-+ })
-+
-+#define local_unlock_cpu(lvar) local_unlock(lvar)
-+
-+#else /* PREEMPT_RT_BASE */
++#ifdef CONFIG_PREEMPT_RT_FULL
++ struct rt_rw_lock *cpuhp_pin;
++ unsigned int cpu;
++ int ret;
+
-+#define DEFINE_LOCAL_IRQ_LOCK(lvar) __typeof__(const int) lvar
-+#define DECLARE_LOCAL_IRQ_LOCK(lvar) extern __typeof__(const int) lvar
++again:
++ cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock);
++ ret = __read_rt_trylock(cpuhp_pin);
++ if (ret) {
++ current->pinned_on_cpu = smp_processor_id();
++ return;
++ }
++ cpu = smp_processor_id();
++ preempt_lazy_enable();
++ preempt_enable();
+
-+static inline void local_irq_lock_init(int lvar) { }
++ __read_rt_lock(cpuhp_pin);
+
-+#define local_lock(lvar) preempt_disable()
-+#define local_unlock(lvar) preempt_enable()
-+#define local_lock_irq(lvar) local_irq_disable()
-+#define local_lock_irq_on(lvar, cpu) local_irq_disable()
-+#define local_unlock_irq(lvar) local_irq_enable()
-+#define local_unlock_irq_on(lvar, cpu) local_irq_enable()
-+#define local_lock_irqsave(lvar, flags) local_irq_save(flags)
-+#define local_unlock_irqrestore(lvar, flags) local_irq_restore(flags)
++ preempt_disable();
++ preempt_lazy_disable();
++ if (cpu != smp_processor_id()) {
++ __read_rt_unlock(cpuhp_pin);
++ goto again;
++ }
++ current->pinned_on_cpu = cpu;
++#endif
++}
+
-+#define local_spin_trylock_irq(lvar, lock) spin_trylock_irq(lock)
-+#define local_spin_lock_irq(lvar, lock) spin_lock_irq(lock)
-+#define local_spin_unlock_irq(lvar, lock) spin_unlock_irq(lock)
-+#define local_spin_lock_irqsave(lvar, lock, flags) \
-+ spin_lock_irqsave(lock, flags)
-+#define local_spin_unlock_irqrestore(lvar, lock, flags) \
-+ spin_unlock_irqrestore(lock, flags)
++/**
++ * unpin_current_cpu - Allow unplug of current cpu
++ */
++void unpin_current_cpu(void)
++{
++#ifdef CONFIG_PREEMPT_RT_FULL
++ struct rt_rw_lock *cpuhp_pin = this_cpu_ptr(&cpuhp_pin_lock);
+
-+#define get_locked_var(lvar, var) get_cpu_var(var)
-+#define put_locked_var(lvar, var) put_cpu_var(var)
++ if (WARN_ON(current->pinned_on_cpu != smp_processor_id()))
++ cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, current->pinned_on_cpu);
+
-+#define local_lock_cpu(lvar) get_cpu()
-+#define local_unlock_cpu(lvar) put_cpu()
++ current->pinned_on_cpu = -1;
++ __read_rt_unlock(cpuhp_pin);
++#endif
++}
+
+ DEFINE_STATIC_PERCPU_RWSEM(cpu_hotplug_lock);
+
+ void cpus_read_lock(void)
+@@ -843,6 +897,9 @@
+
+ static int takedown_cpu(unsigned int cpu)
+ {
++#ifdef CONFIG_PREEMPT_RT_FULL
++ struct rt_rw_lock *cpuhp_pin = per_cpu_ptr(&cpuhp_pin_lock, cpu);
++#endif
+ struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
+ int err;
+
+@@ -855,11 +912,18 @@
+ */
+ irq_lock_sparse();
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++ __write_rt_lock(cpuhp_pin);
+#endif
+
+ /*
+ * So now all preempt/rcu users must observe !cpu_active().
+ */
+ err = stop_machine_cpuslocked(take_cpu_down, NULL, cpumask_of(cpu));
+ if (err) {
++#ifdef CONFIG_PREEMPT_RT_FULL
++ __write_rt_unlock(cpuhp_pin);
+#endif
-diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
-index 08d947fc4c59..705fb564a605 100644
---- a/include/linux/mm_types.h
-+++ b/include/linux/mm_types.h
-@@ -11,6 +11,7 @@
- #include <linux/completion.h>
- #include <linux/cpumask.h>
- #include <linux/uprobes.h>
-+#include <linux/rcupdate.h>
- #include <linux/page-flags-layout.h>
- #include <linux/workqueue.h>
- #include <asm/page.h>
-@@ -509,6 +510,9 @@ struct mm_struct {
- bool tlb_flush_pending;
- #endif
- struct uprobes_state uprobes_state;
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ struct rcu_head delayed_drop;
+ /* CPU refused to die */
+ irq_unlock_sparse();
+ /* Unpark the hotplug thread so we can rollback there */
+@@ -878,6 +942,9 @@
+ wait_for_ap_thread(st, false);
+ BUG_ON(st->state != CPUHP_AP_IDLE_DEAD);
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++ __write_rt_unlock(cpuhp_pin);
+#endif
- #ifdef CONFIG_X86_INTEL_MPX
- /* address of the bounds directory */
- void __user *bd_addr;
-diff --git a/include/linux/mutex.h b/include/linux/mutex.h
-index 2cb7531e7d7a..b3fdfc820216 100644
---- a/include/linux/mutex.h
-+++ b/include/linux/mutex.h
-@@ -19,6 +19,17 @@
- #include <asm/processor.h>
- #include <linux/osq_lock.h>
+ /* Interrupts are moved away from the dying cpu, reenable alloc/free */
+ irq_unlock_sparse();
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
-+ , .dep_map = { .name = #lockname }
-+#else
-+# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/debug/kdb/kdb_io.c linux-4.14/kernel/debug/kdb/kdb_io.c
+--- linux-4.14.orig/kernel/debug/kdb/kdb_io.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/debug/kdb/kdb_io.c 2018-09-05 11:05:07.000000000 +0200
+@@ -854,9 +854,11 @@
+ va_list ap;
+ int r;
+
++ kdb_trap_printk++;
+ va_start(ap, fmt);
+ r = vkdb_printf(KDB_MSGSRC_INTERNAL, fmt, ap);
+ va_end(ap);
++ kdb_trap_printk--;
+
+ return r;
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/events/core.c linux-4.14/kernel/events/core.c
+--- linux-4.14.orig/kernel/events/core.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/events/core.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1065,7 +1065,7 @@
+ cpuctx->hrtimer_interval = ns_to_ktime(NSEC_PER_MSEC * interval);
+
+ raw_spin_lock_init(&cpuctx->hrtimer_lock);
+- hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
++ hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED_HARD);
+ timer->function = perf_mux_hrtimer_handler;
+ }
+
+@@ -8750,7 +8750,7 @@
+ if (!is_sampling_event(event))
+ return;
+
+- hrtimer_init(&hwc->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&hwc->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ hwc->hrtimer.function = perf_swevent_hrtimer;
+
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/exit.c linux-4.14/kernel/exit.c
+--- linux-4.14.orig/kernel/exit.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/exit.c 2018-09-05 11:05:07.000000000 +0200
+@@ -159,7 +159,7 @@
+ * Do this under ->siglock, we can race with another thread
+ * doing sigqueue_free() if we have SIGQUEUE_PREALLOC signals.
+ */
+- flush_sigqueue(&tsk->pending);
++ flush_task_sigqueue(tsk);
+ tsk->sighand = NULL;
+ spin_unlock(&sighand->siglock);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/fork.c linux-4.14/kernel/fork.c
+--- linux-4.14.orig/kernel/fork.c 2018-09-05 11:03:28.000000000 +0200
++++ linux-4.14/kernel/fork.c 2018-09-05 11:05:07.000000000 +0200
+@@ -40,6 +40,7 @@
+ #include <linux/hmm.h>
+ #include <linux/fs.h>
+ #include <linux/mm.h>
++#include <linux/kprobes.h>
+ #include <linux/vmacache.h>
+ #include <linux/nsproxy.h>
+ #include <linux/capability.h>
+@@ -407,13 +408,24 @@
+ if (atomic_dec_and_test(&sig->sigcnt))
+ free_signal_struct(sig);
+ }
+-
++#ifdef CONFIG_PREEMPT_RT_BASE
++static
+#endif
+ void __put_task_struct(struct task_struct *tsk)
+ {
+ WARN_ON(!tsk->exit_state);
+ WARN_ON(atomic_read(&tsk->usage));
+ WARN_ON(tsk == current);
+
++ /*
++ * Remove function-return probe instances associated with this
++ * task and put them back on the free list.
++ */
++ kprobe_flush_task(tsk);
+
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# include <linux/mutex_rt.h>
++ /* Task is done with its stack. */
++ put_task_stack(tsk);
++
+ cgroup_free(tsk);
+ task_numa_free(tsk);
+ security_task_free(tsk);
+@@ -424,7 +436,18 @@
+ if (!profile_handoff_task(tsk))
+ free_task(tsk);
+ }
++#ifndef CONFIG_PREEMPT_RT_BASE
+ EXPORT_SYMBOL_GPL(__put_task_struct);
+#else
++void __put_task_struct_cb(struct rcu_head *rhp)
++{
++ struct task_struct *tsk = container_of(rhp, struct task_struct, put_rcu);
+
- /*
- * Simple, straightforward mutexes with strict semantics:
- *
-@@ -99,13 +110,6 @@ do { \
- static inline void mutex_destroy(struct mutex *lock) {}
++ __put_task_struct(tsk);
++
++}
++EXPORT_SYMBOL_GPL(__put_task_struct_cb);
++#endif
+
+ void __init __weak arch_task_cache_init(void) { }
+
+@@ -563,7 +586,8 @@
+ #ifdef CONFIG_CC_STACKPROTECTOR
+ tsk->stack_canary = get_random_canary();
#endif
+-
++ if (orig->cpus_ptr == &orig->cpus_mask)
++ tsk->cpus_ptr = &tsk->cpus_mask;
+ /*
+ * One for us, one for whoever does the "release_task()" (usually
+ * parent)
+@@ -575,6 +599,7 @@
+ tsk->splice_pipe = NULL;
+ tsk->task_frag.page = NULL;
+ tsk->wake_q.next = NULL;
++ tsk->wake_q_sleeper.next = NULL;
--#ifdef CONFIG_DEBUG_LOCK_ALLOC
--# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \
-- , .dep_map = { .name = #lockname }
--#else
--# define __DEP_MAP_MUTEX_INITIALIZER(lockname)
--#endif
+ account_kernel_stack(tsk, 1);
+
+@@ -915,6 +940,19 @@
+ }
+ EXPORT_SYMBOL_GPL(__mmdrop);
+
++#ifdef CONFIG_PREEMPT_RT_BASE
++/*
++ * RCU callback for delayed mm drop. Not strictly rcu, but we don't
++ * want another facility to make this work.
++ */
++void __mmdrop_delayed(struct rcu_head *rhp)
++{
++ struct mm_struct *mm = container_of(rhp, struct mm_struct, delayed_drop);
++
++ __mmdrop(mm);
++}
++#endif
++
+ static inline void __mmput(struct mm_struct *mm)
+ {
+ VM_BUG_ON(atomic_read(&mm->mm_users));
+@@ -1494,6 +1532,9 @@
+ */
+ static void posix_cpu_timers_init(struct task_struct *tsk)
+ {
++#ifdef CONFIG_PREEMPT_RT_BASE
++ tsk->posix_timer_list = NULL;
++#endif
+ tsk->cputime_expires.prof_exp = 0;
+ tsk->cputime_expires.virt_exp = 0;
+ tsk->cputime_expires.sched_exp = 0;
+@@ -1646,6 +1687,7 @@
+ spin_lock_init(&p->alloc_lock);
+
+ init_sigpending(&p->pending);
++ p->sigqueue_cache = NULL;
+
+ p->utime = p->stime = p->gtime = 0;
+ #ifdef CONFIG_ARCH_HAS_SCALED_CPUTIME
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/futex.c linux-4.14/kernel/futex.c
+--- linux-4.14.orig/kernel/futex.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/futex.c 2018-09-05 11:05:07.000000000 +0200
+@@ -936,7 +936,9 @@
+ if (head->next != next) {
+ /* retain curr->pi_lock for the loop invariant */
+ raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
++ raw_spin_unlock_irq(&curr->pi_lock);
+ spin_unlock(&hb->lock);
++ raw_spin_lock_irq(&curr->pi_lock);
+ put_pi_state(pi_state);
+ continue;
+ }
+@@ -1430,6 +1432,7 @@
+ struct task_struct *new_owner;
+ bool postunlock = false;
+ DEFINE_WAKE_Q(wake_q);
++ DEFINE_WAKE_Q(wake_sleeper_q);
+ int ret = 0;
+
+ new_owner = rt_mutex_next_owner(&pi_state->pi_mutex);
+@@ -1491,13 +1494,13 @@
+ pi_state->owner = new_owner;
+ raw_spin_unlock(&new_owner->pi_lock);
+
+- postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
-
- #define __MUTEX_INITIALIZER(lockname) \
- { .count = ATOMIC_INIT(1) \
- , .wait_lock = __SPIN_LOCK_UNLOCKED(lockname.wait_lock) \
-@@ -173,6 +177,8 @@ extern int __must_check mutex_lock_killable(struct mutex *lock);
- extern int mutex_trylock(struct mutex *lock);
- extern void mutex_unlock(struct mutex *lock);
++ postunlock = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q,
++ &wake_sleeper_q);
+ out_unlock:
+ raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+
+ if (postunlock)
+- rt_mutex_postunlock(&wake_q);
++ rt_mutex_postunlock(&wake_q, &wake_sleeper_q);
+
+ return ret;
+ }
+@@ -2104,6 +2107,16 @@
+ requeue_pi_wake_futex(this, &key2, hb2);
+ drop_count++;
+ continue;
++ } else if (ret == -EAGAIN) {
++ /*
++ * Waiter was woken by timeout or
++ * signal and has set pi_blocked_on to
++ * PI_WAKEUP_INPROGRESS before we
++ * tried to enqueue it on the rtmutex.
++ */
++ this->pi_state = NULL;
++ put_pi_state(pi_state);
++ continue;
+ } else if (ret) {
+ /*
+ * rt_mutex_start_proxy_lock() detected a
+@@ -2642,10 +2655,9 @@
+ if (abs_time) {
+ to = &timeout;
+
+- hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
+- CLOCK_REALTIME : CLOCK_MONOTONIC,
+- HRTIMER_MODE_ABS);
+- hrtimer_init_sleeper(to, current);
++ hrtimer_init_sleeper_on_stack(to, (flags & FLAGS_CLOCKRT) ?
++ CLOCK_REALTIME : CLOCK_MONOTONIC,
++ HRTIMER_MODE_ABS, current);
+ hrtimer_set_expires_range_ns(&to->timer, *abs_time,
+ current->timer_slack_ns);
+ }
+@@ -2744,9 +2756,8 @@
+
+ if (time) {
+ to = &timeout;
+- hrtimer_init_on_stack(&to->timer, CLOCK_REALTIME,
+- HRTIMER_MODE_ABS);
+- hrtimer_init_sleeper(to, current);
++ hrtimer_init_sleeper_on_stack(to, CLOCK_REALTIME,
++ HRTIMER_MODE_ABS, current);
+ hrtimer_set_expires(&to->timer, *time);
+ }
+
+@@ -2801,7 +2812,7 @@
+ goto no_block;
+ }
+
+- rt_mutex_init_waiter(&rt_waiter);
++ rt_mutex_init_waiter(&rt_waiter, false);
+
+ /*
+ * On PREEMPT_RT_FULL, when hb->lock becomes an rt_mutex, we must not
+@@ -2816,9 +2827,18 @@
+ * lock handoff sequence.
+ */
+ raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
++ /*
++ * the migrate_disable() here disables migration in the in_atomic() fast
++ * path which is enabled again in the following spin_unlock(). We have
++ * one migrate_disable() pending in the slow-path which is reversed
++ * after the raw_spin_unlock_irq() where we leave the atomic context.
++ */
++ migrate_disable();
++
+ spin_unlock(q.lock_ptr);
+ ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current);
+ raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock);
++ migrate_enable();
+
+ if (ret) {
+ if (ret == 1)
+@@ -2965,11 +2985,21 @@
+ * observed.
+ */
+ raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
++ /*
++ * Magic trickery for now to make the RT migrate disable
++ * logic happy. The following spin_unlock() happens with
++ * interrupts disabled so the internal migrate_enable()
++ * won't undo the migrate_disable() which was issued when
++ * locking hb->lock.
++ */
++ migrate_disable();
+ spin_unlock(&hb->lock);
+
+ /* drops pi_state->pi_mutex.wait_lock */
+ ret = wake_futex_pi(uaddr, uval, pi_state);
+
++ migrate_enable();
++
+ put_pi_state(pi_state);
+
+ /*
+@@ -3127,7 +3157,7 @@
+ struct hrtimer_sleeper timeout, *to = NULL;
+ struct futex_pi_state *pi_state = NULL;
+ struct rt_mutex_waiter rt_waiter;
+- struct futex_hash_bucket *hb;
++ struct futex_hash_bucket *hb, *hb2;
+ union futex_key key2 = FUTEX_KEY_INIT;
+ struct futex_q q = futex_q_init;
+ int res, ret;
+@@ -3143,10 +3173,9 @@
+
+ if (abs_time) {
+ to = &timeout;
+- hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
+- CLOCK_REALTIME : CLOCK_MONOTONIC,
+- HRTIMER_MODE_ABS);
+- hrtimer_init_sleeper(to, current);
++ hrtimer_init_sleeper_on_stack(to, (flags & FLAGS_CLOCKRT) ?
++ CLOCK_REALTIME : CLOCK_MONOTONIC,
++ HRTIMER_MODE_ABS, current);
+ hrtimer_set_expires_range_ns(&to->timer, *abs_time,
+ current->timer_slack_ns);
+ }
+@@ -3155,7 +3184,7 @@
+ * The waiter is allocated on our stack, manipulated by the requeue
+ * code while we sleep on uaddr.
+ */
+- rt_mutex_init_waiter(&rt_waiter);
++ rt_mutex_init_waiter(&rt_waiter, false);
-+#endif /* !PREEMPT_RT_FULL */
-+
- extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
+ ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, VERIFY_WRITE);
+ if (unlikely(ret != 0))
+@@ -3186,20 +3215,55 @@
+ /* Queue the futex_q, drop the hb lock, wait for wakeup. */
+ futex_wait_queue_me(hb, &q, to);
- #endif /* __LINUX_MUTEX_H */
-diff --git a/include/linux/mutex_rt.h b/include/linux/mutex_rt.h
-new file mode 100644
-index 000000000000..c38a44b14da5
---- /dev/null
-+++ b/include/linux/mutex_rt.h
-@@ -0,0 +1,84 @@
-+#ifndef __LINUX_MUTEX_RT_H
-+#define __LINUX_MUTEX_RT_H
-+
-+#ifndef __LINUX_MUTEX_H
-+#error "Please include mutex.h"
-+#endif
-+
-+#include <linux/rtmutex.h>
+- spin_lock(&hb->lock);
+- ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
+- spin_unlock(&hb->lock);
+- if (ret)
+- goto out_put_keys;
++ /*
++ * On RT we must avoid races with requeue and trying to block
++ * on two mutexes (hb->lock and uaddr2's rtmutex) by
++ * serializing access to pi_blocked_on with pi_lock.
++ */
++ raw_spin_lock_irq(&current->pi_lock);
++ if (current->pi_blocked_on) {
++ /*
++ * We have been requeued or are in the process of
++ * being requeued.
++ */
++ raw_spin_unlock_irq(&current->pi_lock);
++ } else {
++ /*
++ * Setting pi_blocked_on to PI_WAKEUP_INPROGRESS
++ * prevents a concurrent requeue from moving us to the
++ * uaddr2 rtmutex. After that we can safely acquire
++ * (and possibly block on) hb->lock.
++ */
++ current->pi_blocked_on = PI_WAKEUP_INPROGRESS;
++ raw_spin_unlock_irq(¤t->pi_lock);
+
-+/* FIXME: Just for __lockfunc */
-+#include <linux/spinlock.h>
++ spin_lock(&hb->lock);
+
-+struct mutex {
-+ struct rt_mutex lock;
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+ struct lockdep_map dep_map;
-+#endif
-+};
++ /*
++ * Clean up pi_blocked_on. We might leak it otherwise
++ * when we succeeded with the hb->lock in the fast
++ * path.
++ */
++ raw_spin_lock_irq(&current->pi_lock);
++ current->pi_blocked_on = NULL;
++ raw_spin_unlock_irq(&current->pi_lock);
+
-+#define __MUTEX_INITIALIZER(mutexname) \
-+ { \
-+ .lock = __RT_MUTEX_INITIALIZER(mutexname.lock) \
-+ __DEP_MAP_MUTEX_INITIALIZER(mutexname) \
++ ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
++ spin_unlock(&hb->lock);
++ if (ret)
++ goto out_put_keys;
+ }
-+
-+#define DEFINE_MUTEX(mutexname) \
-+ struct mutex mutexname = __MUTEX_INITIALIZER(mutexname)
-+
-+extern void __mutex_do_init(struct mutex *lock, const char *name, struct lock_class_key *key);
-+extern void __lockfunc _mutex_lock(struct mutex *lock);
-+extern int __lockfunc _mutex_lock_interruptible(struct mutex *lock);
-+extern int __lockfunc _mutex_lock_killable(struct mutex *lock);
-+extern void __lockfunc _mutex_lock_nested(struct mutex *lock, int subclass);
-+extern void __lockfunc _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
-+extern int __lockfunc _mutex_lock_interruptible_nested(struct mutex *lock, int subclass);
-+extern int __lockfunc _mutex_lock_killable_nested(struct mutex *lock, int subclass);
-+extern int __lockfunc _mutex_trylock(struct mutex *lock);
-+extern void __lockfunc _mutex_unlock(struct mutex *lock);
-+
-+#define mutex_is_locked(l) rt_mutex_is_locked(&(l)->lock)
-+#define mutex_lock(l) _mutex_lock(l)
-+#define mutex_lock_interruptible(l) _mutex_lock_interruptible(l)
-+#define mutex_lock_killable(l) _mutex_lock_killable(l)
-+#define mutex_trylock(l) _mutex_trylock(l)
-+#define mutex_unlock(l) _mutex_unlock(l)
-+#define mutex_destroy(l) rt_mutex_destroy(&(l)->lock)
-+
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+# define mutex_lock_nested(l, s) _mutex_lock_nested(l, s)
-+# define mutex_lock_interruptible_nested(l, s) \
-+ _mutex_lock_interruptible_nested(l, s)
-+# define mutex_lock_killable_nested(l, s) \
-+ _mutex_lock_killable_nested(l, s)
-+
-+# define mutex_lock_nest_lock(lock, nest_lock) \
-+do { \
-+ typecheck(struct lockdep_map *, &(nest_lock)->dep_map); \
-+ _mutex_lock_nest_lock(lock, &(nest_lock)->dep_map); \
-+} while (0)
-+
-+#else
-+# define mutex_lock_nested(l, s) _mutex_lock(l)
-+# define mutex_lock_interruptible_nested(l, s) \
-+ _mutex_lock_interruptible(l)
-+# define mutex_lock_killable_nested(l, s) \
-+ _mutex_lock_killable(l)
-+# define mutex_lock_nest_lock(lock, nest_lock) mutex_lock(lock)
-+#endif
-+
-+# define mutex_init(mutex) \
-+do { \
-+ static struct lock_class_key __key; \
-+ \
-+ rt_mutex_init(&(mutex)->lock); \
-+ __mutex_do_init((mutex), #mutex, &__key); \
-+} while (0)
-+
-+# define __mutex_init(mutex, name, key) \
-+do { \
-+ rt_mutex_init(&(mutex)->lock); \
-+ __mutex_do_init((mutex), name, key); \
-+} while (0)
-+
-+#endif
-diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
-index d83590ef74a1..0ae3b6cf430c 100644
---- a/include/linux/netdevice.h
-+++ b/include/linux/netdevice.h
-@@ -396,7 +396,19 @@ typedef enum rx_handler_result rx_handler_result_t;
- typedef rx_handler_result_t rx_handler_func_t(struct sk_buff **pskb);
- void __napi_schedule(struct napi_struct *n);
-+
-+/*
-+ * When PREEMPT_RT_FULL is defined, all device interrupt handlers
-+ * run as threads, and they can also be preempted (without PREEMPT_RT
-+ * interrupt threads can not be preempted). Which means that calling
-+ * __napi_schedule_irqoff() from an interrupt handler can be preempted
-+ * and can corrupt the napi->poll_list.
-+ */
+ /*
+- * In order for us to be here, we know our q.key == key2, and since
+- * we took the hb->lock above, we also know that futex_requeue() has
+- * completed and we no longer have to concern ourselves with a wakeup
+- * race with the atomic proxy lock acquisition by the requeue code. The
+- * futex_requeue dropped our key1 reference and incremented our key2
+- * reference count.
++ * In order to be here, we have either been requeued, are in
++ * the process of being requeued, or requeue successfully
++ * acquired uaddr2 on our behalf. If pi_blocked_on was
++ * non-null above, we may be racing with a requeue. Do not
++ * rely on q->lock_ptr to be hb2->lock until after blocking on
++ * hb->lock or hb2->lock. The futex_requeue dropped our key1
++ * reference and incremented our key2 reference count.
+ */
++ hb2 = hash_futex(&key2);
+
+ /* Check if the requeue code acquired the second futex for us. */
+ if (!q.rt_waiter) {
+@@ -3208,7 +3272,8 @@
+ * did a lock-steal - fix up the PI-state in that case.
+ */
+ if (q.pi_state && (q.pi_state->owner != current)) {
+- spin_lock(q.lock_ptr);
++ spin_lock(&hb2->lock);
++ BUG_ON(&hb2->lock != q.lock_ptr);
+ ret = fixup_pi_state_owner(uaddr2, &q, current);
+ if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) {
+ pi_state = q.pi_state;
+@@ -3219,7 +3284,7 @@
+ * the requeue_pi() code acquired for us.
+ */
+ put_pi_state(q.pi_state);
+- spin_unlock(q.lock_ptr);
++ spin_unlock(&hb2->lock);
+ }
+ } else {
+ struct rt_mutex *pi_mutex;
+@@ -3233,7 +3298,8 @@
+ pi_mutex = &q.pi_state->pi_mutex;
+ ret = rt_mutex_wait_proxy_lock(pi_mutex, to, &rt_waiter);
+
+- spin_lock(q.lock_ptr);
++ spin_lock(&hb2->lock);
++ BUG_ON(&hb2->lock != q.lock_ptr);
+ if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
+ ret = 0;
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/irq/handle.c linux-4.14/kernel/irq/handle.c
+--- linux-4.14.orig/kernel/irq/handle.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/irq/handle.c 2018-09-05 11:05:07.000000000 +0200
+@@ -183,10 +183,16 @@
+ {
+ irqreturn_t retval;
+ unsigned int flags = 0;
++ struct pt_regs *regs = get_irq_regs();
++ u64 ip = regs ? instruction_pointer(regs) : 0;
+
+ retval = __handle_irq_event_percpu(desc, &flags);
+
+- add_interrupt_randomness(desc->irq_data.irq, flags);
+#ifdef CONFIG_PREEMPT_RT_FULL
-+#define __napi_schedule_irqoff(n) __napi_schedule(n)
++ desc->random_ip = ip;
+#else
- void __napi_schedule_irqoff(struct napi_struct *n);
++ add_interrupt_randomness(desc->irq_data.irq, flags, ip);
+#endif
- static inline bool napi_disable_pending(struct napi_struct *n)
- {
-@@ -2461,14 +2473,53 @@ void netdev_freemem(struct net_device *dev);
- void synchronize_net(void);
- int init_dummy_netdev(struct net_device *dev);
+ if (!noirqdebug)
+ note_interrupt(desc, retval);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/irq/manage.c linux-4.14/kernel/irq/manage.c
+--- linux-4.14.orig/kernel/irq/manage.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/irq/manage.c 2018-09-05 11:05:07.000000000 +0200
+@@ -24,6 +24,7 @@
+ #include "internals.h"
--DECLARE_PER_CPU(int, xmit_recursion);
- #define XMIT_RECURSION_LIMIT 10
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+static inline int dev_recursion_level(void)
-+{
-+ return current->xmit_recursion;
-+}
-+
-+static inline int xmit_rec_read(void)
-+{
-+ return current->xmit_recursion;
-+}
-+
-+static inline void xmit_rec_inc(void)
-+{
-+ current->xmit_recursion++;
-+}
-+
-+static inline void xmit_rec_dec(void)
-+{
-+ current->xmit_recursion--;
-+}
+ #ifdef CONFIG_IRQ_FORCED_THREADING
++# ifndef CONFIG_PREEMPT_RT_BASE
+ __read_mostly bool force_irqthreads;
+
+ static int __init setup_forced_irqthreads(char *arg)
+@@ -32,6 +33,7 @@
+ return 0;
+ }
+ early_param("threadirqs", setup_forced_irqthreads);
++# endif
+ #endif
+
+ static void __synchronize_hardirq(struct irq_desc *desc)
+@@ -224,7 +226,12 @@
+
+ if (desc->affinity_notify) {
+ kref_get(&desc->affinity_notify->kref);
+
++#ifdef CONFIG_PREEMPT_RT_BASE
++ swork_queue(&desc->affinity_notify->swork);
+#else
-+
-+DECLARE_PER_CPU(int, xmit_recursion);
+ schedule_work(&desc->affinity_notify->work);
++#endif
+ }
+ irqd_set(data, IRQD_AFFINITY_SET);
- static inline int dev_recursion_level(void)
+@@ -262,10 +269,8 @@
+ }
+ EXPORT_SYMBOL_GPL(irq_set_affinity_hint);
+
+-static void irq_affinity_notify(struct work_struct *work)
++static void _irq_affinity_notify(struct irq_affinity_notify *notify)
{
- return this_cpu_read(xmit_recursion);
+- struct irq_affinity_notify *notify =
+- container_of(work, struct irq_affinity_notify, work);
+ struct irq_desc *desc = irq_to_desc(notify->irq);
+ cpumask_var_t cpumask;
+ unsigned long flags;
+@@ -287,6 +292,35 @@
+ kref_put(&notify->kref, notify->release);
}
-+static inline int xmit_rec_read(void)
++#ifdef CONFIG_PREEMPT_RT_BASE
++static void init_helper_thread(void)
+{
-+ return __this_cpu_read(xmit_recursion);
++ static int init_sworker_once;
++
++ if (init_sworker_once)
++ return;
++ if (WARN_ON(swork_get()))
++ return;
++ init_sworker_once = 1;
+}
+
-+static inline void xmit_rec_inc(void)
++static void irq_affinity_notify(struct swork_event *swork)
+{
-+ __this_cpu_inc(xmit_recursion);
++ struct irq_affinity_notify *notify =
++ container_of(swork, struct irq_affinity_notify, swork);
++ _irq_affinity_notify(notify);
+}
+
-+static inline void xmit_rec_dec(void)
++#else
++
++static void irq_affinity_notify(struct work_struct *work)
+{
-+ __this_cpu_dec(xmit_recursion);
++ struct irq_affinity_notify *notify =
++ container_of(work, struct irq_affinity_notify, work);
++ _irq_affinity_notify(notify);
+}
+#endif
+
- struct net_device *dev_get_by_index(struct net *net, int ifindex);
- struct net_device *__dev_get_by_index(struct net *net, int ifindex);
- struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex);
-@@ -2851,6 +2902,7 @@ struct softnet_data {
- unsigned int dropped;
- struct sk_buff_head input_pkt_queue;
- struct napi_struct backlog;
-+ struct sk_buff_head tofree_queue;
+ /**
+ * irq_set_affinity_notifier - control notification of IRQ affinity changes
+ * @irq: Interrupt for which to enable/disable notification
+@@ -315,7 +349,12 @@
+ if (notify) {
+ notify->irq = irq;
+ kref_init(&notify->kref);
++#ifdef CONFIG_PREEMPT_RT_BASE
++ INIT_SWORK(&notify->swork, irq_affinity_notify);
++ init_helper_thread();
++#else
+ INIT_WORK(&notify->work, irq_affinity_notify);
++#endif
+ }
- };
+ raw_spin_lock_irqsave(&desc->lock, flags);
+@@ -883,7 +922,15 @@
+ local_bh_disable();
+ ret = action->thread_fn(action->irq, action->dev_id);
+ irq_finalize_oneshot(desc, action);
+- local_bh_enable();
++ /*
++ * Interrupts which have real time requirements can be set up
++ * to avoid softirq processing in the thread handler. This is
++ * safe as these interrupts do not raise soft interrupts.
++ */
++ if (irq_settings_no_softirq_call(desc))
++ _local_bh_enable();
++ else
++ local_bh_enable();
+ return ret;
+ }
-diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
-index 2ad1a2b289b5..b4d10155af54 100644
---- a/include/linux/netfilter/x_tables.h
-+++ b/include/linux/netfilter/x_tables.h
-@@ -4,6 +4,7 @@
+@@ -980,6 +1027,12 @@
+ if (action_ret == IRQ_WAKE_THREAD)
+ irq_wake_secondary(desc, action);
- #include <linux/netdevice.h>
- #include <linux/static_key.h>
-+#include <linux/locallock.h>
- #include <uapi/linux/netfilter/x_tables.h>
++#ifdef CONFIG_PREEMPT_RT_FULL
++ migrate_disable();
++ add_interrupt_randomness(action->irq, 0,
++ desc->random_ip ^ (unsigned long) action);
++ migrate_enable();
++#endif
+ wake_threads_waitq(desc);
+ }
- /* Test a struct->invflags and a boolean for inequality */
-@@ -300,6 +301,8 @@ void xt_free_table_info(struct xt_table_info *info);
- */
- DECLARE_PER_CPU(seqcount_t, xt_recseq);
+@@ -1378,6 +1431,9 @@
+ irqd_set(&desc->irq_data, IRQD_NO_BALANCING);
+ }
-+DECLARE_LOCAL_IRQ_LOCK(xt_write_lock);
++ if (new->flags & IRQF_NO_SOFTIRQ_CALL)
++ irq_settings_set_no_softirq_call(desc);
+
- /* xt_tee_enabled - true if x_tables needs to handle reentrancy
+ if (irq_settings_can_autoenable(desc)) {
+ irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
+ } else {
+@@ -2159,7 +2215,7 @@
+ * This call sets the internal irqchip state of an interrupt,
+ * depending on the value of @which.
*
- * Enabled if current ip(6)tables ruleset has at least one -j TEE rule.
-@@ -320,6 +323,9 @@ static inline unsigned int xt_write_recseq_begin(void)
- {
- unsigned int addend;
-
-+ /* RT protection */
-+ local_lock(xt_write_lock);
-+
- /*
- * Low order bit of sequence is set if we already
- * called xt_write_recseq_begin().
-@@ -350,6 +356,7 @@ static inline void xt_write_recseq_end(unsigned int addend)
- /* this is kind of a write_seqcount_end(), but addend is 0 or 1 */
- smp_wmb();
- __this_cpu_add(xt_recseq.sequence, addend);
-+ local_unlock(xt_write_lock);
- }
+- * This function should be called with preemption disabled if the
++ * This function should be called with migration disabled if the
+ * interrupt controller has per-cpu registers.
+ */
+ int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/irq/settings.h linux-4.14/kernel/irq/settings.h
+--- linux-4.14.orig/kernel/irq/settings.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/irq/settings.h 2018-09-05 11:05:07.000000000 +0200
+@@ -17,6 +17,7 @@
+ _IRQ_PER_CPU_DEVID = IRQ_PER_CPU_DEVID,
+ _IRQ_IS_POLLED = IRQ_IS_POLLED,
+ _IRQ_DISABLE_UNLAZY = IRQ_DISABLE_UNLAZY,
++ _IRQ_NO_SOFTIRQ_CALL = IRQ_NO_SOFTIRQ_CALL,
+ _IRQF_MODIFY_MASK = IRQF_MODIFY_MASK,
+ };
- /*
-diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
-index 810124b33327..d54ca43d571f 100644
---- a/include/linux/nfs_fs.h
-+++ b/include/linux/nfs_fs.h
-@@ -165,7 +165,11 @@ struct nfs_inode {
+@@ -31,6 +32,7 @@
+ #define IRQ_PER_CPU_DEVID GOT_YOU_MORON
+ #define IRQ_IS_POLLED GOT_YOU_MORON
+ #define IRQ_DISABLE_UNLAZY GOT_YOU_MORON
++#define IRQ_NO_SOFTIRQ_CALL GOT_YOU_MORON
+ #undef IRQF_MODIFY_MASK
+ #define IRQF_MODIFY_MASK GOT_YOU_MORON
- /* Readers: in-flight sillydelete RPC calls */
- /* Writers: rmdir */
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ struct semaphore rmdir_sem;
-+#else
- struct rw_semaphore rmdir_sem;
-+#endif
+@@ -41,6 +43,16 @@
+ desc->status_use_accessors |= (set & _IRQF_MODIFY_MASK);
+ }
- #if IS_ENABLED(CONFIG_NFS_V4)
- struct nfs4_cached_acl *nfs4_acl;
-diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
-index beb1e10f446e..ebaf2e7bfe29 100644
---- a/include/linux/nfs_xdr.h
-+++ b/include/linux/nfs_xdr.h
-@@ -1490,7 +1490,7 @@ struct nfs_unlinkdata {
- struct nfs_removeargs args;
- struct nfs_removeres res;
- struct dentry *dentry;
-- wait_queue_head_t wq;
-+ struct swait_queue_head wq;
- struct rpc_cred *cred;
- struct nfs_fattr dir_attr;
- long timeout;
-diff --git a/include/linux/notifier.h b/include/linux/notifier.h
-index 4149868de4e6..babe5b9bcb91 100644
---- a/include/linux/notifier.h
-+++ b/include/linux/notifier.h
-@@ -6,7 +6,7 @@
- *
- * Alan Cox <Alan.Cox@linux.org>
- */
--
++static inline bool irq_settings_no_softirq_call(struct irq_desc *desc)
++{
++ return desc->status_use_accessors & _IRQ_NO_SOFTIRQ_CALL;
++}
+
- #ifndef _LINUX_NOTIFIER_H
- #define _LINUX_NOTIFIER_H
- #include <linux/errno.h>
-@@ -42,9 +42,7 @@
- * in srcu_notifier_call_chain(): no cache bounces and no memory barriers.
- * As compensation, srcu_notifier_chain_unregister() is rather expensive.
- * SRCU notifier chains should be used when the chain will be called very
-- * often but notifier_blocks will seldom be removed. Also, SRCU notifier
-- * chains are slightly more difficult to use because they require special
-- * runtime initialization.
-+ * often but notifier_blocks will seldom be removed.
- */
-
- struct notifier_block;
-@@ -90,7 +88,7 @@ struct srcu_notifier_head {
- (name)->head = NULL; \
- } while (0)
-
--/* srcu_notifier_heads must be initialized and cleaned up dynamically */
-+/* srcu_notifier_heads must be cleaned up dynamically */
- extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
- #define srcu_cleanup_notifier_head(name) \
- cleanup_srcu_struct(&(name)->srcu);
-@@ -103,7 +101,13 @@ extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
- .head = NULL }
- #define RAW_NOTIFIER_INIT(name) { \
- .head = NULL }
--/* srcu_notifier_heads cannot be initialized statically */
++static inline void irq_settings_set_no_softirq_call(struct irq_desc *desc)
++{
++ desc->status_use_accessors |= _IRQ_NO_SOFTIRQ_CALL;
++}
+
-+#define SRCU_NOTIFIER_INIT(name, pcpu) \
-+ { \
-+ .mutex = __MUTEX_INITIALIZER(name.mutex), \
-+ .head = NULL, \
-+ .srcu = __SRCU_STRUCT_INIT(name.srcu, pcpu), \
-+ }
+ static inline bool irq_settings_is_per_cpu(struct irq_desc *desc)
+ {
+ return desc->status_use_accessors & _IRQ_PER_CPU;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/irq/spurious.c linux-4.14/kernel/irq/spurious.c
+--- linux-4.14.orig/kernel/irq/spurious.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/irq/spurious.c 2018-09-05 11:05:07.000000000 +0200
+@@ -445,6 +445,10 @@
- #define ATOMIC_NOTIFIER_HEAD(name) \
- struct atomic_notifier_head name = \
-@@ -115,6 +119,18 @@ extern void srcu_init_notifier_head(struct srcu_notifier_head *nh);
- struct raw_notifier_head name = \
- RAW_NOTIFIER_INIT(name)
+ static int __init irqfixup_setup(char *str)
+ {
++#ifdef CONFIG_PREEMPT_RT_BASE
++ pr_warn("irqfixup boot option not supported w/ CONFIG_PREEMPT_RT_BASE\n");
++ return 1;
++#endif
+ irqfixup = 1;
+ printk(KERN_WARNING "Misrouted IRQ fixup support enabled.\n");
+ printk(KERN_WARNING "This may impact system performance.\n");
+@@ -457,6 +461,10 @@
-+#define _SRCU_NOTIFIER_HEAD(name, mod) \
-+ static DEFINE_PER_CPU(struct srcu_struct_array, \
-+ name##_head_srcu_array); \
-+ mod struct srcu_notifier_head name = \
-+ SRCU_NOTIFIER_INIT(name, name##_head_srcu_array)
-+
-+#define SRCU_NOTIFIER_HEAD(name) \
-+ _SRCU_NOTIFIER_HEAD(name, )
-+
-+#define SRCU_NOTIFIER_HEAD_STATIC(name) \
-+ _SRCU_NOTIFIER_HEAD(name, static)
-+
- #ifdef __KERNEL__
+ static int __init irqpoll_setup(char *str)
+ {
++#ifdef CONFIG_PREEMPT_RT_BASE
++ pr_warn("irqpoll boot option not supported w/ CONFIG_PREEMPT_RT_BASE\n");
++ return 1;
++#endif
+ irqfixup = 2;
+ printk(KERN_WARNING "Misrouted IRQ fixup and polling support "
+ "enabled\n");
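
With this change, booting a PREEMPT_RT_BASE kernel with "irqfixup" or
"irqpoll" on the command line only produces the warning added above, e.g.

    irqpoll boot option not supported w/ CONFIG_PREEMPT_RT_BASE

and the option is otherwise ignored, since the setup handlers return
before enabling the fixup/polling machinery.
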
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/irq_work.c linux-4.14/kernel/irq_work.c
+--- linux-4.14.orig/kernel/irq_work.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/irq_work.c 2018-09-05 11:05:07.000000000 +0200
+@@ -17,6 +17,7 @@
+ #include <linux/cpu.h>
+ #include <linux/notifier.h>
+ #include <linux/smp.h>
++#include <linux/interrupt.h>
+ #include <asm/processor.h>
- extern int atomic_notifier_chain_register(struct atomic_notifier_head *nh,
-@@ -184,12 +200,12 @@ static inline int notifier_to_errno(int ret)
- /*
- * Declared notifiers so far. I can imagine quite a few more chains
-- * over time (eg laptop power reset chains, reboot chain (to clean
-+ * over time (eg laptop power reset chains, reboot chain (to clean
- * device units up), device [un]mount chain, module load/unload chain,
-- * low memory chain, screenblank chain (for plug in modular screenblankers)
-+ * low memory chain, screenblank chain (for plug in modular screenblankers)
- * VC switch chains (for loadable kernel svgalib VC switch helpers) etc...
+@@ -65,6 +66,8 @@
*/
--
+ bool irq_work_queue_on(struct irq_work *work, int cpu)
+ {
++ struct llist_head *list;
+
- /* CPU notfiers are defined in include/linux/cpu.h. */
+ /* All work should have been flushed before going offline */
+ WARN_ON_ONCE(cpu_is_offline(cpu));
- /* netdevice notifiers are defined in include/linux/netdevice.h */
-diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
-index 5b2e6159b744..ea940f451606 100644
---- a/include/linux/percpu-rwsem.h
-+++ b/include/linux/percpu-rwsem.h
-@@ -4,7 +4,7 @@
- #include <linux/atomic.h>
- #include <linux/rwsem.h>
- #include <linux/percpu.h>
--#include <linux/wait.h>
-+#include <linux/swait.h>
- #include <linux/rcu_sync.h>
- #include <linux/lockdep.h>
+@@ -75,7 +78,12 @@
+ if (!irq_work_claim(work))
+ return false;
+
+- if (llist_add(&work->llnode, &per_cpu(raised_list, cpu)))
++ if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
++ list = &per_cpu(lazy_list, cpu);
++ else
++ list = &per_cpu(raised_list, cpu);
++
++ if (llist_add(&work->llnode, list))
+ arch_send_call_function_single_ipi(cpu);
-@@ -12,7 +12,7 @@ struct percpu_rw_semaphore {
- struct rcu_sync rss;
- unsigned int __percpu *read_count;
- struct rw_semaphore rw_sem;
-- wait_queue_head_t writer;
-+ struct swait_queue_head writer;
- int readers_block;
- };
+ return true;
+@@ -86,6 +94,9 @@
+ /* Enqueue the irq work @work on the current CPU */
+ bool irq_work_queue(struct irq_work *work)
+ {
++ struct llist_head *list;
++ bool lazy_work, realtime = IS_ENABLED(CONFIG_PREEMPT_RT_FULL);
++
+ /* Only queue if not already pending */
+ if (!irq_work_claim(work))
+ return false;
+@@ -93,13 +104,15 @@
+ /* Queue the entry and raise the IPI if needed. */
+ preempt_disable();
-@@ -22,13 +22,13 @@ static struct percpu_rw_semaphore name = { \
- .rss = __RCU_SYNC_INITIALIZER(name.rss, RCU_SCHED_SYNC), \
- .read_count = &__percpu_rwsem_rc_##name, \
- .rw_sem = __RWSEM_INITIALIZER(name.rw_sem), \
-- .writer = __WAIT_QUEUE_HEAD_INITIALIZER(name.writer), \
-+ .writer = __SWAIT_QUEUE_HEAD_INITIALIZER(name.writer), \
- }
+- /* If the work is "lazy", handle it from next tick if any */
+- if (work->flags & IRQ_WORK_LAZY) {
+- if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
+- tick_nohz_tick_stopped())
+- arch_irq_work_raise();
+- } else {
+- if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
++ lazy_work = work->flags & IRQ_WORK_LAZY;
++
++ if (lazy_work || (realtime && !(work->flags & IRQ_WORK_HARD_IRQ)))
++ list = this_cpu_ptr(&lazy_list);
++ else
++ list = this_cpu_ptr(&raised_list);
++
++ if (llist_add(&work->llnode, list)) {
++ if (!lazy_work || tick_nohz_tick_stopped())
+ arch_irq_work_raise();
+ }
- extern int __percpu_down_read(struct percpu_rw_semaphore *, int);
- extern void __percpu_up_read(struct percpu_rw_semaphore *);
+@@ -116,9 +129,8 @@
+ raised = this_cpu_ptr(&raised_list);
+ lazy = this_cpu_ptr(&lazy_list);
--static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *sem)
-+static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
- {
- might_sleep();
+- if (llist_empty(raised) || arch_irq_work_has_interrupt())
+- if (llist_empty(lazy))
+- return false;
++ if (llist_empty(raised) && llist_empty(lazy))
++ return false;
-@@ -46,16 +46,10 @@ static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *
- __this_cpu_inc(*sem->read_count);
- if (unlikely(!rcu_sync_is_idle(&sem->rss)))
- __percpu_down_read(sem, false); /* Unconditional memory barrier */
-- barrier();
- /*
-- * The barrier() prevents the compiler from
-+ * The preempt_enable() prevents the compiler from
- * bleeding the critical section out.
- */
--}
--
--static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
--{
-- percpu_down_read_preempt_disable(sem);
- preempt_enable();
- }
+ /* All work should have been flushed before going offline */
+ WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
+@@ -132,7 +144,7 @@
+ struct irq_work *work;
+ struct llist_node *llnode;
-@@ -82,13 +76,9 @@ static inline int percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
- return ret;
- }
+- BUG_ON(!irqs_disabled());
++ BUG_ON_NONRT(!irqs_disabled());
--static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem)
-+static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
+ if (llist_empty(list))
+ return;
+@@ -169,7 +181,16 @@
+ void irq_work_run(void)
{
-- /*
-- * The barrier() prevents the compiler from
-- * bleeding the critical section out.
-- */
-- barrier();
-+ preempt_disable();
- /*
- * Same as in percpu_down_read().
- */
-@@ -101,12 +91,6 @@ static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem
- rwsem_release(&sem->rw_sem.dep_map, 1, _RET_IP_);
+ irq_work_run_list(this_cpu_ptr(&raised_list));
+- irq_work_run_list(this_cpu_ptr(&lazy_list));
++ if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL)) {
++ /*
++ * NOTE: we raise softirq via IPI for safety,
++ * and execute in irq_work_tick() to move the
++ * overhead from hard to soft irq context.
++ */
++ if (!llist_empty(this_cpu_ptr(&lazy_list)))
++ raise_softirq(TIMER_SOFTIRQ);
++ } else
++ irq_work_run_list(this_cpu_ptr(&lazy_list));
}
+ EXPORT_SYMBOL_GPL(irq_work_run);
--static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
--{
-- preempt_disable();
-- percpu_up_read_preempt_enable(sem);
--}
--
- extern void percpu_down_write(struct percpu_rw_semaphore *);
- extern void percpu_up_write(struct percpu_rw_semaphore *);
-
-diff --git a/include/linux/percpu.h b/include/linux/percpu.h
-index 56939d3f6e53..1c7e33fc83e4 100644
---- a/include/linux/percpu.h
-+++ b/include/linux/percpu.h
-@@ -18,6 +18,35 @@
- #define PERCPU_MODULE_RESERVE 0
- #endif
+@@ -179,8 +200,17 @@
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+
-+#define get_local_var(var) (*({ \
-+ migrate_disable(); \
-+ this_cpu_ptr(&var); }))
-+
-+#define put_local_var(var) do { \
-+ (void)&(var); \
-+ migrate_enable(); \
-+} while (0)
-+
-+# define get_local_ptr(var) ({ \
-+ migrate_disable(); \
-+ this_cpu_ptr(var); })
-+
-+# define put_local_ptr(var) do { \
-+ (void)(var); \
-+ migrate_enable(); \
-+} while (0)
-+
-+#else
+ if (!llist_empty(raised) && !arch_irq_work_has_interrupt())
+ irq_work_run_list(raised);
+
-+#define get_local_var(var) get_cpu_var(var)
-+#define put_local_var(var) put_cpu_var(var)
-+#define get_local_ptr(var) get_cpu_ptr(var)
-+#define put_local_ptr(var) put_cpu_ptr(var)
++ if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
++ irq_work_run_list(this_cpu_ptr(&lazy_list));
++}
+
++#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
++void irq_work_tick_soft(void)
++{
+ irq_work_run_list(this_cpu_ptr(&lazy_list));
+ }
+#endif
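
On PREEMPT_RT_FULL only items flagged IRQ_WORK_HARD_IRQ stay on the
raised_list and run from the hard interrupt path; everything else is
deferred to the lazy_list and executed by irq_work_tick_soft() out of the
timer softirq. A minimal sketch of a caller that needs the hard-irq
behaviour (my_irq_work_fn and my_irq_work are illustrative names, not part
of this patch):

    #include <linux/irq_work.h>

    static void my_irq_work_fn(struct irq_work *work)
    {
            /* stays on the raised_list, runs from hard irq context on RT */
    }

    static struct irq_work my_irq_work = {
            .flags = IRQ_WORK_HARD_IRQ,
            .func  = my_irq_work_fn,
    };

    /* from interrupt or NMI context: */
    /* irq_work_queue(&my_irq_work); */
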
-+
- /* minimum unit size, also is the maximum supported allocation size */
- #define PCPU_MIN_UNIT_SIZE PFN_ALIGN(32 << 10)
-
-diff --git a/include/linux/pid.h b/include/linux/pid.h
-index 23705a53abba..2cc64b779f03 100644
---- a/include/linux/pid.h
-+++ b/include/linux/pid.h
-@@ -2,6 +2,7 @@
- #define _LINUX_PID_H
- #include <linux/rcupdate.h>
-+#include <linux/atomic.h>
-
- enum pid_type
- {
-diff --git a/include/linux/preempt.h b/include/linux/preempt.h
-index 75e4e30677f1..1cfb1cb72354 100644
---- a/include/linux/preempt.h
-+++ b/include/linux/preempt.h
-@@ -50,7 +50,11 @@
- #define HARDIRQ_OFFSET (1UL << HARDIRQ_SHIFT)
- #define NMI_OFFSET (1UL << NMI_SHIFT)
+ /*
+ * Synchronize against the irq_work @entry, ensures the entry is not
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/Kconfig.locks linux-4.14/kernel/Kconfig.locks
+--- linux-4.14.orig/kernel/Kconfig.locks 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/Kconfig.locks 2018-09-05 11:05:07.000000000 +0200
+@@ -225,11 +225,11 @@
--#define SOFTIRQ_DISABLE_OFFSET (2 * SOFTIRQ_OFFSET)
-+#ifndef CONFIG_PREEMPT_RT_FULL
-+# define SOFTIRQ_DISABLE_OFFSET (2 * SOFTIRQ_OFFSET)
-+#else
-+# define SOFTIRQ_DISABLE_OFFSET (0)
-+#endif
+ config MUTEX_SPIN_ON_OWNER
+ def_bool y
+- depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW
++ depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW && !PREEMPT_RT_FULL
- /* We use the MSB mostly because its available */
- #define PREEMPT_NEED_RESCHED 0x80000000
-@@ -59,9 +63,15 @@
- #include <asm/preempt.h>
+ config RWSEM_SPIN_ON_OWNER
+ def_bool y
+- depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW
++ depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW && !PREEMPT_RT_FULL
- #define hardirq_count() (preempt_count() & HARDIRQ_MASK)
--#define softirq_count() (preempt_count() & SOFTIRQ_MASK)
- #define irq_count() (preempt_count() & (HARDIRQ_MASK | SOFTIRQ_MASK \
- | NMI_MASK))
-+#ifndef CONFIG_PREEMPT_RT_FULL
-+# define softirq_count() (preempt_count() & SOFTIRQ_MASK)
-+# define in_serving_softirq() (softirq_count() & SOFTIRQ_OFFSET)
-+#else
-+# define softirq_count() (0UL)
-+extern int in_serving_softirq(void);
-+#endif
+ config LOCK_SPIN_ON_OWNER
+ def_bool y
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/Kconfig.preempt linux-4.14/kernel/Kconfig.preempt
+--- linux-4.14.orig/kernel/Kconfig.preempt 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/Kconfig.preempt 2018-09-05 11:05:07.000000000 +0200
+@@ -1,3 +1,16 @@
++config PREEMPT
++ bool
++ select PREEMPT_COUNT
++
++config PREEMPT_RT_BASE
++ bool
++ select PREEMPT
++
++config HAVE_PREEMPT_LAZY
++ bool
++
++config PREEMPT_LAZY
++ def_bool y if HAVE_PREEMPT_LAZY && PREEMPT_RT_FULL
- /*
- * Are we doing bottom half or hardware interrupt processing?
-@@ -72,7 +82,6 @@
- #define in_irq() (hardirq_count())
- #define in_softirq() (softirq_count())
- #define in_interrupt() (irq_count())
--#define in_serving_softirq() (softirq_count() & SOFTIRQ_OFFSET)
+ choice
+ prompt "Preemption Model"
+@@ -33,9 +46,9 @@
- /*
- * Are we in NMI context?
-@@ -91,7 +100,11 @@
- /*
- * The preempt_count offset after spin_lock()
- */
-+#if !defined(CONFIG_PREEMPT_RT_FULL)
- #define PREEMPT_LOCK_OFFSET PREEMPT_DISABLE_OFFSET
-+#else
-+#define PREEMPT_LOCK_OFFSET 0
-+#endif
+ Select this if you are building a kernel for a desktop system.
- /*
- * The preempt_count offset needed for things like:
-@@ -140,6 +153,20 @@ extern void preempt_count_sub(int val);
- #define preempt_count_inc() preempt_count_add(1)
- #define preempt_count_dec() preempt_count_sub(1)
+-config PREEMPT
++config PREEMPT__LL
+ bool "Preemptible Kernel (Low-Latency Desktop)"
+- select PREEMPT_COUNT
++ select PREEMPT
+ select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK
+ help
+ This option reduces the latency of the kernel by making
+@@ -52,6 +65,22 @@
+ embedded system with latency requirements in the milliseconds
+ range.
-+#ifdef CONFIG_PREEMPT_LAZY
-+#define add_preempt_lazy_count(val) do { preempt_lazy_count() += (val); } while (0)
-+#define sub_preempt_lazy_count(val) do { preempt_lazy_count() -= (val); } while (0)
-+#define inc_preempt_lazy_count() add_preempt_lazy_count(1)
-+#define dec_preempt_lazy_count() sub_preempt_lazy_count(1)
-+#define preempt_lazy_count() (current_thread_info()->preempt_lazy_count)
-+#else
-+#define add_preempt_lazy_count(val) do { } while (0)
-+#define sub_preempt_lazy_count(val) do { } while (0)
-+#define inc_preempt_lazy_count() do { } while (0)
-+#define dec_preempt_lazy_count() do { } while (0)
-+#define preempt_lazy_count() (0)
-+#endif
++config PREEMPT_RTB
++ bool "Preemptible Kernel (Basic RT)"
++ select PREEMPT_RT_BASE
++ help
++ This option is basically the same as (Low-Latency Desktop) but
++	  enables changes which are preliminary for the fully preemptible
++ RT kernel.
+
- #ifdef CONFIG_PREEMPT_COUNT
++config PREEMPT_RT_FULL
++ bool "Fully Preemptible Kernel (RT)"
++ depends on IRQ_FORCED_THREADING
++ select PREEMPT_RT_BASE
++ select PREEMPT_RCU
++ help
++	  All and everything: spinning locks become sleeping rtmutexes,
++	  interrupt handlers are forced into threads and almost all
++	  kernel code becomes preemptible.
++
+ endchoice
- #define preempt_disable() \
-@@ -148,13 +175,25 @@ do { \
- barrier(); \
- } while (0)
+ config PREEMPT_COUNT
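
Selecting the new "Fully Preemptible Kernel (RT)" entry therefore
resolves, through the select statements above, to roughly the following
.config fragment (assuming the architecture provides HAVE_PREEMPT_LAZY
and IRQ_FORCED_THREADING):

    CONFIG_PREEMPT_RT_FULL=y
    CONFIG_PREEMPT_RT_BASE=y
    CONFIG_PREEMPT=y
    CONFIG_PREEMPT_COUNT=y
    CONFIG_PREEMPT_RCU=y
    CONFIG_PREEMPT_LAZY=y
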
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/ksysfs.c linux-4.14/kernel/ksysfs.c
+--- linux-4.14.orig/kernel/ksysfs.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/ksysfs.c 2018-09-05 11:05:07.000000000 +0200
+@@ -140,6 +140,15 @@
-+#define preempt_lazy_disable() \
-+do { \
-+ inc_preempt_lazy_count(); \
-+ barrier(); \
-+} while (0)
-+
- #define sched_preempt_enable_no_resched() \
- do { \
- barrier(); \
- preempt_count_dec(); \
- } while (0)
+ #endif /* CONFIG_CRASH_CORE */
--#define preempt_enable_no_resched() sched_preempt_enable_no_resched()
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+# define preempt_enable_no_resched() sched_preempt_enable_no_resched()
-+# define preempt_check_resched_rt() preempt_check_resched()
-+#else
-+# define preempt_enable_no_resched() preempt_enable()
-+# define preempt_check_resched_rt() barrier();
++#if defined(CONFIG_PREEMPT_RT_FULL)
++static ssize_t realtime_show(struct kobject *kobj,
++ struct kobj_attribute *attr, char *buf)
++{
++ return sprintf(buf, "%d\n", 1);
++}
++KERNEL_ATTR_RO(realtime);
++#endif
++
+ /* whether file capabilities are enabled */
+ static ssize_t fscaps_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+@@ -231,6 +240,9 @@
+ &rcu_expedited_attr.attr,
+ &rcu_normal_attr.attr,
+ #endif
++#ifdef CONFIG_PREEMPT_RT_FULL
++ &realtime_attr.attr,
+#endif
+ NULL
+ };
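
With the attribute wired into the kernel attribute group, user space can
detect an RT kernel by reading the new sysfs file; on a PREEMPT_RT_FULL
kernel the read is expected to return 1, e.g.

    # cat /sys/kernel/realtime
    1
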
- #define preemptible() (preempt_count() == 0 && !irqs_disabled())
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/lockdep.c linux-4.14/kernel/locking/lockdep.c
+--- linux-4.14.orig/kernel/locking/lockdep.c 2018-09-05 11:03:29.000000000 +0200
++++ linux-4.14/kernel/locking/lockdep.c 2018-09-05 11:05:07.000000000 +0200
+@@ -3916,6 +3916,7 @@
+ }
+ }
-@@ -179,6 +218,13 @@ do { \
- __preempt_schedule(); \
- } while (0)
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+ * We dont accurately track softirq state in e.g.
+ * hardirq contexts (such as on 4KSTACKS), so only
+@@ -3930,6 +3931,7 @@
+ DEBUG_LOCKS_WARN_ON(!current->softirqs_enabled);
+ }
+ }
++#endif
-+#define preempt_lazy_enable() \
-+do { \
-+ dec_preempt_lazy_count(); \
-+ barrier(); \
-+ preempt_check_resched(); \
-+} while (0)
-+
- #else /* !CONFIG_PREEMPT */
- #define preempt_enable() \
- do { \
-@@ -224,6 +270,7 @@ do { \
- #define preempt_disable_notrace() barrier()
- #define preempt_enable_no_resched_notrace() barrier()
- #define preempt_enable_notrace() barrier()
-+#define preempt_check_resched_rt() barrier()
- #define preemptible() 0
+ if (!debug_locks)
+ print_irqtrace_events(current);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/locktorture.c linux-4.14/kernel/locking/locktorture.c
+--- linux-4.14.orig/kernel/locking/locktorture.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/locking/locktorture.c 2018-09-05 11:05:07.000000000 +0200
+@@ -26,7 +26,6 @@
+ #include <linux/kthread.h>
+ #include <linux/sched/rt.h>
+ #include <linux/spinlock.h>
+-#include <linux/rwlock.h>
+ #include <linux/mutex.h>
+ #include <linux/rwsem.h>
+ #include <linux/smp.h>
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/Makefile linux-4.14/kernel/locking/Makefile
+--- linux-4.14.orig/kernel/locking/Makefile 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/locking/Makefile 2018-09-05 11:05:07.000000000 +0200
+@@ -3,7 +3,7 @@
+ # and is generally not a function of system call inputs.
+ KCOV_INSTRUMENT := n
- #endif /* CONFIG_PREEMPT_COUNT */
-@@ -244,10 +291,31 @@ do { \
- } while (0)
- #define preempt_fold_need_resched() \
- do { \
-- if (tif_need_resched()) \
-+ if (tif_need_resched_now()) \
- set_preempt_need_resched(); \
- } while (0)
+-obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o
++obj-y += semaphore.o percpu-rwsem.o
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# define preempt_disable_rt() preempt_disable()
-+# define preempt_enable_rt() preempt_enable()
-+# define preempt_disable_nort() barrier()
-+# define preempt_enable_nort() barrier()
-+# ifdef CONFIG_SMP
-+ extern void migrate_disable(void);
-+ extern void migrate_enable(void);
-+# else /* CONFIG_SMP */
-+# define migrate_disable() barrier()
-+# define migrate_enable() barrier()
-+# endif /* CONFIG_SMP */
-+#else
-+# define preempt_disable_rt() barrier()
-+# define preempt_enable_rt() barrier()
-+# define preempt_disable_nort() preempt_disable()
-+# define preempt_enable_nort() preempt_enable()
-+# define migrate_disable() preempt_disable()
-+# define migrate_enable() preempt_enable()
+ ifdef CONFIG_FUNCTION_TRACER
+ CFLAGS_REMOVE_lockdep.o = $(CC_FLAGS_FTRACE)
+@@ -12,7 +12,11 @@
+ CFLAGS_REMOVE_rtmutex-debug.o = $(CC_FLAGS_FTRACE)
+ endif
+
++ifneq ($(CONFIG_PREEMPT_RT_FULL),y)
++obj-y += mutex.o
+ obj-$(CONFIG_DEBUG_MUTEXES) += mutex-debug.o
++endif
++obj-y += rwsem.o
+ obj-$(CONFIG_LOCKDEP) += lockdep.o
+ ifeq ($(CONFIG_PROC_FS),y)
+ obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
+@@ -25,8 +29,11 @@
+ obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
+ obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
+ obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock_debug.o
++ifneq ($(CONFIG_PREEMPT_RT_FULL),y)
+ obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
+ obj-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem-xadd.o
++endif
++obj-$(CONFIG_PREEMPT_RT_FULL) += mutex-rt.o rwsem-rt.o rwlock-rt.o
+ obj-$(CONFIG_QUEUED_RWLOCKS) += qrwlock.o
+ obj-$(CONFIG_LOCK_TORTURE_TEST) += locktorture.o
+ obj-$(CONFIG_WW_MUTEX_SELFTEST) += test-ww_mutex.o
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/mutex-rt.c linux-4.14/kernel/locking/mutex-rt.c
+--- linux-4.14.orig/kernel/locking/mutex-rt.c 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/kernel/locking/mutex-rt.c 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,223 @@
++/*
++ * kernel/locking/mutex-rt.c
++ *
++ * Real-Time Preemption Support
++ *
++ * started by Ingo Molnar:
++ *
++ * Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
++ * Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
++ *
++ * historic credit for proving that Linux spinlocks can be implemented via
++ * RT-aware mutexes goes to many people: The Pmutex project (Dirk Grambow
++ * and others) who prototyped it on 2.4 and did lots of comparative
++ * research and analysis; TimeSys, for proving that you can implement a
++ * fully preemptible kernel via the use of IRQ threading and mutexes;
++ * Bill Huey for persuasively arguing on lkml that the mutex model is the
++ * right one; and to MontaVista, who ported pmutexes to 2.6.
++ *
++ * This code is a from-scratch implementation and is not based on pmutexes,
++ * but the idea of converting spinlocks to mutexes is used here too.
++ *
++ * lock debugging, locking tree, deadlock detection:
++ *
++ * Copyright (C) 2004, LynuxWorks, Inc., Igor Manyilov, Bill Huey
++ * Released under the General Public License (GPL).
++ *
++ * Includes portions of the generic R/W semaphore implementation from:
++ *
++ * Copyright (c) 2001 David Howells (dhowells@redhat.com).
++ * - Derived partially from idea by Andrea Arcangeli <andrea@suse.de>
++ * - Derived also from comments by Linus
++ *
++ * Pending ownership of locks and ownership stealing:
++ *
++ * Copyright (C) 2005, Kihon Technologies Inc., Steven Rostedt
++ *
++ * (also by Steven Rostedt)
++ * - Converted single pi_lock to individual task locks.
++ *
++ * By Esben Nielsen:
++ * Doing priority inheritance with help of the scheduler.
++ *
++ * Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
++ *  - major rework based on Esben Nielsen's initial patch
++ * - replaced thread_info references by task_struct refs
++ * - removed task->pending_owner dependency
++ * - BKL drop/reacquire for semaphore style locks to avoid deadlocks
++ * in the scheduler return path as discussed with Steven Rostedt
++ *
++ * Copyright (C) 2006, Kihon Technologies Inc.
++ * Steven Rostedt <rostedt@goodmis.org>
++ * - debugged and patched Thomas Gleixner's rework.
++ * - added back the cmpxchg to the rework.
++ * - turned atomic require back on for SMP.
++ */
++
++#include <linux/spinlock.h>
++#include <linux/rtmutex.h>
++#include <linux/sched.h>
++#include <linux/delay.h>
++#include <linux/module.h>
++#include <linux/kallsyms.h>
++#include <linux/syscalls.h>
++#include <linux/interrupt.h>
++#include <linux/plist.h>
++#include <linux/fs.h>
++#include <linux/futex.h>
++#include <linux/hrtimer.h>
++
++#include "rtmutex_common.h"
++
++/*
++ * struct mutex functions
++ */
++void __mutex_do_init(struct mutex *mutex, const char *name,
++ struct lock_class_key *key)
++{
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++ /*
++ * Make sure we are not reinitializing a held lock:
++ */
++ debug_check_no_locks_freed((void *)mutex, sizeof(*mutex));
++ lockdep_init_map(&mutex->dep_map, name, key, 0);
+#endif
++ mutex->lock.save_state = 0;
++}
++EXPORT_SYMBOL(__mutex_do_init);
++
++void __lockfunc _mutex_lock(struct mutex *lock)
++{
++ mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++ __rt_mutex_lock_state(&lock->lock, TASK_UNINTERRUPTIBLE);
++}
++EXPORT_SYMBOL(_mutex_lock);
++
++void __lockfunc _mutex_lock_io(struct mutex *lock)
++{
++ int token;
++
++ token = io_schedule_prepare();
++ _mutex_lock(lock);
++ io_schedule_finish(token);
++}
++EXPORT_SYMBOL_GPL(_mutex_lock_io);
++
++int __lockfunc _mutex_lock_interruptible(struct mutex *lock)
++{
++ int ret;
++
++ mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++ ret = __rt_mutex_lock_state(&lock->lock, TASK_INTERRUPTIBLE);
++ if (ret)
++ mutex_release(&lock->dep_map, 1, _RET_IP_);
++ return ret;
++}
++EXPORT_SYMBOL(_mutex_lock_interruptible);
++
++int __lockfunc _mutex_lock_killable(struct mutex *lock)
++{
++ int ret;
++
++ mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++ ret = __rt_mutex_lock_state(&lock->lock, TASK_KILLABLE);
++ if (ret)
++ mutex_release(&lock->dep_map, 1, _RET_IP_);
++ return ret;
++}
++EXPORT_SYMBOL(_mutex_lock_killable);
++
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++void __lockfunc _mutex_lock_nested(struct mutex *lock, int subclass)
++{
++ mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
++ __rt_mutex_lock_state(&lock->lock, TASK_UNINTERRUPTIBLE);
++}
++EXPORT_SYMBOL(_mutex_lock_nested);
+
- #ifdef CONFIG_PREEMPT_NOTIFIERS
-
- struct preempt_notifier;
-diff --git a/include/linux/printk.h b/include/linux/printk.h
-index eac1af8502bb..37e647af0b0b 100644
---- a/include/linux/printk.h
-+++ b/include/linux/printk.h
-@@ -126,9 +126,11 @@ struct va_format {
- #ifdef CONFIG_EARLY_PRINTK
- extern asmlinkage __printf(1, 2)
- void early_printk(const char *fmt, ...);
-+extern void printk_kill(void);
- #else
- static inline __printf(1, 2) __cold
- void early_printk(const char *s, ...) { }
-+static inline void printk_kill(void) { }
- #endif
-
- #ifdef CONFIG_PRINTK_NMI
-diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
-index af3581b8a451..277295039c8f 100644
---- a/include/linux/radix-tree.h
-+++ b/include/linux/radix-tree.h
-@@ -292,6 +292,8 @@ unsigned int radix_tree_gang_lookup_slot(struct radix_tree_root *root,
- int radix_tree_preload(gfp_t gfp_mask);
- int radix_tree_maybe_preload(gfp_t gfp_mask);
- int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order);
-+void radix_tree_preload_end(void);
++void __lockfunc _mutex_lock_io_nested(struct mutex *lock, int subclass)
++{
++ int token;
+
- void radix_tree_init(void);
- void *radix_tree_tag_set(struct radix_tree_root *root,
- unsigned long index, unsigned int tag);
-@@ -314,11 +316,6 @@ unsigned long radix_tree_range_tag_if_tagged(struct radix_tree_root *root,
- int radix_tree_tagged(struct radix_tree_root *root, unsigned int tag);
- unsigned long radix_tree_locate_item(struct radix_tree_root *root, void *item);
-
--static inline void radix_tree_preload_end(void)
--{
-- preempt_enable();
--}
--
- /**
- * struct radix_tree_iter - radix tree iterator state
- *
-diff --git a/include/linux/random.h b/include/linux/random.h
-index 7bd2403e4fef..b2df7148a42b 100644
---- a/include/linux/random.h
-+++ b/include/linux/random.h
-@@ -31,7 +31,7 @@ static inline void add_latent_entropy(void) {}
-
- extern void add_input_randomness(unsigned int type, unsigned int code,
- unsigned int value) __latent_entropy;
--extern void add_interrupt_randomness(int irq, int irq_flags) __latent_entropy;
-+extern void add_interrupt_randomness(int irq, int irq_flags, __u64 ip) __latent_entropy;
-
- extern void get_random_bytes(void *buf, int nbytes);
- extern int add_random_ready_callback(struct random_ready_callback *rdy);
-diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h
-index e585018498d5..25c64474fc27 100644
---- a/include/linux/rbtree.h
-+++ b/include/linux/rbtree.h
-@@ -31,7 +31,7 @@
-
- #include <linux/kernel.h>
- #include <linux/stddef.h>
--#include <linux/rcupdate.h>
-+#include <linux/rcu_assign_pointer.h>
-
- struct rb_node {
- unsigned long __rb_parent_color;
-diff --git a/include/linux/rbtree_augmented.h b/include/linux/rbtree_augmented.h
-index d076183e49be..36bfb4dd57ae 100644
---- a/include/linux/rbtree_augmented.h
-+++ b/include/linux/rbtree_augmented.h
-@@ -26,6 +26,7 @@
-
- #include <linux/compiler.h>
- #include <linux/rbtree.h>
-+#include <linux/rcupdate.h>
-
- /*
- * Please note - only struct rb_augment_callbacks and the prototypes for
-diff --git a/include/linux/rcu_assign_pointer.h b/include/linux/rcu_assign_pointer.h
-new file mode 100644
-index 000000000000..7066962a4379
---- /dev/null
-+++ b/include/linux/rcu_assign_pointer.h
-@@ -0,0 +1,54 @@
-+#ifndef __LINUX_RCU_ASSIGN_POINTER_H__
-+#define __LINUX_RCU_ASSIGN_POINTER_H__
-+#include <linux/compiler.h>
-+#include <asm/barrier.h>
++ token = io_schedule_prepare();
+
-+/**
-+ * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
-+ * @v: The value to statically initialize with.
-+ */
-+#define RCU_INITIALIZER(v) (typeof(*(v)) __force __rcu *)(v)
++ mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
++ __rt_mutex_lock_state(&lock->lock, TASK_UNINTERRUPTIBLE);
+
-+/**
-+ * rcu_assign_pointer() - assign to RCU-protected pointer
-+ * @p: pointer to assign to
-+ * @v: value to assign (publish)
-+ *
-+ * Assigns the specified value to the specified RCU-protected
-+ * pointer, ensuring that any concurrent RCU readers will see
-+ * any prior initialization.
-+ *
-+ * Inserts memory barriers on architectures that require them
-+ * (which is most of them), and also prevents the compiler from
-+ * reordering the code that initializes the structure after the pointer
-+ * assignment. More importantly, this call documents which pointers
-+ * will be dereferenced by RCU read-side code.
-+ *
-+ * In some special cases, you may use RCU_INIT_POINTER() instead
-+ * of rcu_assign_pointer(). RCU_INIT_POINTER() is a bit faster due
-+ * to the fact that it does not constrain either the CPU or the compiler.
-+ * That said, using RCU_INIT_POINTER() when you should have used
-+ * rcu_assign_pointer() is a very bad thing that results in
-+ * impossible-to-diagnose memory corruption. So please be careful.
-+ * See the RCU_INIT_POINTER() comment header for details.
-+ *
-+ * Note that rcu_assign_pointer() evaluates each of its arguments only
-+ * once, appearances notwithstanding. One of the "extra" evaluations
-+ * is in typeof() and the other visible only to sparse (__CHECKER__),
-+ * neither of which actually execute the argument. As with most cpp
-+ * macros, this execute-arguments-only-once property is important, so
-+ * please be careful when making changes to rcu_assign_pointer() and the
-+ * other macros that it invokes.
-+ */
-+#define rcu_assign_pointer(p, v) \
-+({ \
-+ uintptr_t _r_a_p__v = (uintptr_t)(v); \
-+ \
-+ if (__builtin_constant_p(v) && (_r_a_p__v) == (uintptr_t)NULL) \
-+ WRITE_ONCE((p), (typeof(p))(_r_a_p__v)); \
-+ else \
-+ smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
-+ _r_a_p__v; \
-+})
++ io_schedule_finish(token);
++}
++EXPORT_SYMBOL_GPL(_mutex_lock_io_nested);
+
-+#endif
-diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
-index 01f71e1d2e94..30cc001d0d5a 100644
---- a/include/linux/rcupdate.h
-+++ b/include/linux/rcupdate.h
-@@ -46,6 +46,7 @@
- #include <linux/compiler.h>
- #include <linux/ktime.h>
- #include <linux/irqflags.h>
-+#include <linux/rcu_assign_pointer.h>
-
- #include <asm/barrier.h>
-
-@@ -178,6 +179,9 @@ void call_rcu(struct rcu_head *head,
-
- #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+#define call_rcu_bh call_rcu
-+#else
- /**
- * call_rcu_bh() - Queue an RCU for invocation after a quicker grace period.
- * @head: structure to be used for queueing the RCU updates.
-@@ -201,6 +205,7 @@ void call_rcu(struct rcu_head *head,
- */
- void call_rcu_bh(struct rcu_head *head,
- rcu_callback_t func);
-+#endif
-
- /**
- * call_rcu_sched() - Queue an RCU for invocation after sched grace period.
-@@ -301,6 +306,11 @@ void synchronize_rcu(void);
- * types of kernel builds, the rcu_read_lock() nesting depth is unknowable.
- */
- #define rcu_preempt_depth() (current->rcu_read_lock_nesting)
-+#ifndef CONFIG_PREEMPT_RT_FULL
-+#define sched_rcu_preempt_depth() rcu_preempt_depth()
-+#else
-+static inline int sched_rcu_preempt_depth(void) { return 0; }
-+#endif
-
- #else /* #ifdef CONFIG_PREEMPT_RCU */
-
-@@ -326,6 +336,8 @@ static inline int rcu_preempt_depth(void)
- return 0;
- }
-
-+#define sched_rcu_preempt_depth() rcu_preempt_depth()
++void __lockfunc _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
++{
++ mutex_acquire_nest(&lock->dep_map, 0, 0, nest, _RET_IP_);
++ __rt_mutex_lock_state(&lock->lock, TASK_UNINTERRUPTIBLE);
++}
++EXPORT_SYMBOL(_mutex_lock_nest_lock);
+
- #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
-
- /* Internal to kernel */
-@@ -505,7 +517,14 @@ extern struct lockdep_map rcu_callback_map;
- int debug_lockdep_rcu_enabled(void);
-
- int rcu_read_lock_held(void);
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+static inline int rcu_read_lock_bh_held(void)
++int __lockfunc _mutex_lock_interruptible_nested(struct mutex *lock, int subclass)
+{
-+ return rcu_read_lock_held();
++ int ret;
++
++ mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
++ ret = __rt_mutex_lock_state(&lock->lock, TASK_INTERRUPTIBLE);
++ if (ret)
++ mutex_release(&lock->dep_map, 1, _RET_IP_);
++ return ret;
+}
-+#else
- int rcu_read_lock_bh_held(void);
++EXPORT_SYMBOL(_mutex_lock_interruptible_nested);
++
++int __lockfunc _mutex_lock_killable_nested(struct mutex *lock, int subclass)
++{
++ int ret;
++
++ mutex_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
++ ret = __rt_mutex_lock_state(&lock->lock, TASK_KILLABLE);
++ if (ret)
++ mutex_release(&lock->dep_map, 1, _RET_IP_);
++ return ret;
++}
++EXPORT_SYMBOL(_mutex_lock_killable_nested);
+#endif
-
- /**
- * rcu_read_lock_sched_held() - might we be in RCU-sched read-side critical section?
-@@ -626,54 +645,6 @@ static inline void rcu_preempt_sleep_check(void)
- })
-
- /**
-- * RCU_INITIALIZER() - statically initialize an RCU-protected global variable
-- * @v: The value to statically initialize with.
-- */
--#define RCU_INITIALIZER(v) (typeof(*(v)) __force __rcu *)(v)
--
--/**
-- * rcu_assign_pointer() - assign to RCU-protected pointer
-- * @p: pointer to assign to
-- * @v: value to assign (publish)
-- *
-- * Assigns the specified value to the specified RCU-protected
-- * pointer, ensuring that any concurrent RCU readers will see
-- * any prior initialization.
-- *
-- * Inserts memory barriers on architectures that require them
-- * (which is most of them), and also prevents the compiler from
-- * reordering the code that initializes the structure after the pointer
-- * assignment. More importantly, this call documents which pointers
-- * will be dereferenced by RCU read-side code.
-- *
-- * In some special cases, you may use RCU_INIT_POINTER() instead
-- * of rcu_assign_pointer(). RCU_INIT_POINTER() is a bit faster due
-- * to the fact that it does not constrain either the CPU or the compiler.
-- * That said, using RCU_INIT_POINTER() when you should have used
-- * rcu_assign_pointer() is a very bad thing that results in
-- * impossible-to-diagnose memory corruption. So please be careful.
-- * See the RCU_INIT_POINTER() comment header for details.
-- *
-- * Note that rcu_assign_pointer() evaluates each of its arguments only
-- * once, appearances notwithstanding. One of the "extra" evaluations
-- * is in typeof() and the other visible only to sparse (__CHECKER__),
-- * neither of which actually execute the argument. As with most cpp
-- * macros, this execute-arguments-only-once property is important, so
-- * please be careful when making changes to rcu_assign_pointer() and the
-- * other macros that it invokes.
-- */
--#define rcu_assign_pointer(p, v) \
--({ \
-- uintptr_t _r_a_p__v = (uintptr_t)(v); \
-- \
-- if (__builtin_constant_p(v) && (_r_a_p__v) == (uintptr_t)NULL) \
-- WRITE_ONCE((p), (typeof(p))(_r_a_p__v)); \
-- else \
-- smp_store_release(&p, RCU_INITIALIZER((typeof(p))_r_a_p__v)); \
-- _r_a_p__v; \
--})
--
--/**
- * rcu_access_pointer() - fetch RCU pointer with no dereferencing
- * @p: The pointer to read
++
++int __lockfunc _mutex_trylock(struct mutex *lock)
++{
++ int ret = __rt_mutex_trylock(&lock->lock);
++
++ if (ret)
++ mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
++
++ return ret;
++}
++EXPORT_SYMBOL(_mutex_trylock);
++
++void __lockfunc _mutex_unlock(struct mutex *lock)
++{
++ mutex_release(&lock->dep_map, 1, _RET_IP_);
++ __rt_mutex_unlock(&lock->lock);
++}
++EXPORT_SYMBOL(_mutex_unlock);
++
++/**
++ * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0
++ * @cnt: the atomic which we are to dec
++ * @lock: the mutex to return holding if we dec to 0
++ *
++ * return true and hold lock if we dec to 0, return false otherwise
++ */
++int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock)
++{
++ /* dec if we can't possibly hit 0 */
++ if (atomic_add_unless(cnt, -1, 1))
++ return 0;
++ /* we might hit 0, so take the lock */
++ mutex_lock(lock);
++ if (!atomic_dec_and_test(cnt)) {
++ /* when we actually did the dec, we didn't hit 0 */
++ mutex_unlock(lock);
++ return 0;
++ }
++ /* we hit 0, and we hold the lock */
++ return 1;
++}
++EXPORT_SYMBOL(atomic_dec_and_mutex_lock);
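
A hypothetical user of the helper above, for reference: tearing down a
refcounted object that lives on a mutex-protected list (my_obj, obj_list
and obj_list_lock are illustrative names only):

    static LIST_HEAD(obj_list);
    static DEFINE_MUTEX(obj_list_lock);

    struct my_obj {
            struct list_head node;
            atomic_t refs;
    };

    static void obj_put(struct my_obj *obj)
    {
            if (!atomic_dec_and_mutex_lock(&obj->refs, &obj_list_lock))
                    return;         /* refcount is still positive */
            /* refcount hit zero and obj_list_lock is held */
            list_del(&obj->node);
            mutex_unlock(&obj_list_lock);
            kfree(obj);
    }
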
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/rtmutex.c linux-4.14/kernel/locking/rtmutex.c
+--- linux-4.14.orig/kernel/locking/rtmutex.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/locking/rtmutex.c 2018-09-05 11:05:07.000000000 +0200
+@@ -7,6 +7,11 @@
+ * Copyright (C) 2005-2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
+ * Copyright (C) 2005 Kihon Technologies Inc., Steven Rostedt
+ * Copyright (C) 2006 Esben Nielsen
++ * Adaptive Spinlocks:
++ * Copyright (C) 2008 Novell, Inc., Gregory Haskins, Sven Dietrich,
++ * and Peter Morreale,
++ * Adaptive Spinlocks simplification:
++ * Copyright (C) 2008 Red Hat, Inc., Steven Rostedt <srostedt@redhat.com>
*
-@@ -951,10 +922,14 @@ static inline void rcu_read_unlock(void)
- static inline void rcu_read_lock_bh(void)
- {
- local_bh_disable();
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ rcu_read_lock();
-+#else
- __acquire(RCU_BH);
- rcu_lock_acquire(&rcu_bh_lock_map);
- RCU_LOCKDEP_WARN(!rcu_is_watching(),
- "rcu_read_lock_bh() used illegally while idle");
-+#endif
+ * See Documentation/locking/rt-mutex-design.txt for details.
+ */
+@@ -18,6 +23,8 @@
+ #include <linux/sched/wake_q.h>
+ #include <linux/sched/debug.h>
+ #include <linux/timer.h>
++#include <linux/ww_mutex.h>
++#include <linux/blkdev.h>
+
+ #include "rtmutex_common.h"
+
+@@ -135,6 +142,12 @@
+ WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS);
}
++static int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter)
++{
++ return waiter && waiter != PI_WAKEUP_INPROGRESS &&
++ waiter != PI_REQUEUE_INPROGRESS;
++}
++
/*
-@@ -964,10 +939,14 @@ static inline void rcu_read_lock_bh(void)
+ * We can speed up the acquire/release, if there's no debugging state to be
+ * set up.
+@@ -228,7 +241,7 @@
+ * Only use with rt_mutex_waiter_{less,equal}()
*/
- static inline void rcu_read_unlock_bh(void)
- {
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ rcu_read_unlock();
-+#else
- RCU_LOCKDEP_WARN(!rcu_is_watching(),
- "rcu_read_unlock_bh() used illegally while idle");
- rcu_lock_release(&rcu_bh_lock_map);
- __release(RCU_BH);
-+#endif
- local_bh_enable();
- }
+ #define task_to_waiter(p) \
+- &(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline }
++ &(struct rt_mutex_waiter){ .prio = (p)->prio, .deadline = (p)->dl.deadline, .task = (p) }
-diff --git a/include/linux/rcutree.h b/include/linux/rcutree.h
-index 63a4e4cf40a5..08ab12df2863 100644
---- a/include/linux/rcutree.h
-+++ b/include/linux/rcutree.h
-@@ -44,7 +44,11 @@ static inline void rcu_virt_note_context_switch(int cpu)
- rcu_note_context_switch();
+ static inline int
+ rt_mutex_waiter_less(struct rt_mutex_waiter *left,
+@@ -268,6 +281,27 @@
+ return 1;
}
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# define synchronize_rcu_bh synchronize_rcu
-+#else
- void synchronize_rcu_bh(void);
-+#endif
- void synchronize_sched_expedited(void);
- void synchronize_rcu_expedited(void);
-
-@@ -72,7 +76,11 @@ static inline void synchronize_rcu_bh_expedited(void)
++#define STEAL_NORMAL 0
++#define STEAL_LATERAL 1
++
++static inline int
++rt_mutex_steal(struct rt_mutex *lock, struct rt_mutex_waiter *waiter, int mode)
++{
++ struct rt_mutex_waiter *top_waiter = rt_mutex_top_waiter(lock);
++
++ if (waiter == top_waiter || rt_mutex_waiter_less(waiter, top_waiter))
++ return 1;
++
++ /*
++ * Note that RT tasks are excluded from lateral-steals
++ * to prevent the introduction of an unbounded latency.
++ */
++ if (mode == STEAL_NORMAL || rt_task(waiter->task))
++ return 0;
++
++ return rt_mutex_waiter_equal(waiter, top_waiter);
++}
++
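
As a concrete example: when the lock is released and a SCHED_OTHER task of
the same priority as the queued top waiter tries to take it with
STEAL_LATERAL, rt_mutex_waiter_equal() lets the running task grab the lock
and avoid an immediate context switch; if that task were SCHED_FIFO,
rt_task() refuses the steal, so an equal-priority queued real-time waiter
cannot be bypassed and starved this way.
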
+ static void
+ rt_mutex_enqueue(struct rt_mutex *lock, struct rt_mutex_waiter *waiter)
+ {
+@@ -372,6 +406,14 @@
+ return debug_rt_mutex_detect_deadlock(waiter, chwalk);
}
- void rcu_barrier(void);
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# define rcu_barrier_bh rcu_barrier
-+#else
- void rcu_barrier_bh(void);
-+#endif
- void rcu_barrier_sched(void);
- unsigned long get_state_synchronize_rcu(void);
- void cond_synchronize_rcu(unsigned long oldstate);
-@@ -82,17 +90,14 @@ void cond_synchronize_sched(unsigned long oldstate);
- extern unsigned long rcutorture_testseq;
- extern unsigned long rcutorture_vernum;
- unsigned long rcu_batches_started(void);
--unsigned long rcu_batches_started_bh(void);
- unsigned long rcu_batches_started_sched(void);
- unsigned long rcu_batches_completed(void);
--unsigned long rcu_batches_completed_bh(void);
- unsigned long rcu_batches_completed_sched(void);
- unsigned long rcu_exp_batches_completed(void);
- unsigned long rcu_exp_batches_completed_sched(void);
- void show_rcu_gp_kthreads(void);
-
- void rcu_force_quiescent_state(void);
--void rcu_bh_force_quiescent_state(void);
- void rcu_sched_force_quiescent_state(void);
++static void rt_mutex_wake_waiter(struct rt_mutex_waiter *waiter)
++{
++ if (waiter->savestate)
++ wake_up_lock_sleeper(waiter->task);
++ else
++ wake_up_process(waiter->task);
++}
++
+ /*
+ * Max number of times we'll walk the boosting chain:
+ */
+@@ -379,7 +421,8 @@
- void rcu_idle_enter(void);
-@@ -109,6 +114,16 @@ extern int rcu_scheduler_active __read_mostly;
+ static inline struct rt_mutex *task_blocked_on_lock(struct task_struct *p)
+ {
+- return p->pi_blocked_on ? p->pi_blocked_on->lock : NULL;
++ return rt_mutex_real_waiter(p->pi_blocked_on) ?
++ p->pi_blocked_on->lock : NULL;
+ }
- bool rcu_is_watching(void);
+ /*
+@@ -515,7 +558,7 @@
+ * reached or the state of the chain has changed while we
+ * dropped the locks.
+ */
+- if (!waiter)
++ if (!rt_mutex_real_waiter(waiter))
+ goto out_unlock_pi;
-+#ifndef CONFIG_PREEMPT_RT_FULL
-+void rcu_bh_force_quiescent_state(void);
-+unsigned long rcu_batches_started_bh(void);
-+unsigned long rcu_batches_completed_bh(void);
-+#else
-+# define rcu_bh_force_quiescent_state rcu_force_quiescent_state
-+# define rcu_batches_completed_bh rcu_batches_completed
-+# define rcu_batches_started_bh rcu_batches_completed
-+#endif
+ /*
+@@ -696,13 +739,16 @@
+ * follow here. This is the end of the chain we are walking.
+ */
+ if (!rt_mutex_owner(lock)) {
++ struct rt_mutex_waiter *lock_top_waiter;
+
- void rcu_all_qs(void);
-
- /* RCUtree hotplug events */
-diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
-index 1abba5ce2a2f..30211c627511 100644
---- a/include/linux/rtmutex.h
-+++ b/include/linux/rtmutex.h
-@@ -13,11 +13,15 @@
- #define __LINUX_RT_MUTEX_H
-
- #include <linux/linkage.h>
-+#include <linux/spinlock_types_raw.h>
- #include <linux/rbtree.h>
--#include <linux/spinlock_types.h>
+ /*
+ * If the requeue [7] above changed the top waiter,
+ * then we need to wake the new top waiter up to try
+ * to get the lock.
+ */
+- if (prerequeue_top_waiter != rt_mutex_top_waiter(lock))
+- wake_up_process(rt_mutex_top_waiter(lock)->task);
++ lock_top_waiter = rt_mutex_top_waiter(lock);
++ if (prerequeue_top_waiter != lock_top_waiter)
++ rt_mutex_wake_waiter(lock_top_waiter);
+ raw_spin_unlock_irq(&lock->wait_lock);
+ return 0;
+ }
+@@ -804,9 +850,11 @@
+ * @task: The task which wants to acquire the lock
+ * @waiter: The waiter that is queued to the lock's wait tree if the
+ * callsite called task_blocked_on_lock(), otherwise NULL
++ * @mode: Lock steal mode (STEAL_NORMAL, STEAL_LATERAL)
+ */
+-static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
+- struct rt_mutex_waiter *waiter)
++static int __try_to_take_rt_mutex(struct rt_mutex *lock,
++ struct task_struct *task,
++ struct rt_mutex_waiter *waiter, int mode)
+ {
+ lockdep_assert_held(&lock->wait_lock);
- extern int max_lock_depth; /* for sysctl */
+@@ -842,12 +890,11 @@
+ */
+ if (waiter) {
+ /*
+- * If waiter is not the highest priority waiter of
+- * @lock, give up.
++ * If waiter is not the highest priority waiter of @lock,
++ * or its peer when lateral steal is allowed, give up.
+ */
+- if (waiter != rt_mutex_top_waiter(lock))
++ if (!rt_mutex_steal(lock, waiter, mode))
+ return 0;
+-
+ /*
+ * We can acquire the lock. Remove the waiter from the
+ * lock waiters tree.
+@@ -865,14 +912,12 @@
+ */
+ if (rt_mutex_has_waiters(lock)) {
+ /*
+- * If @task->prio is greater than or equal to
+- * the top waiter priority (kernel view),
+- * @task lost.
++ * If @task->prio is greater than the top waiter
++ * priority (kernel view), or equal to it when a
++ * lateral steal is forbidden, @task lost.
+ */
+- if (!rt_mutex_waiter_less(task_to_waiter(task),
+- rt_mutex_top_waiter(lock)))
++ if (!rt_mutex_steal(lock, task_to_waiter(task), mode))
+ return 0;
+-
+ /*
+ * The current top waiter stays enqueued. We
+ * don't have to change anything in the lock
+@@ -919,6 +964,351 @@
+ return 1;
+ }
-+#ifdef CONFIG_DEBUG_MUTEXES
-+#include <linux/debug_locks.h>
++#ifdef CONFIG_PREEMPT_RT_FULL
++/*
++ * preemptible spin_lock functions:
++ */
++static inline void rt_spin_lock_fastlock(struct rt_mutex *lock,
++ void (*slowfn)(struct rt_mutex *lock))
++{
++ might_sleep_no_state_check();
++
++ if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
++ return;
++ else
++ slowfn(lock);
++}
++
++static inline void rt_spin_lock_fastunlock(struct rt_mutex *lock,
++ void (*slowfn)(struct rt_mutex *lock))
++{
++ if (likely(rt_mutex_cmpxchg_release(lock, current, NULL)))
++ return;
++ else
++ slowfn(lock);
++}
++#ifdef CONFIG_SMP
++/*
++ * Note that owner is a speculative pointer and dereferencing relies
++ * on rcu_read_lock() and the check against the lock owner.
++ */
++static int adaptive_wait(struct rt_mutex *lock,
++ struct task_struct *owner)
++{
++ int res = 0;
++
++ rcu_read_lock();
++ for (;;) {
++ if (owner != rt_mutex_owner(lock))
++ break;
++ /*
++ * Ensure that owner->on_cpu is dereferenced _after_
++ * checking the above to be valid.
++ */
++ barrier();
++ if (!owner->on_cpu) {
++ res = 1;
++ break;
++ }
++ cpu_relax();
++ }
++ rcu_read_unlock();
++ return res;
++}
++#else
++static int adaptive_wait(struct rt_mutex *lock,
++ struct task_struct *orig_owner)
++{
++ return 1;
++}
+#endif
+
- /**
- * The rt_mutex structure
- *
-@@ -31,8 +35,8 @@ struct rt_mutex {
- struct rb_root waiters;
- struct rb_node *waiters_leftmost;
- struct task_struct *owner;
--#ifdef CONFIG_DEBUG_RT_MUTEXES
- int save_state;
-+#ifdef CONFIG_DEBUG_RT_MUTEXES
- const char *name, *file;
- int line;
- void *magic;
-@@ -55,22 +59,33 @@ struct hrtimer_sleeper;
- # define rt_mutex_debug_check_no_locks_held(task) do { } while (0)
- #endif
-
-+# define rt_mutex_init(mutex) \
-+ do { \
-+ raw_spin_lock_init(&(mutex)->wait_lock); \
-+ __rt_mutex_init(mutex, #mutex); \
-+ } while (0)
++static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
++ struct rt_mutex_waiter *waiter,
++ struct task_struct *task,
++ enum rtmutex_chainwalk chwalk);
++/*
++ * Slow path lock function spin_lock style: this variant is very
++ * careful not to miss any non-lock wakeups.
++ *
++ * We store the current state under p->pi_lock in p->saved_state and
++ * the try_to_wake_up() code handles this accordingly.
++ */
++void __sched rt_spin_lock_slowlock_locked(struct rt_mutex *lock,
++ struct rt_mutex_waiter *waiter,
++ unsigned long flags)
++{
++ struct task_struct *lock_owner, *self = current;
++ struct rt_mutex_waiter *top_waiter;
++ int ret;
+
- #ifdef CONFIG_DEBUG_RT_MUTEXES
- # define __DEBUG_RT_MUTEX_INITIALIZER(mutexname) \
- , .name = #mutexname, .file = __FILE__, .line = __LINE__
--# define rt_mutex_init(mutex) __rt_mutex_init(mutex, __func__)
- extern void rt_mutex_debug_task_free(struct task_struct *tsk);
- #else
- # define __DEBUG_RT_MUTEX_INITIALIZER(mutexname)
--# define rt_mutex_init(mutex) __rt_mutex_init(mutex, NULL)
- # define rt_mutex_debug_task_free(t) do { } while (0)
- #endif
-
--#define __RT_MUTEX_INITIALIZER(mutexname) \
-- { .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
-+#define __RT_MUTEX_INITIALIZER_PLAIN(mutexname) \
-+ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(mutexname.wait_lock) \
- , .waiters = RB_ROOT \
- , .owner = NULL \
-- __DEBUG_RT_MUTEX_INITIALIZER(mutexname)}
-+ __DEBUG_RT_MUTEX_INITIALIZER(mutexname)
++ if (__try_to_take_rt_mutex(lock, self, NULL, STEAL_LATERAL))
++ return;
+
-+#define __RT_MUTEX_INITIALIZER(mutexname) \
-+ { __RT_MUTEX_INITIALIZER_PLAIN(mutexname) }
++ BUG_ON(rt_mutex_owner(lock) == self);
+
-+#define __RT_MUTEX_INITIALIZER_SAVE_STATE(mutexname) \
-+ { __RT_MUTEX_INITIALIZER_PLAIN(mutexname) \
-+ , .save_state = 1 }
-
- #define DEFINE_RT_MUTEX(mutexname) \
- struct rt_mutex mutexname = __RT_MUTEX_INITIALIZER(mutexname)
-@@ -91,6 +106,7 @@ extern void rt_mutex_destroy(struct rt_mutex *lock);
-
- extern void rt_mutex_lock(struct rt_mutex *lock);
- extern int rt_mutex_lock_interruptible(struct rt_mutex *lock);
-+extern int rt_mutex_lock_killable(struct rt_mutex *lock);
- extern int rt_mutex_timed_lock(struct rt_mutex *lock,
- struct hrtimer_sleeper *timeout);
-
-diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
-new file mode 100644
-index 000000000000..49ed2d45d3be
---- /dev/null
-+++ b/include/linux/rwlock_rt.h
-@@ -0,0 +1,99 @@
-+#ifndef __LINUX_RWLOCK_RT_H
-+#define __LINUX_RWLOCK_RT_H
++ /*
++ * We save whatever state the task is in and we'll restore it
++ * after acquiring the lock taking real wakeups into account
++ * as well. We are serialized via pi_lock against wakeups. See
++ * try_to_wake_up().
++ */
++ raw_spin_lock(&self->pi_lock);
++ self->saved_state = self->state;
++ __set_current_state_no_track(TASK_UNINTERRUPTIBLE);
++ raw_spin_unlock(&self->pi_lock);
+
-+#ifndef __LINUX_SPINLOCK_H
-+#error Do not include directly. Use spinlock.h
-+#endif
++ ret = task_blocks_on_rt_mutex(lock, waiter, self, RT_MUTEX_MIN_CHAINWALK);
++ BUG_ON(ret);
+
-+#define rwlock_init(rwl) \
-+do { \
-+ static struct lock_class_key __key; \
-+ \
-+ rt_mutex_init(&(rwl)->lock); \
-+ __rt_rwlock_init(rwl, #rwl, &__key); \
-+} while (0)
++ for (;;) {
++ /* Try to acquire the lock again. */
++ if (__try_to_take_rt_mutex(lock, self, waiter, STEAL_LATERAL))
++ break;
+
-+extern void __lockfunc rt_write_lock(rwlock_t *rwlock);
-+extern void __lockfunc rt_read_lock(rwlock_t *rwlock);
-+extern int __lockfunc rt_write_trylock(rwlock_t *rwlock);
-+extern int __lockfunc rt_write_trylock_irqsave(rwlock_t *trylock, unsigned long *flags);
-+extern int __lockfunc rt_read_trylock(rwlock_t *rwlock);
-+extern void __lockfunc rt_write_unlock(rwlock_t *rwlock);
-+extern void __lockfunc rt_read_unlock(rwlock_t *rwlock);
-+extern unsigned long __lockfunc rt_write_lock_irqsave(rwlock_t *rwlock);
-+extern unsigned long __lockfunc rt_read_lock_irqsave(rwlock_t *rwlock);
-+extern void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key *key);
++ top_waiter = rt_mutex_top_waiter(lock);
++ lock_owner = rt_mutex_owner(lock);
+
-+#define read_trylock(lock) __cond_lock(lock, rt_read_trylock(lock))
-+#define write_trylock(lock) __cond_lock(lock, rt_write_trylock(lock))
++ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+
-+#define write_trylock_irqsave(lock, flags) \
-+ __cond_lock(lock, rt_write_trylock_irqsave(lock, &flags))
++ debug_rt_mutex_print_deadlock(waiter);
+
-+#define read_lock_irqsave(lock, flags) \
-+ do { \
-+ typecheck(unsigned long, flags); \
-+ flags = rt_read_lock_irqsave(lock); \
-+ } while (0)
++ if (top_waiter != waiter || adaptive_wait(lock, lock_owner))
++ schedule();
+
-+#define write_lock_irqsave(lock, flags) \
-+ do { \
-+ typecheck(unsigned long, flags); \
-+ flags = rt_write_lock_irqsave(lock); \
-+ } while (0)
++ raw_spin_lock_irqsave(&lock->wait_lock, flags);
+
-+#define read_lock(lock) rt_read_lock(lock)
++ raw_spin_lock(&self->pi_lock);
++ __set_current_state_no_track(TASK_UNINTERRUPTIBLE);
++ raw_spin_unlock(&self->pi_lock);
++ }
+
-+#define read_lock_bh(lock) \
-+ do { \
-+ local_bh_disable(); \
-+ rt_read_lock(lock); \
-+ } while (0)
++ /*
++ * Restore the task state to current->saved_state. We set it
++ * to the original state above and the try_to_wake_up() code
++ * has possibly updated it when a real (non-rtmutex) wakeup
++ * happened while we were blocked. Clear saved_state so
++ * try_to_wakeup() does not get confused.
++ */
++ raw_spin_lock(&self->pi_lock);
++ __set_current_state_no_track(self->saved_state);
++ self->saved_state = TASK_RUNNING;
++ raw_spin_unlock(&self->pi_lock);
+
-+#define read_lock_irq(lock) read_lock(lock)
++ /*
++ * try_to_take_rt_mutex() sets the waiter bit
++ * unconditionally. We might have to fix that up:
++ */
++ fixup_rt_mutex_waiters(lock);
+
-+#define write_lock(lock) rt_write_lock(lock)
++ BUG_ON(rt_mutex_has_waiters(lock) && waiter == rt_mutex_top_waiter(lock));
++ BUG_ON(!RB_EMPTY_NODE(&waiter->tree_entry));
++}
+
-+#define write_lock_bh(lock) \
-+ do { \
-+ local_bh_disable(); \
-+ rt_write_lock(lock); \
-+ } while (0)
++static void noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock)
++{
++ struct rt_mutex_waiter waiter;
++ unsigned long flags;
+
-+#define write_lock_irq(lock) write_lock(lock)
++ rt_mutex_init_waiter(&waiter, true);
+
-+#define read_unlock(lock) rt_read_unlock(lock)
++ raw_spin_lock_irqsave(&lock->wait_lock, flags);
++ rt_spin_lock_slowlock_locked(lock, &waiter, flags);
++ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
++ debug_rt_mutex_free_waiter(&waiter);
++}
+
-+#define read_unlock_bh(lock) \
-+ do { \
-+ rt_read_unlock(lock); \
-+ local_bh_enable(); \
-+ } while (0)
++static bool __sched __rt_mutex_unlock_common(struct rt_mutex *lock,
++ struct wake_q_head *wake_q,
++ struct wake_q_head *wq_sleeper);
++/*
++ * Slow path to release a rt_mutex spin_lock style
++ */
++void __sched rt_spin_lock_slowunlock(struct rt_mutex *lock)
++{
++ unsigned long flags;
++ DEFINE_WAKE_Q(wake_q);
++ DEFINE_WAKE_Q(wake_sleeper_q);
++ bool postunlock;
+
-+#define read_unlock_irq(lock) read_unlock(lock)
++ raw_spin_lock_irqsave(&lock->wait_lock, flags);
++ postunlock = __rt_mutex_unlock_common(lock, &wake_q, &wake_sleeper_q);
++ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+
-+#define write_unlock(lock) rt_write_unlock(lock)
++ if (postunlock)
++ rt_mutex_postunlock(&wake_q, &wake_sleeper_q);
++}
+
-+#define write_unlock_bh(lock) \
-+ do { \
-+ rt_write_unlock(lock); \
-+ local_bh_enable(); \
-+ } while (0)
++void __lockfunc rt_spin_lock(spinlock_t *lock)
++{
++ sleeping_lock_inc();
++ migrate_disable();
++ spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++ rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock);
++}
++EXPORT_SYMBOL(rt_spin_lock);
+
-+#define write_unlock_irq(lock) write_unlock(lock)
++void __lockfunc __rt_spin_lock(struct rt_mutex *lock)
++{
++ rt_spin_lock_fastlock(lock, rt_spin_lock_slowlock);
++}
+
-+#define read_unlock_irqrestore(lock, flags) \
-+ do { \
-+ typecheck(unsigned long, flags); \
-+ (void) flags; \
-+ rt_read_unlock(lock); \
-+ } while (0)
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass)
++{
++ sleeping_lock_inc();
++ migrate_disable();
++ spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
++ rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock);
++}
++EXPORT_SYMBOL(rt_spin_lock_nested);
++#endif
+
-+#define write_unlock_irqrestore(lock, flags) \
-+ do { \
-+ typecheck(unsigned long, flags); \
-+ (void) flags; \
-+ rt_write_unlock(lock); \
-+ } while (0)
++void __lockfunc rt_spin_unlock(spinlock_t *lock)
++{
++ /* NOTE: we always pass in '1' for nested, for simplicity */
++ spin_release(&lock->dep_map, 1, _RET_IP_);
++ rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock);
++ migrate_enable();
++ sleeping_lock_dec();
++}
++EXPORT_SYMBOL(rt_spin_unlock);
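
For spinlock_t users nothing changes at the source level: with
PREEMPT_RT_FULL the regular spin_lock()/spin_unlock() wrappers end up in
rt_spin_lock()/rt_spin_unlock() above (via the RT variants of the spinlock
headers, which are not part of this hunk), so an ordinary critical section
like the hypothetical one below simply becomes preemptible and may sleep
on contention:

    static DEFINE_SPINLOCK(stats_lock);    /* illustrative */
    static unsigned long stats_count;

    static void stats_inc(void)
    {
            spin_lock(&stats_lock);
            stats_count++;
            spin_unlock(&stats_lock);
    }
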
+
-+#endif
-diff --git a/include/linux/rwlock_types.h b/include/linux/rwlock_types.h
-index cc0072e93e36..5317cd957292 100644
---- a/include/linux/rwlock_types.h
-+++ b/include/linux/rwlock_types.h
-@@ -1,6 +1,10 @@
- #ifndef __LINUX_RWLOCK_TYPES_H
- #define __LINUX_RWLOCK_TYPES_H
-
-+#if !defined(__LINUX_SPINLOCK_TYPES_H)
-+# error "Do not include directly, include spinlock_types.h"
-+#endif
++void __lockfunc __rt_spin_unlock(struct rt_mutex *lock)
++{
++ rt_spin_lock_fastunlock(lock, rt_spin_lock_slowunlock);
++}
++EXPORT_SYMBOL(__rt_spin_unlock);
++
++/*
++ * Wait for the lock to get unlocked: instead of polling for an unlock
++ * (like raw spinlocks do), we lock and unlock, to force the kernel to
++ * schedule if there's contention:
++ */
++void __lockfunc rt_spin_unlock_wait(spinlock_t *lock)
++{
++ spin_lock(lock);
++ spin_unlock(lock);
++}
++EXPORT_SYMBOL(rt_spin_unlock_wait);
++
++int __lockfunc rt_spin_trylock(spinlock_t *lock)
++{
++ int ret;
++
++ sleeping_lock_inc();
++ migrate_disable();
++ ret = __rt_mutex_trylock(&lock->lock);
++ if (ret) {
++ spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
++ } else {
++ migrate_enable();
++ sleeping_lock_dec();
++ }
++ return ret;
++}
++EXPORT_SYMBOL(rt_spin_trylock);
++
++int __lockfunc rt_spin_trylock_bh(spinlock_t *lock)
++{
++ int ret;
++
++ local_bh_disable();
++ ret = __rt_mutex_trylock(&lock->lock);
++ if (ret) {
++ sleeping_lock_inc();
++ migrate_disable();
++ spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
++ } else
++ local_bh_enable();
++ return ret;
++}
++EXPORT_SYMBOL(rt_spin_trylock_bh);
++
++int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
++{
++ int ret;
+
- /*
- * include/linux/rwlock_types.h - generic rwlock type definitions
- * and initializers
-diff --git a/include/linux/rwlock_types_rt.h b/include/linux/rwlock_types_rt.h
-new file mode 100644
-index 000000000000..51b28d775fe1
---- /dev/null
-+++ b/include/linux/rwlock_types_rt.h
-@@ -0,0 +1,33 @@
-+#ifndef __LINUX_RWLOCK_TYPES_RT_H
-+#define __LINUX_RWLOCK_TYPES_RT_H
++ *flags = 0;
++ ret = __rt_mutex_trylock(&lock->lock);
++ if (ret) {
++ sleeping_lock_inc();
++ migrate_disable();
++ spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(rt_spin_trylock_irqsave);
+
-+#ifndef __LINUX_SPINLOCK_TYPES_H
-+#error "Do not include directly. Include spinlock_types.h instead"
-+#endif
++int atomic_dec_and_spin_lock(atomic_t *atomic, spinlock_t *lock)
++{
++ /* Subtract 1 from counter unless that drops it to 0 (i.e. it was 1) */
++ if (atomic_add_unless(atomic, -1, 1))
++ return 0;
++ rt_spin_lock(lock);
++ if (atomic_dec_and_test(atomic))
++ return 1;
++ rt_spin_unlock(lock);
++ return 0;
++}
++EXPORT_SYMBOL(atomic_dec_and_spin_lock);
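(Illustration only, not part of the patch: a minimal sketch of the dec-and-lock
pattern served by atomic_dec_and_spin_lock(); obj, its refcnt field, obj_lock
and the list membership are hypothetical. A non-zero return means the count hit
zero and the lock is held by the caller.)

	if (atomic_dec_and_spin_lock(&obj->refcnt, &obj_lock)) {
		/* last reference dropped, obj_lock held: tear the object down */
		list_del(&obj->node);
		rt_spin_unlock(&obj_lock);
		kfree(obj);
	}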
+
-+/*
-+ * rwlocks - rtmutex which allows single reader recursion
-+ */
-+typedef struct {
-+ struct rt_mutex lock;
-+ int read_depth;
-+ unsigned int break_lock;
++void
++__rt_spin_lock_init(spinlock_t *lock, const char *name, struct lock_class_key *key)
++{
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+ struct lockdep_map dep_map;
++ /*
++ * Make sure we are not reinitializing a held lock:
++ */
++ debug_check_no_locks_freed((void *)lock, sizeof(*lock));
++ lockdep_init_map(&lock->dep_map, name, key, 0);
+#endif
-+} rwlock_t;
++}
++EXPORT_SYMBOL(__rt_spin_lock_init);
+
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+# define RW_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
-+#else
-+# define RW_DEP_MAP_INIT(lockname)
-+#endif
++#endif /* PREEMPT_RT_FULL */
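(Illustration only, not part of the patch: with PREEMPT_RT_FULL enabled the
ordinary spinlock_t API is assumed to map onto the functions above via
spinlock_rt.h, so existing callers keep their source unchanged while the
critical section becomes preemptible and PI-aware; 's' is hypothetical.)

	static DEFINE_SPINLOCK(s);

	spin_lock(&s);		/* ends up in rt_spin_lock(): may sleep, PI-boosted */
	/* ... critical section, preemptible on RT ... */
	spin_unlock(&s);	/* ends up in rt_spin_unlock() */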
+
-+#define __RW_LOCK_UNLOCKED(name) \
-+ { .lock = __RT_MUTEX_INITIALIZER_SAVE_STATE(name.lock), \
-+ RW_DEP_MAP_INIT(name) }
++#ifdef CONFIG_PREEMPT_RT_FULL
++static inline int __sched
++__mutex_lock_check_stamp(struct rt_mutex *lock, struct ww_acquire_ctx *ctx)
++{
++ struct ww_mutex *ww = container_of(lock, struct ww_mutex, base.lock);
++ struct ww_acquire_ctx *hold_ctx = ACCESS_ONCE(ww->ctx);
+
-+#define DEFINE_RWLOCK(name) \
-+ rwlock_t name = __RW_LOCK_UNLOCKED(name)
++ if (!hold_ctx)
++ return 0;
+
-+#endif
-diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
-index dd1d14250340..8e1f44ff1f2f 100644
---- a/include/linux/rwsem.h
-+++ b/include/linux/rwsem.h
-@@ -19,6 +19,10 @@
- #include <linux/osq_lock.h>
- #endif
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+#include <linux/rwsem_rt.h>
-+#else /* PREEMPT_RT_FULL */
++ if (unlikely(ctx == hold_ctx))
++ return -EALREADY;
+
- struct rw_semaphore;
-
- #ifdef CONFIG_RWSEM_GENERIC_SPINLOCK
-@@ -184,4 +188,6 @@ extern void up_read_non_owner(struct rw_semaphore *sem);
- # define up_read_non_owner(sem) up_read(sem)
- #endif
-
-+#endif /* !PREEMPT_RT_FULL */
++ if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
++ (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
++#ifdef CONFIG_DEBUG_MUTEXES
++ DEBUG_LOCKS_WARN_ON(ctx->contending_lock);
++ ctx->contending_lock = ww;
++#endif
++ return -EDEADLK;
++ }
+
- #endif /* _LINUX_RWSEM_H */
-diff --git a/include/linux/rwsem_rt.h b/include/linux/rwsem_rt.h
-new file mode 100644
-index 000000000000..e26bd95a57c3
---- /dev/null
-+++ b/include/linux/rwsem_rt.h
-@@ -0,0 +1,167 @@
-+#ifndef _LINUX_RWSEM_RT_H
-+#define _LINUX_RWSEM_RT_H
++ return 0;
++}
++#else
++static inline int __sched
++__mutex_lock_check_stamp(struct rt_mutex *lock, struct ww_acquire_ctx *ctx)
++{
++ BUG();
++ return 0;
++}
+
-+#ifndef _LINUX_RWSEM_H
-+#error "Include rwsem.h"
+#endif
+
-+/*
-+ * RW-semaphores are a spinlock plus a reader-depth count.
-+ *
-+ * Note that the semantics are different from the usual
-+ * Linux rw-sems, in PREEMPT_RT mode we do not allow
-+ * multiple readers to hold the lock at once, we only allow
-+ * a read-lock owner to read-lock recursively. This is
-+ * better for latency, makes the implementation inherently
-+ * fair and makes it simpler as well.
-+ */
++static inline int
++try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
++ struct rt_mutex_waiter *waiter)
++{
++ return __try_to_take_rt_mutex(lock, task, waiter, STEAL_NORMAL);
++}
+
-+#include <linux/rtmutex.h>
+ /*
+ * Task blocks on lock.
+ *
+@@ -951,6 +1341,22 @@
+ return -EDEADLK;
+
+ raw_spin_lock(&task->pi_lock);
++ /*
++ * In the case of futex requeue PI, this will be a proxy
++ * lock. The task will wake unaware that it is enqueued on
++ * this lock. Avoid blocking on two locks and corrupting
++ * pi_blocked_on via the PI_WAKEUP_INPROGRESS
++ * flag. futex_wait_requeue_pi() sets this when it wakes up
++ * before requeue (due to a signal or timeout). Do not enqueue
++ * the task if PI_WAKEUP_INPROGRESS is set.
++ */
++ if (task != current && task->pi_blocked_on == PI_WAKEUP_INPROGRESS) {
++ raw_spin_unlock(&task->pi_lock);
++ return -EAGAIN;
++ }
+
-+struct rw_semaphore {
-+ struct rt_mutex lock;
-+ int read_depth;
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+ struct lockdep_map dep_map;
-+#endif
-+};
++ BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on));
+
-+#define __RWSEM_INITIALIZER(name) \
-+ { .lock = __RT_MUTEX_INITIALIZER(name.lock), \
-+ RW_DEP_MAP_INIT(name) }
+ waiter->task = task;
+ waiter->lock = lock;
+ waiter->prio = task->prio;
+@@ -974,7 +1380,7 @@
+ rt_mutex_enqueue_pi(owner, waiter);
+
+ rt_mutex_adjust_prio(owner);
+- if (owner->pi_blocked_on)
++ if (rt_mutex_real_waiter(owner->pi_blocked_on))
+ chain_walk = 1;
+ } else if (rt_mutex_cond_detect_deadlock(waiter, chwalk)) {
+ chain_walk = 1;
+@@ -1016,6 +1422,7 @@
+ * Called with lock->wait_lock held and interrupts disabled.
+ */
+ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
++ struct wake_q_head *wake_sleeper_q,
+ struct rt_mutex *lock)
+ {
+ struct rt_mutex_waiter *waiter;
+@@ -1055,7 +1462,10 @@
+ * Pairs with preempt_enable() in rt_mutex_postunlock();
+ */
+ preempt_disable();
+- wake_q_add(wake_q, waiter->task);
++ if (waiter->savestate)
++ wake_q_add_sleeper(wake_sleeper_q, waiter->task);
++ else
++ wake_q_add(wake_q, waiter->task);
+ raw_spin_unlock(&current->pi_lock);
+ }
+
+@@ -1070,7 +1480,7 @@
+ {
+ bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock));
+ struct task_struct *owner = rt_mutex_owner(lock);
+- struct rt_mutex *next_lock;
++ struct rt_mutex *next_lock = NULL;
+
+ lockdep_assert_held(&lock->wait_lock);
+
+@@ -1096,7 +1506,8 @@
+ rt_mutex_adjust_prio(owner);
+
+ /* Store the lock on which owner is blocked or NULL */
+- next_lock = task_blocked_on_lock(owner);
++ if (rt_mutex_real_waiter(owner->pi_blocked_on))
++ next_lock = task_blocked_on_lock(owner);
+
+ raw_spin_unlock(&owner->pi_lock);
+
+@@ -1132,26 +1543,28 @@
+ raw_spin_lock_irqsave(&task->pi_lock, flags);
+
+ waiter = task->pi_blocked_on;
+- if (!waiter || rt_mutex_waiter_equal(waiter, task_to_waiter(task))) {
++ if (!rt_mutex_real_waiter(waiter) ||
++ rt_mutex_waiter_equal(waiter, task_to_waiter(task))) {
+ raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+ return;
+ }
+ next_lock = waiter->lock;
+- raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+
+ /* gets dropped in rt_mutex_adjust_prio_chain()! */
+ get_task_struct(task);
+
++ raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+ rt_mutex_adjust_prio_chain(task, RT_MUTEX_MIN_CHAINWALK, NULL,
+ next_lock, NULL, task);
+ }
+
+-void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter)
++void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter, bool savestate)
+ {
+ debug_rt_mutex_init_waiter(waiter);
+ RB_CLEAR_NODE(&waiter->pi_tree_entry);
+ RB_CLEAR_NODE(&waiter->tree_entry);
+ waiter->task = NULL;
++ waiter->savestate = savestate;
+ }
+
+ /**
+@@ -1167,7 +1580,8 @@
+ static int __sched
+ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+- struct rt_mutex_waiter *waiter)
++ struct rt_mutex_waiter *waiter,
++ struct ww_acquire_ctx *ww_ctx)
+ {
+ int ret = 0;
+
+@@ -1176,16 +1590,17 @@
+ if (try_to_take_rt_mutex(lock, current, waiter))
+ break;
+
+- /*
+- * TASK_INTERRUPTIBLE checks for signals and
+- * timeout. Ignored otherwise.
+- */
+- if (likely(state == TASK_INTERRUPTIBLE)) {
+- /* Signal pending? */
+- if (signal_pending(current))
+- ret = -EINTR;
+- if (timeout && !timeout->task)
+- ret = -ETIMEDOUT;
++ if (timeout && !timeout->task) {
++ ret = -ETIMEDOUT;
++ break;
++ }
++ if (signal_pending_state(state, current)) {
++ ret = -EINTR;
++ break;
++ }
+
-+#define DECLARE_RWSEM(lockname) \
-+ struct rw_semaphore lockname = __RWSEM_INITIALIZER(lockname)
++ if (ww_ctx && ww_ctx->acquired > 0) {
++ ret = __mutex_lock_check_stamp(lock, ww_ctx);
+ if (ret)
+ break;
+ }
+@@ -1224,33 +1639,104 @@
+ }
+ }
+
+-/*
+- * Slow path lock function:
+- */
+-static int __sched
+-rt_mutex_slowlock(struct rt_mutex *lock, int state,
+- struct hrtimer_sleeper *timeout,
+- enum rtmutex_chainwalk chwalk)
++static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
++ struct ww_acquire_ctx *ww_ctx)
+ {
+- struct rt_mutex_waiter waiter;
+- unsigned long flags;
+- int ret = 0;
++#ifdef CONFIG_DEBUG_MUTEXES
++ /*
++ * If this WARN_ON triggers, you used ww_mutex_lock to acquire,
++ * but released with a normal mutex_unlock in this call.
++ *
++ * This should never happen; always use ww_mutex_unlock().
++ */
++ DEBUG_LOCKS_WARN_ON(ww->ctx);
+
+- rt_mutex_init_waiter(&waiter);
++ /*
++ * Not quite done after calling ww_acquire_done()?
++ */
++ DEBUG_LOCKS_WARN_ON(ww_ctx->done_acquire);
+
-+extern void __rt_rwsem_init(struct rw_semaphore *rwsem, const char *name,
-+ struct lock_class_key *key);
++ if (ww_ctx->contending_lock) {
++ /*
++ * After -EDEADLK you tried to
++ * acquire a different ww_mutex? Bad!
++ */
++ DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock != ww);
+
-+#define __rt_init_rwsem(sem, name, key) \
-+ do { \
-+ rt_mutex_init(&(sem)->lock); \
-+ __rt_rwsem_init((sem), (name), (key));\
-+ } while (0)
++ /*
++ * You called ww_mutex_lock after receiving -EDEADLK,
++ * but 'forgot' to unlock everything else first?
++ */
++ DEBUG_LOCKS_WARN_ON(ww_ctx->acquired > 0);
++ ww_ctx->contending_lock = NULL;
++ }
+
+ /*
+- * Technically we could use raw_spin_[un]lock_irq() here, but this can
+- * be called in early boot if the cmpxchg() fast path is disabled
+- * (debug, no architecture support). In this case we will acquire the
+- * rtmutex with lock->wait_lock held. But we cannot unconditionally
+- * enable interrupts in that early boot case. So we need to use the
+- * irqsave/restore variants.
++ * Naughty, using a different class will lead to undefined behavior!
+ */
+- raw_spin_lock_irqsave(&lock->wait_lock, flags);
++ DEBUG_LOCKS_WARN_ON(ww_ctx->ww_class != ww->ww_class);
++#endif
++ ww_ctx->acquired++;
++}
+
-+#define __init_rwsem(sem, name, key) __rt_init_rwsem(sem, name, key)
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void ww_mutex_account_lock(struct rt_mutex *lock,
++ struct ww_acquire_ctx *ww_ctx)
++{
++ struct ww_mutex *ww = container_of(lock, struct ww_mutex, base.lock);
++ struct rt_mutex_waiter *waiter, *n;
+
-+# define rt_init_rwsem(sem) \
-+do { \
-+ static struct lock_class_key __key; \
-+ \
-+ __rt_init_rwsem((sem), #sem, &__key); \
-+} while (0)
++ /*
++ * This branch gets optimized out for the common case,
++ * and is only important for ww_mutex_lock.
++ */
++ ww_mutex_lock_acquired(ww, ww_ctx);
++ ww->ctx = ww_ctx;
+
-+extern void rt_down_write(struct rw_semaphore *rwsem);
-+extern int rt_down_write_killable(struct rw_semaphore *rwsem);
-+extern void rt_down_read_nested(struct rw_semaphore *rwsem, int subclass);
-+extern void rt_down_write_nested(struct rw_semaphore *rwsem, int subclass);
-+extern int rt_down_write_killable_nested(struct rw_semaphore *rwsem,
-+ int subclass);
-+extern void rt_down_write_nested_lock(struct rw_semaphore *rwsem,
-+ struct lockdep_map *nest);
-+extern void rt__down_read(struct rw_semaphore *rwsem);
-+extern void rt_down_read(struct rw_semaphore *rwsem);
-+extern int rt_down_write_trylock(struct rw_semaphore *rwsem);
-+extern int rt__down_read_trylock(struct rw_semaphore *rwsem);
-+extern int rt_down_read_trylock(struct rw_semaphore *rwsem);
-+extern void __rt_up_read(struct rw_semaphore *rwsem);
-+extern void rt_up_read(struct rw_semaphore *rwsem);
-+extern void rt_up_write(struct rw_semaphore *rwsem);
-+extern void rt_downgrade_write(struct rw_semaphore *rwsem);
-+
-+#define init_rwsem(sem) rt_init_rwsem(sem)
-+#define rwsem_is_locked(s) rt_mutex_is_locked(&(s)->lock)
++ /*
++ * Give any possible sleeping processes the chance to wake up,
++ * so they can recheck if they have to back off.
++ */
++ rbtree_postorder_for_each_entry_safe(waiter, n, &lock->waiters.rb_root,
++ tree_entry) {
++ /* XXX debug rt mutex waiter wakeup */
+
-+static inline int rwsem_is_contended(struct rw_semaphore *sem)
-+{
-+ /* rt_mutex_has_waiters() */
-+ return !RB_EMPTY_ROOT(&sem->lock.waiters);
++ BUG_ON(waiter->lock != lock);
++ rt_mutex_wake_waiter(waiter);
++ }
+}
+
-+static inline void __down_read(struct rw_semaphore *sem)
-+{
-+ rt__down_read(sem);
-+}
++#else
+
-+static inline void down_read(struct rw_semaphore *sem)
++static void ww_mutex_account_lock(struct rt_mutex *lock,
++ struct ww_acquire_ctx *ww_ctx)
+{
-+ rt_down_read(sem);
++ BUG();
+}
++#endif
+
-+static inline int __down_read_trylock(struct rw_semaphore *sem)
++int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state,
++ struct hrtimer_sleeper *timeout,
++ enum rtmutex_chainwalk chwalk,
++ struct ww_acquire_ctx *ww_ctx,
++ struct rt_mutex_waiter *waiter)
+{
-+ return rt__down_read_trylock(sem);
-+}
++ int ret;
+
-+static inline int down_read_trylock(struct rw_semaphore *sem)
-+{
-+ return rt_down_read_trylock(sem);
-+}
++#ifdef CONFIG_PREEMPT_RT_FULL
++ if (ww_ctx) {
++ struct ww_mutex *ww;
+
-+static inline void down_write(struct rw_semaphore *sem)
-+{
-+ rt_down_write(sem);
++ ww = container_of(lock, struct ww_mutex, base.lock);
++ if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
++ return -EALREADY;
++ }
++#endif
+
+ /* Try to acquire the lock again: */
+ if (try_to_take_rt_mutex(lock, current, NULL)) {
+- raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
++ if (ww_ctx)
++ ww_mutex_account_lock(lock, ww_ctx);
+ return 0;
+ }
+
+@@ -1260,17 +1746,27 @@
+ if (unlikely(timeout))
+ hrtimer_start_expires(&timeout->timer, HRTIMER_MODE_ABS);
+
+- ret = task_blocks_on_rt_mutex(lock, &waiter, current, chwalk);
++ ret = task_blocks_on_rt_mutex(lock, waiter, current, chwalk);
+
+- if (likely(!ret))
++ if (likely(!ret)) {
+ /* sleep on the mutex */
+- ret = __rt_mutex_slowlock(lock, state, timeout, &waiter);
++ ret = __rt_mutex_slowlock(lock, state, timeout, waiter,
++ ww_ctx);
++ } else if (ww_ctx) {
++ /* ww_mutex received EDEADLK, let it become EALREADY */
++ ret = __mutex_lock_check_stamp(lock, ww_ctx);
++ BUG_ON(!ret);
++ }
+
+ if (unlikely(ret)) {
+ __set_current_state(TASK_RUNNING);
+ if (rt_mutex_has_waiters(lock))
+- remove_waiter(lock, &waiter);
+- rt_mutex_handle_deadlock(ret, chwalk, &waiter);
++ remove_waiter(lock, waiter);
++ /* ww_mutex wants to report EDEADLK/EALREADY, so let it */
++ if (!ww_ctx)
++ rt_mutex_handle_deadlock(ret, chwalk, waiter);
++ } else if (ww_ctx) {
++ ww_mutex_account_lock(lock, ww_ctx);
+ }
+
+ /*
+@@ -1278,6 +1774,36 @@
+ * unconditionally. We might have to fix that up.
+ */
+ fixup_rt_mutex_waiters(lock);
++ return ret;
+}
+
-+static inline int down_write_killable(struct rw_semaphore *sem)
++/*
++ * Slow path lock function:
++ */
++static int __sched
++rt_mutex_slowlock(struct rt_mutex *lock, int state,
++ struct hrtimer_sleeper *timeout,
++ enum rtmutex_chainwalk chwalk,
++ struct ww_acquire_ctx *ww_ctx)
+{
-+ return rt_down_write_killable(sem);
-+}
++ struct rt_mutex_waiter waiter;
++ unsigned long flags;
++ int ret = 0;
+
-+static inline int down_write_trylock(struct rw_semaphore *sem)
-+{
-+ return rt_down_write_trylock(sem);
-+}
++ rt_mutex_init_waiter(&waiter, false);
+
-+static inline void __up_read(struct rw_semaphore *sem)
-+{
-+ __rt_up_read(sem);
-+}
++ /*
++ * Technically we could use raw_spin_[un]lock_irq() here, but this can
++ * be called in early boot if the cmpxchg() fast path is disabled
++ * (debug, no architecture support). In this case we will acquire the
++ * rtmutex with lock->wait_lock held. But we cannot unconditionally
++ * enable interrupts in that early boot case. So we need to use the
++ * irqsave/restore variants.
++ */
++ raw_spin_lock_irqsave(&lock->wait_lock, flags);
+
-+static inline void up_read(struct rw_semaphore *sem)
-+{
-+ rt_up_read(sem);
-+}
++ ret = rt_mutex_slowlock_locked(lock, state, timeout, chwalk, ww_ctx,
++ &waiter);
+
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+
+@@ -1338,7 +1864,8 @@
+ * Return whether the current task needs to call rt_mutex_postunlock().
+ */
+ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
+- struct wake_q_head *wake_q)
++ struct wake_q_head *wake_q,
++ struct wake_q_head *wake_sleeper_q)
+ {
+ unsigned long flags;
+
+@@ -1392,7 +1919,7 @@
+ *
+ * Queue the next waiter for wakeup once we release the wait_lock.
+ */
+- mark_wakeup_next_waiter(wake_q, lock);
++ mark_wakeup_next_waiter(wake_q, wake_sleeper_q, lock);
+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+
+ return true; /* call rt_mutex_postunlock() */
+@@ -1406,29 +1933,45 @@
+ */
+ static inline int
+ rt_mutex_fastlock(struct rt_mutex *lock, int state,
++ struct ww_acquire_ctx *ww_ctx,
+ int (*slowfn)(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+- enum rtmutex_chainwalk chwalk))
++ enum rtmutex_chainwalk chwalk,
++ struct ww_acquire_ctx *ww_ctx))
+ {
+ if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
+ return 0;
+
+- return slowfn(lock, state, NULL, RT_MUTEX_MIN_CHAINWALK);
++ /*
++ * If rt_mutex blocks, the function sched_submit_work will not call
++ * blk_schedule_flush_plug (because tsk_is_pi_blocked would be true).
++ * We must call blk_schedule_flush_plug() here; otherwise a deadlock
++ * in device mapper may happen.
++ */
++ if (unlikely(blk_needs_flush_plug(current)))
++ blk_schedule_flush_plug(current);
+
-+static inline void up_write(struct rw_semaphore *sem)
-+{
-+ rt_up_write(sem);
-+}
++ return slowfn(lock, state, NULL, RT_MUTEX_MIN_CHAINWALK, ww_ctx);
+ }
+
+ static inline int
+ rt_mutex_timed_fastlock(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+ enum rtmutex_chainwalk chwalk,
++ struct ww_acquire_ctx *ww_ctx,
+ int (*slowfn)(struct rt_mutex *lock, int state,
+ struct hrtimer_sleeper *timeout,
+- enum rtmutex_chainwalk chwalk))
++ enum rtmutex_chainwalk chwalk,
++ struct ww_acquire_ctx *ww_ctx))
+ {
+ if (chwalk == RT_MUTEX_MIN_CHAINWALK &&
+ likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
+ return 0;
+
+- return slowfn(lock, state, timeout, chwalk);
++ if (unlikely(blk_needs_flush_plug(current)))
++ blk_schedule_flush_plug(current);
+
-+static inline void downgrade_write(struct rw_semaphore *sem)
-+{
-+ rt_downgrade_write(sem);
++ return slowfn(lock, state, timeout, chwalk, ww_ctx);
+ }
+
+ static inline int
+@@ -1444,9 +1987,11 @@
+ /*
+ * Performs the wakeup of the top-waiter and re-enables preemption.
+ */
+-void rt_mutex_postunlock(struct wake_q_head *wake_q)
++void rt_mutex_postunlock(struct wake_q_head *wake_q,
++ struct wake_q_head *wake_sleeper_q)
+ {
+ wake_up_q(wake_q);
++ wake_up_q_sleeper(wake_sleeper_q);
+
+ /* Pairs with preempt_disable() in rt_mutex_slowunlock() */
+ preempt_enable();
+@@ -1455,15 +2000,40 @@
+ static inline void
+ rt_mutex_fastunlock(struct rt_mutex *lock,
+ bool (*slowfn)(struct rt_mutex *lock,
+- struct wake_q_head *wqh))
++ struct wake_q_head *wqh,
++ struct wake_q_head *wq_sleeper))
+ {
+ DEFINE_WAKE_Q(wake_q);
++ DEFINE_WAKE_Q(wake_sleeper_q);
+
+ if (likely(rt_mutex_cmpxchg_release(lock, current, NULL)))
+ return;
+
+- if (slowfn(lock, &wake_q))
+- rt_mutex_postunlock(&wake_q);
++ if (slowfn(lock, &wake_q, &wake_sleeper_q))
++ rt_mutex_postunlock(&wake_q, &wake_sleeper_q);
+}
+
-+static inline void down_read_nested(struct rw_semaphore *sem, int subclass)
++int __sched __rt_mutex_lock_state(struct rt_mutex *lock, int state)
+{
-+ return rt_down_read_nested(sem, subclass);
++ might_sleep();
++ return rt_mutex_fastlock(lock, state, NULL, rt_mutex_slowlock);
+}
+
-+static inline void down_write_nested(struct rw_semaphore *sem, int subclass)
++/**
++ * rt_mutex_lock_state - lock a rt_mutex with a given state
++ *
++ * @lock: The rt_mutex to be locked
++ * @state: The state to set when blocking on the rt_mutex
++ */
++static int __sched rt_mutex_lock_state(struct rt_mutex *lock, int state)
+{
-+ rt_down_write_nested(sem, subclass);
-+}
++ int ret;
+
-+static inline int down_write_killable_nested(struct rw_semaphore *sem,
-+ int subclass)
++ mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++ ret = __rt_mutex_lock_state(lock, state);
++ if (ret)
++ mutex_release(&lock->dep_map, 1, _RET_IP_);
++ return ret;
+ }
+
+ /**
+@@ -1473,10 +2043,7 @@
+ */
+ void __sched rt_mutex_lock(struct rt_mutex *lock)
+ {
+- might_sleep();
+-
+- mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+- rt_mutex_fastlock(lock, TASK_UNINTERRUPTIBLE, rt_mutex_slowlock);
++ rt_mutex_lock_state(lock, TASK_UNINTERRUPTIBLE);
+ }
+ EXPORT_SYMBOL_GPL(rt_mutex_lock);
+
+@@ -1491,16 +2058,7 @@
+ */
+ int __sched rt_mutex_lock_interruptible(struct rt_mutex *lock)
+ {
+- int ret;
+-
+- might_sleep();
+-
+- mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+- ret = rt_mutex_fastlock(lock, TASK_INTERRUPTIBLE, rt_mutex_slowlock);
+- if (ret)
+- mutex_release(&lock->dep_map, 1, _RET_IP_);
+-
+- return ret;
++ return rt_mutex_lock_state(lock, TASK_INTERRUPTIBLE);
+ }
+ EXPORT_SYMBOL_GPL(rt_mutex_lock_interruptible);
+
+@@ -1518,6 +2076,22 @@
+ }
+
+ /**
++ * rt_mutex_lock_killable - lock a rt_mutex killable
++ *
++ * @lock: the rt_mutex to be locked
++ *
++ * Returns:
++ * 0 on success
++ * -EINTR when interrupted by a fatal signal
++ */
++int __sched rt_mutex_lock_killable(struct rt_mutex *lock)
+{
-+ return rt_down_write_killable_nested(sem, subclass);
++ return rt_mutex_lock_state(lock, TASK_KILLABLE);
+}
++EXPORT_SYMBOL_GPL(rt_mutex_lock_killable);
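(Illustration only, not part of the patch: a hedged usage sketch of the
killable variant; dev, dev->lock and do_transfer() are hypothetical.)

	if (rt_mutex_lock_killable(&dev->lock))
		return -EINTR;		/* fatal signal received, lock not taken */
	ret = do_transfer(dev);		/* work done under the lock */
	rt_mutex_unlock(&dev->lock);
	return ret;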
+
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+static inline void down_write_nest_lock(struct rw_semaphore *sem,
-+ struct rw_semaphore *nest_lock)
++/**
+ * rt_mutex_timed_lock - lock a rt_mutex interruptible
+ * the timeout structure is provided
+ * by the caller
+@@ -1540,6 +2114,7 @@
+ mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+ ret = rt_mutex_timed_fastlock(lock, TASK_INTERRUPTIBLE, timeout,
+ RT_MUTEX_MIN_CHAINWALK,
++ NULL,
+ rt_mutex_slowlock);
+ if (ret)
+ mutex_release(&lock->dep_map, 1, _RET_IP_);
+@@ -1548,6 +2123,18 @@
+ }
+ EXPORT_SYMBOL_GPL(rt_mutex_timed_lock);
+
++int __sched __rt_mutex_trylock(struct rt_mutex *lock)
+{
-+ rt_down_write_nested_lock(sem, &nest_lock->dep_map);
-+}
-+
++#ifdef CONFIG_PREEMPT_RT_FULL
++ if (WARN_ON_ONCE(in_irq() || in_nmi()))
+#else
++ if (WARN_ON_ONCE(in_irq() || in_nmi() || in_serving_softirq()))
++#endif
++ return 0;
+
-+static inline void down_write_nest_lock(struct rw_semaphore *sem,
-+ struct rw_semaphore *nest_lock)
-+{
-+ rt_down_write_nested_lock(sem, NULL);
++ return rt_mutex_fasttrylock(lock, rt_mutex_slowtrylock);
+}
-+#endif
-+#endif
-diff --git a/include/linux/sched.h b/include/linux/sched.h
-index 75d9a57e212e..8cb7df0f56e3 100644
---- a/include/linux/sched.h
-+++ b/include/linux/sched.h
-@@ -26,6 +26,7 @@ struct sched_param {
- #include <linux/nodemask.h>
- #include <linux/mm_types.h>
- #include <linux/preempt.h>
-+#include <asm/kmap_types.h>
++
+ /**
+ * rt_mutex_trylock - try to lock a rt_mutex
+ *
+@@ -1563,10 +2150,7 @@
+ {
+ int ret;
- #include <asm/page.h>
- #include <asm/ptrace.h>
-@@ -243,10 +244,7 @@ extern char ___assert_task_state[1 - 2*!!(
- TASK_UNINTERRUPTIBLE | __TASK_STOPPED | \
- __TASK_TRACED | EXIT_ZOMBIE | EXIT_DEAD)
-
--#define task_is_traced(task) ((task->state & __TASK_TRACED) != 0)
- #define task_is_stopped(task) ((task->state & __TASK_STOPPED) != 0)
--#define task_is_stopped_or_traced(task) \
-- ((task->state & (__TASK_STOPPED | __TASK_TRACED)) != 0)
- #define task_contributes_to_load(task) \
- ((task->state & TASK_UNINTERRUPTIBLE) != 0 && \
- (task->flags & PF_FROZEN) == 0 && \
-@@ -312,6 +310,11 @@ extern char ___assert_task_state[1 - 2*!!(
+- if (WARN_ON_ONCE(in_irq() || in_nmi() || in_serving_softirq()))
+- return 0;
+-
+- ret = rt_mutex_fasttrylock(lock, rt_mutex_slowtrylock);
++ ret = __rt_mutex_trylock(lock);
+ if (ret)
+ mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
- #endif
+@@ -1574,6 +2158,11 @@
+ }
+ EXPORT_SYMBOL_GPL(rt_mutex_trylock);
-+#define __set_current_state_no_track(state_value) \
-+ do { current->state = (state_value); } while (0)
-+#define set_current_state_no_track(state_value) \
-+ set_mb(current->state, (state_value))
++void __sched __rt_mutex_unlock(struct rt_mutex *lock)
++{
++ rt_mutex_fastunlock(lock, rt_mutex_slowunlock);
++}
+
- /* Task command name length */
- #define TASK_COMM_LEN 16
+ /**
+ * rt_mutex_unlock - unlock a rt_mutex
+ *
+@@ -1582,16 +2171,13 @@
+ void __sched rt_mutex_unlock(struct rt_mutex *lock)
+ {
+ mutex_release(&lock->dep_map, 1, _RET_IP_);
+- rt_mutex_fastunlock(lock, rt_mutex_slowunlock);
++ __rt_mutex_unlock(lock);
+ }
+ EXPORT_SYMBOL_GPL(rt_mutex_unlock);
+
+-/**
+- * Futex variant, that since futex variants do not use the fast-path, can be
+- * simple and will not need to retry.
+- */
+-bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock,
+- struct wake_q_head *wake_q)
++static bool __sched __rt_mutex_unlock_common(struct rt_mutex *lock,
++ struct wake_q_head *wake_q,
++ struct wake_q_head *wq_sleeper)
+ {
+ lockdep_assert_held(&lock->wait_lock);
+
+@@ -1608,22 +2194,35 @@
+ * avoid inversion prior to the wakeup. preempt_disable()
+ * therein pairs with rt_mutex_postunlock().
+ */
+- mark_wakeup_next_waiter(wake_q, lock);
++ mark_wakeup_next_waiter(wake_q, wq_sleeper, lock);
-@@ -1013,8 +1016,18 @@ struct wake_q_head {
- struct wake_q_head name = { WAKE_Q_TAIL, &name.first }
+ return true; /* call postunlock() */
+ }
- extern void wake_q_add(struct wake_q_head *head,
-- struct task_struct *task);
--extern void wake_up_q(struct wake_q_head *head);
-+ struct task_struct *task);
-+extern void __wake_up_q(struct wake_q_head *head, bool sleeper);
-+
-+static inline void wake_up_q(struct wake_q_head *head)
++/**
++ * Futex variant which, since futex variants do not use the fast path, can be
++ * simple and will not need to retry.
++ */
++bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock,
++ struct wake_q_head *wake_q,
++ struct wake_q_head *wq_sleeper)
+{
-+ __wake_up_q(head, false);
++ return __rt_mutex_unlock_common(lock, wake_q, wq_sleeper);
+}
+
-+static inline void wake_up_q_sleeper(struct wake_q_head *head)
-+{
-+ __wake_up_q(head, true);
-+}
+ void __sched rt_mutex_futex_unlock(struct rt_mutex *lock)
+ {
+ DEFINE_WAKE_Q(wake_q);
++ DEFINE_WAKE_Q(wake_sleeper_q);
++ unsigned long flags;
+ bool postunlock;
- /*
- * sched-domains (multiprocessor balancing) declarations:
-@@ -1481,6 +1494,7 @@ struct task_struct {
- struct thread_info thread_info;
- #endif
- volatile long state; /* -1 unrunnable, 0 runnable, >0 stopped */
-+ volatile long saved_state; /* saved state for "spinlock sleepers" */
- void *stack;
- atomic_t usage;
- unsigned int flags; /* per process flags, defined below */
-@@ -1520,6 +1534,12 @@ struct task_struct {
- #endif
+- raw_spin_lock_irq(&lock->wait_lock);
+- postunlock = __rt_mutex_futex_unlock(lock, &wake_q);
+- raw_spin_unlock_irq(&lock->wait_lock);
++ raw_spin_lock_irqsave(&lock->wait_lock, flags);
++ postunlock = __rt_mutex_futex_unlock(lock, &wake_q, &wake_sleeper_q);
++ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
- unsigned int policy;
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ int migrate_disable;
-+# ifdef CONFIG_SCHED_DEBUG
-+ int migrate_disable_atomic;
-+# endif
-+#endif
- int nr_cpus_allowed;
- cpumask_t cpus_allowed;
+ if (postunlock)
+- rt_mutex_postunlock(&wake_q);
++ rt_mutex_postunlock(&wake_q, &wake_sleeper_q);
+ }
-@@ -1654,6 +1674,9 @@ struct task_struct {
+ /**
+@@ -1662,7 +2261,7 @@
+ if (name && key)
+ debug_rt_mutex_init(lock, name, key);
+ }
+-EXPORT_SYMBOL_GPL(__rt_mutex_init);
++EXPORT_SYMBOL(__rt_mutex_init);
- struct task_cputime cputime_expires;
- struct list_head cpu_timers[3];
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ struct task_struct *posix_timer_list;
+ /**
+ * rt_mutex_init_proxy_locked - initialize and lock a rt_mutex on behalf of a
+@@ -1682,6 +2281,14 @@
+ struct task_struct *proxy_owner)
+ {
+ __rt_mutex_init(lock, NULL, NULL);
++#ifdef CONFIG_DEBUG_SPINLOCK
++ /*
++ * Get another key class for the wait_lock. LOCK_PI and UNLOCK_PI
++ * hold the ->wait_lock of the proxy_lock while unlocking a sleeping
++ * lock.
++ */
++ raw_spin_lock_init(&lock->wait_lock);
+#endif
+ debug_rt_mutex_proxy_lock(lock, proxy_owner);
+ rt_mutex_set_owner(lock, proxy_owner);
+ }
+@@ -1714,6 +2321,34 @@
+ if (try_to_take_rt_mutex(lock, task, NULL))
+ return 1;
- /* process credentials */
- const struct cred __rcu *ptracer_cred; /* Tracer's credentials at attach */
-@@ -1685,10 +1708,15 @@ struct task_struct {
- /* signal handlers */
- struct signal_struct *signal;
- struct sighand_struct *sighand;
-+ struct sigqueue *sigqueue_cache;
-
- sigset_t blocked, real_blocked;
- sigset_t saved_sigmask; /* restored if set_restore_sigmask() was used */
- struct sigpending pending;
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ /* TODO: move me into ->restart_block ? */
-+ struct siginfo forced_info;
-+#endif
-
- unsigned long sas_ss_sp;
- size_t sas_ss_size;
-@@ -1917,6 +1945,12 @@ struct task_struct {
- /* bitmask and counter of trace recursion */
- unsigned long trace_recursion;
- #endif /* CONFIG_TRACING */
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+ u64 preempt_timestamp_hist;
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ long timer_offset;
-+#endif
-+#endif
- #ifdef CONFIG_KCOV
- /* Coverage collection mode enabled for this task (0 if disabled). */
- enum kcov_mode kcov_mode;
-@@ -1942,9 +1976,23 @@ struct task_struct {
- unsigned int sequential_io;
- unsigned int sequential_io_avg;
- #endif
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ struct rcu_head put_rcu;
-+ int softirq_nestcnt;
-+ unsigned int softirqs_raised;
-+#endif
+#ifdef CONFIG_PREEMPT_RT_FULL
-+# if defined CONFIG_HIGHMEM || defined CONFIG_X86_32
-+ int kmap_idx;
-+ pte_t kmap_pte[KM_TYPE_NR];
-+# endif
-+#endif
- #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
- unsigned long task_state_change;
- #endif
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ int xmit_recursion;
++ /*
++ * In PREEMPT_RT there's an added race.
++ * If the task that we are about to requeue times out, it can
++ * set PI_WAKEUP_INPROGRESS. This tells the requeue code
++ * to skip this task. But right after the task sets
++ * its pi_blocked_on to PI_WAKEUP_INPROGRESS it can then
++ * block on the spin_lock(&hb->lock), which in RT is an rtmutex.
++ * This will replace the PI_WAKEUP_INPROGRESS with the actual
++ * lock that it blocks on. We *must not* place this task
++ * on this proxy lock in that case.
++ *
++ * To prevent this race, we first take the task's pi_lock
++ * and check if it has updated its pi_blocked_on. If it has,
++ * we assume that it woke up and we return -EAGAIN.
++ * Otherwise, we set the task's pi_blocked_on to
++ * PI_REQUEUE_INPROGRESS, so that if the task is waking up
++ * it will know that we are in the process of requeuing it.
++ */
++ raw_spin_lock(&task->pi_lock);
++ if (task->pi_blocked_on) {
++ raw_spin_unlock(&task->pi_lock);
++ return -EAGAIN;
++ }
++ task->pi_blocked_on = PI_REQUEUE_INPROGRESS;
++ raw_spin_unlock(&task->pi_lock);
+#endif
- int pagefault_disabled;
- #ifdef CONFIG_MMU
- struct task_struct *oom_reaper_list;
-@@ -1984,14 +2032,6 @@ static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
- }
- #endif
-
--/* Future-safe accessor for struct task_struct's cpus_allowed. */
--#define tsk_cpus_allowed(tsk) (&(tsk)->cpus_allowed)
--
--static inline int tsk_nr_cpus_allowed(struct task_struct *p)
--{
-- return p->nr_cpus_allowed;
--}
--
- #define TNF_MIGRATED 0x01
- #define TNF_NO_GROUP 0x02
- #define TNF_SHARED 0x04
-@@ -2207,6 +2247,15 @@ extern struct pid *cad_pid;
- extern void free_task(struct task_struct *tsk);
- #define get_task_struct(tsk) do { atomic_inc(&(tsk)->usage); } while(0)
-
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+extern void __put_task_struct_cb(struct rcu_head *rhp);
+
-+static inline void put_task_struct(struct task_struct *t)
-+{
-+ if (atomic_dec_and_test(&t->usage))
-+ call_rcu(&t->put_rcu, __put_task_struct_cb);
-+}
-+#else
- extern void __put_task_struct(struct task_struct *t);
+ /* We enforce deadlock detection for futexes */
+ ret = task_blocks_on_rt_mutex(lock, waiter, task,
+ RT_MUTEX_FULL_CHAINWALK);
+@@ -1728,7 +2363,7 @@
+ ret = 0;
+ }
- static inline void put_task_struct(struct task_struct *t)
-@@ -2214,6 +2263,7 @@ static inline void put_task_struct(struct task_struct *t)
- if (atomic_dec_and_test(&t->usage))
- __put_task_struct(t);
- }
-+#endif
+- if (unlikely(ret))
++ if (ret && rt_mutex_has_waiters(lock))
+ remove_waiter(lock, waiter);
- struct task_struct *task_rcu_dereference(struct task_struct **ptask);
- struct task_struct *try_get_task_struct(struct task_struct **ptask);
-@@ -2255,6 +2305,7 @@ extern void thread_group_cputime_adjusted(struct task_struct *p, cputime_t *ut,
- /*
- * Per process flags
- */
-+#define PF_IN_SOFTIRQ 0x00000001 /* Task is serving softirq */
- #define PF_EXITING 0x00000004 /* getting shut down */
- #define PF_EXITPIDONE 0x00000008 /* pi exit done on shut down */
- #define PF_VCPU 0x00000010 /* I'm a virtual CPU */
-@@ -2423,6 +2474,10 @@ extern void do_set_cpus_allowed(struct task_struct *p,
-
- extern int set_cpus_allowed_ptr(struct task_struct *p,
- const struct cpumask *new_mask);
-+int migrate_me(void);
-+void tell_sched_cpu_down_begin(int cpu);
-+void tell_sched_cpu_down_done(int cpu);
+ debug_rt_mutex_print_deadlock(waiter);
+@@ -1803,17 +2438,36 @@
+ struct hrtimer_sleeper *to,
+ struct rt_mutex_waiter *waiter)
+ {
++ struct task_struct *tsk = current;
+ int ret;
+
+ raw_spin_lock_irq(&lock->wait_lock);
+ /* sleep on the mutex */
+ set_current_state(TASK_INTERRUPTIBLE);
+- ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter);
++ ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter, NULL);
+ /*
+ * try_to_take_rt_mutex() sets the waiter bit unconditionally. We might
+ * have to fix that up.
+ */
+ fixup_rt_mutex_waiters(lock);
++ /*
++ * RT has a problem here when the wait got interrupted by a timeout
++ * or a signal. task->pi_blocked_on is still set. The task must
++ * acquire the hash bucket lock when returning from this function.
++ *
++ * If the hash bucket lock is contended then the
++ * BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on)) in
++ * task_blocks_on_rt_mutex() will trigger. This can be avoided by
++ * clearing task->pi_blocked_on which removes the task from the
++ * boosting chain of the rtmutex. That's correct because the task
++ * is no longer blocked on it.
++ */
++ if (ret) {
++ raw_spin_lock(&tsk->pi_lock);
++ tsk->pi_blocked_on = NULL;
++ raw_spin_unlock(&tsk->pi_lock);
++ }
+
- #else
- static inline void do_set_cpus_allowed(struct task_struct *p,
- const struct cpumask *new_mask)
-@@ -2435,6 +2490,9 @@ static inline int set_cpus_allowed_ptr(struct task_struct *p,
- return -EINVAL;
- return 0;
- }
-+static inline int migrate_me(void) { return 0; }
-+static inline void tell_sched_cpu_down_begin(int cpu) { }
-+static inline void tell_sched_cpu_down_done(int cpu) { }
- #endif
+ raw_spin_unlock_irq(&lock->wait_lock);
- #ifdef CONFIG_NO_HZ_COMMON
-@@ -2673,6 +2731,7 @@ extern void xtime_update(unsigned long ticks);
+ return ret;
+@@ -1874,3 +2528,99 @@
- extern int wake_up_state(struct task_struct *tsk, unsigned int state);
- extern int wake_up_process(struct task_struct *tsk);
-+extern int wake_up_lock_sleeper(struct task_struct * tsk);
- extern void wake_up_new_task(struct task_struct *tsk);
- #ifdef CONFIG_SMP
- extern void kick_process(struct task_struct *tsk);
-@@ -2881,6 +2940,17 @@ static inline void mmdrop(struct mm_struct *mm)
- __mmdrop(mm);
+ return cleanup;
}
-
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+extern void __mmdrop_delayed(struct rcu_head *rhp);
-+static inline void mmdrop_delayed(struct mm_struct *mm)
++
++static inline int
++ww_mutex_deadlock_injection(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
-+ if (atomic_dec_and_test(&mm->mm_count))
-+ call_rcu(&mm->delayed_drop, __mmdrop_delayed);
-+}
-+#else
-+# define mmdrop_delayed(mm) mmdrop(mm)
++#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
++ unsigned tmp;
++
++ if (ctx->deadlock_inject_countdown-- == 0) {
++ tmp = ctx->deadlock_inject_interval;
++ if (tmp > UINT_MAX/4)
++ tmp = UINT_MAX;
++ else
++ tmp = tmp*2 + tmp + tmp/2;
++
++ ctx->deadlock_inject_interval = tmp;
++ ctx->deadlock_inject_countdown = tmp;
++ ctx->contending_lock = lock;
++
++ ww_mutex_unlock(lock);
++
++ return -EDEADLK;
++ }
+#endif
+
- static inline void mmdrop_async_fn(struct work_struct *work)
- {
- struct mm_struct *mm = container_of(work, struct mm_struct, async_put_work);
-@@ -3273,6 +3343,43 @@ static inline int test_tsk_need_resched(struct task_struct *tsk)
- return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED));
- }
-
-+#ifdef CONFIG_PREEMPT_LAZY
-+static inline void set_tsk_need_resched_lazy(struct task_struct *tsk)
-+{
-+ set_tsk_thread_flag(tsk,TIF_NEED_RESCHED_LAZY);
++ return 0;
+}
+
-+static inline void clear_tsk_need_resched_lazy(struct task_struct *tsk)
++#ifdef CONFIG_PREEMPT_RT_FULL
++int __sched
++ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
-+ clear_tsk_thread_flag(tsk,TIF_NEED_RESCHED_LAZY);
-+}
++ int ret;
+
-+static inline int test_tsk_need_resched_lazy(struct task_struct *tsk)
-+{
-+ return unlikely(test_tsk_thread_flag(tsk,TIF_NEED_RESCHED_LAZY));
-+}
++ might_sleep();
+
-+static inline int need_resched_lazy(void)
-+{
-+ return test_thread_flag(TIF_NEED_RESCHED_LAZY);
-+}
++ mutex_acquire_nest(&lock->base.dep_map, 0, 0,
++ ctx ? &ctx->dep_map : NULL, _RET_IP_);
++ ret = rt_mutex_slowlock(&lock->base.lock, TASK_INTERRUPTIBLE, NULL, 0,
++ ctx);
++ if (ret)
++ mutex_release(&lock->base.dep_map, 1, _RET_IP_);
++ else if (!ret && ctx && ctx->acquired > 1)
++ return ww_mutex_deadlock_injection(lock, ctx);
+
-+static inline int need_resched_now(void)
-+{
-+ return test_thread_flag(TIF_NEED_RESCHED);
++ return ret;
+}
++EXPORT_SYMBOL_GPL(ww_mutex_lock_interruptible);
+
-+#else
-+static inline void clear_tsk_need_resched_lazy(struct task_struct *tsk) { }
-+static inline int need_resched_lazy(void) { return 0; }
-+
-+static inline int need_resched_now(void)
++int __sched
++ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
-+ return test_thread_flag(TIF_NEED_RESCHED);
-+}
++ int ret;
+
-+#endif
++ might_sleep();
+
- static inline int restart_syscall(void)
- {
- set_tsk_thread_flag(current, TIF_SIGPENDING);
-@@ -3304,6 +3411,51 @@ static inline int signal_pending_state(long state, struct task_struct *p)
- return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
- }
-
-+static inline bool __task_is_stopped_or_traced(struct task_struct *task)
-+{
-+ if (task->state & (__TASK_STOPPED | __TASK_TRACED))
-+ return true;
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ if (task->saved_state & (__TASK_STOPPED | __TASK_TRACED))
-+ return true;
-+#endif
-+ return false;
++ mutex_acquire_nest(&lock->base.dep_map, 0, 0,
++ ctx ? &ctx->dep_map : NULL, _RET_IP_);
++ ret = rt_mutex_slowlock(&lock->base.lock, TASK_UNINTERRUPTIBLE, NULL, 0,
++ ctx);
++ if (ret)
++ mutex_release(&lock->base.dep_map, 1, _RET_IP_);
++ else if (!ret && ctx && ctx->acquired > 1)
++ return ww_mutex_deadlock_injection(lock, ctx);
++
++ return ret;
+}
++EXPORT_SYMBOL_GPL(ww_mutex_lock);
+
-+static inline bool task_is_stopped_or_traced(struct task_struct *task)
++void __sched ww_mutex_unlock(struct ww_mutex *lock)
+{
-+ bool traced_stopped;
-+
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ unsigned long flags;
++ int nest = !!lock->ctx;
+
-+ raw_spin_lock_irqsave(&task->pi_lock, flags);
-+ traced_stopped = __task_is_stopped_or_traced(task);
-+ raw_spin_unlock_irqrestore(&task->pi_lock, flags);
-+#else
-+ traced_stopped = __task_is_stopped_or_traced(task);
++ /*
++ * The unlocking fastpath is the 0->1 transition from 'locked'
++ * into 'unlocked' state:
++ */
++ if (nest) {
++#ifdef CONFIG_DEBUG_MUTEXES
++ DEBUG_LOCKS_WARN_ON(!lock->ctx->acquired);
+#endif
-+ return traced_stopped;
++ if (lock->ctx->acquired > 0)
++ lock->ctx->acquired--;
++ lock->ctx = NULL;
++ }
++
++ mutex_release(&lock->base.dep_map, nest, _RET_IP_);
++ __rt_mutex_unlock(&lock->base.lock);
+}
++EXPORT_SYMBOL(ww_mutex_unlock);
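(Illustration only, not part of the patch: a minimal sketch of the wound/wait
acquire protocol that the -EDEADLK/-EALREADY handling above serves; m1, m2 and
my_ww_class are hypothetical. A loser backs off completely and retries; real
callers typically take the contended lock with ww_mutex_lock_slow() before
retrying the others.)

	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &my_ww_class);
retry:
	ww_mutex_lock(&m1, &ctx);		/* first lock cannot return -EDEADLK */
	if (ww_mutex_lock(&m2, &ctx) == -EDEADLK) {
		ww_mutex_unlock(&m1);		/* we lost: drop everything we hold */
		goto retry;			/* and start over */
	}
	ww_acquire_done(&ctx);

	/* ... both locks held ... */

	ww_mutex_unlock(&m2);
	ww_mutex_unlock(&m1);
	ww_acquire_fini(&ctx);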
+
-+static inline bool task_is_traced(struct task_struct *task)
++int __rt_mutex_owner_current(struct rt_mutex *lock)
+{
-+ bool traced = false;
-+
-+ if (task->state & __TASK_TRACED)
-+ return true;
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ /* in case the task is sleeping on tasklist_lock */
-+ raw_spin_lock_irq(&task->pi_lock);
-+ if (task->state & __TASK_TRACED)
-+ traced = true;
-+ else if (task->saved_state & __TASK_TRACED)
-+ traced = true;
-+ raw_spin_unlock_irq(&task->pi_lock);
-+#endif
-+ return traced;
++ return rt_mutex_owner(lock) == current;
+}
-+
- /*
- * cond_resched() and cond_resched_lock(): latency reduction via
- * explicit rescheduling in places that are safe. The return
-@@ -3329,12 +3481,16 @@ extern int __cond_resched_lock(spinlock_t *lock);
- __cond_resched_lock(lock); \
- })
-
-+#ifndef CONFIG_PREEMPT_RT_FULL
- extern int __cond_resched_softirq(void);
-
- #define cond_resched_softirq() ({ \
- ___might_sleep(__FILE__, __LINE__, SOFTIRQ_DISABLE_OFFSET); \
- __cond_resched_softirq(); \
- })
-+#else
-+# define cond_resched_softirq() cond_resched()
++EXPORT_SYMBOL(__rt_mutex_owner_current);
+#endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/rtmutex_common.h linux-4.14/kernel/locking/rtmutex_common.h
+--- linux-4.14.orig/kernel/locking/rtmutex_common.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/locking/rtmutex_common.h 2018-09-05 11:05:07.000000000 +0200
+@@ -15,6 +15,7 @@
- static inline void cond_resched_rcu(void)
- {
-@@ -3509,6 +3665,31 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+ #include <linux/rtmutex.h>
+ #include <linux/sched/wake_q.h>
++#include <linux/sched/debug.h>
- #endif /* CONFIG_SMP */
+ /*
+ * This is the control structure for tasks blocked on a rt_mutex,
+@@ -29,6 +30,7 @@
+ struct rb_node pi_tree_entry;
+ struct task_struct *task;
+ struct rt_mutex *lock;
++ bool savestate;
+ #ifdef CONFIG_DEBUG_RT_MUTEXES
+ unsigned long ip;
+ struct pid *deadlock_task_pid;
+@@ -129,12 +131,15 @@
+ /*
+ * PI-futex support (proxy locking functions, etc.):
+ */
++#define PI_WAKEUP_INPROGRESS ((struct rt_mutex_waiter *) 1)
++#define PI_REQUEUE_INPROGRESS ((struct rt_mutex_waiter *) 2)
++
+ extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock);
+ extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
+ struct task_struct *proxy_owner);
+ extern void rt_mutex_proxy_unlock(struct rt_mutex *lock,
+ struct task_struct *proxy_owner);
+-extern void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter);
++extern void rt_mutex_init_waiter(struct rt_mutex_waiter *waiter, bool savestate);
+ extern int __rt_mutex_start_proxy_lock(struct rt_mutex *lock,
+ struct rt_mutex_waiter *waiter,
+ struct task_struct *task);
+@@ -152,9 +157,27 @@
+
+ extern void rt_mutex_futex_unlock(struct rt_mutex *lock);
+ extern bool __rt_mutex_futex_unlock(struct rt_mutex *lock,
+- struct wake_q_head *wqh);
++ struct wake_q_head *wqh,
++ struct wake_q_head *wq_sleeper);
+
+-extern void rt_mutex_postunlock(struct wake_q_head *wake_q);
++extern void rt_mutex_postunlock(struct wake_q_head *wake_q,
++ struct wake_q_head *wake_sleeper_q);
++
++/* RW semaphore special interface */
++struct ww_acquire_ctx;
++
++extern int __rt_mutex_lock_state(struct rt_mutex *lock, int state);
++extern int __rt_mutex_trylock(struct rt_mutex *lock);
++extern void __rt_mutex_unlock(struct rt_mutex *lock);
++int __sched rt_mutex_slowlock_locked(struct rt_mutex *lock, int state,
++ struct hrtimer_sleeper *timeout,
++ enum rtmutex_chainwalk chwalk,
++ struct ww_acquire_ctx *ww_ctx,
++ struct rt_mutex_waiter *waiter);
++void __sched rt_spin_lock_slowlock_locked(struct rt_mutex *lock,
++ struct rt_mutex_waiter *waiter,
++ unsigned long flags);
++void __sched rt_spin_lock_slowunlock(struct rt_mutex *lock);
-+static inline int __migrate_disabled(struct task_struct *p)
+ #ifdef CONFIG_DEBUG_RT_MUTEXES
+ # include "rtmutex-debug.h"
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/rwlock-rt.c linux-4.14/kernel/locking/rwlock-rt.c
+--- linux-4.14.orig/kernel/locking/rwlock-rt.c 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/kernel/locking/rwlock-rt.c 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,378 @@
++/*
++ */
++#include <linux/sched/debug.h>
++#include <linux/export.h>
++
++#include "rtmutex_common.h"
++#include <linux/rwlock_types_rt.h>
++
++/*
++ * RT-specific reader/writer locks
++ *
++ * write_lock()
++ * 1) Lock lock->rtmutex
++ * 2) Remove the reader BIAS to force readers into the slow path
++ * 3) Wait until all readers have left the critical region
++ * 4) Mark it write locked
++ *
++ * write_unlock()
++ * 1) Remove the write locked marker
++ * 2) Set the reader BIAS so readers can use the fast path again
++ * 3) Unlock lock->rtmutex to release blocked readers
++ *
++ * read_lock()
++ * 1) Try fast path acquisition (reader BIAS is set)
++ * 2) Take lock->rtmutex.wait_lock which protects the writelocked flag
++ * 3) If !writelocked, acquire it for read
++ * 4) If writelocked, block on lock->rtmutex
++ * 5) unlock lock->rtmutex, goto 1)
++ *
++ * read_unlock()
++ * 1) Try fast path release (reader count != 1)
++ * 2) Wake the writer waiting in write_lock()#3
++ *
++ * read_lock()#3 has the consequence that rw locks on RT are not writer
++ * fair, but writers, which should be avoided in RT tasks (think tasklist
++ * lock), are subject to the rtmutex priority/DL inheritance mechanism.
++ *
++ * It's possible to make the rw locks writer fair by keeping a list of
++ * active readers. A blocked writer would force all newly incoming readers
++ * to block on the rtmutex, but the rtmutex would have to be proxy locked
++ * for one reader after the other. We can't use multi-reader inheritance
++ * because there is no way to support that with
++ * SCHED_DEADLINE. Implementing the one by one reader boosting/handover
++ * mechanism is a major surgery for a very dubious value.
++ *
++ * The risk of writer starvation is there, but the pathological use cases
++ * which trigger it are not necessarily the typical RT workloads.
++ */
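(Illustration only, not part of the patch: how the readers counter moves
through the scheme above; READER_BIAS/WRITER_BIAS are the constants used by
this file, their concrete values live in the rwlock type header.)

	init:                 readers == READER_BIAS             (negative, read fast path open)
	read_lock() twice:    readers == READER_BIAS + 2
	write_lock() step 2:  atomic_sub(READER_BIAS)  ->  readers == 2, writer waits for 0
	last read_unlock():   readers == 0, writer sets readers = WRITER_BIAS
	write_unlock():       atomic_add(READER_BIAS - WRITER_BIAS)  ->  readers == READER_BIAS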
++
++void __rwlock_biased_rt_init(struct rt_rw_lock *lock, const char *name,
++ struct lock_class_key *key)
+{
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ return p->migrate_disable;
-+#else
-+ return 0;
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++ /*
++ * Make sure we are not reinitializing a held lock:
++ */
++ debug_check_no_locks_freed((void *)lock, sizeof(*lock));
++ lockdep_init_map(&lock->dep_map, name, key, 0);
+#endif
++ atomic_set(&lock->readers, READER_BIAS);
++ rt_mutex_init(&lock->rtmutex);
++ lock->rtmutex.save_state = 1;
+}
+
-+/* Future-safe accessor for struct task_struct's cpus_allowed. */
-+static inline const struct cpumask *tsk_cpus_allowed(struct task_struct *p)
++int __read_rt_trylock(struct rt_rw_lock *lock)
+{
-+ if (__migrate_disabled(p))
-+ return cpumask_of(task_cpu(p));
++ int r, old;
+
-+ return &p->cpus_allowed;
++ /*
++ * Increment the reader count if lock->readers < 0, i.e. READER_BIAS is
++ * set.
++ */
++ for (r = atomic_read(&lock->readers); r < 0;) {
++ old = atomic_cmpxchg(&lock->readers, r, r + 1);
++ if (likely(old == r))
++ return 1;
++ r = old;
++ }
++ return 0;
+}
+
-+static inline int tsk_nr_cpus_allowed(struct task_struct *p)
++void __sched __read_rt_lock(struct rt_rw_lock *lock)
+{
-+ if (__migrate_disabled(p))
-+ return 1;
-+ return p->nr_cpus_allowed;
-+}
++ struct rt_mutex *m = &lock->rtmutex;
++ struct rt_mutex_waiter waiter;
++ unsigned long flags;
+
- extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
- extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
-
-diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
-index ead97654c4e9..3d7223ffdd3b 100644
---- a/include/linux/seqlock.h
-+++ b/include/linux/seqlock.h
-@@ -220,20 +220,30 @@ static inline int read_seqcount_retry(const seqcount_t *s, unsigned start)
- return __read_seqcount_retry(s, start);
- }
-
--
--
--static inline void raw_write_seqcount_begin(seqcount_t *s)
-+static inline void __raw_write_seqcount_begin(seqcount_t *s)
- {
- s->sequence++;
- smp_wmb();
- }
-
--static inline void raw_write_seqcount_end(seqcount_t *s)
-+static inline void raw_write_seqcount_begin(seqcount_t *s)
-+{
-+ preempt_disable_rt();
-+ __raw_write_seqcount_begin(s);
++ if (__read_rt_trylock(lock))
++ return;
++
++ raw_spin_lock_irqsave(&m->wait_lock, flags);
++ /*
++ * Allow readers as long as the writer has not completely
++ * acquired the lock for write.
++ */
++ if (atomic_read(&lock->readers) != WRITER_BIAS) {
++ atomic_inc(&lock->readers);
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
++ return;
++ }
++
++ /*
++ * Call into the slow lock path with the rtmutex->wait_lock
++ * held, so this can't result in the following race:
++ *
++ * Reader1 Reader2 Writer
++ * read_lock()
++ * write_lock()
++ * rtmutex_lock(m)
++ * swait()
++ * read_lock()
++ * unlock(m->wait_lock)
++ * read_unlock()
++ * swake()
++ * lock(m->wait_lock)
++ * lock->writelocked=true
++ * unlock(m->wait_lock)
++ *
++ * write_unlock()
++ * lock->writelocked=false
++ * rtmutex_unlock(m)
++ * read_lock()
++ * write_lock()
++ * rtmutex_lock(m)
++ * swait()
++ * rtmutex_lock(m)
++ *
++ * That would put Reader1 behind the writer waiting on
++ * Reader2 to call read_unlock() which might be unbound.
++ */
++ rt_mutex_init_waiter(&waiter, false);
++ rt_spin_lock_slowlock_locked(m, &waiter, flags);
++ /*
++ * The slowlock() above is guaranteed to return with the rtmutex
++ * now held, so there can't be a writer active. Increment the reader
++ * count and immediately drop the rtmutex again.
++ */
++ atomic_inc(&lock->readers);
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
++ rt_spin_lock_slowunlock(m);
++
++ debug_rt_mutex_free_waiter(&waiter);
+}
+
-+static inline void __raw_write_seqcount_end(seqcount_t *s)
- {
- smp_wmb();
- s->sequence++;
- }
-
-+static inline void raw_write_seqcount_end(seqcount_t *s)
++void __read_rt_unlock(struct rt_rw_lock *lock)
+{
-+ __raw_write_seqcount_end(s);
-+ preempt_enable_rt();
++ struct rt_mutex *m = &lock->rtmutex;
++ struct task_struct *tsk;
++
++ /*
++ * lock->readers can only hit 0 when a writer is waiting for the
++ * active readers to leave the critical region.
++ */
++ if (!atomic_dec_and_test(&lock->readers))
++ return;
++
++ raw_spin_lock_irq(&m->wait_lock);
++ /*
++ * Wake the writer, i.e. the rtmutex owner. It might release the
++ * rtmutex concurrently in the fast path, but to clean up the rw
++ * lock it needs to acquire m->wait_lock. The worst case which can
++ * happen is a spurious wakeup.
++ */
++ tsk = rt_mutex_owner(m);
++ if (tsk)
++ wake_up_process(tsk);
++
++ raw_spin_unlock_irq(&m->wait_lock);
+}
+
- /**
- * raw_write_seqcount_barrier - do a seq write barrier
- * @s: pointer to seqcount_t
-@@ -428,10 +438,32 @@ typedef struct {
- /*
- * Read side functions for starting and finalizing a read side section.
- */
-+#ifndef CONFIG_PREEMPT_RT_FULL
- static inline unsigned read_seqbegin(const seqlock_t *sl)
- {
- return read_seqcount_begin(&sl->seqcount);
- }
-+#else
-+/*
-+ * Starvation safe read side for RT
-+ */
-+static inline unsigned read_seqbegin(seqlock_t *sl)
++static void __write_unlock_common(struct rt_rw_lock *lock, int bias,
++ unsigned long flags)
+{
-+ unsigned ret;
++ struct rt_mutex *m = &lock->rtmutex;
+
-+repeat:
-+ ret = ACCESS_ONCE(sl->seqcount.sequence);
-+ if (unlikely(ret & 1)) {
-+ /*
-+ * Take the lock and let the writer proceed (i.e. evtl
-+ * boost it), otherwise we could loop here forever.
-+ */
-+ spin_unlock_wait(&sl->lock);
-+ goto repeat;
-+ }
-+ return ret;
-+}
-+#endif
-
- static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
- {
-@@ -446,36 +478,45 @@ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
- static inline void write_seqlock(seqlock_t *sl)
- {
- spin_lock(&sl->lock);
-- write_seqcount_begin(&sl->seqcount);
-+ __raw_write_seqcount_begin(&sl->seqcount);
++ atomic_add(READER_BIAS - bias, &lock->readers);
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
++ rt_spin_lock_slowunlock(m);
+}
+
-+static inline int try_write_seqlock(seqlock_t *sl)
++void __sched __write_rt_lock(struct rt_rw_lock *lock)
+{
-+ if (spin_trylock(&sl->lock)) {
-+ __raw_write_seqcount_begin(&sl->seqcount);
-+ return 1;
-+ }
-+ return 0;
- }
-
- static inline void write_sequnlock(seqlock_t *sl)
- {
-- write_seqcount_end(&sl->seqcount);
-+ __raw_write_seqcount_end(&sl->seqcount);
- spin_unlock(&sl->lock);
- }
-
- static inline void write_seqlock_bh(seqlock_t *sl)
- {
- spin_lock_bh(&sl->lock);
-- write_seqcount_begin(&sl->seqcount);
-+ __raw_write_seqcount_begin(&sl->seqcount);
- }
-
- static inline void write_sequnlock_bh(seqlock_t *sl)
- {
-- write_seqcount_end(&sl->seqcount);
-+ __raw_write_seqcount_end(&sl->seqcount);
- spin_unlock_bh(&sl->lock);
- }
-
- static inline void write_seqlock_irq(seqlock_t *sl)
- {
- spin_lock_irq(&sl->lock);
-- write_seqcount_begin(&sl->seqcount);
-+ __raw_write_seqcount_begin(&sl->seqcount);
- }
-
- static inline void write_sequnlock_irq(seqlock_t *sl)
- {
-- write_seqcount_end(&sl->seqcount);
-+ __raw_write_seqcount_end(&sl->seqcount);
- spin_unlock_irq(&sl->lock);
- }
-
-@@ -484,7 +525,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
- unsigned long flags;
-
- spin_lock_irqsave(&sl->lock, flags);
-- write_seqcount_begin(&sl->seqcount);
-+ __raw_write_seqcount_begin(&sl->seqcount);
- return flags;
- }
-
-@@ -494,7 +535,7 @@ static inline unsigned long __write_seqlock_irqsave(seqlock_t *sl)
- static inline void
- write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
- {
-- write_seqcount_end(&sl->seqcount);
-+ __raw_write_seqcount_end(&sl->seqcount);
- spin_unlock_irqrestore(&sl->lock, flags);
- }
-
-diff --git a/include/linux/signal.h b/include/linux/signal.h
-index b63f63eaa39c..295540fdfc72 100644
---- a/include/linux/signal.h
-+++ b/include/linux/signal.h
-@@ -233,6 +233,7 @@ static inline void init_sigpending(struct sigpending *sig)
- }
-
- extern void flush_sigqueue(struct sigpending *queue);
-+extern void flush_task_sigqueue(struct task_struct *tsk);
-
- /* Test if 'sig' is valid signal. Use this instead of testing _NSIG directly */
- static inline int valid_signal(unsigned long sig)
-diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
-index 32810f279f8e..0db6e31161f6 100644
---- a/include/linux/skbuff.h
-+++ b/include/linux/skbuff.h
-@@ -284,6 +284,7 @@ struct sk_buff_head {
-
- __u32 qlen;
- spinlock_t lock;
-+ raw_spinlock_t raw_lock;
- };
-
- struct sk_buff;
-@@ -1573,6 +1574,12 @@ static inline void skb_queue_head_init(struct sk_buff_head *list)
- __skb_queue_head_init(list);
- }
-
-+static inline void skb_queue_head_init_raw(struct sk_buff_head *list)
++ struct rt_mutex *m = &lock->rtmutex;
++ struct task_struct *self = current;
++ unsigned long flags;
++
++ /* Take the rtmutex as a first step */
++ __rt_spin_lock(m);
++
++ /* Force readers into slow path */
++ atomic_sub(READER_BIAS, &lock->readers);
++
++ raw_spin_lock_irqsave(&m->wait_lock, flags);
++
++ raw_spin_lock(&self->pi_lock);
++ self->saved_state = self->state;
++ __set_current_state_no_track(TASK_UNINTERRUPTIBLE);
++ raw_spin_unlock(&self->pi_lock);
++
++ for (;;) {
++ /* Have all readers left the critical region? */
++ if (!atomic_read(&lock->readers)) {
++ atomic_set(&lock->readers, WRITER_BIAS);
++ raw_spin_lock(&self->pi_lock);
++ __set_current_state_no_track(self->saved_state);
++ self->saved_state = TASK_RUNNING;
++ raw_spin_unlock(&self->pi_lock);
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
++ return;
++ }
++
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
++
++ if (atomic_read(&lock->readers) != 0)
++ schedule();
++
++ raw_spin_lock_irqsave(&m->wait_lock, flags);
++
++ raw_spin_lock(&self->pi_lock);
++ __set_current_state_no_track(TASK_UNINTERRUPTIBLE);
++ raw_spin_unlock(&self->pi_lock);
++ }
++}
++
++int __write_rt_trylock(struct rt_rw_lock *lock)
+{
-+ raw_spin_lock_init(&list->raw_lock);
-+ __skb_queue_head_init(list);
++ struct rt_mutex *m = &lock->rtmutex;
++ unsigned long flags;
++
++ if (!__rt_mutex_trylock(m))
++ return 0;
++
++ atomic_sub(READER_BIAS, &lock->readers);
++
++ raw_spin_lock_irqsave(&m->wait_lock, flags);
++ if (!atomic_read(&lock->readers)) {
++ atomic_set(&lock->readers, WRITER_BIAS);
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
++ return 1;
++ }
++ __write_unlock_common(lock, 0, flags);
++ return 0;
+}
+
- static inline void skb_queue_head_init_class(struct sk_buff_head *list,
- struct lock_class_key *class)
- {
-diff --git a/include/linux/smp.h b/include/linux/smp.h
-index 8e0cb7a0f836..b16ca967ad80 100644
---- a/include/linux/smp.h
-+++ b/include/linux/smp.h
-@@ -185,6 +185,9 @@ static inline void smp_init(void) { }
- #define get_cpu() ({ preempt_disable(); smp_processor_id(); })
- #define put_cpu() preempt_enable()
-
-+#define get_cpu_light() ({ migrate_disable(); smp_processor_id(); })
-+#define put_cpu_light() migrate_enable()
++void __write_rt_unlock(struct rt_rw_lock *lock)
++{
++ struct rt_mutex *m = &lock->rtmutex;
++ unsigned long flags;
+
- /*
- * Callback to arch code if there's nosmp or maxcpus=0 on the
- * boot command line:
-diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
-index 47dd0cebd204..02928fa5499d 100644
---- a/include/linux/spinlock.h
-+++ b/include/linux/spinlock.h
-@@ -271,7 +271,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
- #define raw_spin_can_lock(lock) (!raw_spin_is_locked(lock))
-
- /* Include rwlock functions */
--#include <linux/rwlock.h>
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# include <linux/rwlock_rt.h>
-+#else
-+# include <linux/rwlock.h>
-+#endif
-
- /*
- * Pull the _spin_*()/_read_*()/_write_*() functions/declarations:
-@@ -282,6 +286,10 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
- # include <linux/spinlock_api_up.h>
- #endif
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# include <linux/spinlock_rt.h>
-+#else /* PREEMPT_RT_FULL */
++ raw_spin_lock_irqsave(&m->wait_lock, flags);
++ __write_unlock_common(lock, WRITER_BIAS, flags);
++}
+
- /*
- * Map the spin_lock functions to the raw variants for PREEMPT_RT=n
- */
-@@ -347,6 +355,12 @@ static __always_inline void spin_unlock(spinlock_t *lock)
- raw_spin_unlock(&lock->rlock);
- }
-
-+static __always_inline int spin_unlock_no_deboost(spinlock_t *lock)
++/* Map the reader biased implementation */
++static inline int do_read_rt_trylock(rwlock_t *rwlock)
+{
-+ raw_spin_unlock(&lock->rlock);
-+ return 0;
++ return __read_rt_trylock(rwlock);
+}
+
- static __always_inline void spin_unlock_bh(spinlock_t *lock)
- {
- raw_spin_unlock_bh(&lock->rlock);
-@@ -416,4 +430,6 @@ extern int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);
- #define atomic_dec_and_lock(atomic, lock) \
- __cond_lock(lock, _atomic_dec_and_lock(atomic, lock))
-
-+#endif /* !PREEMPT_RT_FULL */
++static inline int do_write_rt_trylock(rwlock_t *rwlock)
++{
++ return __write_rt_trylock(rwlock);
++}
+
- #endif /* __LINUX_SPINLOCK_H */
-diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
-index 5344268e6e62..043263f30e81 100644
---- a/include/linux/spinlock_api_smp.h
-+++ b/include/linux/spinlock_api_smp.h
-@@ -189,6 +189,8 @@ static inline int __raw_spin_trylock_bh(raw_spinlock_t *lock)
- return 0;
- }
-
--#include <linux/rwlock_api_smp.h>
-+#ifndef CONFIG_PREEMPT_RT_FULL
-+# include <linux/rwlock_api_smp.h>
-+#endif
-
- #endif /* __LINUX_SPINLOCK_API_SMP_H */
-diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
-new file mode 100644
-index 000000000000..3534cff3dd08
---- /dev/null
-+++ b/include/linux/spinlock_rt.h
-@@ -0,0 +1,164 @@
-+#ifndef __LINUX_SPINLOCK_RT_H
-+#define __LINUX_SPINLOCK_RT_H
++static inline void do_read_rt_lock(rwlock_t *rwlock)
++{
++ __read_rt_lock(rwlock);
++}
+
-+#ifndef __LINUX_SPINLOCK_H
-+#error Do not include directly. Use spinlock.h
-+#endif
++static inline void do_write_rt_lock(rwlock_t *rwlock)
++{
++ __write_rt_lock(rwlock);
++}
+
-+#include <linux/bug.h>
++static inline void do_read_rt_unlock(rwlock_t *rwlock)
++{
++ __read_rt_unlock(rwlock);
++}
+
-+extern void
-+__rt_spin_lock_init(spinlock_t *lock, char *name, struct lock_class_key *key);
++static inline void do_write_rt_unlock(rwlock_t *rwlock)
++{
++ __write_rt_unlock(rwlock);
++}
+
-+#define spin_lock_init(slock) \
-+do { \
-+ static struct lock_class_key __key; \
-+ \
-+ rt_mutex_init(&(slock)->lock); \
-+ __rt_spin_lock_init(slock, #slock, &__key); \
-+} while (0)
++static inline void do_rwlock_rt_init(rwlock_t *rwlock, const char *name,
++ struct lock_class_key *key)
++{
++ __rwlock_biased_rt_init(rwlock, name, key);
++}
+
-+void __lockfunc rt_spin_lock__no_mg(spinlock_t *lock);
-+void __lockfunc rt_spin_unlock__no_mg(spinlock_t *lock);
-+int __lockfunc rt_spin_trylock__no_mg(spinlock_t *lock);
++int __lockfunc rt_read_can_lock(rwlock_t *rwlock)
++{
++ return atomic_read(&rwlock->readers) < 0;
++}
+
-+extern void __lockfunc rt_spin_lock(spinlock_t *lock);
-+extern unsigned long __lockfunc rt_spin_lock_trace_flags(spinlock_t *lock);
-+extern void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass);
-+extern void __lockfunc rt_spin_unlock(spinlock_t *lock);
-+extern int __lockfunc rt_spin_unlock_no_deboost(spinlock_t *lock);
-+extern void __lockfunc rt_spin_unlock_wait(spinlock_t *lock);
-+extern int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags);
-+extern int __lockfunc rt_spin_trylock_bh(spinlock_t *lock);
-+extern int __lockfunc rt_spin_trylock(spinlock_t *lock);
-+extern int atomic_dec_and_spin_lock(atomic_t *atomic, spinlock_t *lock);
++int __lockfunc rt_write_can_lock(rwlock_t *rwlock)
++{
++ return atomic_read(&rwlock->readers) == READER_BIAS;
++}
+
+/*
-+ * lockdep-less calls, for derived types like rwlock:
-+ * (for trylock they can use rt_mutex_trylock() directly.
++ * The common functions which get wrapped into the rwlock API.
+ */
-+extern void __lockfunc __rt_spin_lock__no_mg(struct rt_mutex *lock);
-+extern void __lockfunc __rt_spin_lock(struct rt_mutex *lock);
-+extern void __lockfunc __rt_spin_unlock(struct rt_mutex *lock);
++int __lockfunc rt_read_trylock(rwlock_t *rwlock)
++{
++ int ret;
+
-+#define spin_lock(lock) rt_spin_lock(lock)
++ sleeping_lock_inc();
++ migrate_disable();
++ ret = do_read_rt_trylock(rwlock);
++ if (ret) {
++ rwlock_acquire_read(&rwlock->dep_map, 0, 1, _RET_IP_);
++ } else {
++ migrate_enable();
++ sleeping_lock_dec();
++ }
++ return ret;
++}
++EXPORT_SYMBOL(rt_read_trylock);
+
-+#define spin_lock_bh(lock) \
-+ do { \
-+ local_bh_disable(); \
-+ rt_spin_lock(lock); \
-+ } while (0)
++int __lockfunc rt_write_trylock(rwlock_t *rwlock)
++{
++ int ret;
+
-+#define spin_lock_irq(lock) spin_lock(lock)
++ sleeping_lock_inc();
++ migrate_disable();
++ ret = do_write_rt_trylock(rwlock);
++ if (ret) {
++ rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
++ } else {
++ migrate_enable();
++ sleeping_lock_dec();
++ }
++ return ret;
++}
++EXPORT_SYMBOL(rt_write_trylock);
+
-+#define spin_do_trylock(lock) __cond_lock(lock, rt_spin_trylock(lock))
++void __lockfunc rt_read_lock(rwlock_t *rwlock)
++{
++ sleeping_lock_inc();
++ migrate_disable();
++ rwlock_acquire_read(&rwlock->dep_map, 0, 0, _RET_IP_);
++ do_read_rt_lock(rwlock);
++}
++EXPORT_SYMBOL(rt_read_lock);
+
-+#define spin_trylock(lock) \
-+({ \
-+ int __locked; \
-+ __locked = spin_do_trylock(lock); \
-+ __locked; \
-+})
++void __lockfunc rt_write_lock(rwlock_t *rwlock)
++{
++ sleeping_lock_inc();
++ migrate_disable();
++ rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
++ do_write_rt_lock(rwlock);
++}
++EXPORT_SYMBOL(rt_write_lock);
+
-+#ifdef CONFIG_LOCKDEP
-+# define spin_lock_nested(lock, subclass) \
-+ do { \
-+ rt_spin_lock_nested(lock, subclass); \
-+ } while (0)
++void __lockfunc rt_read_unlock(rwlock_t *rwlock)
++{
++ rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
++ do_read_rt_unlock(rwlock);
++ migrate_enable();
++ sleeping_lock_dec();
++}
++EXPORT_SYMBOL(rt_read_unlock);
+
-+#define spin_lock_bh_nested(lock, subclass) \
-+ do { \
-+ local_bh_disable(); \
-+ rt_spin_lock_nested(lock, subclass); \
-+ } while (0)
++void __lockfunc rt_write_unlock(rwlock_t *rwlock)
++{
++ rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
++ do_write_rt_unlock(rwlock);
++ migrate_enable();
++ sleeping_lock_dec();
++}
++EXPORT_SYMBOL(rt_write_unlock);
+
-+# define spin_lock_irqsave_nested(lock, flags, subclass) \
-+ do { \
-+ typecheck(unsigned long, flags); \
-+ flags = 0; \
-+ rt_spin_lock_nested(lock, subclass); \
-+ } while (0)
-+#else
-+# define spin_lock_nested(lock, subclass) spin_lock(lock)
-+# define spin_lock_bh_nested(lock, subclass) spin_lock_bh(lock)
++void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key *key)
++{
++ do_rwlock_rt_init(rwlock, name, key);
++}
++EXPORT_SYMBOL(__rt_rwlock_init);
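To make the readers-counter protocol used by the rwlock code above easier to follow, here is a minimal user-space model of the same accounting. It is only a sketch under assumptions: the bias values are stand-ins (the real constants come from the RT locking headers, which are not part of this hunk), the writer spins where the kernel blocks on the rtmutex, and the lockdep/migrate_disable/sleeping_lock bookkeeping is left out.

#include <stdatomic.h>
#include <stdbool.h>

#define READER_BIAS	(-0x40000000)	/* stand-in: keeps the counter negative        */
#define WRITER_BIAS	0x20000000	/* stand-in: non-negative, so readers stay out */

struct model_rwlock {
	atomic_int readers;		/* initialise to READER_BIAS */
};

/* Reader fast path: count one more reader while the bias is present. */
static bool model_read_trylock(struct model_rwlock *l)
{
	int r = atomic_load(&l->readers);

	while (r < 0) {
		if (atomic_compare_exchange_weak(&l->readers, &r, r + 1))
			return true;
	}
	return false;			/* writer active or pending: slow path */
}

static void model_read_unlock(struct model_rwlock *l)
{
	atomic_fetch_sub(&l->readers, 1);
}

static void model_write_lock(struct model_rwlock *l)
{
	/* Remove the bias so new readers fail the fast path ... */
	atomic_fetch_sub(&l->readers, READER_BIAS);
	/* ... wait for the remaining readers to drain ... */
	while (atomic_load(&l->readers) != 0)
		;			/* the kernel schedule()s here */
	/* ... then mark the lock write locked. */
	atomic_store(&l->readers, WRITER_BIAS);
}

static void model_write_unlock(struct model_rwlock *l)
{
	/* Restore the reader bias, mirroring __write_unlock_common(). */
	atomic_fetch_add(&l->readers, READER_BIAS - WRITER_BIAS);
}

The invariants match the checks above: a negative counter means the reader bias is present and no writer is active (rt_read_can_lock()), and a counter equal to READER_BIAS means no readers are left either, so a writer may take the lock (rt_write_can_lock()).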
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/rwsem-rt.c linux-4.14/kernel/locking/rwsem-rt.c
+--- linux-4.14.orig/kernel/locking/rwsem-rt.c 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/kernel/locking/rwsem-rt.c 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,269 @@
++/*
++ */
++#include <linux/rwsem.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/signal.h>
++#include <linux/export.h>
+
-+# define spin_lock_irqsave_nested(lock, flags, subclass) \
-+ do { \
-+ typecheck(unsigned long, flags); \
-+ flags = 0; \
-+ spin_lock(lock); \
-+ } while (0)
++#include "rtmutex_common.h"
++
++/*
++ * RT-specific reader/writer semaphores
++ *
++ * down_write()
++ * 1) Lock sem->rtmutex
++ * 2) Remove the reader BIAS to force readers into the slow path
++ * 3) Wait until all readers have left the critical region
++ * 4) Mark it write locked
++ *
++ * up_write()
++ * 1) Remove the write locked marker
++ * 2) Set the reader BIAS so readers can use the fast path again
++ * 3) Unlock sem->rtmutex to release blocked readers
++ *
++ * down_read()
++ * 1) Try fast path acquisition (reader BIAS is set)
++ * 2) Take sem->rtmutex.wait_lock which protects the writelocked flag
++ * 3) If !writelocked, acquire it for read
++ * 4) If writelocked, block on sem->rtmutex
++ * 5) unlock sem->rtmutex, goto 1)
++ *
++ * up_read()
++ * 1) Try fast path release (reader count != 1)
++ * 2) Wake the writer waiting in down_write()#3
++ *
++ * down_read()#3 has the consequence that rw semaphores on RT are not writer
++ * fair, but writers (which should be avoided in RT tasks anyway, think
++ * mmap_sem) are still subject to the rtmutex priority/DL inheritance mechanism.
++ *
++ * It's possible to make the rw semaphores writer fair by keeping a list of
++ * active readers. A blocked writer would force all newly incoming readers to
++ * block on the rtmutex, but the rtmutex would have to be proxy locked for one
++ * reader after the other. We can't use multi-reader inheritance because there
++ * is no way to support that with SCHED_DEADLINE. Implementing the one-by-one
++ * reader boosting/handover mechanism would be major surgery for very dubious
++ * value.
++ *
++ * The risk of writer starvation is there, but the pathological use cases
++ * which trigger it are not necessarily the typical RT workloads.
++ */
++
++void __rwsem_init(struct rw_semaphore *sem, const char *name,
++ struct lock_class_key *key)
++{
++#ifdef CONFIG_DEBUG_LOCK_ALLOC
++ /*
++ * Make sure we are not reinitializing a held semaphore:
++ */
++ debug_check_no_locks_freed((void *)sem, sizeof(*sem));
++ lockdep_init_map(&sem->dep_map, name, key, 0);
+#endif
++ atomic_set(&sem->readers, READER_BIAS);
++}
++EXPORT_SYMBOL(__rwsem_init);
++
++int __down_read_trylock(struct rw_semaphore *sem)
++{
++ int r, old;
++
++ /*
++ * Increment reader count, if sem->readers < 0, i.e. READER_BIAS is
++ * set.
++ */
++ for (r = atomic_read(&sem->readers); r < 0;) {
++ old = atomic_cmpxchg(&sem->readers, r, r + 1);
++ if (likely(old == r))
++ return 1;
++ r = old;
++ }
++ return 0;
++}
++
++void __sched __down_read(struct rw_semaphore *sem)
++{
++ struct rt_mutex *m = &sem->rtmutex;
++ struct rt_mutex_waiter waiter;
++
++ if (__down_read_trylock(sem))
++ return;
++
++ might_sleep();
++ raw_spin_lock_irq(&m->wait_lock);
++ /*
++ * Allow readers as long as the writer has not completely
++ * acquired the semaphore for write.
++ */
++ if (atomic_read(&sem->readers) != WRITER_BIAS) {
++ atomic_inc(&sem->readers);
++ raw_spin_unlock_irq(&m->wait_lock);
++ return;
++ }
+
-+#define spin_lock_irqsave(lock, flags) \
-+ do { \
-+ typecheck(unsigned long, flags); \
-+ flags = 0; \
-+ spin_lock(lock); \
-+ } while (0)
++ /*
++ * Call into the slow lock path with the rtmutex->wait_lock
++ * held, so this can't result in the following race:
++ *
++ * Reader1 Reader2 Writer
++ * down_read()
++ * down_write()
++ * rtmutex_lock(m)
++ * swait()
++ * down_read()
++ * unlock(m->wait_lock)
++ * up_read()
++ * swake()
++ * lock(m->wait_lock)
++ * sem->writelocked=true
++ * unlock(m->wait_lock)
++ *
++ * up_write()
++ * sem->writelocked=false
++ * rtmutex_unlock(m)
++ * down_read()
++ * down_write()
++ * rtmutex_lock(m)
++ * swait()
++ * rtmutex_lock(m)
++ *
++ * That would put Reader1 behind the writer waiting on
++ * Reader2 to call up_read(), which might take unbounded time.
++ */
++ rt_mutex_init_waiter(&waiter, false);
++ rt_mutex_slowlock_locked(m, TASK_UNINTERRUPTIBLE, NULL,
++ RT_MUTEX_MIN_CHAINWALK, NULL,
++ &waiter);
++ /*
++ * The slowlock() above is guaranteed to return with the rtmutex
++ * now held, so there can't be a writer active. Increment the reader
++ * count and immediately drop the rtmutex again.
++ */
++ atomic_inc(&sem->readers);
++ raw_spin_unlock_irq(&m->wait_lock);
++ __rt_mutex_unlock(m);
+
-+static inline unsigned long spin_lock_trace_flags(spinlock_t *lock)
++ debug_rt_mutex_free_waiter(&waiter);
++}
++
++void __up_read(struct rw_semaphore *sem)
+{
-+ unsigned long flags = 0;
-+#ifdef CONFIG_TRACE_IRQFLAGS
-+ flags = rt_spin_lock_trace_flags(lock);
-+#else
-+ spin_lock(lock); /* lock_local */
-+#endif
-+ return flags;
++ struct rt_mutex *m = &sem->rtmutex;
++ struct task_struct *tsk;
++
++ /*
++ * sem->readers can only hit 0 when a writer is waiting for the
++ * active readers to leave the critical region.
++ */
++ if (!atomic_dec_and_test(&sem->readers))
++ return;
++
++ might_sleep();
++ raw_spin_lock_irq(&m->wait_lock);
++ /*
++ * Wake the writer, i.e. the rtmutex owner. It might release the
++ * rtmutex concurrently in the fast path (due to a signal), but to
++ * clean up the rwsem it needs to acquire m->wait_lock. The worst
++ * that can happen is a spurious wakeup.
++ */
++ tsk = rt_mutex_owner(m);
++ if (tsk)
++ wake_up_process(tsk);
++
++ raw_spin_unlock_irq(&m->wait_lock);
+}
+
-+/* FIXME: we need rt_spin_lock_nest_lock */
-+#define spin_lock_nest_lock(lock, nest_lock) spin_lock_nested(lock, 0)
++static void __up_write_unlock(struct rw_semaphore *sem, int bias,
++ unsigned long flags)
++{
++ struct rt_mutex *m = &sem->rtmutex;
+
-+#define spin_unlock(lock) rt_spin_unlock(lock)
-+#define spin_unlock_no_deboost(lock) rt_spin_unlock_no_deboost(lock)
++ atomic_add(READER_BIAS - bias, &sem->readers);
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
++ __rt_mutex_unlock(m);
++}
+
-+#define spin_unlock_bh(lock) \
-+ do { \
-+ rt_spin_unlock(lock); \
-+ local_bh_enable(); \
-+ } while (0)
++static int __sched __down_write_common(struct rw_semaphore *sem, int state)
++{
++ struct rt_mutex *m = &sem->rtmutex;
++ unsigned long flags;
+
-+#define spin_unlock_irq(lock) spin_unlock(lock)
++ /* Take the rtmutex as a first step */
++ if (__rt_mutex_lock_state(m, state))
++ return -EINTR;
+
-+#define spin_unlock_irqrestore(lock, flags) \
-+ do { \
-+ typecheck(unsigned long, flags); \
-+ (void) flags; \
-+ spin_unlock(lock); \
-+ } while (0)
++ /* Force readers into slow path */
++ atomic_sub(READER_BIAS, &sem->readers);
++ might_sleep();
+
-+#define spin_trylock_bh(lock) __cond_lock(lock, rt_spin_trylock_bh(lock))
-+#define spin_trylock_irq(lock) spin_trylock(lock)
++ set_current_state(state);
++ for (;;) {
++ raw_spin_lock_irqsave(&m->wait_lock, flags);
++ /* Have all readers left the critical region? */
++ if (!atomic_read(&sem->readers)) {
++ atomic_set(&sem->readers, WRITER_BIAS);
++ __set_current_state(TASK_RUNNING);
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
++ return 0;
++ }
+
-+#define spin_trylock_irqsave(lock, flags) \
-+ rt_spin_trylock_irqsave(lock, &(flags))
++ if (signal_pending_state(state, current)) {
++ __set_current_state(TASK_RUNNING);
++ __up_write_unlock(sem, 0, flags);
++ return -EINTR;
++ }
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
+
-+#define spin_unlock_wait(lock) rt_spin_unlock_wait(lock)
++ if (atomic_read(&sem->readers) != 0) {
++ schedule();
++ set_current_state(state);
++ }
++ }
++}
+
-+#ifdef CONFIG_GENERIC_LOCKBREAK
-+# define spin_is_contended(lock) ((lock)->break_lock)
-+#else
-+# define spin_is_contended(lock) (((void)(lock), 0))
-+#endif
++void __sched __down_write(struct rw_semaphore *sem)
++{
++ __down_write_common(sem, TASK_UNINTERRUPTIBLE);
++}
+
-+static inline int spin_can_lock(spinlock_t *lock)
++int __sched __down_write_killable(struct rw_semaphore *sem)
+{
-+ return !rt_mutex_is_locked(&lock->lock);
++ return __down_write_common(sem, TASK_KILLABLE);
+}
+
-+static inline int spin_is_locked(spinlock_t *lock)
++int __down_write_trylock(struct rw_semaphore *sem)
+{
-+ return rt_mutex_is_locked(&lock->lock);
++ struct rt_mutex *m = &sem->rtmutex;
++ unsigned long flags;
++
++ if (!__rt_mutex_trylock(m))
++ return 0;
++
++ atomic_sub(READER_BIAS, &sem->readers);
++
++ raw_spin_lock_irqsave(&m->wait_lock, flags);
++ if (!atomic_read(&sem->readers)) {
++ atomic_set(&sem->readers, WRITER_BIAS);
++ raw_spin_unlock_irqrestore(&m->wait_lock, flags);
++ return 1;
++ }
++ __up_write_unlock(sem, 0, flags);
++ return 0;
+}
+
-+static inline void assert_spin_locked(spinlock_t *lock)
++void __up_write(struct rw_semaphore *sem)
+{
-+ BUG_ON(!spin_is_locked(lock));
++ struct rt_mutex *m = &sem->rtmutex;
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&m->wait_lock, flags);
++ __up_write_unlock(sem, WRITER_BIAS, flags);
+}
+
-+#define atomic_dec_and_lock(atomic, lock) \
-+ atomic_dec_and_spin_lock(atomic, lock)
++void __downgrade_write(struct rw_semaphore *sem)
++{
++ struct rt_mutex *m = &sem->rtmutex;
++ unsigned long flags;
+
-+#endif
-diff --git a/include/linux/spinlock_types.h b/include/linux/spinlock_types.h
-index 73548eb13a5d..10bac715ea96 100644
---- a/include/linux/spinlock_types.h
-+++ b/include/linux/spinlock_types.h
-@@ -9,80 +9,15 @@
- * Released under the General Public License (GPL).
++ raw_spin_lock_irqsave(&m->wait_lock, flags);
++ /* Release it and account current as reader */
++ __up_write_unlock(sem, WRITER_BIAS - 1, flags);
++}
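As a quick sanity check of the bias arithmetic in __up_write_unlock() and __downgrade_write() above, the snippet below works through the unlock and downgrade cases with the same stand-in bias values as the earlier rwlock sketch; only the identities matter, not the concrete numbers.

#include <assert.h>

#define READER_BIAS	(-0x40000000)	/* stand-in */
#define WRITER_BIAS	0x20000000	/* stand-in */

int main(void)
{
	int readers = WRITER_BIAS;		/* write-locked state */

	/* __up_write(): __up_write_unlock(sem, WRITER_BIAS, flags) */
	readers += READER_BIAS - WRITER_BIAS;
	assert(readers == READER_BIAS);		/* reader fast path re-enabled */

	/* __downgrade_write(): __up_write_unlock(sem, WRITER_BIAS - 1, flags) */
	readers = WRITER_BIAS;
	readers += READER_BIAS - (WRITER_BIAS - 1);
	assert(readers == READER_BIAS + 1);	/* bias restored, caller counted as one reader */

	return 0;
}

So after a downgrade the reader bias is back in place and the downgrading task is already accounted as one active reader; a later __up_read() returns the counter to READER_BIAS.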
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/spinlock.c linux-4.14/kernel/locking/spinlock.c
+--- linux-4.14.orig/kernel/locking/spinlock.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/locking/spinlock.c 2018-09-05 11:05:07.000000000 +0200
+@@ -125,8 +125,11 @@
+ * __[spin|read|write]_lock_bh()
*/
-
--#if defined(CONFIG_SMP)
--# include <asm/spinlock_types.h>
-+#include <linux/spinlock_types_raw.h>
+ BUILD_LOCK_OPS(spin, raw_spinlock);
+
+#ifndef CONFIG_PREEMPT_RT_FULL
-+# include <linux/spinlock_types_nort.h>
-+# include <linux/rwlock_types.h>
- #else
--# include <linux/spinlock_types_up.h>
-+# include <linux/rtmutex.h>
-+# include <linux/spinlock_types_rt.h>
-+# include <linux/rwlock_types_rt.h>
+ BUILD_LOCK_OPS(read, rwlock);
+ BUILD_LOCK_OPS(write, rwlock);
++#endif
+
#endif
--#include <linux/lockdep.h>
--
--typedef struct raw_spinlock {
-- arch_spinlock_t raw_lock;
--#ifdef CONFIG_GENERIC_LOCKBREAK
-- unsigned int break_lock;
--#endif
--#ifdef CONFIG_DEBUG_SPINLOCK
-- unsigned int magic, owner_cpu;
-- void *owner;
--#endif
--#ifdef CONFIG_DEBUG_LOCK_ALLOC
-- struct lockdep_map dep_map;
--#endif
--} raw_spinlock_t;
--
--#define SPINLOCK_MAGIC 0xdead4ead
--
--#define SPINLOCK_OWNER_INIT ((void *)-1L)
--
--#ifdef CONFIG_DEBUG_LOCK_ALLOC
--# define SPIN_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
--#else
--# define SPIN_DEP_MAP_INIT(lockname)
--#endif
--
--#ifdef CONFIG_DEBUG_SPINLOCK
--# define SPIN_DEBUG_INIT(lockname) \
-- .magic = SPINLOCK_MAGIC, \
-- .owner_cpu = -1, \
-- .owner = SPINLOCK_OWNER_INIT,
--#else
--# define SPIN_DEBUG_INIT(lockname)
--#endif
--
--#define __RAW_SPIN_LOCK_INITIALIZER(lockname) \
-- { \
-- .raw_lock = __ARCH_SPIN_LOCK_UNLOCKED, \
-- SPIN_DEBUG_INIT(lockname) \
-- SPIN_DEP_MAP_INIT(lockname) }
--
--#define __RAW_SPIN_LOCK_UNLOCKED(lockname) \
-- (raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
--
--#define DEFINE_RAW_SPINLOCK(x) raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x)
--
--typedef struct spinlock {
-- union {
-- struct raw_spinlock rlock;
--
--#ifdef CONFIG_DEBUG_LOCK_ALLOC
--# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
-- struct {
-- u8 __padding[LOCK_PADSIZE];
-- struct lockdep_map dep_map;
-- };
--#endif
-- };
--} spinlock_t;
--
--#define __SPIN_LOCK_INITIALIZER(lockname) \
-- { { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } }
--
--#define __SPIN_LOCK_UNLOCKED(lockname) \
-- (spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname)
--
--#define DEFINE_SPINLOCK(x) spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
--
--#include <linux/rwlock_types.h>
--
- #endif /* __LINUX_SPINLOCK_TYPES_H */
-diff --git a/include/linux/spinlock_types_nort.h b/include/linux/spinlock_types_nort.h
-new file mode 100644
-index 000000000000..f1dac1fb1d6a
---- /dev/null
-+++ b/include/linux/spinlock_types_nort.h
-@@ -0,0 +1,33 @@
-+#ifndef __LINUX_SPINLOCK_TYPES_NORT_H
-+#define __LINUX_SPINLOCK_TYPES_NORT_H
-+
-+#ifndef __LINUX_SPINLOCK_TYPES_H
-+#error "Do not include directly. Include spinlock_types.h instead"
-+#endif
+@@ -210,6 +213,8 @@
+ EXPORT_SYMBOL(_raw_spin_unlock_bh);
+ #endif
+
++#ifndef CONFIG_PREEMPT_RT_FULL
+
-+/*
-+ * The non RT version maps spinlocks to raw_spinlocks
-+ */
-+typedef struct spinlock {
-+ union {
-+ struct raw_spinlock rlock;
+ #ifndef CONFIG_INLINE_READ_TRYLOCK
+ int __lockfunc _raw_read_trylock(rwlock_t *lock)
+ {
+@@ -354,6 +359,8 @@
+ EXPORT_SYMBOL(_raw_write_unlock_bh);
+ #endif
+
++#endif /* !PREEMPT_RT_FULL */
+
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+# define LOCK_PADSIZE (offsetof(struct raw_spinlock, dep_map))
-+ struct {
-+ u8 __padding[LOCK_PADSIZE];
-+ struct lockdep_map dep_map;
-+ };
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+
+ void __lockfunc _raw_spin_lock_nested(raw_spinlock_t *lock, int subclass)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/locking/spinlock_debug.c linux-4.14/kernel/locking/spinlock_debug.c
+--- linux-4.14.orig/kernel/locking/spinlock_debug.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/locking/spinlock_debug.c 2018-09-05 11:05:07.000000000 +0200
+@@ -31,6 +31,7 @@
+
+ EXPORT_SYMBOL(__raw_spin_lock_init);
+
++#ifndef CONFIG_PREEMPT_RT_FULL
+ void __rwlock_init(rwlock_t *lock, const char *name,
+ struct lock_class_key *key)
+ {
+@@ -48,6 +49,7 @@
+ }
+
+ EXPORT_SYMBOL(__rwlock_init);
+#endif
-+ };
-+} spinlock_t;
-+
-+#define __SPIN_LOCK_INITIALIZER(lockname) \
-+ { { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } }
-+
-+#define __SPIN_LOCK_UNLOCKED(lockname) \
-+ (spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname)
-+
-+#define DEFINE_SPINLOCK(x) spinlock_t x = __SPIN_LOCK_UNLOCKED(x)
+
+ static void spin_dump(raw_spinlock_t *lock, const char *msg)
+ {
+@@ -135,6 +137,7 @@
+ arch_spin_unlock(&lock->raw_lock);
+ }
+
++#ifndef CONFIG_PREEMPT_RT_FULL
+ static void rwlock_bug(rwlock_t *lock, const char *msg)
+ {
+ if (!debug_locks_off())
+@@ -224,3 +227,5 @@
+ debug_write_unlock(lock);
+ arch_write_unlock(&lock->raw_lock);
+ }
+
+#endif
-diff --git a/include/linux/spinlock_types_raw.h b/include/linux/spinlock_types_raw.h
-new file mode 100644
-index 000000000000..edffc4d53fc9
---- /dev/null
-+++ b/include/linux/spinlock_types_raw.h
-@@ -0,0 +1,56 @@
-+#ifndef __LINUX_SPINLOCK_TYPES_RAW_H
-+#define __LINUX_SPINLOCK_TYPES_RAW_H
-+
-+#if defined(CONFIG_SMP)
-+# include <asm/spinlock_types.h>
-+#else
-+# include <linux/spinlock_types_up.h>
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/panic.c linux-4.14/kernel/panic.c
+--- linux-4.14.orig/kernel/panic.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/panic.c 2018-09-05 11:05:07.000000000 +0200
+@@ -482,9 +482,11 @@
+
+ static int init_oops_id(void)
+ {
++#ifndef CONFIG_PREEMPT_RT_FULL
+ if (!oops_id)
+ get_random_bytes(&oops_id, sizeof(oops_id));
+ else
+#endif
+ oops_id++;
+
+ return 0;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/power/hibernate.c linux-4.14/kernel/power/hibernate.c
+--- linux-4.14.orig/kernel/power/hibernate.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/power/hibernate.c 2018-09-05 11:05:07.000000000 +0200
+@@ -287,6 +287,8 @@
+
+ local_irq_disable();
+
++ system_state = SYSTEM_SUSPEND;
+
-+#include <linux/lockdep.h>
-+
-+typedef struct raw_spinlock {
-+ arch_spinlock_t raw_lock;
-+#ifdef CONFIG_GENERIC_LOCKBREAK
-+ unsigned int break_lock;
-+#endif
-+#ifdef CONFIG_DEBUG_SPINLOCK
-+ unsigned int magic, owner_cpu;
-+ void *owner;
-+#endif
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+ struct lockdep_map dep_map;
+ error = syscore_suspend();
+ if (error) {
+ pr_err("Some system devices failed to power down, aborting hibernation\n");
+@@ -317,6 +319,7 @@
+ syscore_resume();
+
+ Enable_irqs:
++ system_state = SYSTEM_RUNNING;
+ local_irq_enable();
+
+ Enable_cpus:
+@@ -445,6 +448,7 @@
+ goto Enable_cpus;
+
+ local_irq_disable();
++ system_state = SYSTEM_SUSPEND;
+
+ error = syscore_suspend();
+ if (error)
+@@ -478,6 +482,7 @@
+ syscore_resume();
+
+ Enable_irqs:
++ system_state = SYSTEM_RUNNING;
+ local_irq_enable();
+
+ Enable_cpus:
+@@ -563,6 +568,7 @@
+ goto Enable_cpus;
+
+ local_irq_disable();
++ system_state = SYSTEM_SUSPEND;
+ syscore_suspend();
+ if (pm_wakeup_pending()) {
+ error = -EAGAIN;
+@@ -575,6 +581,7 @@
+
+ Power_up:
+ syscore_resume();
++ system_state = SYSTEM_RUNNING;
+ local_irq_enable();
+
+ Enable_cpus:
+@@ -672,6 +679,10 @@
+ return error;
+ }
+
++#ifndef CONFIG_SUSPEND
++bool pm_in_action;
+#endif
-+} raw_spinlock_t;
-+
-+#define SPINLOCK_MAGIC 0xdead4ead
+
-+#define SPINLOCK_OWNER_INIT ((void *)-1L)
+ /**
+ * hibernate - Carry out system hibernation, including saving the image.
+ */
+@@ -685,6 +696,8 @@
+ return -EPERM;
+ }
+
++ pm_in_action = true;
+
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+# define SPIN_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname }
-+#else
-+# define SPIN_DEP_MAP_INIT(lockname)
-+#endif
+ lock_system_sleep();
+ /* The snapshot device should not be opened while we're running */
+ if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
+@@ -763,6 +776,7 @@
+ atomic_inc(&snapshot_device_available);
+ Unlock:
+ unlock_system_sleep();
++ pm_in_action = false;
+ pr_info("hibernation exit\n");
+
+ return error;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/power/suspend.c linux-4.14/kernel/power/suspend.c
+--- linux-4.14.orig/kernel/power/suspend.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/power/suspend.c 2018-09-05 11:05:07.000000000 +0200
+@@ -428,6 +428,8 @@
+ arch_suspend_disable_irqs();
+ BUG_ON(!irqs_disabled());
+
++ system_state = SYSTEM_SUSPEND;
+
-+#ifdef CONFIG_DEBUG_SPINLOCK
-+# define SPIN_DEBUG_INIT(lockname) \
-+ .magic = SPINLOCK_MAGIC, \
-+ .owner_cpu = -1, \
-+ .owner = SPINLOCK_OWNER_INIT,
-+#else
-+# define SPIN_DEBUG_INIT(lockname)
-+#endif
+ error = syscore_suspend();
+ if (!error) {
+ *wakeup = pm_wakeup_pending();
+@@ -443,6 +445,8 @@
+ syscore_resume();
+ }
+
++ system_state = SYSTEM_RUNNING;
+
-+#define __RAW_SPIN_LOCK_INITIALIZER(lockname) \
-+ { \
-+ .raw_lock = __ARCH_SPIN_LOCK_UNLOCKED, \
-+ SPIN_DEBUG_INIT(lockname) \
-+ SPIN_DEP_MAP_INIT(lockname) }
+ arch_suspend_enable_irqs();
+ BUG_ON(irqs_disabled());
+
+@@ -589,6 +593,8 @@
+ return error;
+ }
+
++bool pm_in_action;
+
-+#define __RAW_SPIN_LOCK_UNLOCKED(lockname) \
-+ (raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname)
+ /**
+ * pm_suspend - Externally visible function for suspending the system.
+ * @state: System sleep state to enter.
+@@ -603,6 +609,7 @@
+ if (state <= PM_SUSPEND_ON || state >= PM_SUSPEND_MAX)
+ return -EINVAL;
+
++ pm_in_action = true;
+ pr_info("suspend entry (%s)\n", mem_sleep_labels[state]);
+ error = enter_state(state);
+ if (error) {
+@@ -612,6 +619,7 @@
+ suspend_stats.success++;
+ }
+ pr_info("suspend exit\n");
++ pm_in_action = false;
+ return error;
+ }
+ EXPORT_SYMBOL(pm_suspend);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/printk/printk.c linux-4.14/kernel/printk/printk.c
+--- linux-4.14.orig/kernel/printk/printk.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/printk/printk.c 2018-09-05 11:05:07.000000000 +0200
+@@ -400,6 +400,65 @@
+ printk_safe_exit_irqrestore(flags); \
+ } while (0)
+
++#ifdef CONFIG_EARLY_PRINTK
++struct console *early_console;
+
-+#define DEFINE_RAW_SPINLOCK(x) raw_spinlock_t x = __RAW_SPIN_LOCK_UNLOCKED(x)
++static void early_vprintk(const char *fmt, va_list ap)
++{
++ if (early_console) {
++ char buf[512];
++ int n = vscnprintf(buf, sizeof(buf), fmt, ap);
+
-+#endif
-diff --git a/include/linux/spinlock_types_rt.h b/include/linux/spinlock_types_rt.h
-new file mode 100644
-index 000000000000..3e3d8c5f7a9a
---- /dev/null
-+++ b/include/linux/spinlock_types_rt.h
-@@ -0,0 +1,48 @@
-+#ifndef __LINUX_SPINLOCK_TYPES_RT_H
-+#define __LINUX_SPINLOCK_TYPES_RT_H
++ early_console->write(early_console, buf, n);
++ }
++}
+
-+#ifndef __LINUX_SPINLOCK_TYPES_H
-+#error "Do not include directly. Include spinlock_types.h instead"
-+#endif
++asmlinkage void early_printk(const char *fmt, ...)
++{
++ va_list ap;
+
-+#include <linux/cache.h>
++ va_start(ap, fmt);
++ early_vprintk(fmt, ap);
++ va_end(ap);
++}
+
+/*
-+ * PREEMPT_RT: spinlocks - an RT mutex plus lock-break field:
++ * This is independent of any log levels - a global
++ * kill switch that turns off all of printk.
++ *
++ * Used by the NMI watchdog if early-printk is enabled.
+ */
-+typedef struct spinlock {
-+ struct rt_mutex lock;
-+ unsigned int break_lock;
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+ struct lockdep_map dep_map;
++static bool __read_mostly printk_killswitch;
++
++static int __init force_early_printk_setup(char *str)
++{
++ printk_killswitch = true;
++ return 0;
++}
++early_param("force_early_printk", force_early_printk_setup);
++
++void printk_kill(void)
++{
++ printk_killswitch = true;
++}
++
++#ifdef CONFIG_PRINTK
++static int forced_early_printk(const char *fmt, va_list ap)
++{
++ if (!printk_killswitch)
++ return 0;
++ early_vprintk(fmt, ap);
++ return 1;
++}
+#endif
-+} spinlock_t;
+
-+#ifdef CONFIG_DEBUG_RT_MUTEXES
-+# define __RT_SPIN_INITIALIZER(name) \
-+ { \
-+ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \
-+ .save_state = 1, \
-+ .file = __FILE__, \
-+ .line = __LINE__ , \
-+ }
+#else
-+# define __RT_SPIN_INITIALIZER(name) \
-+ { \
-+ .wait_lock = __RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \
-+ .save_state = 1, \
++static inline int forced_early_printk(const char *fmt, va_list ap)
++{
++ return 0;
++}
++#endif
++
+ #ifdef CONFIG_PRINTK
+ DECLARE_WAIT_QUEUE_HEAD(log_wait);
+ /* the next printk record to read by syslog(READ) or /proc/kmsg */
+@@ -1348,6 +1407,8 @@
+ {
+ char *text;
+ int len = 0;
++ int attempts = 0;
++ int num_msg;
+
+ text = kmalloc(LOG_LINE_MAX + PREFIX_MAX, GFP_KERNEL);
+ if (!text)
+@@ -1359,6 +1420,14 @@
+ u64 seq;
+ u32 idx;
+
++try_again:
++ attempts++;
++ if (attempts > 10) {
++ len = -EBUSY;
++ goto out;
++ }
++ num_msg = 0;
++
+ /*
+ * Find first record that fits, including all following records,
+ * into the user-provided buffer for this dump.
+@@ -1371,6 +1440,14 @@
+ len += msg_print_text(msg, true, NULL, 0);
+ idx = log_next(idx);
+ seq++;
++ num_msg++;
++ if (num_msg > 5) {
++ num_msg = 0;
++ logbuf_unlock_irq();
++ logbuf_lock_irq();
++ if (clear_seq < log_first_seq)
++ goto try_again;
++ }
+ }
+
+ /* move first record forward until length fits into the buffer */
+@@ -1382,6 +1459,14 @@
+ len -= msg_print_text(msg, true, NULL, 0);
+ idx = log_next(idx);
+ seq++;
++ num_msg++;
++ if (num_msg > 5) {
++ num_msg = 0;
++ logbuf_unlock_irq();
++ logbuf_lock_irq();
++ if (clear_seq < log_first_seq)
++ goto try_again;
++ }
+ }
+
+ /* last message fitting into this dump */
+@@ -1420,6 +1505,7 @@
+ clear_seq = log_next_seq;
+ clear_idx = log_next_idx;
+ }
++out:
+ logbuf_unlock_irq();
+
+ kfree(text);
+@@ -1558,6 +1644,12 @@
+ if (!console_drivers)
+ return;
+
++ if (IS_ENABLED(CONFIG_PREEMPT_RT_BASE)) {
++ if (in_irq() || in_nmi())
++ return;
+ }
-+#endif
-+
-+/*
-+.wait_list = PLIST_HEAD_INIT_RAW((name).lock.wait_list, (name).lock.wait_lock)
-+*/
+
-+#define __SPIN_LOCK_UNLOCKED(name) \
-+ { .lock = __RT_SPIN_INITIALIZER(name.lock), \
-+ SPIN_DEP_MAP_INIT(name) }
++ migrate_disable();
+ for_each_console(con) {
+ if (exclusive_console && con != exclusive_console)
+ continue;
+@@ -1573,6 +1665,7 @@
+ else
+ con->write(con, text, len);
+ }
++ migrate_enable();
+ }
+
+ int printk_delay_msec __read_mostly;
+@@ -1692,6 +1785,13 @@
+ int printed_len;
+ bool in_sched = false;
+
++ /*
++ * Fall back to early_printk if a debugging subsystem has
++ * killed printk output
++ */
++ if (unlikely(forced_early_printk(fmt, args)))
++ return 1;
+
-+#define DEFINE_SPINLOCK(name) \
-+ spinlock_t name = __SPIN_LOCK_UNLOCKED(name)
+ if (level == LOGLEVEL_SCHED) {
+ level = LOGLEVEL_DEFAULT;
+ in_sched = true;
+@@ -1748,12 +1848,22 @@
+
+ /* If called from the scheduler, we can not call up(). */
+ if (!in_sched) {
++ int may_trylock = 1;
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++ /*
++ * we can't take a sleeping lock with IRQs or preemption disabled,
++ * so we can't print in these contexts
++ */
++ if (!(preempt_count() == 0 && !irqs_disabled()))
++ may_trylock = 0;
+#endif
-diff --git a/include/linux/srcu.h b/include/linux/srcu.h
-index dc8eb63c6568..e793d3a257da 100644
---- a/include/linux/srcu.h
-+++ b/include/linux/srcu.h
-@@ -84,10 +84,10 @@ int init_srcu_struct(struct srcu_struct *sp);
+ /*
+ * Try to acquire and then immediately release the console
+ * semaphore. The release will print out buffers and wake up
+ * /dev/kmsg and syslog() users.
+ */
+- if (console_trylock())
++ if (may_trylock && console_trylock())
+ console_unlock();
+ }
- void process_srcu(struct work_struct *work);
+@@ -1863,26 +1973,6 @@
--#define __SRCU_STRUCT_INIT(name) \
-+#define __SRCU_STRUCT_INIT(name, pcpu_name) \
- { \
- .completed = -300, \
-- .per_cpu_ref = &name##_srcu_array, \
-+ .per_cpu_ref = &pcpu_name, \
- .queue_lock = __SPIN_LOCK_UNLOCKED(name.queue_lock), \
- .running = false, \
- .batch_queue = RCU_BATCH_INIT(name.batch_queue), \
-@@ -119,7 +119,7 @@ void process_srcu(struct work_struct *work);
- */
- #define __DEFINE_SRCU(name, is_static) \
- static DEFINE_PER_CPU(struct srcu_struct_array, name##_srcu_array);\
-- is_static struct srcu_struct name = __SRCU_STRUCT_INIT(name)
-+ is_static struct srcu_struct name = __SRCU_STRUCT_INIT(name, name##_srcu_array)
- #define DEFINE_SRCU(name) __DEFINE_SRCU(name, /* not static */)
- #define DEFINE_STATIC_SRCU(name) __DEFINE_SRCU(name, static)
+ #endif /* CONFIG_PRINTK */
-diff --git a/include/linux/suspend.h b/include/linux/suspend.h
-index d9718378a8be..e81e6dc7dcb1 100644
---- a/include/linux/suspend.h
-+++ b/include/linux/suspend.h
-@@ -193,6 +193,12 @@ struct platform_freeze_ops {
- void (*end)(void);
- };
+-#ifdef CONFIG_EARLY_PRINTK
+-struct console *early_console;
+-
+-asmlinkage __visible void early_printk(const char *fmt, ...)
+-{
+- va_list ap;
+- char buf[512];
+- int n;
+-
+- if (!early_console)
+- return;
+-
+- va_start(ap, fmt);
+- n = vscnprintf(buf, sizeof(buf), fmt, ap);
+- va_end(ap);
+-
+- early_console->write(early_console, buf, n);
+-}
+-#endif
+-
+ static int __add_preferred_console(char *name, int idx, char *options,
+ char *brl_options)
+ {
+@@ -2229,10 +2319,15 @@
+ console_seq++;
+ raw_spin_unlock(&logbuf_lock);
-+#if defined(CONFIG_SUSPEND) || defined(CONFIG_HIBERNATION)
-+extern bool pm_in_action;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ printk_safe_exit_irqrestore(flags);
++ call_console_drivers(ext_text, ext_len, text, len);
+#else
-+# define pm_in_action false
+ stop_critical_timings(); /* don't trace print latency */
+ call_console_drivers(ext_text, ext_len, text, len);
+ start_critical_timings();
+ printk_safe_exit_irqrestore(flags);
+#endif
-+
- #ifdef CONFIG_SUSPEND
- /**
- * suspend_set_ops - set platform dependent suspend operations
-diff --git a/include/linux/swait.h b/include/linux/swait.h
-index c1f9c62a8a50..83f004a72320 100644
---- a/include/linux/swait.h
-+++ b/include/linux/swait.h
-@@ -87,6 +87,7 @@ static inline int swait_active(struct swait_queue_head *q)
- extern void swake_up(struct swait_queue_head *q);
- extern void swake_up_all(struct swait_queue_head *q);
- extern void swake_up_locked(struct swait_queue_head *q);
-+extern void swake_up_all_locked(struct swait_queue_head *q);
-
- extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
- extern void prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait, int state);
-diff --git a/include/linux/swap.h b/include/linux/swap.h
-index 55ff5593c193..52bf5477dc92 100644
---- a/include/linux/swap.h
-+++ b/include/linux/swap.h
-@@ -11,6 +11,7 @@
- #include <linux/fs.h>
- #include <linux/atomic.h>
- #include <linux/page-flags.h>
-+#include <linux/locallock.h>
- #include <asm/page.h>
-
- struct notifier_block;
-@@ -247,7 +248,8 @@ struct swap_info_struct {
- void *workingset_eviction(struct address_space *mapping, struct page *page);
- bool workingset_refault(void *shadow);
- void workingset_activation(struct page *page);
--extern struct list_lru workingset_shadow_nodes;
-+extern struct list_lru __workingset_shadow_nodes;
-+DECLARE_LOCAL_IRQ_LOCK(workingset_shadow_lock);
- static inline unsigned int workingset_node_pages(struct radix_tree_node *node)
+ if (do_cond_resched)
+ cond_resched();
+@@ -2286,6 +2381,11 @@
{
-@@ -292,6 +294,7 @@ extern unsigned long nr_free_pagecache_pages(void);
-
+ struct console *c;
- /* linux/mm/swap.c */
-+DECLARE_LOCAL_IRQ_LOCK(swapvec_lock);
- extern void lru_cache_add(struct page *);
- extern void lru_cache_add_anon(struct page *page);
- extern void lru_cache_add_file(struct page *page);
-diff --git a/include/linux/swork.h b/include/linux/swork.h
-new file mode 100644
-index 000000000000..f175fa9a6016
---- /dev/null
-+++ b/include/linux/swork.h
-@@ -0,0 +1,24 @@
-+#ifndef _LINUX_SWORK_H
-+#define _LINUX_SWORK_H
-+
-+#include <linux/list.h>
-+
-+struct swork_event {
-+ struct list_head item;
-+ unsigned long flags;
-+ void (*func)(struct swork_event *);
-+};
-+
-+static inline void INIT_SWORK(struct swork_event *event,
-+ void (*func)(struct swork_event *))
-+{
-+ event->flags = 0;
-+ event->func = func;
-+}
-+
-+bool swork_queue(struct swork_event *sev);
++ if (IS_ENABLED(CONFIG_PREEMPT_RT_BASE)) {
++ if (in_irq() || in_nmi())
++ return;
++ }
+
-+int swork_get(void);
-+void swork_put(void);
+ /*
+ * console_unblank can no longer be called in interrupt context unless
+ * oops_in_progress is set to 1..
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/ptrace.c linux-4.14/kernel/ptrace.c
+--- linux-4.14.orig/kernel/ptrace.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/ptrace.c 2018-09-05 11:05:07.000000000 +0200
+@@ -175,7 +175,14 @@
+
+ spin_lock_irq(&task->sighand->siglock);
+ if (task_is_traced(task) && !__fatal_signal_pending(task)) {
+- task->state = __TASK_TRACED;
++ unsigned long flags;
+
-+#endif /* _LINUX_SWORK_H */
-diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
-index 2873baf5372a..eb1a108f17ca 100644
---- a/include/linux/thread_info.h
-+++ b/include/linux/thread_info.h
-@@ -107,7 +107,17 @@ static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
- #define test_thread_flag(flag) \
- test_ti_thread_flag(current_thread_info(), flag)
++ raw_spin_lock_irqsave(&task->pi_lock, flags);
++ if (task->state & __TASK_TRACED)
++ task->state = __TASK_TRACED;
++ else
++ task->saved_state = __TASK_TRACED;
++ raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+ ret = true;
+ }
+ spin_unlock_irq(&task->sighand->siglock);
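For context on the ptrace hunk above: earlier in this series the rtmutex-based spinlocks park a blocked task's original state in task->saved_state (see __write_rt_lock() further up, which does exactly that under pi_lock), so a tracer freezing a task has to set __TASK_TRACED in whichever field currently holds the live state. A stripped-down model of that decision, with a stand-in value for __TASK_TRACED and all of the siglock/pi_lock protection omitted:

#define __TASK_TRACED	0x0008	/* stand-in value for the sketch */

struct model_task {
	unsigned long state;		/* what the scheduler acts on            */
	unsigned long saved_state;	/* parked state while on a sleeping lock */
};

static void model_freeze_traced(struct model_task *t)
{
	if (t->state & __TASK_TRACED)
		t->state = __TASK_TRACED;	/* task really is stopped in TRACED */
	else
		t->saved_state = __TASK_TRACED;	/* task is blocked on an RT "spinlock";
						 * it must wake up into TRACED once
						 * the lock is released */
}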
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/rcu/Kconfig linux-4.14/kernel/rcu/Kconfig
+--- linux-4.14.orig/kernel/rcu/Kconfig 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/rcu/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -36,7 +36,7 @@
--#define tif_need_resched() test_thread_flag(TIF_NEED_RESCHED)
-+#ifdef CONFIG_PREEMPT_LAZY
-+#define tif_need_resched() (test_thread_flag(TIF_NEED_RESCHED) || \
-+ test_thread_flag(TIF_NEED_RESCHED_LAZY))
-+#define tif_need_resched_now() (test_thread_flag(TIF_NEED_RESCHED))
-+#define tif_need_resched_lazy() test_thread_flag(TIF_NEED_RESCHED_LAZY))
+ config RCU_EXPERT
+ bool "Make expert-level adjustments to RCU configuration"
+- default n
++ default y if PREEMPT_RT_FULL
+ help
+ This option needs to be enabled if you wish to make
+ expert-level adjustments to RCU configuration. By default,
+@@ -172,7 +172,7 @@
+
+ config RCU_FAST_NO_HZ
+ bool "Accelerate last non-dyntick-idle CPU's grace periods"
+- depends on NO_HZ_COMMON && SMP && RCU_EXPERT
++ depends on NO_HZ_COMMON && SMP && RCU_EXPERT && !PREEMPT_RT_FULL
+ default n
+ help
+ This option permits CPUs to enter dynticks-idle state even if
+@@ -191,7 +191,7 @@
+ config RCU_BOOST
+ bool "Enable RCU priority boosting"
+ depends on RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT
+- default n
++ default y if PREEMPT_RT_FULL
+ help
+ This option boosts the priority of preempted RCU readers that
+ block the current preemptible RCU grace period for too long.
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/rcu/rcu.h linux-4.14/kernel/rcu/rcu.h
+--- linux-4.14.orig/kernel/rcu/rcu.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/rcu/rcu.h 2018-09-05 11:05:07.000000000 +0200
+@@ -462,18 +462,26 @@
+ extern unsigned long rcutorture_testseq;
+ extern unsigned long rcutorture_vernum;
+ unsigned long rcu_batches_started(void);
+-unsigned long rcu_batches_started_bh(void);
+ unsigned long rcu_batches_started_sched(void);
+ unsigned long rcu_batches_completed(void);
+-unsigned long rcu_batches_completed_bh(void);
+ unsigned long rcu_batches_completed_sched(void);
+ unsigned long rcu_exp_batches_completed(void);
+ unsigned long rcu_exp_batches_completed_sched(void);
+ unsigned long srcu_batches_completed(struct srcu_struct *sp);
+ void show_rcu_gp_kthreads(void);
+ void rcu_force_quiescent_state(void);
+-void rcu_bh_force_quiescent_state(void);
+ void rcu_sched_force_quiescent_state(void);
+
++#ifndef CONFIG_PREEMPT_RT_FULL
++void rcu_bh_force_quiescent_state(void);
++unsigned long rcu_batches_started_bh(void);
++unsigned long rcu_batches_completed_bh(void);
+#else
-+#define tif_need_resched() test_thread_flag(TIF_NEED_RESCHED)
-+#define tif_need_resched_now() test_thread_flag(TIF_NEED_RESCHED)
-+#define tif_need_resched_lazy() 0
++# define rcu_bh_force_quiescent_state rcu_force_quiescent_state
++# define rcu_batches_completed_bh rcu_batches_completed
++# define rcu_batches_started_bh rcu_batches_completed
+#endif
++
+ #endif /* #else #ifdef CONFIG_TINY_RCU */
- #ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
- static inline int arch_within_stack_frames(const void * const stack,
-diff --git a/include/linux/timer.h b/include/linux/timer.h
-index 51d601f192d4..83cea629efe1 100644
---- a/include/linux/timer.h
-+++ b/include/linux/timer.h
-@@ -241,7 +241,7 @@ extern void add_timer(struct timer_list *timer);
+ #ifdef CONFIG_RCU_NOCB_CPU
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/rcu/rcu_segcblist.c linux-4.14/kernel/rcu/rcu_segcblist.c
+--- linux-4.14.orig/kernel/rcu/rcu_segcblist.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/rcu/rcu_segcblist.c 2018-09-05 11:05:07.000000000 +0200
+@@ -23,6 +23,7 @@
+ #include <linux/types.h>
+ #include <linux/kernel.h>
+ #include <linux/interrupt.h>
++#include <linux/rcupdate.h>
- extern int try_to_del_timer_sync(struct timer_list *timer);
+ #include "rcu_segcblist.h"
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
- extern int del_timer_sync(struct timer_list *timer);
- #else
- # define del_timer_sync(t) del_timer(t)
-diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
-index be007610ceb0..15154b13a53b 100644
---- a/include/linux/trace_events.h
-+++ b/include/linux/trace_events.h
-@@ -56,6 +56,9 @@ struct trace_entry {
- unsigned char flags;
- unsigned char preempt_count;
- int pid;
-+ unsigned short migrate_disable;
-+ unsigned short padding;
-+ unsigned char preempt_lazy_count;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/rcu/rcutorture.c linux-4.14/kernel/rcu/rcutorture.c
+--- linux-4.14.orig/kernel/rcu/rcutorture.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/rcu/rcutorture.c 2018-09-05 11:05:07.000000000 +0200
+@@ -417,6 +417,7 @@
+ .name = "rcu"
};
- #define TRACE_EVENT_TYPE_MAX \
-diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
-index f30c187ed785..83bf0f798426 100644
---- a/include/linux/uaccess.h
-+++ b/include/linux/uaccess.h
-@@ -24,6 +24,7 @@ static __always_inline void pagefault_disabled_dec(void)
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+ * Definitions for rcu_bh torture testing.
*/
- static inline void pagefault_disable(void)
- {
-+ migrate_disable();
- pagefault_disabled_inc();
- /*
- * make sure to have issued the store before a pagefault
-@@ -40,6 +41,7 @@ static inline void pagefault_enable(void)
- */
- barrier();
- pagefault_disabled_dec();
-+ migrate_enable();
- }
+@@ -456,6 +457,12 @@
+ .name = "rcu_bh"
+ };
++#else
++static struct rcu_torture_ops rcu_bh_ops = {
++ .ttype = INVALID_RCU_FLAVOR,
++};
++#endif
++
/*
-diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
-index 4a29c75b146e..0a294e950df8 100644
---- a/include/linux/uprobes.h
-+++ b/include/linux/uprobes.h
-@@ -27,6 +27,7 @@
- #include <linux/errno.h>
- #include <linux/rbtree.h>
- #include <linux/types.h>
-+#include <linux/wait.h>
+ * Don't even think about trying any of these in real life!!!
+ * The names includes "busted", and they really means it!
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/rcu/srcutree.c linux-4.14/kernel/rcu/srcutree.c
+--- linux-4.14.orig/kernel/rcu/srcutree.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/rcu/srcutree.c 2018-09-05 11:05:07.000000000 +0200
+@@ -36,6 +36,8 @@
+ #include <linux/delay.h>
+ #include <linux/module.h>
+ #include <linux/srcu.h>
++#include <linux/cpu.h>
++#include <linux/locallock.h>
- struct vm_area_struct;
- struct mm_struct;
-diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
-index 613771909b6e..e28c5a43229d 100644
---- a/include/linux/vmstat.h
-+++ b/include/linux/vmstat.h
-@@ -33,7 +33,9 @@ DECLARE_PER_CPU(struct vm_event_state, vm_event_states);
+ #include "rcu.h"
+ #include "rcu_segcblist.h"
+@@ -53,6 +55,33 @@
+ static void srcu_reschedule(struct srcu_struct *sp, unsigned long delay);
+ static void process_srcu(struct work_struct *work);
+
++/* Wrappers for lock acquisition and release, see raw_spin_lock_rcu_node(). */
++#define spin_lock_rcu_node(p) \
++do { \
++ spin_lock(&ACCESS_PRIVATE(p, lock)); \
++ smp_mb__after_unlock_lock(); \
++} while (0)
++
++#define spin_unlock_rcu_node(p) spin_unlock(&ACCESS_PRIVATE(p, lock))
++
++#define spin_lock_irq_rcu_node(p) \
++do { \
++ spin_lock_irq(&ACCESS_PRIVATE(p, lock)); \
++ smp_mb__after_unlock_lock(); \
++} while (0)
++
++#define spin_unlock_irq_rcu_node(p) \
++ spin_unlock_irq(&ACCESS_PRIVATE(p, lock))
++
++#define spin_lock_irqsave_rcu_node(p, flags) \
++do { \
++ spin_lock_irqsave(&ACCESS_PRIVATE(p, lock), flags); \
++ smp_mb__after_unlock_lock(); \
++} while (0)
++
++#define spin_unlock_irqrestore_rcu_node(p, flags) \
++ spin_unlock_irqrestore(&ACCESS_PRIVATE(p, lock), flags)
++
+ /*
+ * Initialize SRCU combining tree. Note that statically allocated
+ * srcu_struct structures might already have srcu_read_lock() and
+@@ -77,7 +106,7 @@
+
+ /* Each pass through this loop initializes one srcu_node structure. */
+ rcu_for_each_node_breadth_first(sp, snp) {
+- raw_spin_lock_init(&ACCESS_PRIVATE(snp, lock));
++ spin_lock_init(&ACCESS_PRIVATE(snp, lock));
+ WARN_ON_ONCE(ARRAY_SIZE(snp->srcu_have_cbs) !=
+ ARRAY_SIZE(snp->srcu_data_have_cbs));
+ for (i = 0; i < ARRAY_SIZE(snp->srcu_have_cbs); i++) {
+@@ -111,7 +140,7 @@
+ snp_first = sp->level[level];
+ for_each_possible_cpu(cpu) {
+ sdp = per_cpu_ptr(sp->sda, cpu);
+- raw_spin_lock_init(&ACCESS_PRIVATE(sdp, lock));
++ spin_lock_init(&ACCESS_PRIVATE(sdp, lock));
+ rcu_segcblist_init(&sdp->srcu_cblist);
+ sdp->srcu_cblist_invoking = false;
+ sdp->srcu_gp_seq_needed = sp->srcu_gp_seq;
+@@ -170,7 +199,7 @@
+ /* Don't re-initialize a lock while it is held. */
+ debug_check_no_locks_freed((void *)sp, sizeof(*sp));
+ lockdep_init_map(&sp->dep_map, name, key, 0);
+- raw_spin_lock_init(&ACCESS_PRIVATE(sp, lock));
++ spin_lock_init(&ACCESS_PRIVATE(sp, lock));
+ return init_srcu_struct_fields(sp, false);
+ }
+ EXPORT_SYMBOL_GPL(__init_srcu_struct);
+@@ -187,7 +216,7 @@
*/
- static inline void __count_vm_event(enum vm_event_item item)
- {
-+ preempt_disable_rt();
- raw_cpu_inc(vm_event_states.event[item]);
-+ preempt_enable_rt();
+ int init_srcu_struct(struct srcu_struct *sp)
+ {
+- raw_spin_lock_init(&ACCESS_PRIVATE(sp, lock));
++ spin_lock_init(&ACCESS_PRIVATE(sp, lock));
+ return init_srcu_struct_fields(sp, false);
+ }
+ EXPORT_SYMBOL_GPL(init_srcu_struct);
+@@ -210,13 +239,13 @@
+ /* The smp_load_acquire() pairs with the smp_store_release(). */
+ if (!rcu_seq_state(smp_load_acquire(&sp->srcu_gp_seq_needed))) /*^^^*/
+ return; /* Already initialized. */
+- raw_spin_lock_irqsave_rcu_node(sp, flags);
++ spin_lock_irqsave_rcu_node(sp, flags);
+ if (!rcu_seq_state(sp->srcu_gp_seq_needed)) {
+- raw_spin_unlock_irqrestore_rcu_node(sp, flags);
++ spin_unlock_irqrestore_rcu_node(sp, flags);
+ return;
+ }
+ init_srcu_struct_fields(sp, true);
+- raw_spin_unlock_irqrestore_rcu_node(sp, flags);
++ spin_unlock_irqrestore_rcu_node(sp, flags);
}
- static inline void count_vm_event(enum vm_event_item item)
-@@ -43,7 +45,9 @@ static inline void count_vm_event(enum vm_event_item item)
+ /*
+@@ -425,21 +454,6 @@
+ }
- static inline void __count_vm_events(enum vm_event_item item, long delta)
+ /*
+- * Track online CPUs to guide callback workqueue placement.
+- */
+-DEFINE_PER_CPU(bool, srcu_online);
+-
+-void srcu_online_cpu(unsigned int cpu)
+-{
+- WRITE_ONCE(per_cpu(srcu_online, cpu), true);
+-}
+-
+-void srcu_offline_cpu(unsigned int cpu)
+-{
+- WRITE_ONCE(per_cpu(srcu_online, cpu), false);
+-}
+-
+-/*
+ * Place the workqueue handler on the specified CPU if online, otherwise
+ * just run it wherever. This is useful for placing workqueue handlers
+ * that are to invoke the specified CPU's callbacks.
+@@ -450,12 +464,12 @@
{
-+ preempt_disable_rt();
- raw_cpu_add(vm_event_states.event[item], delta);
-+ preempt_enable_rt();
+ bool ret;
+
+- preempt_disable();
+- if (READ_ONCE(per_cpu(srcu_online, cpu)))
++ cpus_read_lock();
++ if (cpu_online(cpu))
+ ret = queue_delayed_work_on(cpu, wq, dwork, delay);
+ else
+ ret = queue_delayed_work(wq, dwork, delay);
+- preempt_enable();
++ cpus_read_unlock();
+ return ret;
}
- static inline void count_vm_events(enum vm_event_item item, long delta)
-diff --git a/include/linux/wait.h b/include/linux/wait.h
-index 2408e8d5c05c..db50d6609195 100644
---- a/include/linux/wait.h
-+++ b/include/linux/wait.h
-@@ -8,6 +8,7 @@
- #include <linux/spinlock.h>
- #include <asm/current.h>
- #include <uapi/linux/wait.h>
-+#include <linux/atomic.h>
+@@ -513,7 +527,7 @@
+ mutex_lock(&sp->srcu_cb_mutex);
+
+ /* End the current grace period. */
+- raw_spin_lock_irq_rcu_node(sp);
++ spin_lock_irq_rcu_node(sp);
+ idx = rcu_seq_state(sp->srcu_gp_seq);
+ WARN_ON_ONCE(idx != SRCU_STATE_SCAN2);
+ cbdelay = srcu_get_delay(sp);
+@@ -522,7 +536,7 @@
+ gpseq = rcu_seq_current(&sp->srcu_gp_seq);
+ if (ULONG_CMP_LT(sp->srcu_gp_seq_needed_exp, gpseq))
+ sp->srcu_gp_seq_needed_exp = gpseq;
+- raw_spin_unlock_irq_rcu_node(sp);
++ spin_unlock_irq_rcu_node(sp);
+ mutex_unlock(&sp->srcu_gp_mutex);
+ /* A new grace period can start at this point. But only one. */
+
+@@ -530,7 +544,7 @@
+ idx = rcu_seq_ctr(gpseq) % ARRAY_SIZE(snp->srcu_have_cbs);
+ idxnext = (idx + 1) % ARRAY_SIZE(snp->srcu_have_cbs);
+ rcu_for_each_node_breadth_first(sp, snp) {
+- raw_spin_lock_irq_rcu_node(snp);
++ spin_lock_irq_rcu_node(snp);
+ cbs = false;
+ if (snp >= sp->level[rcu_num_lvls - 1])
+ cbs = snp->srcu_have_cbs[idx] == gpseq;
+@@ -540,7 +554,7 @@
+ snp->srcu_gp_seq_needed_exp = gpseq;
+ mask = snp->srcu_data_have_cbs[idx];
+ snp->srcu_data_have_cbs[idx] = 0;
+- raw_spin_unlock_irq_rcu_node(snp);
++ spin_unlock_irq_rcu_node(snp);
+ if (cbs)
+ srcu_schedule_cbs_snp(sp, snp, mask, cbdelay);
+
+@@ -548,11 +562,11 @@
+ if (!(gpseq & counter_wrap_check))
+ for (cpu = snp->grplo; cpu <= snp->grphi; cpu++) {
+ sdp = per_cpu_ptr(sp->sda, cpu);
+- raw_spin_lock_irqsave_rcu_node(sdp, flags);
++ spin_lock_irqsave_rcu_node(sdp, flags);
+ if (ULONG_CMP_GE(gpseq,
+ sdp->srcu_gp_seq_needed + 100))
+ sdp->srcu_gp_seq_needed = gpseq;
+- raw_spin_unlock_irqrestore_rcu_node(sdp, flags);
++ spin_unlock_irqrestore_rcu_node(sdp, flags);
+ }
+ }
- typedef struct __wait_queue wait_queue_t;
- typedef int (*wait_queue_func_t)(wait_queue_t *wait, unsigned mode, int flags, void *key);
-diff --git a/include/net/dst.h b/include/net/dst.h
-index 6835d224d47b..55a5a9698f14 100644
---- a/include/net/dst.h
-+++ b/include/net/dst.h
-@@ -446,7 +446,7 @@ static inline void dst_confirm(struct dst_entry *dst)
- static inline int dst_neigh_output(struct dst_entry *dst, struct neighbour *n,
- struct sk_buff *skb)
- {
-- const struct hh_cache *hh;
-+ struct hh_cache *hh;
-
- if (dst->pending_confirm) {
- unsigned long now = jiffies;
-diff --git a/include/net/gen_stats.h b/include/net/gen_stats.h
-index 231e121cc7d9..d125222b979d 100644
---- a/include/net/gen_stats.h
-+++ b/include/net/gen_stats.h
-@@ -5,6 +5,7 @@
- #include <linux/socket.h>
- #include <linux/rtnetlink.h>
- #include <linux/pkt_sched.h>
-+#include <net/net_seq_lock.h>
+@@ -560,17 +574,17 @@
+ mutex_unlock(&sp->srcu_cb_mutex);
+
+ /* Start a new grace period if needed. */
+- raw_spin_lock_irq_rcu_node(sp);
++ spin_lock_irq_rcu_node(sp);
+ gpseq = rcu_seq_current(&sp->srcu_gp_seq);
+ if (!rcu_seq_state(gpseq) &&
+ ULONG_CMP_LT(gpseq, sp->srcu_gp_seq_needed)) {
+ srcu_gp_start(sp);
+- raw_spin_unlock_irq_rcu_node(sp);
++ spin_unlock_irq_rcu_node(sp);
+ /* Throttle expedited grace periods: Should be rare! */
+ srcu_reschedule(sp, rcu_seq_ctr(gpseq) & 0x3ff
+ ? 0 : SRCU_INTERVAL);
+ } else {
+- raw_spin_unlock_irq_rcu_node(sp);
++ spin_unlock_irq_rcu_node(sp);
+ }
+ }
- struct gnet_stats_basic_cpu {
- struct gnet_stats_basic_packed bstats;
-@@ -33,11 +34,11 @@ int gnet_stats_start_copy_compat(struct sk_buff *skb, int type,
- spinlock_t *lock, struct gnet_dump *d,
- int padattr);
+@@ -590,18 +604,18 @@
+ if (rcu_seq_done(&sp->srcu_gp_seq, s) ||
+ ULONG_CMP_GE(READ_ONCE(snp->srcu_gp_seq_needed_exp), s))
+ return;
+- raw_spin_lock_irqsave_rcu_node(snp, flags);
++ spin_lock_irqsave_rcu_node(snp, flags);
+ if (ULONG_CMP_GE(snp->srcu_gp_seq_needed_exp, s)) {
+- raw_spin_unlock_irqrestore_rcu_node(snp, flags);
++ spin_unlock_irqrestore_rcu_node(snp, flags);
+ return;
+ }
+ WRITE_ONCE(snp->srcu_gp_seq_needed_exp, s);
+- raw_spin_unlock_irqrestore_rcu_node(snp, flags);
++ spin_unlock_irqrestore_rcu_node(snp, flags);
+ }
+- raw_spin_lock_irqsave_rcu_node(sp, flags);
++ spin_lock_irqsave_rcu_node(sp, flags);
+ if (!ULONG_CMP_LT(sp->srcu_gp_seq_needed_exp, s))
+ sp->srcu_gp_seq_needed_exp = s;
+- raw_spin_unlock_irqrestore_rcu_node(sp, flags);
++ spin_unlock_irqrestore_rcu_node(sp, flags);
+ }
--int gnet_stats_copy_basic(const seqcount_t *running,
-+int gnet_stats_copy_basic(net_seqlock_t *running,
- struct gnet_dump *d,
- struct gnet_stats_basic_cpu __percpu *cpu,
- struct gnet_stats_basic_packed *b);
--void __gnet_stats_copy_basic(const seqcount_t *running,
-+void __gnet_stats_copy_basic(net_seqlock_t *running,
- struct gnet_stats_basic_packed *bstats,
- struct gnet_stats_basic_cpu __percpu *cpu,
- struct gnet_stats_basic_packed *b);
-@@ -55,14 +56,14 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
- struct gnet_stats_basic_cpu __percpu *cpu_bstats,
- struct gnet_stats_rate_est64 *rate_est,
- spinlock_t *stats_lock,
-- seqcount_t *running, struct nlattr *opt);
-+ net_seqlock_t *running, struct nlattr *opt);
- void gen_kill_estimator(struct gnet_stats_basic_packed *bstats,
- struct gnet_stats_rate_est64 *rate_est);
- int gen_replace_estimator(struct gnet_stats_basic_packed *bstats,
- struct gnet_stats_basic_cpu __percpu *cpu_bstats,
- struct gnet_stats_rate_est64 *rate_est,
- spinlock_t *stats_lock,
-- seqcount_t *running, struct nlattr *opt);
-+ net_seqlock_t *running, struct nlattr *opt);
- bool gen_estimator_active(const struct gnet_stats_basic_packed *bstats,
- const struct gnet_stats_rate_est64 *rate_est);
- #endif
-diff --git a/include/net/neighbour.h b/include/net/neighbour.h
-index 8b683841e574..bf656008f6e7 100644
---- a/include/net/neighbour.h
-+++ b/include/net/neighbour.h
-@@ -446,7 +446,7 @@ static inline int neigh_hh_bridge(struct hh_cache *hh, struct sk_buff *skb)
+ /*
+@@ -623,12 +637,12 @@
+ for (; snp != NULL; snp = snp->srcu_parent) {
+ if (rcu_seq_done(&sp->srcu_gp_seq, s) && snp != sdp->mynode)
+ return; /* GP already done and CBs recorded. */
+- raw_spin_lock_irqsave_rcu_node(snp, flags);
++ spin_lock_irqsave_rcu_node(snp, flags);
+ if (ULONG_CMP_GE(snp->srcu_have_cbs[idx], s)) {
+ snp_seq = snp->srcu_have_cbs[idx];
+ if (snp == sdp->mynode && snp_seq == s)
+ snp->srcu_data_have_cbs[idx] |= sdp->grpmask;
+- raw_spin_unlock_irqrestore_rcu_node(snp, flags);
++ spin_unlock_irqrestore_rcu_node(snp, flags);
+ if (snp == sdp->mynode && snp_seq != s) {
+ srcu_schedule_cbs_sdp(sdp, do_norm
+ ? SRCU_INTERVAL
+@@ -644,11 +658,11 @@
+ snp->srcu_data_have_cbs[idx] |= sdp->grpmask;
+ if (!do_norm && ULONG_CMP_LT(snp->srcu_gp_seq_needed_exp, s))
+ snp->srcu_gp_seq_needed_exp = s;
+- raw_spin_unlock_irqrestore_rcu_node(snp, flags);
++ spin_unlock_irqrestore_rcu_node(snp, flags);
+ }
+
+ /* Top of tree, must ensure the grace period will be started. */
+- raw_spin_lock_irqsave_rcu_node(sp, flags);
++ spin_lock_irqsave_rcu_node(sp, flags);
+ if (ULONG_CMP_LT(sp->srcu_gp_seq_needed, s)) {
+ /*
+ * Record need for grace period s. Pair with load
+@@ -667,7 +681,7 @@
+ queue_delayed_work(system_power_efficient_wq, &sp->work,
+ srcu_get_delay(sp));
+ }
+- raw_spin_unlock_irqrestore_rcu_node(sp, flags);
++ spin_unlock_irqrestore_rcu_node(sp, flags);
}
- #endif
--static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb)
-+static inline int neigh_hh_output(struct hh_cache *hh, struct sk_buff *skb)
+ /*
+@@ -736,6 +750,8 @@
+ * negligible when amortized over that time period, and the extra latency
+ * of a needlessly non-expedited grace period is similarly negligible.
+ */
++static DEFINE_LOCAL_IRQ_LOCK(sp_llock);
++
+ static bool srcu_might_be_idle(struct srcu_struct *sp)
{
- unsigned int seq;
- int hh_len;
-@@ -501,7 +501,7 @@ struct neighbour_cb {
+ unsigned long curseq;
+@@ -744,13 +760,13 @@
+ unsigned long t;
- #define NEIGH_CB(skb) ((struct neighbour_cb *)(skb)->cb)
+ /* If the local srcu_data structure has callbacks, not idle. */
+- local_irq_save(flags);
++ local_lock_irqsave(sp_llock, flags);
+ sdp = this_cpu_ptr(sp->sda);
+ if (rcu_segcblist_pend_cbs(&sdp->srcu_cblist)) {
+- local_irq_restore(flags);
++ local_unlock_irqrestore(sp_llock, flags);
+ return false; /* Callbacks already present, so not idle. */
+ }
+- local_irq_restore(flags);
++ local_unlock_irqrestore(sp_llock, flags);
--static inline void neigh_ha_snapshot(char *dst, const struct neighbour *n,
-+static inline void neigh_ha_snapshot(char *dst, struct neighbour *n,
- const struct net_device *dev)
- {
- unsigned int seq;
-diff --git a/include/net/net_seq_lock.h b/include/net/net_seq_lock.h
-new file mode 100644
-index 000000000000..a7034298a82a
---- /dev/null
-+++ b/include/net/net_seq_lock.h
-@@ -0,0 +1,15 @@
-+#ifndef __NET_NET_SEQ_LOCK_H__
-+#define __NET_NET_SEQ_LOCK_H__
+ /*
+ * No local callbacks, so probabilistically probe global state.
+@@ -828,9 +844,9 @@
+ return;
+ }
+ rhp->func = func;
+- local_irq_save(flags);
++ local_lock_irqsave(sp_llock, flags);
+ sdp = this_cpu_ptr(sp->sda);
+- raw_spin_lock_rcu_node(sdp);
++ spin_lock_rcu_node(sdp);
+ rcu_segcblist_enqueue(&sdp->srcu_cblist, rhp, false);
+ rcu_segcblist_advance(&sdp->srcu_cblist,
+ rcu_seq_current(&sp->srcu_gp_seq));
+@@ -844,7 +860,8 @@
+ sdp->srcu_gp_seq_needed_exp = s;
+ needexp = true;
+ }
+- raw_spin_unlock_irqrestore_rcu_node(sdp, flags);
++ spin_unlock_rcu_node(sdp);
++ local_unlock_irqrestore(sp_llock, flags);
+ if (needgp)
+ srcu_funnel_gp_start(sp, sdp, s, do_norm);
+ else if (needexp)
+@@ -900,7 +917,7 @@
+
+ /*
+ * Make sure that later code is ordered after the SRCU grace
+- * period. This pairs with the raw_spin_lock_irq_rcu_node()
++ * period. This pairs with the spin_lock_irq_rcu_node()
+ * in srcu_invoke_callbacks(). Unlike Tree RCU, this is needed
+ * because the current CPU might have been totally uninvolved with
+ * (and thus unordered against) that grace period.
+@@ -1024,7 +1041,7 @@
+ */
+ for_each_possible_cpu(cpu) {
+ sdp = per_cpu_ptr(sp->sda, cpu);
+- raw_spin_lock_irq_rcu_node(sdp);
++ spin_lock_irq_rcu_node(sdp);
+ atomic_inc(&sp->srcu_barrier_cpu_cnt);
+ sdp->srcu_barrier_head.func = srcu_barrier_cb;
+ debug_rcu_head_queue(&sdp->srcu_barrier_head);
+@@ -1033,7 +1050,7 @@
+ debug_rcu_head_unqueue(&sdp->srcu_barrier_head);
+ atomic_dec(&sp->srcu_barrier_cpu_cnt);
+ }
+- raw_spin_unlock_irq_rcu_node(sdp);
++ spin_unlock_irq_rcu_node(sdp);
+ }
+
+ /* Remove the initial count, at which point reaching zero can happen. */
+@@ -1082,17 +1099,17 @@
+ */
+ idx = rcu_seq_state(smp_load_acquire(&sp->srcu_gp_seq)); /* ^^^ */
+ if (idx == SRCU_STATE_IDLE) {
+- raw_spin_lock_irq_rcu_node(sp);
++ spin_lock_irq_rcu_node(sp);
+ if (ULONG_CMP_GE(sp->srcu_gp_seq, sp->srcu_gp_seq_needed)) {
+ WARN_ON_ONCE(rcu_seq_state(sp->srcu_gp_seq));
+- raw_spin_unlock_irq_rcu_node(sp);
++ spin_unlock_irq_rcu_node(sp);
+ mutex_unlock(&sp->srcu_gp_mutex);
+ return;
+ }
+ idx = rcu_seq_state(READ_ONCE(sp->srcu_gp_seq));
+ if (idx == SRCU_STATE_IDLE)
+ srcu_gp_start(sp);
+- raw_spin_unlock_irq_rcu_node(sp);
++ spin_unlock_irq_rcu_node(sp);
+ if (idx != SRCU_STATE_IDLE) {
+ mutex_unlock(&sp->srcu_gp_mutex);
+ return; /* Someone else started the grace period. */
+@@ -1141,19 +1158,19 @@
+ sdp = container_of(work, struct srcu_data, work.work);
+ sp = sdp->sp;
+ rcu_cblist_init(&ready_cbs);
+- raw_spin_lock_irq_rcu_node(sdp);
++ spin_lock_irq_rcu_node(sdp);
+ rcu_segcblist_advance(&sdp->srcu_cblist,
+ rcu_seq_current(&sp->srcu_gp_seq));
+ if (sdp->srcu_cblist_invoking ||
+ !rcu_segcblist_ready_cbs(&sdp->srcu_cblist)) {
+- raw_spin_unlock_irq_rcu_node(sdp);
++ spin_unlock_irq_rcu_node(sdp);
+ return; /* Someone else on the job or nothing to do. */
+ }
+
+ /* We are on the job! Extract and invoke ready callbacks. */
+ sdp->srcu_cblist_invoking = true;
+ rcu_segcblist_extract_done_cbs(&sdp->srcu_cblist, &ready_cbs);
+- raw_spin_unlock_irq_rcu_node(sdp);
++ spin_unlock_irq_rcu_node(sdp);
+ rhp = rcu_cblist_dequeue(&ready_cbs);
+ for (; rhp != NULL; rhp = rcu_cblist_dequeue(&ready_cbs)) {
+ debug_rcu_head_unqueue(rhp);
+@@ -1166,13 +1183,13 @@
+ * Update counts, accelerate new callbacks, and if needed,
+ * schedule another round of callback invocation.
+ */
+- raw_spin_lock_irq_rcu_node(sdp);
++ spin_lock_irq_rcu_node(sdp);
+ rcu_segcblist_insert_count(&sdp->srcu_cblist, &ready_cbs);
+ (void)rcu_segcblist_accelerate(&sdp->srcu_cblist,
+ rcu_seq_snap(&sp->srcu_gp_seq));
+ sdp->srcu_cblist_invoking = false;
+ more = rcu_segcblist_ready_cbs(&sdp->srcu_cblist);
+- raw_spin_unlock_irq_rcu_node(sdp);
++ spin_unlock_irq_rcu_node(sdp);
+ if (more)
+ srcu_schedule_cbs_sdp(sdp, 0);
+ }
+@@ -1185,7 +1202,7 @@
+ {
+ bool pushgp = true;
+
+- raw_spin_lock_irq_rcu_node(sp);
++ spin_lock_irq_rcu_node(sp);
+ if (ULONG_CMP_GE(sp->srcu_gp_seq, sp->srcu_gp_seq_needed)) {
+ if (!WARN_ON_ONCE(rcu_seq_state(sp->srcu_gp_seq))) {
+ /* All requests fulfilled, time to go idle. */
+@@ -1195,7 +1212,7 @@
+ /* Outstanding request and no GP. Start one. */
+ srcu_gp_start(sp);
+ }
+- raw_spin_unlock_irq_rcu_node(sp);
++ spin_unlock_irq_rcu_node(sp);
+
+ if (pushgp)
+ queue_delayed_work(system_power_efficient_wq, &sp->work, delay);
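
The srcutree.c hunks above make two related changes: the rcu_node-style ->lock accesses move from the raw_spin_lock_*_rcu_node() wrappers to the sleepable spin_lock_*_rcu_node() variants introduced by this series, and the local_irq_save() sections around the per-CPU srcu_data are replaced by the local lock sp_llock, which keeps the section preemptible on PREEMPT_RT_FULL while still giving per-CPU exclusion. A minimal sketch of the local-lock pattern, assuming the locallock API added elsewhere in this series (all names below are illustrative, not taken from the patch):

#include <linux/locallock.h>
#include <linux/list.h>
#include <linux/percpu.h>

static DEFINE_LOCAL_IRQ_LOCK(example_llock);		/* per-CPU lock object */
static DEFINE_PER_CPU(struct list_head, example_list);	/* assumed to be initialised elsewhere */

static void example_enqueue(struct list_head *item)
{
	unsigned long flags;

	/* !RT: behaves like local_irq_save(); RT: takes a per-CPU sleeping spinlock. */
	local_lock_irqsave(example_llock, flags);
	list_add_tail(item, this_cpu_ptr(&example_list));
	local_unlock_irqrestore(example_llock, flags);
}
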
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/rcu/tree.c linux-4.14/kernel/rcu/tree.c
+--- linux-4.14.orig/kernel/rcu/tree.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/rcu/tree.c 2018-09-05 11:05:07.000000000 +0200
+@@ -58,6 +58,11 @@
+ #include <linux/trace_events.h>
+ #include <linux/suspend.h>
+ #include <linux/ftrace.h>
++#include <linux/delay.h>
++#include <linux/gfp.h>
++#include <linux/oom.h>
++#include <linux/smpboot.h>
++#include "../time/tick-internal.h"
+
+ #include "tree.h"
+ #include "rcu.h"
+@@ -243,6 +248,19 @@
+ this_cpu_ptr(&rcu_sched_data), true);
+ }
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void rcu_preempt_qs(void);
+
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+# define net_seqlock_t seqlock_t
-+# define net_seq_begin(__r) read_seqbegin(__r)
-+# define net_seq_retry(__r, __s) read_seqretry(__r, __s)
++void rcu_bh_qs(void)
++{
++ unsigned long flags;
+
++ /* Callers to this function, rcu_preempt_qs(), must disable irqs. */
++ local_irq_save(flags);
++ rcu_preempt_qs();
++ local_irq_restore(flags);
++}
+#else
-+# define net_seqlock_t seqcount_t
-+# define net_seq_begin(__r) read_seqcount_begin(__r)
-+# define net_seq_retry(__r, __s) read_seqcount_retry(__r, __s)
-+#endif
-+
+ void rcu_bh_qs(void)
+ {
+ RCU_LOCKDEP_WARN(preemptible(), "rcu_bh_qs() invoked with preemption enabled!!!");
+@@ -253,6 +271,7 @@
+ __this_cpu_write(rcu_bh_data.cpu_no_qs.b.norm, false);
+ }
+ }
+#endif
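
With CONFIG_PREEMPT_RT_FULL the BH flavour of RCU is folded into preemptible RCU: softirqs run in thread context, so the rcu_bh_qs() variant added above simply reports a preempt-RCU quiescent state with interrupts disabled, and the rcu_bh machinery below is compiled out. Callers keep using the _bh entry points; the mapping onto the plain ones is done by header changes elsewhere in the series, roughly like the following (an assumption about hunks not shown here, not a quote from them):

#ifdef CONFIG_PREEMPT_RT_FULL
# define call_rcu_bh		call_rcu
# define rcu_barrier_bh		rcu_barrier
# define synchronize_rcu_bh	synchronize_rcu
#endif
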
-diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
-index 7adf4386ac8f..d3fd5c357268 100644
---- a/include/net/netns/ipv4.h
-+++ b/include/net/netns/ipv4.h
-@@ -69,6 +69,7 @@ struct netns_ipv4 {
- int sysctl_icmp_echo_ignore_all;
- int sysctl_icmp_echo_ignore_broadcasts;
-+ int sysctl_icmp_echo_sysrq;
- int sysctl_icmp_ignore_bogus_error_responses;
- int sysctl_icmp_ratelimit;
- int sysctl_icmp_ratemask;
-diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
-index e6aa0a249672..b57736f2a8a3 100644
---- a/include/net/sch_generic.h
-+++ b/include/net/sch_generic.h
-@@ -10,6 +10,7 @@
- #include <linux/dynamic_queue_limits.h>
- #include <net/gen_stats.h>
- #include <net/rtnetlink.h>
-+#include <net/net_seq_lock.h>
+ /*
+ * Steal a bit from the bottom of ->dynticks for idle entry/exit
+@@ -564,11 +583,13 @@
+ /*
+ * Return the number of RCU BH batches started thus far for debug & stats.
+ */
++#ifndef CONFIG_PREEMPT_RT_FULL
+ unsigned long rcu_batches_started_bh(void)
+ {
+ return rcu_bh_state.gpnum;
+ }
+ EXPORT_SYMBOL_GPL(rcu_batches_started_bh);
++#endif
- struct Qdisc_ops;
- struct qdisc_walker;
-@@ -86,7 +87,7 @@ struct Qdisc {
- struct sk_buff *gso_skb ____cacheline_aligned_in_smp;
- struct qdisc_skb_head q;
- struct gnet_stats_basic_packed bstats;
-- seqcount_t running;
-+ net_seqlock_t running;
- struct gnet_stats_queue qstats;
- unsigned long state;
- struct Qdisc *next_sched;
-@@ -98,13 +99,22 @@ struct Qdisc {
- spinlock_t busylock ____cacheline_aligned_in_smp;
- };
+ /*
+ * Return the number of RCU batches completed thus far for debug & stats.
+@@ -588,6 +609,7 @@
+ }
+ EXPORT_SYMBOL_GPL(rcu_batches_completed_sched);
--static inline bool qdisc_is_running(const struct Qdisc *qdisc)
-+static inline bool qdisc_is_running(struct Qdisc *qdisc)
- {
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ return spin_is_locked(&qdisc->running.lock) ? true : false;
-+#else
- return (raw_read_seqcount(&qdisc->running) & 1) ? true : false;
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+ * Return the number of RCU BH batches completed thus far for debug & stats.
+ */
+@@ -596,6 +618,7 @@
+ return rcu_bh_state.completed;
+ }
+ EXPORT_SYMBOL_GPL(rcu_batches_completed_bh);
+#endif
+
+ /*
+ * Return the number of RCU expedited batches completed thus far for
+@@ -619,6 +642,7 @@
}
+ EXPORT_SYMBOL_GPL(rcu_exp_batches_completed_sched);
- static inline bool qdisc_run_begin(struct Qdisc *qdisc)
- {
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ if (try_write_seqlock(&qdisc->running))
-+ return true;
-+ return false;
-+#else
- if (qdisc_is_running(qdisc))
- return false;
- /* Variant of write_seqcount_begin() telling lockdep a trylock
-@@ -113,11 +123,16 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
- raw_write_seqcount_begin(&qdisc->running);
- seqcount_acquire(&qdisc->running.dep_map, 0, 1, _RET_IP_);
- return true;
-+#endif
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+ * Force a quiescent state.
+ */
+@@ -637,6 +661,13 @@
}
+ EXPORT_SYMBOL_GPL(rcu_bh_force_quiescent_state);
- static inline void qdisc_run_end(struct Qdisc *qdisc)
- {
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ write_sequnlock(&qdisc->running);
+#else
- write_seqcount_end(&qdisc->running);
++void rcu_force_quiescent_state(void)
++{
++}
++EXPORT_SYMBOL_GPL(rcu_force_quiescent_state);
+#endif
- }
++
+ /*
+ * Force a quiescent state for RCU-sched.
+ */
+@@ -687,9 +718,11 @@
+ case RCU_FLAVOR:
+ rsp = rcu_state_p;
+ break;
++#ifndef CONFIG_PREEMPT_RT_FULL
+ case RCU_BH_FLAVOR:
+ rsp = &rcu_bh_state;
+ break;
++#endif
+ case RCU_SCHED_FLAVOR:
+ rsp = &rcu_sched_state;
+ break;
+@@ -2918,18 +2951,17 @@
+ /*
+ * Do RCU core processing for the current CPU.
+ */
+-static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused)
++static __latent_entropy void rcu_process_callbacks(void)
+ {
+ struct rcu_state *rsp;
- static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
-@@ -308,7 +323,7 @@ static inline spinlock_t *qdisc_root_sleeping_lock(const struct Qdisc *qdisc)
- return qdisc_lock(root);
+ if (cpu_is_offline(smp_processor_id()))
+ return;
+- trace_rcu_utilization(TPS("Start RCU core"));
+ for_each_rcu_flavor(rsp)
+ __rcu_process_callbacks(rsp);
+- trace_rcu_utilization(TPS("End RCU core"));
}
--static inline seqcount_t *qdisc_root_sleeping_running(const struct Qdisc *qdisc)
-+static inline net_seqlock_t *qdisc_root_sleeping_running(const struct Qdisc *qdisc)
++static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);
+ /*
+ * Schedule RCU callback invocation. If the specified type of RCU
+ * does not support RCU priority boosting, just do a direct call,
+@@ -2941,18 +2973,105 @@
{
- struct Qdisc *root = qdisc_root_sleeping(qdisc);
-
-diff --git a/include/trace/events/hist.h b/include/trace/events/hist.h
-new file mode 100644
-index 000000000000..f7710de1b1f3
---- /dev/null
-+++ b/include/trace/events/hist.h
-@@ -0,0 +1,73 @@
-+#undef TRACE_SYSTEM
-+#define TRACE_SYSTEM hist
-+
-+#if !defined(_TRACE_HIST_H) || defined(TRACE_HEADER_MULTI_READ)
-+#define _TRACE_HIST_H
-+
-+#include "latency_hist.h"
-+#include <linux/tracepoint.h>
-+
-+#if !defined(CONFIG_PREEMPT_OFF_HIST) && !defined(CONFIG_INTERRUPT_OFF_HIST)
-+#define trace_preemptirqsoff_hist(a, b)
-+#define trace_preemptirqsoff_hist_rcuidle(a, b)
-+#else
-+TRACE_EVENT(preemptirqsoff_hist,
-+
-+ TP_PROTO(int reason, int starthist),
-+
-+ TP_ARGS(reason, starthist),
-+
-+ TP_STRUCT__entry(
-+ __field(int, reason)
-+ __field(int, starthist)
-+ ),
+ if (unlikely(!READ_ONCE(rcu_scheduler_fully_active)))
+ return;
+- if (likely(!rsp->boost)) {
+- rcu_do_batch(rsp, rdp);
++ rcu_do_batch(rsp, rdp);
++}
+
-+ TP_fast_assign(
-+ __entry->reason = reason;
-+ __entry->starthist = starthist;
-+ ),
++static void rcu_wake_cond(struct task_struct *t, int status)
++{
++ /*
++ * If the thread is yielding, only wake it when this
++ * is invoked from idle
++ */
++ if (t && (status != RCU_KTHREAD_YIELDING || is_idle_task(current)))
++ wake_up_process(t);
++}
+
-+ TP_printk("reason=%s starthist=%s", getaction(__entry->reason),
-+ __entry->starthist ? "start" : "stop")
-+);
-+#endif
++/*
++ * Wake up this CPU's rcuc kthread to do RCU core processing.
++ */
++static void invoke_rcu_core(void)
++{
++ unsigned long flags;
++ struct task_struct *t;
+
-+#ifndef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+#define trace_hrtimer_interrupt(a, b, c, d)
-+#else
-+TRACE_EVENT(hrtimer_interrupt,
-+
-+ TP_PROTO(int cpu, long long offset, struct task_struct *curr,
-+ struct task_struct *task),
-+
-+ TP_ARGS(cpu, offset, curr, task),
-+
-+ TP_STRUCT__entry(
-+ __field(int, cpu)
-+ __field(long long, offset)
-+ __array(char, ccomm, TASK_COMM_LEN)
-+ __field(int, cprio)
-+ __array(char, tcomm, TASK_COMM_LEN)
-+ __field(int, tprio)
-+ ),
-+
-+ TP_fast_assign(
-+ __entry->cpu = cpu;
-+ __entry->offset = offset;
-+ memcpy(__entry->ccomm, curr->comm, TASK_COMM_LEN);
-+ __entry->cprio = curr->prio;
-+ memcpy(__entry->tcomm, task != NULL ? task->comm : "<none>",
-+ task != NULL ? TASK_COMM_LEN : 7);
-+ __entry->tprio = task != NULL ? task->prio : -1;
-+ ),
-+
-+ TP_printk("cpu=%d offset=%lld curr=%s[%d] thread=%s[%d]",
-+ __entry->cpu, __entry->offset, __entry->ccomm,
-+ __entry->cprio, __entry->tcomm, __entry->tprio)
-+);
-+#endif
-+
-+#endif /* _TRACE_HIST_H */
-+
-+/* This part must be outside protection */
-+#include <trace/define_trace.h>
-diff --git a/include/trace/events/latency_hist.h b/include/trace/events/latency_hist.h
-new file mode 100644
-index 000000000000..d3f2fbd560b1
---- /dev/null
-+++ b/include/trace/events/latency_hist.h
-@@ -0,0 +1,29 @@
-+#ifndef _LATENCY_HIST_H
-+#define _LATENCY_HIST_H
-+
-+enum hist_action {
-+ IRQS_ON,
-+ PREEMPT_ON,
-+ TRACE_STOP,
-+ IRQS_OFF,
-+ PREEMPT_OFF,
-+ TRACE_START,
-+};
++ if (!cpu_online(smp_processor_id()))
+ return;
++ local_irq_save(flags);
++ __this_cpu_write(rcu_cpu_has_work, 1);
++ t = __this_cpu_read(rcu_cpu_kthread_task);
++ if (t != NULL && current != t)
++ rcu_wake_cond(t, __this_cpu_read(rcu_cpu_kthread_status));
++ local_irq_restore(flags);
++}
+
-+static char *actions[] = {
-+ "IRQS_ON",
-+ "PREEMPT_ON",
-+ "TRACE_STOP",
-+ "IRQS_OFF",
-+ "PREEMPT_OFF",
-+ "TRACE_START",
-+};
++static void rcu_cpu_kthread_park(unsigned int cpu)
++{
++ per_cpu(rcu_cpu_kthread_status, cpu) = RCU_KTHREAD_OFFCPU;
++}
+
-+static inline char *getaction(int action)
++static int rcu_cpu_kthread_should_run(unsigned int cpu)
+{
-+	if (action >= 0 && action < sizeof(actions)/sizeof(actions[0]))
-+ return actions[action];
-+ return "unknown";
++ return __this_cpu_read(rcu_cpu_has_work);
+}
+
-+#endif /* _LATENCY_HIST_H */
-diff --git a/init/Kconfig b/init/Kconfig
-index 34407f15e6d3..2ce33a32e65d 100644
---- a/init/Kconfig
-+++ b/init/Kconfig
-@@ -506,7 +506,7 @@ config TINY_RCU
-
- config RCU_EXPERT
- bool "Make expert-level adjustments to RCU configuration"
-- default n
-+ default y if PREEMPT_RT_FULL
- help
- This option needs to be enabled if you wish to make
- expert-level adjustments to RCU configuration. By default,
-@@ -623,7 +623,7 @@ config RCU_FANOUT_LEAF
-
- config RCU_FAST_NO_HZ
- bool "Accelerate last non-dyntick-idle CPU's grace periods"
-- depends on NO_HZ_COMMON && SMP && RCU_EXPERT
-+ depends on NO_HZ_COMMON && SMP && RCU_EXPERT && !PREEMPT_RT_FULL
- default n
- help
- This option permits CPUs to enter dynticks-idle state even if
-@@ -650,7 +650,7 @@ config TREE_RCU_TRACE
- config RCU_BOOST
- bool "Enable RCU priority boosting"
- depends on RT_MUTEXES && PREEMPT_RCU && RCU_EXPERT
-- default n
-+ default y if PREEMPT_RT_FULL
- help
- This option boosts the priority of preempted RCU readers that
- block the current preemptible RCU grace period for too long.
-@@ -781,19 +781,6 @@ config RCU_NOCB_CPU_ALL
-
- endchoice
-
--config RCU_EXPEDITE_BOOT
-- bool
-- default n
-- help
-- This option enables expedited grace periods at boot time,
-- as if rcu_expedite_gp() had been invoked early in boot.
-- The corresponding rcu_unexpedite_gp() is invoked from
-- rcu_end_inkernel_boot(), which is intended to be invoked
-- at the end of the kernel-only boot sequence, just before
-- init is exec'ed.
--
-- Accept the default if unsure.
--
- endmenu # "RCU Subsystem"
-
- config BUILD_BIN2C
-@@ -1064,6 +1051,7 @@ config CFS_BANDWIDTH
- config RT_GROUP_SCHED
- bool "Group scheduling for SCHED_RR/FIFO"
- depends on CGROUP_SCHED
-+ depends on !PREEMPT_RT_FULL
- default n
- help
- This feature lets you explicitly allocate real CPU bandwidth
-@@ -1772,6 +1760,7 @@ choice
-
- config SLAB
- bool "SLAB"
-+ depends on !PREEMPT_RT_FULL
- select HAVE_HARDENED_USERCOPY_ALLOCATOR
- help
- The regular slab allocator that is established and known to work
-@@ -1792,6 +1781,7 @@ config SLUB
- config SLOB
- depends on EXPERT
- bool "SLOB (Simple Allocator)"
-+ depends on !PREEMPT_RT_FULL
- help
- SLOB replaces the stock allocator with a drastically simpler
- allocator. SLOB is generally more space efficient but
-@@ -1810,7 +1800,7 @@ config SLAB_FREELIST_RANDOM
-
- config SLUB_CPU_PARTIAL
- default y
-- depends on SLUB && SMP
-+ depends on SLUB && SMP && !PREEMPT_RT_FULL
- bool "SLUB per cpu partial cache"
- help
- Per cpu partial caches accelerate object allocation and freeing
-diff --git a/init/Makefile b/init/Makefile
-index c4fb45525d08..821190dfaa75 100644
---- a/init/Makefile
-+++ b/init/Makefile
-@@ -35,4 +35,4 @@ $(obj)/version.o: include/generated/compile.h
- include/generated/compile.h: FORCE
- @$($(quiet)chk_compile.h)
- $(Q)$(CONFIG_SHELL) $(srctree)/scripts/mkcompile_h $@ \
-- "$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CC) $(KBUILD_CFLAGS)"
-+ "$(UTS_MACHINE)" "$(CONFIG_SMP)" "$(CONFIG_PREEMPT)" "$(CONFIG_PREEMPT_RT_FULL)" "$(CC) $(KBUILD_CFLAGS)"
-diff --git a/init/main.c b/init/main.c
-index 2858be732f6d..3c97c3c91d88 100644
---- a/init/main.c
-+++ b/init/main.c
-@@ -507,6 +507,7 @@ asmlinkage __visible void __init start_kernel(void)
- setup_command_line(command_line);
- setup_nr_cpu_ids();
- setup_per_cpu_areas();
-+ softirq_early_init();
- boot_cpu_state_init();
- smp_prepare_boot_cpu(); /* arch-specific boot-cpu hooks */
++/*
++ * Per-CPU kernel thread that invokes RCU callbacks. This replaces the
++ * RCU softirq used in flavors and configurations of RCU that do not
++ * support RCU priority boosting.
++ */
++static void rcu_cpu_kthread(unsigned int cpu)
++{
++ unsigned int *statusp = this_cpu_ptr(&rcu_cpu_kthread_status);
++ char work, *workp = this_cpu_ptr(&rcu_cpu_has_work);
++ int spincnt;
++
++ for (spincnt = 0; spincnt < 10; spincnt++) {
++ trace_rcu_utilization(TPS("Start CPU kthread@rcu_wait"));
++ local_bh_disable();
++ *statusp = RCU_KTHREAD_RUNNING;
++ this_cpu_inc(rcu_cpu_kthread_loops);
++ local_irq_disable();
++ work = *workp;
++ *workp = 0;
++ local_irq_enable();
++ if (work)
++ rcu_process_callbacks();
++ local_bh_enable();
++ if (*workp == 0) {
++ trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
++ *statusp = RCU_KTHREAD_WAITING;
++ return;
++ }
+ }
+- invoke_rcu_callbacks_kthread();
++ *statusp = RCU_KTHREAD_YIELDING;
++ trace_rcu_utilization(TPS("Start CPU kthread@rcu_yield"));
++ schedule_timeout_interruptible(2);
++ trace_rcu_utilization(TPS("End CPU kthread@rcu_yield"));
++ *statusp = RCU_KTHREAD_WAITING;
+ }
-diff --git a/ipc/sem.c b/ipc/sem.c
-index 10b94bc59d4a..b8360eaacc7a 100644
---- a/ipc/sem.c
-+++ b/ipc/sem.c
-@@ -712,6 +712,13 @@ static int perform_atomic_semop(struct sem_array *sma, struct sem_queue *q)
- static void wake_up_sem_queue_prepare(struct list_head *pt,
- struct sem_queue *q, int error)
+-static void invoke_rcu_core(void)
++static struct smp_hotplug_thread rcu_cpu_thread_spec = {
++ .store = &rcu_cpu_kthread_task,
++ .thread_should_run = rcu_cpu_kthread_should_run,
++ .thread_fn = rcu_cpu_kthread,
++ .thread_comm = "rcuc/%u",
++ .setup = rcu_cpu_kthread_setup,
++ .park = rcu_cpu_kthread_park,
++};
++
++/*
++ * Spawn per-CPU RCU core processing kthreads.
++ */
++static int __init rcu_spawn_core_kthreads(void)
{
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ struct task_struct *p = q->sleeper;
-+ get_task_struct(p);
-+ q->status = error;
-+ wake_up_process(p);
-+ put_task_struct(p);
-+#else
- if (list_empty(pt)) {
- /*
- * Hold preempt off so that we don't get preempted and have the
-@@ -723,6 +730,7 @@ static void wake_up_sem_queue_prepare(struct list_head *pt,
- q->pid = error;
+- if (cpu_online(smp_processor_id()))
+- raise_softirq(RCU_SOFTIRQ);
++ int cpu;
++
++ for_each_possible_cpu(cpu)
++ per_cpu(rcu_cpu_has_work, cpu) = 0;
++ BUG_ON(smpboot_register_percpu_thread(&rcu_cpu_thread_spec));
++ return 0;
+ }
++early_initcall(rcu_spawn_core_kthreads);
- list_add_tail(&q->list, pt);
-+#endif
+ /*
+ * Handle any core-RCU processing required by a call_rcu() invocation.
+@@ -3113,6 +3232,7 @@
}
+ EXPORT_SYMBOL_GPL(call_rcu_sched);
++#ifndef CONFIG_PREEMPT_RT_FULL
/**
-@@ -736,6 +744,7 @@ static void wake_up_sem_queue_prepare(struct list_head *pt,
- */
- static void wake_up_sem_queue_do(struct list_head *pt)
- {
-+#ifndef CONFIG_PREEMPT_RT_BASE
- struct sem_queue *q, *t;
- int did_something;
-
-@@ -748,6 +757,7 @@ static void wake_up_sem_queue_do(struct list_head *pt)
- }
- if (did_something)
- preempt_enable();
-+#endif
+ * call_rcu_bh() - Queue an RCU for invocation after a quicker grace period.
+ * @head: structure to be used for queueing the RCU updates.
+@@ -3140,6 +3260,7 @@
+ __call_rcu(head, func, &rcu_bh_state, -1, 0);
}
+ EXPORT_SYMBOL_GPL(call_rcu_bh);
++#endif
- static void unlink_queue(struct sem_array *sma, struct sem_queue *q)
-diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
-index ebdb0043203a..b9e6aa7e5aa6 100644
---- a/kernel/Kconfig.locks
-+++ b/kernel/Kconfig.locks
-@@ -225,11 +225,11 @@ config ARCH_SUPPORTS_ATOMIC_RMW
+ /*
+ * Queue an RCU callback for lazy invocation after a grace period.
+@@ -3225,6 +3346,7 @@
+ }
+ EXPORT_SYMBOL_GPL(synchronize_sched);
- config MUTEX_SPIN_ON_OWNER
- def_bool y
-- depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW
-+ depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW && !PREEMPT_RT_FULL
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /**
+ * synchronize_rcu_bh - wait until an rcu_bh grace period has elapsed.
+ *
+@@ -3251,6 +3373,7 @@
+ wait_rcu_gp(call_rcu_bh);
+ }
+ EXPORT_SYMBOL_GPL(synchronize_rcu_bh);
++#endif
- config RWSEM_SPIN_ON_OWNER
- def_bool y
-- depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW
-+ depends on SMP && RWSEM_XCHGADD_ALGORITHM && ARCH_SUPPORTS_ATOMIC_RMW && !PREEMPT_RT_FULL
+ /**
+ * get_state_synchronize_rcu - Snapshot current RCU state
+@@ -3601,6 +3724,7 @@
+ mutex_unlock(&rsp->barrier_mutex);
+ }
- config LOCK_SPIN_ON_OWNER
- def_bool y
-diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
-index 3f9c97419f02..11dbe26a8279 100644
---- a/kernel/Kconfig.preempt
-+++ b/kernel/Kconfig.preempt
-@@ -1,3 +1,16 @@
-+config PREEMPT
-+ bool
-+ select PREEMPT_COUNT
-+
-+config PREEMPT_RT_BASE
-+ bool
-+ select PREEMPT
-+
-+config HAVE_PREEMPT_LAZY
-+ bool
-+
-+config PREEMPT_LAZY
-+ def_bool y if HAVE_PREEMPT_LAZY && PREEMPT_RT_FULL
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /**
+ * rcu_barrier_bh - Wait until all in-flight call_rcu_bh() callbacks complete.
+ */
+@@ -3609,6 +3733,7 @@
+ _rcu_barrier(&rcu_bh_state);
+ }
+ EXPORT_SYMBOL_GPL(rcu_barrier_bh);
++#endif
- choice
- prompt "Preemption Model"
-@@ -33,9 +46,9 @@ config PREEMPT_VOLUNTARY
+ /**
+ * rcu_barrier_sched - Wait for in-flight call_rcu_sched() callbacks.
+@@ -3741,8 +3866,6 @@
+ {
+ sync_sched_exp_online_cleanup(cpu);
+ rcutree_affinity_setting(cpu, -1);
+- if (IS_ENABLED(CONFIG_TREE_SRCU))
+- srcu_online_cpu(cpu);
+ return 0;
+ }
- Select this if you are building a kernel for a desktop system.
+@@ -3753,8 +3876,6 @@
+ int rcutree_offline_cpu(unsigned int cpu)
+ {
+ rcutree_affinity_setting(cpu, cpu);
+- if (IS_ENABLED(CONFIG_TREE_SRCU))
+- srcu_offline_cpu(cpu);
+ return 0;
+ }
--config PREEMPT
-+config PREEMPT__LL
- bool "Preemptible Kernel (Low-Latency Desktop)"
-- select PREEMPT_COUNT
-+ select PREEMPT
- select UNINLINE_SPIN_UNLOCK if !ARCH_INLINE_SPIN_UNLOCK
- help
- This option reduces the latency of the kernel by making
-@@ -52,6 +65,22 @@ config PREEMPT
- embedded system with latency requirements in the milliseconds
- range.
+@@ -4184,12 +4305,13 @@
-+config PREEMPT_RTB
-+ bool "Preemptible Kernel (Basic RT)"
-+ select PREEMPT_RT_BASE
-+ help
-+ This option is basically the same as (Low-Latency Desktop) but
-+ enables changes which are preliminary for the full preemptible
-+ RT kernel.
-+
-+config PREEMPT_RT_FULL
-+ bool "Fully Preemptible Kernel (RT)"
-+ depends on IRQ_FORCED_THREADING
-+ select PREEMPT_RT_BASE
-+ select PREEMPT_RCU
-+ help
-+ All and everything
-+
- endchoice
+ rcu_bootup_announce();
+ rcu_init_geometry();
++#ifndef CONFIG_PREEMPT_RT_FULL
+ rcu_init_one(&rcu_bh_state);
++#endif
+ rcu_init_one(&rcu_sched_state);
+ if (dump_tree)
+ rcu_dump_rcu_node_tree(&rcu_sched_state);
+ __rcu_init_preempt();
+- open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
- config PREEMPT_COUNT
-diff --git a/kernel/cgroup.c b/kernel/cgroup.c
-index 85bc9beb046d..3b8da75ba2e0 100644
---- a/kernel/cgroup.c
-+++ b/kernel/cgroup.c
-@@ -5040,10 +5040,10 @@ static void css_free_rcu_fn(struct rcu_head *rcu_head)
- queue_work(cgroup_destroy_wq, &css->destroy_work);
+ /*
+ * We don't need protection against CPU-hotplug here because
+@@ -4200,8 +4322,6 @@
+ for_each_online_cpu(cpu) {
+ rcutree_prepare_cpu(cpu);
+ rcu_cpu_starting(cpu);
+- if (IS_ENABLED(CONFIG_TREE_SRCU))
+- srcu_online_cpu(cpu);
+ }
}
--static void css_release_work_fn(struct work_struct *work)
-+static void css_release_work_fn(struct swork_event *sev)
- {
- struct cgroup_subsys_state *css =
-- container_of(work, struct cgroup_subsys_state, destroy_work);
-+ container_of(sev, struct cgroup_subsys_state, destroy_swork);
- struct cgroup_subsys *ss = css->ss;
- struct cgroup *cgrp = css->cgroup;
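
Taken together, the tree.c hunks retire the RCU_SOFTIRQ handler: rcu_process_callbacks() is now driven by per-CPU "rcuc/%u" kthreads registered through the smpboot framework, and invoke_rcu_core() merely sets rcu_cpu_has_work and wakes the local thread. A stripped-down sketch of that registration pattern (hypothetical names, not the patch's code):

#include <linux/smpboot.h>
#include <linux/percpu.h>
#include <linux/init.h>

static DEFINE_PER_CPU(struct task_struct *, worker_task);
static DEFINE_PER_CPU(char, worker_has_work);

static int worker_should_run(unsigned int cpu)
{
	return __this_cpu_read(worker_has_work);
}

static void worker_fn(unsigned int cpu)
{
	__this_cpu_write(worker_has_work, 0);
	/* ... do the per-CPU work here ... */
}

static struct smp_hotplug_thread worker_threads = {
	.store			= &worker_task,
	.thread_should_run	= worker_should_run,
	.thread_fn		= worker_fn,
	.thread_comm		= "worker/%u",
};

static int __init worker_init(void)
{
	return smpboot_register_percpu_thread(&worker_threads);
}
early_initcall(worker_init);
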
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/rcu/tree.h linux-4.14/kernel/rcu/tree.h
+--- linux-4.14.orig/kernel/rcu/tree.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/rcu/tree.h 2018-09-05 11:05:07.000000000 +0200
+@@ -427,7 +427,9 @@
+ */
+ extern struct rcu_state rcu_sched_state;
-@@ -5086,8 +5086,8 @@ static void css_release(struct percpu_ref *ref)
- struct cgroup_subsys_state *css =
- container_of(ref, struct cgroup_subsys_state, refcnt);
++#ifndef CONFIG_PREEMPT_RT_FULL
+ extern struct rcu_state rcu_bh_state;
++#endif
-- INIT_WORK(&css->destroy_work, css_release_work_fn);
-- queue_work(cgroup_destroy_wq, &css->destroy_work);
-+ INIT_SWORK(&css->destroy_swork, css_release_work_fn);
-+ swork_queue(&css->destroy_swork);
- }
+ #ifdef CONFIG_PREEMPT_RCU
+ extern struct rcu_state rcu_preempt_state;
+@@ -436,12 +438,10 @@
+ int rcu_dynticks_snap(struct rcu_dynticks *rdtp);
+ bool rcu_eqs_special_set(int cpu);
- static void init_and_link_css(struct cgroup_subsys_state *css,
-@@ -5742,6 +5742,7 @@ static int __init cgroup_wq_init(void)
- */
- cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
- BUG_ON(!cgroup_destroy_wq);
-+ BUG_ON(swork_get());
+-#ifdef CONFIG_RCU_BOOST
+ DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
+ DECLARE_PER_CPU(int, rcu_cpu_kthread_cpu);
+ DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
+ DECLARE_PER_CPU(char, rcu_cpu_has_work);
+-#endif /* #ifdef CONFIG_RCU_BOOST */
- /*
- * Used to destroy pidlists and separate to serve as flush domain.
-diff --git a/kernel/cpu.c b/kernel/cpu.c
-index 217fd2e7f435..69444f1bc924 100644
---- a/kernel/cpu.c
-+++ b/kernel/cpu.c
-@@ -239,6 +239,289 @@ static struct {
- #define cpuhp_lock_acquire() lock_map_acquire(&cpu_hotplug.dep_map)
- #define cpuhp_lock_release() lock_map_release(&cpu_hotplug.dep_map)
+ #ifndef RCU_TREE_NONCORE
+
+@@ -461,10 +461,9 @@
+ static void __init __rcu_init_preempt(void);
+ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
+ static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
+-static void invoke_rcu_callbacks_kthread(void);
+ static bool rcu_is_callbacks_kthread(void);
++static void rcu_cpu_kthread_setup(unsigned int cpu);
+ #ifdef CONFIG_RCU_BOOST
+-static void rcu_preempt_do_callbacks(void);
+ static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
+ struct rcu_node *rnp);
+ #endif /* #ifdef CONFIG_RCU_BOOST */
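
Because the rcuc kthreads now exist whether or not RCU_BOOST is configured, tree.h declares the per-CPU control variables unconditionally; their definitions stay in tree_plugin.h. This is the usual declare-in-header / define-in-one-file split for per-CPU data, sketched here with made-up names:

/* some_header.h */
DECLARE_PER_CPU(unsigned int, frob_status);

/* exactly one .c file */
DEFINE_PER_CPU(unsigned int, frob_status);

/* readers (fragments) */
unsigned int a = __this_cpu_read(frob_status);	/* caller already has preemption disabled */
unsigned int b = per_cpu(frob_status, cpu);	/* a specific CPU's copy */
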
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/rcu/tree_plugin.h linux-4.14/kernel/rcu/tree_plugin.h
+--- linux-4.14.orig/kernel/rcu/tree_plugin.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/rcu/tree_plugin.h 2018-09-05 11:05:07.000000000 +0200
+@@ -24,39 +24,16 @@
+ * Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+ */
+
+-#include <linux/delay.h>
+-#include <linux/gfp.h>
+-#include <linux/oom.h>
+-#include <linux/sched/debug.h>
+-#include <linux/smpboot.h>
+-#include <uapi/linux/sched/types.h>
+-#include "../time/tick-internal.h"
+-
+-#ifdef CONFIG_RCU_BOOST
+-
+ #include "../locking/rtmutex_common.h"
+
+ /*
+ * Control variables for per-CPU and per-rcu_node kthreads. These
+ * handle all flavors of RCU.
+ */
+-static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);
+ DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
+ DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
+ DEFINE_PER_CPU(char, rcu_cpu_has_work);
+
+-#else /* #ifdef CONFIG_RCU_BOOST */
+-
+-/*
+- * Some architectures do not define rt_mutexes, but if !CONFIG_RCU_BOOST,
+- * all uses are in dead code. Provide a definition to keep the compiler
+- * happy, but add WARN_ON_ONCE() to complain if used in the wrong place.
+- * This probably needs to be excluded from -rt builds.
+- */
+-#define rt_mutex_owner(a) ({ WARN_ON_ONCE(1); NULL; })
+-
+-#endif /* #else #ifdef CONFIG_RCU_BOOST */
+-
+ #ifdef CONFIG_RCU_NOCB_CPU
+ static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */
+ static bool have_rcu_nocb_mask; /* Was rcu_nocb_mask allocated? */
+@@ -324,9 +301,13 @@
+ struct task_struct *t = current;
+ struct rcu_data *rdp;
+ struct rcu_node *rnp;
++ int sleeping_l = 0;
+
+ RCU_LOCKDEP_WARN(!irqs_disabled(), "rcu_preempt_note_context_switch() invoked with interrupts enabled!!!\n");
+- WARN_ON_ONCE(!preempt && t->rcu_read_lock_nesting > 0);
++#if defined(CONFIG_PREEMPT_RT_FULL)
++ sleeping_l = t->sleeping_lock;
++#endif
++ WARN_ON_ONCE(!preempt && t->rcu_read_lock_nesting > 0 && !sleeping_l);
+ if (t->rcu_read_lock_nesting > 0 &&
+ !t->rcu_read_unlock_special.b.blocked) {
+
+@@ -463,7 +444,7 @@
+ }
+
+ /* Hardware IRQ handlers cannot block, complain if they get here. */
+- if (in_irq() || in_serving_softirq()) {
++ if (preempt_count() & (HARDIRQ_MASK | SOFTIRQ_OFFSET)) {
+ lockdep_rcu_suspicious(__FILE__, __LINE__,
+ "rcu_read_unlock() from irq or softirq with blocking in critical section!!!\n");
+ pr_alert("->rcu_read_unlock_special: %#x (b: %d, enq: %d nq: %d)\n",
+@@ -530,7 +511,7 @@
+
+ /* Unboost if we were boosted. */
+ if (IS_ENABLED(CONFIG_RCU_BOOST) && drop_boost_mutex)
+- rt_mutex_unlock(&rnp->boost_mtx);
++ rt_mutex_futex_unlock(&rnp->boost_mtx);
+
+ /*
+ * If this was the last task on the expedited lists,
+@@ -684,15 +665,6 @@
+ t->rcu_read_unlock_special.b.need_qs = true;
+ }
+
+-#ifdef CONFIG_RCU_BOOST
+-
+-static void rcu_preempt_do_callbacks(void)
+-{
+- rcu_do_batch(rcu_state_p, this_cpu_ptr(rcu_data_p));
+-}
+-
+-#endif /* #ifdef CONFIG_RCU_BOOST */
+-
+ /**
+ * call_rcu() - Queue an RCU callback for invocation after a grace period.
+ * @head: structure to be used for queueing the RCU updates.
+@@ -915,20 +887,23 @@
+
+ #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
-+/**
-+ * hotplug_pcp - per cpu hotplug descriptor
-+ * @unplug: set when pin_current_cpu() needs to sync tasks
-+ * @sync_tsk: the task that waits for tasks to finish pinned sections
-+ * @refcount: counter of tasks in pinned sections
-+ * @grab_lock: set when the tasks entering pinned sections should wait
-+ * @synced: notifier for @sync_tsk to tell cpu_down it's finished
-+ * @mutex: the mutex to make tasks wait (used when @grab_lock is true)
-+ * @mutex_init: zero if the mutex hasn't been initialized yet.
-+ *
-+ * Although @unplug and @sync_tsk may point to the same task, the @unplug
-+ * is used as a flag and still exists after @sync_tsk has exited and
-+ * @sync_tsk set to NULL.
-+ */
-+struct hotplug_pcp {
-+ struct task_struct *unplug;
-+ struct task_struct *sync_tsk;
-+ int refcount;
-+ int grab_lock;
-+ struct completion synced;
-+ struct completion unplug_wait;
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ /*
-+ * Note, on PREEMPT_RT, the hotplug lock must save the state of
-+ * the task, otherwise the mutex will cause the task to fail
-+ * to sleep when required. (Because it's called from migrate_disable())
-+ *
-+ * The spinlock_t on PREEMPT_RT is a mutex that saves the task's
-+ * state.
-+ */
-+ spinlock_t lock;
-+#else
-+ struct mutex mutex;
-+#endif
-+ int mutex_init;
-+};
-+
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+# define hotplug_lock(hp) rt_spin_lock__no_mg(&(hp)->lock)
-+# define hotplug_unlock(hp) rt_spin_unlock__no_mg(&(hp)->lock)
-+#else
-+# define hotplug_lock(hp) mutex_lock(&(hp)->mutex)
-+# define hotplug_unlock(hp) mutex_unlock(&(hp)->mutex)
-+#endif
-+
-+static DEFINE_PER_CPU(struct hotplug_pcp, hotplug_pcp);
-+
-+/**
-+ * pin_current_cpu - Prevent the current cpu from being unplugged
-+ *
-+ * Lightweight version of get_online_cpus() to prevent cpu from being
-+ * unplugged when code runs in a migration disabled region.
-+ *
-+ * Must be called with preemption disabled (preempt_count = 1)!
-+ */
-+void pin_current_cpu(void)
-+{
-+ struct hotplug_pcp *hp;
-+ int force = 0;
-+
-+retry:
-+ hp = this_cpu_ptr(&hotplug_pcp);
-+
-+ if (!hp->unplug || hp->refcount || force || preempt_count() > 1 ||
-+ hp->unplug == current) {
-+ hp->refcount++;
-+ return;
-+ }
-+ if (hp->grab_lock) {
-+ preempt_enable();
-+ hotplug_lock(hp);
-+ hotplug_unlock(hp);
-+ } else {
-+ preempt_enable();
-+ /*
-+ * Try to push this task off of this CPU.
-+ */
-+ if (!migrate_me()) {
-+ preempt_disable();
-+ hp = this_cpu_ptr(&hotplug_pcp);
-+ if (!hp->grab_lock) {
-+ /*
-+ * Just let it continue it's already pinned
-+ * or about to sleep.
-+ */
-+ force = 1;
-+ goto retry;
-+ }
-+ preempt_enable();
-+ }
-+ }
-+ preempt_disable();
-+ goto retry;
-+}
-+
-+/**
-+ * unpin_current_cpu - Allow unplug of current cpu
-+ *
-+ * Must be called with preemption or interrupts disabled!
-+ */
-+void unpin_current_cpu(void)
-+{
-+ struct hotplug_pcp *hp = this_cpu_ptr(&hotplug_pcp);
-+
-+ WARN_ON(hp->refcount <= 0);
-+
-+ /* This is safe. sync_unplug_thread is pinned to this cpu */
-+ if (!--hp->refcount && hp->unplug && hp->unplug != current)
-+ wake_up_process(hp->unplug);
-+}
-+
-+static void wait_for_pinned_cpus(struct hotplug_pcp *hp)
-+{
-+ set_current_state(TASK_UNINTERRUPTIBLE);
-+ while (hp->refcount) {
-+ schedule_preempt_disabled();
-+ set_current_state(TASK_UNINTERRUPTIBLE);
-+ }
-+}
-+
-+static int sync_unplug_thread(void *data)
-+{
-+ struct hotplug_pcp *hp = data;
-+
-+ wait_for_completion(&hp->unplug_wait);
-+ preempt_disable();
-+ hp->unplug = current;
-+ wait_for_pinned_cpus(hp);
-+
-+ /*
-+ * This thread will synchronize the cpu_down() with threads
-+ * that have pinned the CPU. When the pinned CPU count reaches
-+ * zero, we inform the cpu_down code to continue to the next step.
-+ */
-+ set_current_state(TASK_UNINTERRUPTIBLE);
-+ preempt_enable();
-+ complete(&hp->synced);
-+
-+ /*
-+ * If all succeeds, the next step will need tasks to wait till
-+ * the CPU is offline before continuing. To do this, the grab_lock
-+ * is set and tasks going into pin_current_cpu() will block on the
-+ * mutex. But we still need to wait for those that are already in
-+ * pinned CPU sections. If the cpu_down() failed, the kthread_should_stop()
-+ * will kick this thread out.
-+ */
-+ while (!hp->grab_lock && !kthread_should_stop()) {
-+ schedule();
-+ set_current_state(TASK_UNINTERRUPTIBLE);
-+ }
-+
-+ /* Make sure grab_lock is seen before we see a stale completion */
-+ smp_mb();
-+
-+ /*
-+ * Now just before cpu_down() enters stop machine, we need to make
-+ * sure all tasks that are in pinned CPU sections are out, and new
-+ * tasks will now grab the lock, keeping them from entering pinned
-+ * CPU sections.
-+ */
-+ if (!kthread_should_stop()) {
-+ preempt_disable();
-+ wait_for_pinned_cpus(hp);
-+ preempt_enable();
-+ complete(&hp->synced);
-+ }
-+
-+ set_current_state(TASK_UNINTERRUPTIBLE);
-+ while (!kthread_should_stop()) {
-+ schedule();
-+ set_current_state(TASK_UNINTERRUPTIBLE);
-+ }
-+ set_current_state(TASK_RUNNING);
-+
-+ /*
-+ * Force this thread off this CPU as it's going down and
-+ * we don't want any more work on this CPU.
-+ */
-+ current->flags &= ~PF_NO_SETAFFINITY;
-+ set_cpus_allowed_ptr(current, cpu_present_mask);
-+ migrate_me();
-+ return 0;
-+}
-+
-+static void __cpu_unplug_sync(struct hotplug_pcp *hp)
-+{
-+ wake_up_process(hp->sync_tsk);
-+ wait_for_completion(&hp->synced);
-+}
-+
-+static void __cpu_unplug_wait(unsigned int cpu)
-+{
-+ struct hotplug_pcp *hp = &per_cpu(hotplug_pcp, cpu);
-+
-+ complete(&hp->unplug_wait);
-+ wait_for_completion(&hp->synced);
-+}
-+
+/*
-+ * Start the sync_unplug_thread on the target cpu and wait for it to
-+ * complete.
++ * If boosting, set rcuc kthreads to realtime priority.
+ */
-+static int cpu_unplug_begin(unsigned int cpu)
-+{
-+ struct hotplug_pcp *hp = &per_cpu(hotplug_pcp, cpu);
-+ int err;
-+
-+ /* Protected by cpu_hotplug.lock */
-+ if (!hp->mutex_init) {
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ spin_lock_init(&hp->lock);
-+#else
-+ mutex_init(&hp->mutex);
-+#endif
-+ hp->mutex_init = 1;
-+ }
-+
-+ /* Inform the scheduler to migrate tasks off this CPU */
-+ tell_sched_cpu_down_begin(cpu);
-+
-+ init_completion(&hp->synced);
-+ init_completion(&hp->unplug_wait);
-+
-+ hp->sync_tsk = kthread_create(sync_unplug_thread, hp, "sync_unplug/%d", cpu);
-+ if (IS_ERR(hp->sync_tsk)) {
-+ err = PTR_ERR(hp->sync_tsk);
-+ hp->sync_tsk = NULL;
-+ return err;
-+ }
-+ kthread_bind(hp->sync_tsk, cpu);
-+
-+ /*
-+ * Wait for tasks to get out of the pinned sections,
-+ * it's still OK if new tasks enter. Some CPU notifiers will
-+ * wait for tasks that are going to enter these sections and
-+ * we must not have them block.
-+ */
-+ wake_up_process(hp->sync_tsk);
-+ return 0;
-+}
-+
-+static void cpu_unplug_sync(unsigned int cpu)
-+{
-+ struct hotplug_pcp *hp = &per_cpu(hotplug_pcp, cpu);
-+
-+ init_completion(&hp->synced);
-+	/* The completion needs to be initialized before setting grab_lock */
-+ smp_wmb();
-+
-+ /* Grab the mutex before setting grab_lock */
-+ hotplug_lock(hp);
-+ hp->grab_lock = 1;
-+
-+ /*
-+ * The CPU notifiers have been completed.
-+ * Wait for tasks to get out of pinned CPU sections and have new
-+ * tasks block until the CPU is completely down.
-+ */
-+ __cpu_unplug_sync(hp);
-+
-+ /* All done with the sync thread */
-+ kthread_stop(hp->sync_tsk);
-+ hp->sync_tsk = NULL;
-+}
-+
-+static void cpu_unplug_done(unsigned int cpu)
++static void rcu_cpu_kthread_setup(unsigned int cpu)
+{
-+ struct hotplug_pcp *hp = &per_cpu(hotplug_pcp, cpu);
-+
-+ hp->unplug = NULL;
-+ /* Let all tasks know cpu unplug is finished before cleaning up */
-+ smp_wmb();
+ #ifdef CONFIG_RCU_BOOST
++ struct sched_param sp;
+
+-#include "../locking/rtmutex_common.h"
+-
+-static void rcu_wake_cond(struct task_struct *t, int status)
+-{
+- /*
+- * If the thread is yielding, only wake it when this
+- * is invoked from idle
+- */
+- if (status != RCU_KTHREAD_YIELDING || is_idle_task(current))
+- wake_up_process(t);
++ sp.sched_priority = kthread_prio;
++ sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
++#endif /* #ifdef CONFIG_RCU_BOOST */
+ }
+
++#ifdef CONFIG_RCU_BOOST
+
-+ if (hp->sync_tsk)
-+ kthread_stop(hp->sync_tsk);
++#include "../locking/rtmutex_common.h"
+
-+ if (hp->grab_lock) {
-+ hotplug_unlock(hp);
-+ /* protected by cpu_hotplug.lock */
-+ hp->grab_lock = 0;
-+ }
-+ tell_sched_cpu_down_done(cpu);
-+}
-
- void get_online_cpus(void)
- {
-@@ -789,10 +1072,14 @@ static int takedown_cpu(unsigned int cpu)
- struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
- int err;
+ /*
+ * Carry out RCU priority boosting on the task indicated by ->exp_tasks
+ * or ->boost_tasks, advancing the pointer to the next task in the
+@@ -1071,23 +1046,6 @@
+ }
-+ __cpu_unplug_wait(cpu);
- /* Park the smpboot threads */
- kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread);
- smpboot_park_threads(cpu);
+ /*
+- * Wake up the per-CPU kthread to invoke RCU callbacks.
+- */
+-static void invoke_rcu_callbacks_kthread(void)
+-{
+- unsigned long flags;
+-
+- local_irq_save(flags);
+- __this_cpu_write(rcu_cpu_has_work, 1);
+- if (__this_cpu_read(rcu_cpu_kthread_task) != NULL &&
+- current != __this_cpu_read(rcu_cpu_kthread_task)) {
+- rcu_wake_cond(__this_cpu_read(rcu_cpu_kthread_task),
+- __this_cpu_read(rcu_cpu_kthread_status));
+- }
+- local_irq_restore(flags);
+-}
+-
+-/*
+ * Is the current CPU running the RCU-callbacks kthread?
+ * Caller must have preemption disabled.
+ */
+@@ -1141,67 +1099,6 @@
+ return 0;
+ }
-+ /* Notifiers are done. Don't let any more tasks pin this CPU. */
-+ cpu_unplug_sync(cpu);
-+
- /*
- * Prevent irq alloc/free while the dying cpu reorganizes the
- * interrupt affinities.
-@@ -877,6 +1164,9 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
- struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
- int prev_state, ret = 0;
- bool hasdied = false;
-+ int mycpu;
-+ cpumask_var_t cpumask;
-+ cpumask_var_t cpumask_org;
+-static void rcu_kthread_do_work(void)
+-{
+- rcu_do_batch(&rcu_sched_state, this_cpu_ptr(&rcu_sched_data));
+- rcu_do_batch(&rcu_bh_state, this_cpu_ptr(&rcu_bh_data));
+- rcu_preempt_do_callbacks();
+-}
+-
+-static void rcu_cpu_kthread_setup(unsigned int cpu)
+-{
+- struct sched_param sp;
+-
+- sp.sched_priority = kthread_prio;
+- sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
+-}
+-
+-static void rcu_cpu_kthread_park(unsigned int cpu)
+-{
+- per_cpu(rcu_cpu_kthread_status, cpu) = RCU_KTHREAD_OFFCPU;
+-}
+-
+-static int rcu_cpu_kthread_should_run(unsigned int cpu)
+-{
+- return __this_cpu_read(rcu_cpu_has_work);
+-}
+-
+-/*
+- * Per-CPU kernel thread that invokes RCU callbacks. This replaces the
+- * RCU softirq used in flavors and configurations of RCU that do not
+- * support RCU priority boosting.
+- */
+-static void rcu_cpu_kthread(unsigned int cpu)
+-{
+- unsigned int *statusp = this_cpu_ptr(&rcu_cpu_kthread_status);
+- char work, *workp = this_cpu_ptr(&rcu_cpu_has_work);
+- int spincnt;
+-
+- for (spincnt = 0; spincnt < 10; spincnt++) {
+- trace_rcu_utilization(TPS("Start CPU kthread@rcu_wait"));
+- local_bh_disable();
+- *statusp = RCU_KTHREAD_RUNNING;
+- this_cpu_inc(rcu_cpu_kthread_loops);
+- local_irq_disable();
+- work = *workp;
+- *workp = 0;
+- local_irq_enable();
+- if (work)
+- rcu_kthread_do_work();
+- local_bh_enable();
+- if (*workp == 0) {
+- trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
+- *statusp = RCU_KTHREAD_WAITING;
+- return;
+- }
+- }
+- *statusp = RCU_KTHREAD_YIELDING;
+- trace_rcu_utilization(TPS("Start CPU kthread@rcu_yield"));
+- schedule_timeout_interruptible(2);
+- trace_rcu_utilization(TPS("End CPU kthread@rcu_yield"));
+- *statusp = RCU_KTHREAD_WAITING;
+-}
+-
+ /*
+ * Set the per-rcu_node kthread's affinity to cover all CPUs that are
+ * served by the rcu_node in question. The CPU hotplug lock is still
+@@ -1232,26 +1129,12 @@
+ free_cpumask_var(cm);
+ }
- if (num_online_cpus() == 1)
- return -EBUSY;
-@@ -884,7 +1174,34 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
- if (!cpu_present(cpu))
- return -EINVAL;
+-static struct smp_hotplug_thread rcu_cpu_thread_spec = {
+- .store = &rcu_cpu_kthread_task,
+- .thread_should_run = rcu_cpu_kthread_should_run,
+- .thread_fn = rcu_cpu_kthread,
+- .thread_comm = "rcuc/%u",
+- .setup = rcu_cpu_kthread_setup,
+- .park = rcu_cpu_kthread_park,
+-};
+-
+ /*
+ * Spawn boost kthreads -- called as soon as the scheduler is running.
+ */
+ static void __init rcu_spawn_boost_kthreads(void)
+ {
+ struct rcu_node *rnp;
+- int cpu;
+-
+- for_each_possible_cpu(cpu)
+- per_cpu(rcu_cpu_has_work, cpu) = 0;
+- BUG_ON(smpboot_register_percpu_thread(&rcu_cpu_thread_spec));
+ rcu_for_each_leaf_node(rcu_state_p, rnp)
+ (void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
+ }
+@@ -1274,11 +1157,6 @@
+ raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ }
-+ /* Move the downtaker off the unplug cpu */
-+ if (!alloc_cpumask_var(&cpumask, GFP_KERNEL))
-+ return -ENOMEM;
-+ if (!alloc_cpumask_var(&cpumask_org, GFP_KERNEL)) {
-+ free_cpumask_var(cpumask);
-+ return -ENOMEM;
-+ }
-+
-+ cpumask_copy(cpumask_org, tsk_cpus_allowed(current));
-+ cpumask_andnot(cpumask, cpu_online_mask, cpumask_of(cpu));
-+ set_cpus_allowed_ptr(current, cpumask);
-+ free_cpumask_var(cpumask);
-+ migrate_disable();
-+ mycpu = smp_processor_id();
-+ if (mycpu == cpu) {
-+ printk(KERN_ERR "Yuck! Still on unplug CPU\n!");
-+ migrate_enable();
-+ ret = -EBUSY;
-+ goto restore_cpus;
-+ }
-+
-+ migrate_enable();
- cpu_hotplug_begin();
-+ ret = cpu_unplug_begin(cpu);
-+ if (ret) {
-+ printk("cpu_unplug_begin(%d) failed\n", cpu);
-+ goto out_cancel;
-+ }
+-static void invoke_rcu_callbacks_kthread(void)
+-{
+- WARN_ON_ONCE(1);
+-}
+-
+ static bool rcu_is_callbacks_kthread(void)
+ {
+ return false;
+@@ -1302,7 +1180,7 @@
- cpuhp_tasks_frozen = tasks_frozen;
+ #endif /* #else #ifdef CONFIG_RCU_BOOST */
-@@ -923,10 +1240,15 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
+-#if !defined(CONFIG_RCU_FAST_NO_HZ)
++#if !defined(CONFIG_RCU_FAST_NO_HZ) || defined(CONFIG_PREEMPT_RT_FULL)
- hasdied = prev_state != st->state && st->state == CPUHP_OFFLINE;
- out:
-+ cpu_unplug_done(cpu);
-+out_cancel:
- cpu_hotplug_done();
- /* This post dead nonsense must die */
- if (!ret && hasdied)
- cpu_notify_nofail(CPU_POST_DEAD, cpu);
-+restore_cpus:
-+ set_cpus_allowed_ptr(current, cpumask_org);
-+ free_cpumask_var(cpumask_org);
- return ret;
+ /*
+ * Check to see if any future RCU-related work will need to be done
+@@ -1318,7 +1196,9 @@
+ *nextevt = KTIME_MAX;
+ return rcu_cpu_has_callbacks(NULL);
}
++#endif /* !defined(CONFIG_RCU_FAST_NO_HZ) || defined(CONFIG_PREEMPT_RT_FULL) */
-diff --git a/kernel/cpuset.c b/kernel/cpuset.c
-index 29f815d2ef7e..341b17f24f95 100644
---- a/kernel/cpuset.c
-+++ b/kernel/cpuset.c
-@@ -284,7 +284,7 @@ static struct cpuset top_cpuset = {
- */
++#if !defined(CONFIG_RCU_FAST_NO_HZ)
+ /*
+ * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up
+ * after it.
+@@ -1414,6 +1294,8 @@
+ return cbs_ready;
+ }
- static DEFINE_MUTEX(cpuset_mutex);
--static DEFINE_SPINLOCK(callback_lock);
-+static DEFINE_RAW_SPINLOCK(callback_lock);
++#ifndef CONFIG_PREEMPT_RT_FULL
++
+ /*
+ * Allow the CPU to enter dyntick-idle mode unless it has callbacks ready
+ * to invoke. If the CPU has callbacks, try to advance them. Tell the
+@@ -1456,6 +1338,7 @@
+ *nextevt = basemono + dj * TICK_NSEC;
+ return 0;
+ }
++#endif /* #ifndef CONFIG_PREEMPT_RT_FULL */
- static struct workqueue_struct *cpuset_migrate_mm_wq;
+ /*
+ * Prepare a CPU for idle from an RCU perspective. The first major task
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/rcu/update.c linux-4.14/kernel/rcu/update.c
+--- linux-4.14.orig/kernel/rcu/update.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/rcu/update.c 2018-09-05 11:05:07.000000000 +0200
+@@ -66,7 +66,7 @@
+ module_param(rcu_expedited, int, 0);
+ extern int rcu_normal; /* from sysctl */
+ module_param(rcu_normal, int, 0);
+-static int rcu_normal_after_boot;
++static int rcu_normal_after_boot = IS_ENABLED(CONFIG_PREEMPT_RT_FULL);
+ module_param(rcu_normal_after_boot, int, 0);
+ #endif /* #ifndef CONFIG_TINY_RCU */
-@@ -907,9 +907,9 @@ static void update_cpumasks_hier(struct cpuset *cs, struct cpumask *new_cpus)
- continue;
- rcu_read_unlock();
+@@ -333,6 +333,7 @@
+ }
+ EXPORT_SYMBOL_GPL(rcu_read_lock_held);
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- cpumask_copy(cp->effective_cpus, new_cpus);
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
++#ifndef CONFIG_PREEMPT_RT_FULL
+ /**
+ * rcu_read_lock_bh_held() - might we be in RCU-bh read-side critical section?
+ *
+@@ -359,6 +360,7 @@
+ return in_softirq() || irqs_disabled();
+ }
+ EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
++#endif
- WARN_ON(!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
- !cpumask_equal(cp->cpus_allowed, cp->effective_cpus));
-@@ -974,9 +974,9 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
- if (retval < 0)
- return retval;
+ #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- cpumask_copy(cs->cpus_allowed, trialcs->cpus_allowed);
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/completion.c linux-4.14/kernel/sched/completion.c
+--- linux-4.14.orig/kernel/sched/completion.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/sched/completion.c 2018-09-05 11:05:07.000000000 +0200
+@@ -32,7 +32,7 @@
+ {
+ unsigned long flags;
- /* use trialcs->cpus_allowed as a temp variable */
- update_cpumasks_hier(cs, trialcs->cpus_allowed);
-@@ -1176,9 +1176,9 @@ static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
- continue;
- rcu_read_unlock();
+- spin_lock_irqsave(&x->wait.lock, flags);
++ raw_spin_lock_irqsave(&x->wait.lock, flags);
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- cp->effective_mems = *new_mems;
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
+ /*
+ * Perform commit of crossrelease here.
+@@ -41,8 +41,8 @@
- WARN_ON(!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
- !nodes_equal(cp->mems_allowed, cp->effective_mems));
-@@ -1246,9 +1246,9 @@ static int update_nodemask(struct cpuset *cs, struct cpuset *trialcs,
- if (retval < 0)
- goto done;
+ if (x->done != UINT_MAX)
+ x->done++;
+- __wake_up_locked(&x->wait, TASK_NORMAL, 1);
+- spin_unlock_irqrestore(&x->wait.lock, flags);
++ swake_up_locked(&x->wait);
++ raw_spin_unlock_irqrestore(&x->wait.lock, flags);
+ }
+ EXPORT_SYMBOL(complete);
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- cs->mems_allowed = trialcs->mems_allowed;
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
+@@ -66,10 +66,10 @@
+ {
+ unsigned long flags;
- /* use trialcs->mems_allowed as a temp variable */
- update_nodemasks_hier(cs, &trialcs->mems_allowed);
-@@ -1339,9 +1339,9 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
- spread_flag_changed = ((is_spread_slab(cs) != is_spread_slab(trialcs))
- || (is_spread_page(cs) != is_spread_page(trialcs)));
+- spin_lock_irqsave(&x->wait.lock, flags);
++ raw_spin_lock_irqsave(&x->wait.lock, flags);
+ x->done = UINT_MAX;
+- __wake_up_locked(&x->wait, TASK_NORMAL, 0);
+- spin_unlock_irqrestore(&x->wait.lock, flags);
++ swake_up_all_locked(&x->wait);
++ raw_spin_unlock_irqrestore(&x->wait.lock, flags);
+ }
+ EXPORT_SYMBOL(complete_all);
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- cs->flags = trialcs->flags;
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
+@@ -78,20 +78,20 @@
+ long (*action)(long), long timeout, int state)
+ {
+ if (!x->done) {
+- DECLARE_WAITQUEUE(wait, current);
++ DECLARE_SWAITQUEUE(wait);
- if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed)
- rebuild_sched_domains_locked();
-@@ -1756,7 +1756,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
- cpuset_filetype_t type = seq_cft(sf)->private;
- int ret = 0;
+- __add_wait_queue_entry_tail_exclusive(&x->wait, &wait);
++ __prepare_to_swait(&x->wait, &wait);
+ do {
+ if (signal_pending_state(state, current)) {
+ timeout = -ERESTARTSYS;
+ break;
+ }
+ __set_current_state(state);
+- spin_unlock_irq(&x->wait.lock);
++ raw_spin_unlock_irq(&x->wait.lock);
+ timeout = action(timeout);
+- spin_lock_irq(&x->wait.lock);
++ raw_spin_lock_irq(&x->wait.lock);
+ } while (!x->done && timeout);
+- __remove_wait_queue(&x->wait, &wait);
++ __finish_swait(&x->wait, &wait);
+ if (!x->done)
+ return timeout;
+ }
+@@ -108,9 +108,9 @@
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
+ complete_acquire(x);
- switch (type) {
- case FILE_CPULIST:
-@@ -1775,7 +1775,7 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
- ret = -EINVAL;
- }
+- spin_lock_irq(&x->wait.lock);
++ raw_spin_lock_irq(&x->wait.lock);
+ timeout = do_wait_for_common(x, action, timeout, state);
+- spin_unlock_irq(&x->wait.lock);
++ raw_spin_unlock_irq(&x->wait.lock);
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
+ complete_release(x);
+
+@@ -299,12 +299,12 @@
+ if (!READ_ONCE(x->done))
+ return 0;
+
+- spin_lock_irqsave(&x->wait.lock, flags);
++ raw_spin_lock_irqsave(&x->wait.lock, flags);
+ if (!x->done)
+ ret = 0;
+ else if (x->done != UINT_MAX)
+ x->done--;
+- spin_unlock_irqrestore(&x->wait.lock, flags);
++ raw_spin_unlock_irqrestore(&x->wait.lock, flags);
return ret;
}
+ EXPORT_SYMBOL(try_wait_for_completion);
+@@ -330,8 +330,8 @@
+ * otherwise we can end up freeing the completion before complete()
+ * is done referencing it.
+ */
+- spin_lock_irqsave(&x->wait.lock, flags);
+- spin_unlock_irqrestore(&x->wait.lock, flags);
++ raw_spin_lock_irqsave(&x->wait.lock, flags);
++ raw_spin_unlock_irqrestore(&x->wait.lock, flags);
+ return true;
+ }
+ EXPORT_SYMBOL(completion_done);
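
The completion.c changes swap the wait_queue_head_t inside struct completion for a simple waitqueue (swait) protected by a raw spinlock, so complete() and complete_all() stay usable from truly atomic context on RT; the external API is unchanged for callers. A minimal user that is unaffected by the conversion (illustrative code, not from the patch):

#include <linux/completion.h>
#include <linux/kthread.h>
#include <linux/err.h>

static DECLARE_COMPLETION(setup_done);

static int setup_worker(void *unused)
{
	/* ... bring the subsystem up ... */
	complete(&setup_done);		/* wakes the waiter, via swake_up_locked() on this kernel */
	return 0;
}

static int start_and_wait(void)
{
	struct task_struct *t = kthread_run(setup_worker, NULL, "setup-worker");

	if (IS_ERR(t))
		return PTR_ERR(t);
	wait_for_completion(&setup_done);
	return 0;
}
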
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/core.c linux-4.14/kernel/sched/core.c
+--- linux-4.14.orig/kernel/sched/core.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/sched/core.c 2018-09-05 11:05:07.000000000 +0200
+@@ -59,7 +59,11 @@
+ * Number of tasks to iterate in a single balance run.
+ * Limited because this is done with IRQs disabled.
+ */
++#ifndef CONFIG_PREEMPT_RT_FULL
+ const_debug unsigned int sysctl_sched_nr_migrate = 32;
++#else
++const_debug unsigned int sysctl_sched_nr_migrate = 8;
++#endif
-@@ -1989,12 +1989,12 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
+ /*
+ * period over which we average the RT time consumption, measured
+@@ -341,7 +345,7 @@
+ rq->hrtick_csd.info = rq;
+ #endif
- cpuset_inc();
+- hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ rq->hrtick_timer.function = hrtick;
+ }
+ #else /* CONFIG_SCHED_HRTICK */
+@@ -423,9 +427,15 @@
+ #endif
+ #endif
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) {
- cpumask_copy(cs->effective_cpus, parent->effective_cpus);
- cs->effective_mems = parent->effective_mems;
- }
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
+-void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++void __wake_q_add(struct wake_q_head *head, struct task_struct *task,
++ bool sleeper)
+ {
+- struct wake_q_node *node = &task->wake_q;
++ struct wake_q_node *node;
++
++ if (sleeper)
++ node = &task->wake_q_sleeper;
++ else
++ node = &task->wake_q;
- if (!test_bit(CGRP_CPUSET_CLONE_CHILDREN, &css->cgroup->flags))
- goto out_unlock;
-@@ -2021,12 +2021,12 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
+ /*
+ * Atomically grab the task, if ->wake_q is !nil already it means
+@@ -447,24 +457,32 @@
+ head->lastp = &node->next;
+ }
+
+-void wake_up_q(struct wake_q_head *head)
++void __wake_up_q(struct wake_q_head *head, bool sleeper)
+ {
+ struct wake_q_node *node = head->first;
+
+ while (node != WAKE_Q_TAIL) {
+ struct task_struct *task;
+
+- task = container_of(node, struct task_struct, wake_q);
++ if (sleeper)
++ task = container_of(node, struct task_struct, wake_q_sleeper);
++ else
++ task = container_of(node, struct task_struct, wake_q);
+ BUG_ON(!task);
+ /* Task can safely be re-inserted now: */
+ node = node->next;
+- task->wake_q.next = NULL;
+-
++ if (sleeper)
++ task->wake_q_sleeper.next = NULL;
++ else
++ task->wake_q.next = NULL;
+ /*
+ * wake_up_process() implies a wmb() to pair with the queueing
+ * in wake_q_add() so as not to miss wakeups.
+ */
+- wake_up_process(task);
++ if (sleeper)
++ wake_up_lock_sleeper(task);
++ else
++ wake_up_process(task);
+ put_task_struct(task);
}
- rcu_read_unlock();
+ }
+@@ -500,6 +518,48 @@
+ trace_sched_wake_idle_without_ipi(cpu);
+ }
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- cs->mems_allowed = parent->mems_allowed;
- cs->effective_mems = parent->mems_allowed;
- cpumask_copy(cs->cpus_allowed, parent->cpus_allowed);
- cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
- out_unlock:
- mutex_unlock(&cpuset_mutex);
- return 0;
-@@ -2065,7 +2065,7 @@ static void cpuset_css_free(struct cgroup_subsys_state *css)
- static void cpuset_bind(struct cgroup_subsys_state *root_css)
++#ifdef CONFIG_PREEMPT_LAZY
++
++static int tsk_is_polling(struct task_struct *p)
++{
++#ifdef TIF_POLLING_NRFLAG
++ return test_tsk_thread_flag(p, TIF_POLLING_NRFLAG);
++#else
++ return 0;
++#endif
++}
++
++void resched_curr_lazy(struct rq *rq)
++{
++ struct task_struct *curr = rq->curr;
++ int cpu;
++
++ if (!sched_feat(PREEMPT_LAZY)) {
++ resched_curr(rq);
++ return;
++ }
++
++ lockdep_assert_held(&rq->lock);
++
++ if (test_tsk_need_resched(curr))
++ return;
++
++ if (test_tsk_need_resched_lazy(curr))
++ return;
++
++ set_tsk_need_resched_lazy(curr);
++
++ cpu = cpu_of(rq);
++ if (cpu == smp_processor_id())
++ return;
++
++ /* NEED_RESCHED_LAZY must be visible before we test polling */
++ smp_mb();
++ if (!tsk_is_polling(curr))
++ smp_send_reschedule(cpu);
++}
++#endif
++
+ void resched_cpu(int cpu)
{
- mutex_lock(&cpuset_mutex);
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
+ struct rq *rq = cpu_rq(cpu);
+@@ -523,11 +583,14 @@
+ */
+ int get_nohz_timer_target(void)
+ {
+- int i, cpu = smp_processor_id();
++ int i, cpu;
+ struct sched_domain *sd;
- if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) {
- cpumask_copy(top_cpuset.cpus_allowed, cpu_possible_mask);
-@@ -2076,7 +2076,7 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)
- top_cpuset.mems_allowed = top_cpuset.effective_mems;
- }
++ preempt_disable_rt();
++ cpu = smp_processor_id();
++
+ if (!idle_cpu(cpu) && is_housekeeping_cpu(cpu))
+- return cpu;
++ goto preempt_en_rt;
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
- mutex_unlock(&cpuset_mutex);
+ rcu_read_lock();
+ for_each_domain(cpu, sd) {
+@@ -546,6 +609,8 @@
+ cpu = housekeeping_any_cpu();
+ unlock:
+ rcu_read_unlock();
++preempt_en_rt:
++ preempt_enable_rt();
+ return cpu;
}
-@@ -2177,12 +2177,12 @@ hotplug_update_tasks_legacy(struct cpuset *cs,
+@@ -912,7 +977,7 @@
+ */
+ static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
{
- bool is_empty;
-
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- cpumask_copy(cs->cpus_allowed, new_cpus);
- cpumask_copy(cs->effective_cpus, new_cpus);
- cs->mems_allowed = *new_mems;
- cs->effective_mems = *new_mems;
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
+- if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ return false;
+ if (is_per_cpu_kthread(p))
+@@ -1007,7 +1072,7 @@
+ local_irq_disable();
/*
- * Don't call update_tasks_cpumask() if the cpuset becomes empty,
-@@ -2219,10 +2219,10 @@ hotplug_update_tasks(struct cpuset *cs,
- if (nodes_empty(*new_mems))
- *new_mems = parent_cs(cs)->effective_mems;
-
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- cpumask_copy(cs->effective_cpus, new_cpus);
- cs->effective_mems = *new_mems;
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
-
- if (cpus_updated)
- update_tasks_cpumask(cs);
-@@ -2308,21 +2308,21 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
-
- /* synchronize cpus_allowed to cpu_active_mask */
- if (cpus_updated) {
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- if (!on_dfl)
- cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
- cpumask_copy(top_cpuset.effective_cpus, &new_cpus);
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
- /* we don't mess with cpumasks of tasks in top_cpuset */
- }
-
- /* synchronize mems_allowed to N_MEMORY */
- if (mems_updated) {
-- spin_lock_irq(&callback_lock);
-+ raw_spin_lock_irq(&callback_lock);
- if (!on_dfl)
- top_cpuset.mems_allowed = new_mems;
- top_cpuset.effective_mems = new_mems;
-- spin_unlock_irq(&callback_lock);
-+ raw_spin_unlock_irq(&callback_lock);
- update_tasks_nodemask(&top_cpuset);
- }
-
-@@ -2420,11 +2420,11 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
+ * We need to explicitly wake pending tasks before running
+- * __migrate_task() such that we will not miss enforcing cpus_allowed
++ * __migrate_task() such that we will not miss enforcing cpus_ptr
+ * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
+ */
+ sched_ttwu_pending();
+@@ -1038,11 +1103,19 @@
+ */
+ void set_cpus_allowed_common(struct task_struct *p, const struct cpumask *new_mask)
{
- unsigned long flags;
+- cpumask_copy(&p->cpus_allowed, new_mask);
++ cpumask_copy(&p->cpus_mask, new_mask);
+ p->nr_cpus_allowed = cpumask_weight(new_mask);
+ }
-- spin_lock_irqsave(&callback_lock, flags);
-+ raw_spin_lock_irqsave(&callback_lock, flags);
- rcu_read_lock();
- guarantee_online_cpus(task_cs(tsk), pmask);
- rcu_read_unlock();
-- spin_unlock_irqrestore(&callback_lock, flags);
-+ raw_spin_unlock_irqrestore(&callback_lock, flags);
+-void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++int __migrate_disabled(struct task_struct *p)
++{
++ return p->migrate_disable;
++}
++#endif
++
++static void __do_set_cpus_allowed_tail(struct task_struct *p,
++ const struct cpumask *new_mask)
+ {
+ struct rq *rq = task_rq(p);
+ bool queued, running;
+@@ -1071,6 +1144,20 @@
+ set_curr_task(rq, p);
}
- void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
-@@ -2472,11 +2472,11 @@ nodemask_t cpuset_mems_allowed(struct task_struct *tsk)
- nodemask_t mask;
- unsigned long flags;
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++ if (__migrate_disabled(p)) {
++ lockdep_assert_held(&p->pi_lock);
++
++ cpumask_copy(&p->cpus_mask, new_mask);
++ p->migrate_disable_update = 1;
++ return;
++ }
++#endif
++ __do_set_cpus_allowed_tail(p, new_mask);
++}
++
+ /*
+ * Change a given task's CPU affinity. Migrate the thread to a
+ * proper CPU and schedule it away if the CPU it's executing on
+@@ -1108,7 +1195,7 @@
+ goto out;
+ }
-- spin_lock_irqsave(&callback_lock, flags);
-+ raw_spin_lock_irqsave(&callback_lock, flags);
- rcu_read_lock();
- guarantee_online_mems(task_cs(tsk), &mask);
- rcu_read_unlock();
-- spin_unlock_irqrestore(&callback_lock, flags);
-+ raw_spin_unlock_irqrestore(&callback_lock, flags);
+- if (cpumask_equal(&p->cpus_allowed, new_mask))
++ if (cpumask_equal(p->cpus_ptr, new_mask))
+ goto out;
- return mask;
- }
-@@ -2568,14 +2568,14 @@ bool __cpuset_node_allowed(int node, gfp_t gfp_mask)
- return true;
+ if (!cpumask_intersects(new_mask, cpu_valid_mask)) {
+@@ -1129,9 +1216,16 @@
+ }
- /* Not hardwall and node outside mems_allowed: scan up cpusets */
-- spin_lock_irqsave(&callback_lock, flags);
-+ raw_spin_lock_irqsave(&callback_lock, flags);
+ /* Can the task run on the task's current CPU? If so, we're done */
+- if (cpumask_test_cpu(task_cpu(p), new_mask))
++ if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
+ goto out;
- rcu_read_lock();
- cs = nearest_hardwall_ancestor(task_cs(current));
- allowed = node_isset(node, cs->mems_allowed);
- rcu_read_unlock();
++#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++ if (__migrate_disabled(p)) {
++ p->migrate_disable_update = 1;
++ goto out;
++ }
++#endif
++
+ dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
+ if (task_running(rq, p) || p->state == TASK_WAKING) {
+ struct migration_arg arg = { p, dest_cpu };
+@@ -1269,10 +1363,10 @@
+ if (task_cpu(arg->src_task) != arg->src_cpu)
+ goto unlock;
-- spin_unlock_irqrestore(&callback_lock, flags);
-+ raw_spin_unlock_irqrestore(&callback_lock, flags);
- return allowed;
- }
+- if (!cpumask_test_cpu(arg->dst_cpu, &arg->src_task->cpus_allowed))
++ if (!cpumask_test_cpu(arg->dst_cpu, arg->src_task->cpus_ptr))
+ goto unlock;
-diff --git a/kernel/debug/kdb/kdb_io.c b/kernel/debug/kdb/kdb_io.c
-index fc1ef736253c..83c666537a7a 100644
---- a/kernel/debug/kdb/kdb_io.c
-+++ b/kernel/debug/kdb/kdb_io.c
-@@ -554,7 +554,6 @@ int vkdb_printf(enum kdb_msgsrc src, const char *fmt, va_list ap)
- int linecount;
- int colcount;
- int logging, saved_loglevel = 0;
-- int saved_trap_printk;
- int got_printf_lock = 0;
- int retlen = 0;
- int fnd, len;
-@@ -565,8 +564,6 @@ int vkdb_printf(enum kdb_msgsrc src, const char *fmt, va_list ap)
- unsigned long uninitialized_var(flags);
+- if (!cpumask_test_cpu(arg->src_cpu, &arg->dst_task->cpus_allowed))
++ if (!cpumask_test_cpu(arg->src_cpu, arg->dst_task->cpus_ptr))
+ goto unlock;
- preempt_disable();
-- saved_trap_printk = kdb_trap_printk;
-- kdb_trap_printk = 0;
+ __migrate_swap_task(arg->src_task, arg->dst_cpu);
+@@ -1313,10 +1407,10 @@
+ if (!cpu_active(arg.src_cpu) || !cpu_active(arg.dst_cpu))
+ goto out;
- /* Serialize kdb_printf if multiple cpus try to write at once.
- * But if any cpu goes recursive in kdb, just print the output,
-@@ -855,7 +852,6 @@ int vkdb_printf(enum kdb_msgsrc src, const char *fmt, va_list ap)
- } else {
- __release(kdb_printf_lock);
- }
-- kdb_trap_printk = saved_trap_printk;
- preempt_enable();
- return retlen;
- }
-@@ -865,9 +861,11 @@ int kdb_printf(const char *fmt, ...)
- va_list ap;
- int r;
+- if (!cpumask_test_cpu(arg.dst_cpu, &arg.src_task->cpus_allowed))
++ if (!cpumask_test_cpu(arg.dst_cpu, arg.src_task->cpus_ptr))
+ goto out;
-+ kdb_trap_printk++;
- va_start(ap, fmt);
- r = vkdb_printf(KDB_MSGSRC_INTERNAL, fmt, ap);
- va_end(ap);
-+ kdb_trap_printk--;
+- if (!cpumask_test_cpu(arg.src_cpu, &arg.dst_task->cpus_allowed))
++ if (!cpumask_test_cpu(arg.src_cpu, arg.dst_task->cpus_ptr))
+ goto out;
- return r;
- }
-diff --git a/kernel/events/core.c b/kernel/events/core.c
-index 02c8421f8c01..3748cb7b2d6e 100644
---- a/kernel/events/core.c
-+++ b/kernel/events/core.c
-@@ -1050,6 +1050,7 @@ static void __perf_mux_hrtimer_init(struct perf_cpu_context *cpuctx, int cpu)
- raw_spin_lock_init(&cpuctx->hrtimer_lock);
- hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
- timer->function = perf_mux_hrtimer_handler;
-+ timer->irqsafe = 1;
+ trace_sched_swap_numa(cur, arg.src_cpu, p, arg.dst_cpu);
+@@ -1326,6 +1420,18 @@
+ return ret;
}
- static int perf_mux_hrtimer_restart(struct perf_cpu_context *cpuctx)
-@@ -8335,6 +8336,7 @@ static void perf_swevent_init_hrtimer(struct perf_event *event)
-
- hrtimer_init(&hwc->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
- hwc->hrtimer.function = perf_swevent_hrtimer;
-+ hwc->hrtimer.irqsafe = 1;
++static bool check_task_state(struct task_struct *p, long match_state)
++{
++ bool match = false;
++
++ raw_spin_lock_irq(&p->pi_lock);
++ if (p->state == match_state || p->saved_state == match_state)
++ match = true;
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ return match;
++}
++
+ /*
+ * wait_task_inactive - wait for a thread to unschedule.
+ *
+@@ -1370,7 +1476,7 @@
+ * is actually now running somewhere else!
+ */
+ while (task_running(rq, p)) {
+- if (match_state && unlikely(p->state != match_state))
++ if (match_state && !check_task_state(p, match_state))
+ return 0;
+ cpu_relax();
+ }
+@@ -1385,7 +1491,8 @@
+ running = task_running(rq, p);
+ queued = task_on_rq_queued(p);
+ ncsw = 0;
+- if (!match_state || p->state == match_state)
++ if (!match_state || p->state == match_state ||
++ p->saved_state == match_state)
+ ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
+ task_rq_unlock(rq, p, &rf);
- /*
- * Since hrtimers have a fixed rate, we can do a static freq->period
-diff --git a/kernel/exit.c b/kernel/exit.c
-index 3076f3089919..fb2ebcf3ca7c 100644
---- a/kernel/exit.c
-+++ b/kernel/exit.c
-@@ -143,7 +143,7 @@ static void __exit_signal(struct task_struct *tsk)
- * Do this under ->siglock, we can race with another thread
- * doing sigqueue_free() if we have SIGQUEUE_PREALLOC signals.
- */
-- flush_sigqueue(&tsk->pending);
-+ flush_task_sigqueue(tsk);
- tsk->sighand = NULL;
- spin_unlock(&sighand->siglock);
+@@ -1460,7 +1567,7 @@
+ EXPORT_SYMBOL_GPL(kick_process);
-diff --git a/kernel/fork.c b/kernel/fork.c
-index ba8a01564985..47784f8aed37 100644
---- a/kernel/fork.c
-+++ b/kernel/fork.c
-@@ -76,6 +76,7 @@
- #include <linux/compiler.h>
- #include <linux/sysctl.h>
- #include <linux/kcov.h>
-+#include <linux/kprobes.h>
+ /*
+- * ->cpus_allowed is protected by both rq->lock and p->pi_lock
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
+ *
+ * A few notes on cpu_active vs cpu_online:
+ *
+@@ -1500,14 +1607,14 @@
+ for_each_cpu(dest_cpu, nodemask) {
+ if (!cpu_active(dest_cpu))
+ continue;
+- if (cpumask_test_cpu(dest_cpu, &p->cpus_allowed))
++ if (cpumask_test_cpu(dest_cpu, p->cpus_ptr))
+ return dest_cpu;
+ }
+ }
- #include <asm/pgtable.h>
- #include <asm/pgalloc.h>
-@@ -376,13 +377,24 @@ static inline void put_signal_struct(struct signal_struct *sig)
- if (atomic_dec_and_test(&sig->sigcnt))
- free_signal_struct(sig);
- }
--
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+static
-+#endif
- void __put_task_struct(struct task_struct *tsk)
+ for (;;) {
+ /* Any allowed, online CPU? */
+- for_each_cpu(dest_cpu, &p->cpus_allowed) {
++ for_each_cpu(dest_cpu, p->cpus_ptr) {
+ if (!is_cpu_allowed(p, dest_cpu))
+ continue;
+
+@@ -1551,7 +1658,7 @@
+ }
+
+ /*
+- * The caller (fork, wakeup) owns p->pi_lock, ->cpus_allowed is stable.
++ * The caller (fork, wakeup) owns p->pi_lock, ->cpus_ptr is stable.
+ */
+ static inline
+ int select_task_rq(struct task_struct *p, int cpu, int sd_flags, int wake_flags)
+@@ -1561,11 +1668,11 @@
+ if (p->nr_cpus_allowed > 1)
+ cpu = p->sched_class->select_task_rq(p, cpu, sd_flags, wake_flags);
+ else
+- cpu = cpumask_any(&p->cpus_allowed);
++ cpu = cpumask_any(p->cpus_ptr);
+
+ /*
+ * In order not to call set_task_cpu() on a blocking task we need
+- * to rely on ttwu() to place the task on a valid ->cpus_allowed
++ * to rely on ttwu() to place the task on a valid ->cpus_ptr
+ * CPU.
+ *
+ * Since this is common to all placement strategies, this lives here.
+@@ -1668,10 +1775,6 @@
{
- WARN_ON(!tsk->exit_state);
- WARN_ON(atomic_read(&tsk->usage));
- WARN_ON(tsk == current);
+ activate_task(rq, p, en_flags);
+ p->on_rq = TASK_ON_RQ_QUEUED;
+-
+- /* If a worker is waking up, notify the workqueue: */
+- if (p->flags & PF_WQ_WORKER)
+- wq_worker_waking_up(p, cpu_of(rq));
+ }
+ /*
+@@ -1995,8 +2098,27 @@
+ */
+ raw_spin_lock_irqsave(&p->pi_lock, flags);
+ smp_mb__after_spinlock();
+- if (!(p->state & state))
++ if (!(p->state & state)) {
++ /*
++ * The task might be running due to a spinlock sleeper
++ * wakeup. Check the saved state and set it to running
++ * if the wakeup condition is true.
++ */
++ if (!(wake_flags & WF_LOCK_SLEEPER)) {
++ if (p->saved_state & state) {
++ p->saved_state = TASK_RUNNING;
++ success = 1;
++ }
++ }
+ goto out;
++ }
++
+ /*
-+ * Remove function-return probe instances associated with this
-+ * task and put them back on the free list.
++ * If this is a regular wakeup, then we can unconditionally
++ * clear the saved state of a "lock sleeper".
+ */
-+ kprobe_flush_task(tsk);
-+
-+ /* Task is done with its stack. */
-+ put_task_stack(tsk);
-+
- cgroup_free(tsk);
- task_numa_free(tsk);
- security_task_free(tsk);
-@@ -393,7 +405,18 @@ void __put_task_struct(struct task_struct *tsk)
- if (!profile_handoff_task(tsk))
- free_task(tsk);
- }
-+#ifndef CONFIG_PREEMPT_RT_BASE
- EXPORT_SYMBOL_GPL(__put_task_struct);
-+#else
-+void __put_task_struct_cb(struct rcu_head *rhp)
-+{
-+ struct task_struct *tsk = container_of(rhp, struct task_struct, put_rcu);
-+
-+ __put_task_struct(tsk);
-+
-+}
-+EXPORT_SYMBOL_GPL(__put_task_struct_cb);
-+#endif
++ if (!(wake_flags & WF_LOCK_SLEEPER))
++ p->saved_state = TASK_RUNNING;
- void __init __weak arch_task_cache_init(void) { }
+ trace_sched_waking(p);
-@@ -852,6 +875,19 @@ void __mmdrop(struct mm_struct *mm)
+@@ -2093,56 +2215,6 @@
}
- EXPORT_SYMBOL_GPL(__mmdrop);
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+/*
-+ * RCU callback for delayed mm drop. Not strictly rcu, but we don't
-+ * want another facility to make this work.
+ /**
+- * try_to_wake_up_local - try to wake up a local task with rq lock held
+- * @p: the thread to be awakened
+- * @rf: request-queue flags for pinning
+- *
+- * Put @p on the run-queue if it's not already there. The caller must
+- * ensure that this_rq() is locked, @p is bound to this_rq() and not
+- * the current task.
+- */
+-static void try_to_wake_up_local(struct task_struct *p, struct rq_flags *rf)
+-{
+- struct rq *rq = task_rq(p);
+-
+- if (WARN_ON_ONCE(rq != this_rq()) ||
+- WARN_ON_ONCE(p == current))
+- return;
+-
+- lockdep_assert_held(&rq->lock);
+-
+- if (!raw_spin_trylock(&p->pi_lock)) {
+- /*
+- * This is OK, because current is on_cpu, which avoids it being
+- * picked for load-balance and preemption/IRQs are still
+- * disabled avoiding further scheduler activity on it and we've
+- * not yet picked a replacement task.
+- */
+- rq_unlock(rq, rf);
+- raw_spin_lock(&p->pi_lock);
+- rq_relock(rq, rf);
+- }
+-
+- if (!(p->state & TASK_NORMAL))
+- goto out;
+-
+- trace_sched_waking(p);
+-
+- if (!task_on_rq_queued(p)) {
+- if (p->in_iowait) {
+- delayacct_blkio_end(p);
+- atomic_dec(&rq->nr_iowait);
+- }
+- ttwu_activate(rq, p, ENQUEUE_WAKEUP | ENQUEUE_NOCLOCK);
+- }
+-
+- ttwu_do_wakeup(rq, p, 0, rf);
+- ttwu_stat(p, smp_processor_id(), 0);
+-out:
+- raw_spin_unlock(&p->pi_lock);
+-}
+-
+-/**
+ * wake_up_process - Wake up a specific process
+ * @p: The process to be woken up.
+ *
+@@ -2160,6 +2232,18 @@
+ }
+ EXPORT_SYMBOL(wake_up_process);
+
++/**
++ * wake_up_lock_sleeper - Wake up a specific process blocked on a "sleeping lock"
++ * @p: The process to be woken up.
++ *
++ * Same as wake_up_process() above, but wake_flags=WF_LOCK_SLEEPER to indicate
++ * the nature of the wakeup.
+ */
-+void __mmdrop_delayed(struct rcu_head *rhp)
++int wake_up_lock_sleeper(struct task_struct *p)
+{
-+ struct mm_struct *mm = container_of(rhp, struct mm_struct, delayed_drop);
-+
-+ __mmdrop(mm);
++ return try_to_wake_up(p, TASK_UNINTERRUPTIBLE, WF_LOCK_SLEEPER);
+}
-+#endif
+
- static inline void __mmput(struct mm_struct *mm)
- {
- VM_BUG_ON(atomic_read(&mm->mm_users));
-@@ -1426,6 +1462,9 @@ static void rt_mutex_init_task(struct task_struct *p)
- */
- static void posix_cpu_timers_init(struct task_struct *tsk)
+ int wake_up_state(struct task_struct *p, unsigned int state)
{
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ tsk->posix_timer_list = NULL;
+ return try_to_wake_up(p, state, 0);
+@@ -2420,6 +2504,9 @@
+ p->on_cpu = 0;
+ #endif
+ init_task_preempt_count(p);
++#ifdef CONFIG_HAVE_PREEMPT_LAZY
++ task_thread_info(p)->preempt_lazy_count = 0;
+#endif
- tsk->cputime_expires.prof_exp = 0;
- tsk->cputime_expires.virt_exp = 0;
- tsk->cputime_expires.sched_exp = 0;
-@@ -1552,6 +1591,7 @@ static __latent_entropy struct task_struct *copy_process(
- spin_lock_init(&p->alloc_lock);
-
- init_sigpending(&p->pending);
-+ p->sigqueue_cache = NULL;
-
- p->utime = p->stime = p->gtime = 0;
- p->utimescaled = p->stimescaled = 0;
-diff --git a/kernel/futex.c b/kernel/futex.c
-index 2c4be467fecd..064917c2d9a5 100644
---- a/kernel/futex.c
-+++ b/kernel/futex.c
-@@ -904,7 +904,9 @@ void exit_pi_state_list(struct task_struct *curr)
- * task still owns the PI-state:
- */
- if (head->next != next) {
-+ raw_spin_unlock_irq(&curr->pi_lock);
- spin_unlock(&hb->lock);
-+ raw_spin_lock_irq(&curr->pi_lock);
- continue;
- }
-
-@@ -1299,6 +1301,7 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this,
- struct futex_pi_state *pi_state = this->pi_state;
- u32 uninitialized_var(curval), newval;
- WAKE_Q(wake_q);
-+ WAKE_Q(wake_sleeper_q);
- bool deboost;
- int ret = 0;
-
-@@ -1365,7 +1368,8 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this,
+ #ifdef CONFIG_SMP
+ plist_node_init(&p->pushable_tasks, MAX_PRIO);
+ RB_CLEAR_NODE(&p->pushable_dl_tasks);
+@@ -2462,7 +2549,7 @@
+ #ifdef CONFIG_SMP
+ /*
+ * Fork balancing, do it here and not earlier because:
+- * - cpus_allowed can change in the fork path
++ * - cpus_ptr can change in the fork path
+ * - any previously selected CPU might disappear through hotplug
+ *
+ * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
+@@ -2675,21 +2762,16 @@
+ finish_arch_post_lock_switch();
- raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
+ fire_sched_in_preempt_notifiers(current);
++ /*
++ * We use mmdrop_delayed() here so we don't have to do the
++ * full __mmdrop() when we are the last user.
++ */
+ if (mm)
+- mmdrop(mm);
++ mmdrop_delayed(mm);
+ if (unlikely(prev_state == TASK_DEAD)) {
+ if (prev->sched_class->task_dead)
+ prev->sched_class->task_dead(prev);
-- deboost = rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q);
-+ deboost = rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q,
-+ &wake_sleeper_q);
+- /*
+- * Remove function-return probe instances associated with this
+- * task and put them back on the free list.
+- */
+- kprobe_flush_task(prev);
+-
+- /* Task is done with its stack. */
+- put_task_stack(prev);
+-
+ put_task_struct(prev);
+ }
- /*
- * First unlock HB so the waiter does not spin on it once he got woken
-@@ -1373,8 +1377,9 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this,
- * deboost first (and lose our higher priority), then the task might get
- * scheduled away before the wake up can take place.
- */
-- spin_unlock(&hb->lock);
-+ deboost |= spin_unlock_no_deboost(&hb->lock);
- wake_up_q(&wake_q);
-+ wake_up_q_sleeper(&wake_sleeper_q);
- if (deboost)
- rt_mutex_adjust_prio(current);
+@@ -3336,25 +3418,13 @@
+ atomic_inc(&rq->nr_iowait);
+ delayacct_blkio_start();
+ }
+-
+- /*
+- * If a worker went to sleep, notify and ask workqueue
+- * whether it wants to wake up a task to maintain
+- * concurrency.
+- */
+- if (prev->flags & PF_WQ_WORKER) {
+- struct task_struct *to_wakeup;
+-
+- to_wakeup = wq_worker_sleeping(prev);
+- if (to_wakeup)
+- try_to_wake_up_local(to_wakeup, &rf);
+- }
+ }
+ switch_count = &prev->nvcsw;
+ }
-@@ -1924,6 +1929,16 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags,
- requeue_pi_wake_futex(this, &key2, hb2);
- drop_count++;
- continue;
-+ } else if (ret == -EAGAIN) {
-+ /*
-+ * Waiter was woken by timeout or
-+ * signal and has set pi_blocked_on to
-+ * PI_WAKEUP_INPROGRESS before we
-+ * tried to enqueue it on the rtmutex.
-+ */
-+ this->pi_state = NULL;
-+ put_pi_state(pi_state);
-+ continue;
- } else if (ret) {
- /*
- * rt_mutex_start_proxy_lock() detected a
-@@ -2814,7 +2829,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
- struct hrtimer_sleeper timeout, *to = NULL;
- struct rt_mutex_waiter rt_waiter;
- struct rt_mutex *pi_mutex = NULL;
-- struct futex_hash_bucket *hb;
-+ struct futex_hash_bucket *hb, *hb2;
- union futex_key key2 = FUTEX_KEY_INIT;
- struct futex_q q = futex_q_init;
- int res, ret;
-@@ -2839,10 +2854,7 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
- * The waiter is allocated on our stack, manipulated by the requeue
- * code while we sleep on uaddr.
- */
-- debug_rt_mutex_init_waiter(&rt_waiter);
-- RB_CLEAR_NODE(&rt_waiter.pi_tree_entry);
-- RB_CLEAR_NODE(&rt_waiter.tree_entry);
-- rt_waiter.task = NULL;
-+ rt_mutex_init_waiter(&rt_waiter, false);
+ next = pick_next_task(rq, prev, &rf);
+ clear_tsk_need_resched(prev);
++ clear_tsk_need_resched_lazy(prev);
+ clear_preempt_need_resched();
- ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, VERIFY_WRITE);
- if (unlikely(ret != 0))
-@@ -2873,20 +2885,55 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
- /* Queue the futex_q, drop the hb lock, wait for wakeup. */
- futex_wait_queue_me(hb, &q, to);
+ if (likely(prev != next)) {
+@@ -3407,8 +3477,19 @@
-- spin_lock(&hb->lock);
-- ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
-- spin_unlock(&hb->lock);
-- if (ret)
-- goto out_put_keys;
+ static inline void sched_submit_work(struct task_struct *tsk)
+ {
+- if (!tsk->state || tsk_is_pi_blocked(tsk))
++ if (!tsk->state)
++ return;
+ /*
-+ * On RT we must avoid races with requeue and trying to block
-+ * on two mutexes (hb->lock and uaddr2's rtmutex) by
-+ * serializing access to pi_blocked_on with pi_lock.
++ * If a worker went to sleep, notify and ask workqueue whether
++ * it wants to wake up a task to maintain concurrency.
+ */
-+ raw_spin_lock_irq(&current->pi_lock);
-+ if (current->pi_blocked_on) {
-+ /*
-+ * We have been requeued or are in the process of
-+ * being requeued.
-+ */
-+ raw_spin_unlock_irq(&current->pi_lock);
-+ } else {
-+ /*
-+ * Setting pi_blocked_on to PI_WAKEUP_INPROGRESS
-+ * prevents a concurrent requeue from moving us to the
-+ * uaddr2 rtmutex. After that we can safely acquire
-+ * (and possibly block on) hb->lock.
-+ */
-+ current->pi_blocked_on = PI_WAKEUP_INPROGRESS;
-+ raw_spin_unlock_irq(&current->pi_lock);
++ if (tsk->flags & PF_WQ_WORKER)
++ wq_worker_sleeping(tsk);
+
-+ spin_lock(&hb->lock);
+
-+ /*
-+ * Clean up pi_blocked_on. We might leak it otherwise
-+ * when we succeeded with the hb->lock in the fast
-+ * path.
-+ */
-+ raw_spin_lock_irq(&current->pi_lock);
-+ current->pi_blocked_on = NULL;
-+ raw_spin_unlock_irq(&current->pi_lock);
++ if (tsk_is_pi_blocked(tsk))
+ return;
+
-+ ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
-+ spin_unlock(&hb->lock);
-+ if (ret)
-+ goto out_put_keys;
-+ }
-
/*
-- * In order for us to be here, we know our q.key == key2, and since
-- * we took the hb->lock above, we also know that futex_requeue() has
-- * completed and we no longer have to concern ourselves with a wakeup
-- * race with the atomic proxy lock acquisition by the requeue code. The
-- * futex_requeue dropped our key1 reference and incremented our key2
-- * reference count.
-+ * In order to be here, we have either been requeued, are in
-+ * the process of being requeued, or requeue successfully
-+ * acquired uaddr2 on our behalf. If pi_blocked_on was
-+ * non-null above, we may be racing with a requeue. Do not
-+ * rely on q->lock_ptr to be hb2->lock until after blocking on
-+ * hb->lock or hb2->lock. The futex_requeue dropped our key1
-+ * reference and incremented our key2 reference count.
- */
-+ hb2 = hash_futex(&key2);
-
- /* Check if the requeue code acquired the second futex for us. */
- if (!q.rt_waiter) {
-@@ -2895,14 +2942,15 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
- * did a lock-steal - fix up the PI-state in that case.
- */
- if (q.pi_state && (q.pi_state->owner != current)) {
-- spin_lock(q.lock_ptr);
-+ spin_lock(&hb2->lock);
-+ BUG_ON(&hb2->lock != q.lock_ptr);
- ret = fixup_pi_state_owner(uaddr2, &q, current);
- /*
- * Drop the reference to the pi state which
- * the requeue_pi() code acquired for us.
- */
- put_pi_state(q.pi_state);
-- spin_unlock(q.lock_ptr);
-+ spin_unlock(&hb2->lock);
- }
- } else {
- /*
-@@ -2915,7 +2963,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
- ret = rt_mutex_finish_proxy_lock(pi_mutex, to, &rt_waiter);
- debug_rt_mutex_free_waiter(&rt_waiter);
+ * If we are going to sleep and we have plugged IO queued,
+ * make sure to submit it to avoid deadlocks.
+@@ -3417,6 +3498,12 @@
+ blk_schedule_flush_plug(tsk);
+ }
-- spin_lock(q.lock_ptr);
-+ spin_lock(&hb2->lock);
-+ BUG_ON(&hb2->lock != q.lock_ptr);
- /*
- * Fixup the pi_state owner and possibly acquire the lock if we
- * haven't already.
-diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
-index d3f24905852c..f87aa8fdcc51 100644
---- a/kernel/irq/handle.c
-+++ b/kernel/irq/handle.c
-@@ -181,10 +181,16 @@ irqreturn_t handle_irq_event_percpu(struct irq_desc *desc)
++static void sched_update_worker(struct task_struct *tsk)
++{
++ if (tsk->flags & PF_WQ_WORKER)
++ wq_worker_running(tsk);
++}
++
+ asmlinkage __visible void __sched schedule(void)
{
- irqreturn_t retval;
- unsigned int flags = 0;
-+ struct pt_regs *regs = get_irq_regs();
-+ u64 ip = regs ? instruction_pointer(regs) : 0;
+ struct task_struct *tsk = current;
+@@ -3427,6 +3514,7 @@
+ __schedule(false);
+ sched_preempt_enable_no_resched();
+ } while (need_resched());
++ sched_update_worker(tsk);
+ }
+ EXPORT_SYMBOL(schedule);
- retval = __handle_irq_event_percpu(desc, &flags);
+@@ -3515,6 +3603,30 @@
+ } while (need_resched());
+ }
-- add_interrupt_randomness(desc->irq_data.irq, flags);
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ desc->random_ip = ip;
++#ifdef CONFIG_PREEMPT_LAZY
++/*
++ * If TIF_NEED_RESCHED is set, then we allow being scheduled away, since
++ * that was set by an RT task. Otherwise we try to avoid being scheduled
++ * out as long as the preempt_lazy_count counter is > 0.
++ */
++static __always_inline int preemptible_lazy(void)
++{
++ if (test_thread_flag(TIF_NEED_RESCHED))
++ return 1;
++ if (current_thread_info()->preempt_lazy_count)
++ return 0;
++ return 1;
++}
++
+#else
-+ add_interrupt_randomness(desc->irq_data.irq, flags, ip);
++
++static inline int preemptible_lazy(void)
++{
++ return 1;
++}
++
+#endif
++
+ #ifdef CONFIG_PREEMPT
+ /*
+ * this is the entry point to schedule() from in-kernel preemption
+@@ -3529,7 +3641,8 @@
+ */
+ if (likely(!preemptible()))
+ return;
+-
++ if (!preemptible_lazy())
++ return;
+ preempt_schedule_common();
+ }
+ NOKPROBE_SYMBOL(preempt_schedule);
+@@ -3556,6 +3669,9 @@
+ if (likely(!preemptible()))
+ return;
- if (!noirqdebug)
- note_interrupt(desc, retval);
-diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
-index 6b669593e7eb..e357bf6c59d5 100644
---- a/kernel/irq/manage.c
-+++ b/kernel/irq/manage.c
-@@ -22,6 +22,7 @@
- #include "internals.h"
++ if (!preemptible_lazy())
++ return;
++
+ do {
+ /*
+ * Because the function tracer can trace preempt_count_sub()
+@@ -3578,7 +3694,16 @@
+ * an infinite recursion.
+ */
+ prev_ctx = exception_enter();
++ /*
++ * The add/subtract must not be traced by the function
++ * tracer. But we still want to account for the
++ * preempt off latency tracer. Since the _notrace versions
++ * of add/subtract skip the accounting for the latency tracer,
++ * we must force it manually.
++ */
++ start_critical_timings();
+ __schedule(true);
++ stop_critical_timings();
+ exception_exit(prev_ctx);
- #ifdef CONFIG_IRQ_FORCED_THREADING
-+# ifndef CONFIG_PREEMPT_RT_BASE
- __read_mostly bool force_irqthreads;
+ preempt_latency_stop(1);
+@@ -4164,7 +4289,7 @@
+ * the entire root_domain to become SCHED_DEADLINE. We
+ * will also fail if there's no bandwidth available.
+ */
+- if (!cpumask_subset(span, &p->cpus_allowed) ||
++ if (!cpumask_subset(span, p->cpus_ptr) ||
+ rq->rd->dl_bw.bw == 0) {
+ task_rq_unlock(rq, p, &rf);
+ return -EPERM;
+@@ -4758,7 +4883,7 @@
+ goto out_unlock;
- static int __init setup_forced_irqthreads(char *arg)
-@@ -30,6 +31,7 @@ static int __init setup_forced_irqthreads(char *arg)
+ raw_spin_lock_irqsave(&p->pi_lock, flags);
+- cpumask_and(mask, &p->cpus_allowed, cpu_active_mask);
++ cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
+ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+
+ out_unlock:
+@@ -4877,6 +5002,7 @@
+ }
+ EXPORT_SYMBOL(__cond_resched_lock);
+
++#ifndef CONFIG_PREEMPT_RT_FULL
+ int __sched __cond_resched_softirq(void)
+ {
+ BUG_ON(!in_softirq());
+@@ -4890,6 +5016,7 @@
return 0;
}
- early_param("threadirqs", setup_forced_irqthreads);
-+# endif
- #endif
+ EXPORT_SYMBOL(__cond_resched_softirq);
++#endif
- static void __synchronize_hardirq(struct irq_desc *desc)
-@@ -233,7 +235,12 @@ int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask,
+ /**
+ * yield - yield the current processor to other threads.
+@@ -5284,7 +5411,9 @@
- if (desc->affinity_notify) {
- kref_get(&desc->affinity_notify->kref);
-+
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ swork_queue(&desc->affinity_notify->swork);
-+#else
- schedule_work(&desc->affinity_notify->work);
+ /* Set the preempt count _outside_ the spinlocks! */
+ init_idle_preempt_count(idle, cpu);
+-
++#ifdef CONFIG_HAVE_PREEMPT_LAZY
++ task_thread_info(idle)->preempt_lazy_count = 0;
+#endif
- }
- irqd_set(data, IRQD_AFFINITY_SET);
+ /*
+ * The idle tasks have their own, simple scheduling class:
+ */
+@@ -5323,7 +5452,7 @@
+ * allowed nodes is unnecessary. Thus, cpusets are not
+ * applicable for such threads. This prevents checking for
+ * success of set_cpus_allowed_ptr() on all attached tasks
+- * before cpus_allowed may be changed.
++ * before cpus_mask may be changed.
+ */
+ if (p->flags & PF_NO_SETAFFINITY) {
+ ret = -EINVAL;
+@@ -5350,7 +5479,7 @@
+ if (curr_cpu == target_cpu)
+ return 0;
+
+- if (!cpumask_test_cpu(target_cpu, &p->cpus_allowed))
++ if (!cpumask_test_cpu(target_cpu, p->cpus_ptr))
+ return -EINVAL;
+
+ /* TODO: This is not properly updating schedstats */
+@@ -5389,6 +5518,8 @@
+ #endif /* CONFIG_NUMA_BALANCING */
-@@ -271,10 +278,8 @@ int irq_set_affinity_hint(unsigned int irq, const struct cpumask *m)
+ #ifdef CONFIG_HOTPLUG_CPU
++static DEFINE_PER_CPU(struct mm_struct *, idle_last_mm);
++
+ /*
+ * Ensure that the idle task is using init_mm right before its CPU goes
+ * offline.
+@@ -5403,7 +5534,12 @@
+ switch_mm(mm, &init_mm, current);
+ finish_arch_post_lock_switch();
+ }
+- mmdrop(mm);
++ /*
++ * Defer the cleanup to a CPU that stays alive. On RT we can neither
++ * call mmdrop() nor mmdrop_delayed() from here.
++ */
++ per_cpu(idle_last_mm, smp_processor_id()) = mm;
++
}
- EXPORT_SYMBOL_GPL(irq_set_affinity_hint);
--static void irq_affinity_notify(struct work_struct *work)
-+static void _irq_affinity_notify(struct irq_affinity_notify *notify)
- {
-- struct irq_affinity_notify *notify =
-- container_of(work, struct irq_affinity_notify, work);
- struct irq_desc *desc = irq_to_desc(notify->irq);
- cpumask_var_t cpumask;
- unsigned long flags;
-@@ -296,6 +301,35 @@ static void irq_affinity_notify(struct work_struct *work)
- kref_put(¬ify->kref, notify->release);
+ /*
+@@ -5487,7 +5623,7 @@
+ put_prev_task(rq, next);
+
+ /*
+- * Rules for changing task_struct::cpus_allowed are holding
++ * Rules for changing task_struct::cpus_mask are holding
+ * both pi_lock and rq->lock, such that holding either
+ * stabilizes the mask.
+ *
+@@ -5718,6 +5854,10 @@
+ update_max_interval();
+ nohz_balance_exit_idle(cpu);
+ hrtick_clear(rq);
++ if (per_cpu(idle_last_mm, cpu)) {
++ mmdrop_delayed(per_cpu(idle_last_mm, cpu));
++ per_cpu(idle_last_mm, cpu) = NULL;
++ }
+ return 0;
}
+ #endif
+@@ -5964,7 +6104,7 @@
+ #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
+ static inline int preempt_count_equals(int preempt_offset)
+ {
+- int nested = preempt_count() + rcu_preempt_depth();
++ int nested = preempt_count() + sched_rcu_preempt_depth();
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+static void init_helper_thread(void)
-+{
-+ static int init_sworker_once;
+ return (nested == preempt_offset);
+ }
+@@ -6756,3 +6896,197 @@
+ /* 10 */ 39045157, 49367440, 61356676, 76695844, 95443717,
+ /* 15 */ 119304647, 148102320, 186737708, 238609294, 286331153,
+ };
+
-+ if (init_sworker_once)
-+ return;
-+ if (WARN_ON(swork_get()))
-+ return;
-+ init_sworker_once = 1;
-+}
++#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
+
-+static void irq_affinity_notify(struct swork_event *swork)
++static inline void
++update_nr_migratory(struct task_struct *p, long delta)
+{
-+ struct irq_affinity_notify *notify =
-+ container_of(swork, struct irq_affinity_notify, swork);
-+ _irq_affinity_notify(notify);
++ if (unlikely((p->sched_class == &rt_sched_class ||
++ p->sched_class == &dl_sched_class) &&
++ p->nr_cpus_allowed > 1)) {
++ if (p->sched_class == &rt_sched_class)
++ task_rq(p)->rt.rt_nr_migratory += delta;
++ else
++ task_rq(p)->dl.dl_nr_migratory += delta;
++ }
+}
+
-+#else
++static inline void
++migrate_disable_update_cpus_allowed(struct task_struct *p)
++{
++ struct rq *rq;
++ struct rq_flags rf;
+
-+static void irq_affinity_notify(struct work_struct *work)
++ p->cpus_ptr = cpumask_of(smp_processor_id());
++
++ rq = task_rq_lock(p, &rf);
++ update_nr_migratory(p, -1);
++ p->nr_cpus_allowed = 1;
++ task_rq_unlock(rq, p, &rf);
++}
++
++static inline void
++migrate_enable_update_cpus_allowed(struct task_struct *p)
+{
-+ struct irq_affinity_notify *notify =
-+ container_of(work, struct irq_affinity_notify, work);
-+ _irq_affinity_notify(notify);
++ struct rq *rq;
++ struct rq_flags rf;
++
++ p->cpus_ptr = &p->cpus_mask;
++
++ rq = task_rq_lock(p, &rf);
++ p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
++ update_nr_migratory(p, 1);
++ task_rq_unlock(rq, p, &rf);
+}
-+#endif
+
- /**
- * irq_set_affinity_notifier - control notification of IRQ affinity changes
- * @irq: Interrupt for which to enable/disable notification
-@@ -324,7 +358,12 @@ irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify)
- if (notify) {
- notify->irq = irq;
- kref_init(¬ify->kref);
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ INIT_SWORK(¬ify->swork, irq_affinity_notify);
-+ init_helper_thread();
-+#else
- INIT_WORK(¬ify->work, irq_affinity_notify);
++void migrate_disable(void)
++{
++ struct task_struct *p = current;
++
++ if (in_atomic() || irqs_disabled()) {
++#ifdef CONFIG_SCHED_DEBUG
++ p->migrate_disable_atomic++;
+#endif
- }
-
- raw_spin_lock_irqsave(&desc->lock, flags);
-@@ -879,7 +918,15 @@ irq_forced_thread_fn(struct irq_desc *desc, struct irqaction *action)
- local_bh_disable();
- ret = action->thread_fn(action->irq, action->dev_id);
- irq_finalize_oneshot(desc, action);
-- local_bh_enable();
-+ /*
-+ * Interrupts which have real time requirements can be set up
-+ * to avoid softirq processing in the thread handler. This is
-+ * safe as these interrupts do not raise soft interrupts.
-+ */
-+ if (irq_settings_no_softirq_call(desc))
-+ _local_bh_enable();
-+ else
-+ local_bh_enable();
- return ret;
- }
-
-@@ -976,6 +1023,12 @@ static int irq_thread(void *data)
- if (action_ret == IRQ_WAKE_THREAD)
- irq_wake_secondary(desc, action);
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ migrate_disable();
-+ add_interrupt_randomness(action->irq, 0,
-+ desc->random_ip ^ (unsigned long) action);
-+ migrate_enable();
++ return;
++ }
++#ifdef CONFIG_SCHED_DEBUG
++ if (unlikely(p->migrate_disable_atomic)) {
++ tracing_off();
++ WARN_ON_ONCE(1);
++ }
+#endif
- wake_threads_waitq(desc);
- }
-
-@@ -1336,6 +1389,9 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new)
- irqd_set(&desc->irq_data, IRQD_NO_BALANCING);
- }
-
-+ if (new->flags & IRQF_NO_SOFTIRQ_CALL)
-+ irq_settings_set_no_softirq_call(desc);
+
- /* Set default affinity mask once everything is setup */
- setup_affinity(desc, mask);
-
-@@ -2061,7 +2117,7 @@ EXPORT_SYMBOL_GPL(irq_get_irqchip_state);
- * This call sets the internal irqchip state of an interrupt,
- * depending on the value of @which.
- *
-- * This function should be called with preemption disabled if the
-+ * This function should be called with migration disabled if the
- * interrupt controller has per-cpu registers.
- */
- int irq_set_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
-diff --git a/kernel/irq/settings.h b/kernel/irq/settings.h
-index 320579d89091..2df2d4445b1e 100644
---- a/kernel/irq/settings.h
-+++ b/kernel/irq/settings.h
-@@ -16,6 +16,7 @@ enum {
- _IRQ_PER_CPU_DEVID = IRQ_PER_CPU_DEVID,
- _IRQ_IS_POLLED = IRQ_IS_POLLED,
- _IRQ_DISABLE_UNLAZY = IRQ_DISABLE_UNLAZY,
-+ _IRQ_NO_SOFTIRQ_CALL = IRQ_NO_SOFTIRQ_CALL,
- _IRQF_MODIFY_MASK = IRQF_MODIFY_MASK,
- };
-
-@@ -30,6 +31,7 @@ enum {
- #define IRQ_PER_CPU_DEVID GOT_YOU_MORON
- #define IRQ_IS_POLLED GOT_YOU_MORON
- #define IRQ_DISABLE_UNLAZY GOT_YOU_MORON
-+#define IRQ_NO_SOFTIRQ_CALL GOT_YOU_MORON
- #undef IRQF_MODIFY_MASK
- #define IRQF_MODIFY_MASK GOT_YOU_MORON
-
-@@ -40,6 +42,16 @@ irq_settings_clr_and_set(struct irq_desc *desc, u32 clr, u32 set)
- desc->status_use_accessors |= (set & _IRQF_MODIFY_MASK);
- }
-
-+static inline bool irq_settings_no_softirq_call(struct irq_desc *desc)
-+{
-+ return desc->status_use_accessors & _IRQ_NO_SOFTIRQ_CALL;
++ if (p->migrate_disable) {
++ p->migrate_disable++;
++ return;
++ }
++
++ preempt_disable();
++ preempt_lazy_disable();
++ pin_current_cpu();
++
++ migrate_disable_update_cpus_allowed(p);
++ p->migrate_disable = 1;
++
++ preempt_enable();
+}
++EXPORT_SYMBOL(migrate_disable);
+
-+static inline void irq_settings_set_no_softirq_call(struct irq_desc *desc)
++void migrate_enable(void)
+{
-+ desc->status_use_accessors |= _IRQ_NO_SOFTIRQ_CALL;
++ struct task_struct *p = current;
++
++ if (in_atomic() || irqs_disabled()) {
++#ifdef CONFIG_SCHED_DEBUG
++ p->migrate_disable_atomic--;
++#endif
++ return;
++ }
++
++#ifdef CONFIG_SCHED_DEBUG
++ if (unlikely(p->migrate_disable_atomic)) {
++ tracing_off();
++ WARN_ON_ONCE(1);
++ }
++#endif
++
++ WARN_ON_ONCE(p->migrate_disable <= 0);
++ if (p->migrate_disable > 1) {
++ p->migrate_disable--;
++ return;
++ }
++
++ preempt_disable();
++
++ p->migrate_disable = 0;
++ migrate_enable_update_cpus_allowed(p);
++
++ if (p->migrate_disable_update) {
++ struct rq *rq;
++ struct rq_flags rf;
++
++ rq = task_rq_lock(p, &rf);
++ update_rq_clock(rq);
++
++ __do_set_cpus_allowed_tail(p, &p->cpus_mask);
++ task_rq_unlock(rq, p, &rf);
++
++ p->migrate_disable_update = 0;
++
++ WARN_ON(smp_processor_id() != task_cpu(p));
++ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++ const struct cpumask *cpu_valid_mask = cpu_active_mask;
++ struct migration_arg arg;
++ unsigned int dest_cpu;
++
++ if (p->flags & PF_KTHREAD) {
++ /*
++ * Kernel threads are allowed on online && !active CPUs
++ */
++ cpu_valid_mask = cpu_online_mask;
++ }
++ dest_cpu = cpumask_any_and(cpu_valid_mask, &p->cpus_mask);
++ arg.task = p;
++ arg.dest_cpu = dest_cpu;
++
++ unpin_current_cpu();
++ preempt_lazy_enable();
++ preempt_enable();
++ stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
++ tlb_migrate_finish(p->mm);
++
++ return;
++ }
++ }
++ unpin_current_cpu();
++ preempt_lazy_enable();
++ preempt_enable();
+}
++EXPORT_SYMBOL(migrate_enable);
+
- static inline bool irq_settings_is_per_cpu(struct irq_desc *desc)
- {
- return desc->status_use_accessors & _IRQ_PER_CPU;
-diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
-index 5707f97a3e6a..73f38dc7a7fb 100644
---- a/kernel/irq/spurious.c
-+++ b/kernel/irq/spurious.c
-@@ -442,6 +442,10 @@ MODULE_PARM_DESC(noirqdebug, "Disable irq lockup detection when true");
-
- static int __init irqfixup_setup(char *str)
- {
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ pr_warn("irqfixup boot option not supported w/ CONFIG_PREEMPT_RT_BASE\n");
-+ return 1;
++#elif !defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT_BASE)
++void migrate_disable(void)
++{
++ struct task_struct *p = current;
++
++ if (in_atomic() || irqs_disabled()) {
++#ifdef CONFIG_SCHED_DEBUG
++ p->migrate_disable_atomic++;
+#endif
- irqfixup = 1;
- printk(KERN_WARNING "Misrouted IRQ fixup support enabled.\n");
- printk(KERN_WARNING "This may impact system performance.\n");
-@@ -454,6 +458,10 @@ module_param(irqfixup, int, 0644);
-
- static int __init irqpoll_setup(char *str)
- {
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ pr_warn("irqpoll boot option not supported w/ CONFIG_PREEMPT_RT_BASE\n");
-+ return 1;
++ return;
++ }
++#ifdef CONFIG_SCHED_DEBUG
++ if (unlikely(p->migrate_disable_atomic)) {
++ tracing_off();
++ WARN_ON_ONCE(1);
++ }
++#endif
++
++ p->migrate_disable++;
++}
++EXPORT_SYMBOL(migrate_disable);
++
++void migrate_enable(void)
++{
++ struct task_struct *p = current;
++
++ if (in_atomic() || irqs_disabled()) {
++#ifdef CONFIG_SCHED_DEBUG
++ p->migrate_disable_atomic--;
++#endif
++ return;
++ }
++
++#ifdef CONFIG_SCHED_DEBUG
++ if (unlikely(p->migrate_disable_atomic)) {
++ tracing_off();
++ WARN_ON_ONCE(1);
++ }
+#endif
- irqfixup = 2;
- printk(KERN_WARNING "Misrouted IRQ fixup and polling support "
- "enabled\n");
-diff --git a/kernel/irq_work.c b/kernel/irq_work.c
-index bcf107ce0854..2899ba0d23d1 100644
---- a/kernel/irq_work.c
-+++ b/kernel/irq_work.c
-@@ -17,6 +17,7 @@
- #include <linux/cpu.h>
- #include <linux/notifier.h>
- #include <linux/smp.h>
-+#include <linux/interrupt.h>
- #include <asm/processor.h>
-
-
-@@ -65,6 +66,8 @@ void __weak arch_irq_work_raise(void)
- */
- bool irq_work_queue_on(struct irq_work *work, int cpu)
- {
-+ struct llist_head *list;
+
- /* All work should have been flushed before going offline */
- WARN_ON_ONCE(cpu_is_offline(cpu));
++ WARN_ON_ONCE(p->migrate_disable <= 0);
++ p->migrate_disable--;
++}
++EXPORT_SYMBOL(migrate_enable);
++#endif
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/cpudeadline.c linux-4.14/kernel/sched/cpudeadline.c
+--- linux-4.14.orig/kernel/sched/cpudeadline.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/sched/cpudeadline.c 2018-09-05 11:05:07.000000000 +0200
+@@ -127,13 +127,13 @@
+ const struct sched_dl_entity *dl_se = &p->dl;
-@@ -75,7 +78,12 @@ bool irq_work_queue_on(struct irq_work *work, int cpu)
- if (!irq_work_claim(work))
- return false;
+ if (later_mask &&
+- cpumask_and(later_mask, cp->free_cpus, &p->cpus_allowed)) {
++ cpumask_and(later_mask, cp->free_cpus, p->cpus_ptr)) {
+ return 1;
+ } else {
+ int best_cpu = cpudl_maximum(cp);
+ WARN_ON(best_cpu != -1 && !cpu_present(best_cpu));
+
+- if (cpumask_test_cpu(best_cpu, &p->cpus_allowed) &&
++ if (cpumask_test_cpu(best_cpu, p->cpus_ptr) &&
+ dl_time_before(dl_se->deadline, cp->elements[0].dl)) {
+ if (later_mask)
+ cpumask_set_cpu(best_cpu, later_mask);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/cpupri.c linux-4.14/kernel/sched/cpupri.c
+--- linux-4.14.orig/kernel/sched/cpupri.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/sched/cpupri.c 2018-09-05 11:05:07.000000000 +0200
+@@ -103,11 +103,11 @@
+ if (skip)
+ continue;
-- if (llist_add(&work->llnode, &per_cpu(raised_list, cpu)))
-+ if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL) && !(work->flags & IRQ_WORK_HARD_IRQ))
-+ list = &per_cpu(lazy_list, cpu);
-+ else
-+ list = &per_cpu(raised_list, cpu);
-+
-+ if (llist_add(&work->llnode, list))
- arch_send_call_function_single_ipi(cpu);
+- if (cpumask_any_and(&p->cpus_allowed, vec->mask) >= nr_cpu_ids)
++ if (cpumask_any_and(p->cpus_ptr, vec->mask) >= nr_cpu_ids)
+ continue;
- return true;
-@@ -86,6 +94,9 @@ EXPORT_SYMBOL_GPL(irq_work_queue_on);
- /* Enqueue the irq work @work on the current CPU */
- bool irq_work_queue(struct irq_work *work)
+ if (lowest_mask) {
+- cpumask_and(lowest_mask, &p->cpus_allowed, vec->mask);
++ cpumask_and(lowest_mask, p->cpus_ptr, vec->mask);
+
+ /*
+ * We have to ensure that we have at least one bit
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/deadline.c linux-4.14/kernel/sched/deadline.c
+--- linux-4.14.orig/kernel/sched/deadline.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/sched/deadline.c 2018-09-05 11:05:07.000000000 +0200
+@@ -504,7 +504,7 @@
+ * If we cannot preempt any rq, fall back to pick any
+ * online cpu.
+ */
+- cpu = cpumask_any_and(cpu_active_mask, &p->cpus_allowed);
++ cpu = cpumask_any_and(cpu_active_mask, p->cpus_ptr);
+ if (cpu >= nr_cpu_ids) {
+ /*
+ * Fail to find any suitable cpu.
+@@ -1020,7 +1020,7 @@
{
-+ struct llist_head *list;
-+ bool lazy_work, realtime = IS_ENABLED(CONFIG_PREEMPT_RT_FULL);
-+
- /* Only queue if not already pending */
- if (!irq_work_claim(work))
- return false;
-@@ -93,13 +104,15 @@ bool irq_work_queue(struct irq_work *work)
- /* Queue the entry and raise the IPI if needed. */
- preempt_disable();
+ struct hrtimer *timer = &dl_se->dl_timer;
-- /* If the work is "lazy", handle it from next tick if any */
-- if (work->flags & IRQ_WORK_LAZY) {
-- if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
-- tick_nohz_tick_stopped())
-- arch_irq_work_raise();
-- } else {
-- if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
-+ lazy_work = work->flags & IRQ_WORK_LAZY;
-+
-+ if (lazy_work || (realtime && !(work->flags & IRQ_WORK_HARD_IRQ)))
-+ list = this_cpu_ptr(&lazy_list);
-+ else
-+ list = this_cpu_ptr(&raised_list);
-+
-+ if (llist_add(&work->llnode, list)) {
-+ if (!lazy_work || tick_nohz_tick_stopped())
- arch_irq_work_raise();
+- hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ timer->function = dl_task_timer;
+ }
+
+@@ -1749,7 +1749,7 @@
+ static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
+ {
+ if (!task_running(rq, p) &&
+- cpumask_test_cpu(cpu, &p->cpus_allowed))
++ cpumask_test_cpu(cpu, p->cpus_ptr))
+ return 1;
+ return 0;
+ }
+@@ -1899,7 +1899,7 @@
+ /* Retry if something changed. */
+ if (double_lock_balance(rq, later_rq)) {
+ if (unlikely(task_rq(task) != rq ||
+- !cpumask_test_cpu(later_rq->cpu, &task->cpus_allowed) ||
++ !cpumask_test_cpu(later_rq->cpu, task->cpus_ptr) ||
+ task_running(rq, task) ||
+ !dl_task(task) ||
+ !task_on_rq_queued(task))) {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/debug.c linux-4.14/kernel/sched/debug.c
+--- linux-4.14.orig/kernel/sched/debug.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/sched/debug.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1017,6 +1017,10 @@
+ P(dl.runtime);
+ P(dl.deadline);
}
++#if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
++ P(migrate_disable);
++#endif
++ P(nr_cpus_allowed);
+ #undef PN_SCHEDSTAT
+ #undef PN
+ #undef __PN
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/fair.c linux-4.14/kernel/sched/fair.c
+--- linux-4.14.orig/kernel/sched/fair.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/sched/fair.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1596,7 +1596,7 @@
+ */
+ if (cur) {
+ /* Skip this swap candidate if cannot move to the source cpu */
+- if (!cpumask_test_cpu(env->src_cpu, &cur->cpus_allowed))
++ if (!cpumask_test_cpu(env->src_cpu, cur->cpus_ptr))
+ goto unlock;
-@@ -116,9 +129,8 @@ bool irq_work_needs_cpu(void)
- raised = this_cpu_ptr(&raised_list);
- lazy = this_cpu_ptr(&lazy_list);
+ /*
+@@ -1706,7 +1706,7 @@
-- if (llist_empty(raised) || arch_irq_work_has_interrupt())
-- if (llist_empty(lazy))
-- return false;
-+ if (llist_empty(raised) && llist_empty(lazy))
-+ return false;
+ for_each_cpu(cpu, cpumask_of_node(env->dst_nid)) {
+ /* Skip this CPU if the source task cannot migrate */
+- if (!cpumask_test_cpu(cpu, &env->p->cpus_allowed))
++ if (!cpumask_test_cpu(cpu, env->p->cpus_ptr))
+ continue;
- /* All work should have been flushed before going offline */
- WARN_ON_ONCE(cpu_is_offline(smp_processor_id()));
-@@ -132,7 +144,7 @@ static void irq_work_run_list(struct llist_head *list)
- struct irq_work *work;
- struct llist_node *llnode;
+ env->dst_cpu = cpu;
+@@ -3840,7 +3840,7 @@
+ ideal_runtime = sched_slice(cfs_rq, curr);
+ delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
+ if (delta_exec > ideal_runtime) {
+- resched_curr(rq_of(cfs_rq));
++ resched_curr_lazy(rq_of(cfs_rq));
+ /*
+ * The current task ran long enough, ensure it doesn't get
+ * re-elected due to buddy favours.
+@@ -3864,7 +3864,7 @@
+ return;
-- BUG_ON(!irqs_disabled());
-+ BUG_ON_NONRT(!irqs_disabled());
+ if (delta > ideal_runtime)
+- resched_curr(rq_of(cfs_rq));
++ resched_curr_lazy(rq_of(cfs_rq));
+ }
- if (llist_empty(list))
+ static void
+@@ -4006,7 +4006,7 @@
+ * validating it and just reschedule.
+ */
+ if (queued) {
+- resched_curr(rq_of(cfs_rq));
++ resched_curr_lazy(rq_of(cfs_rq));
return;
-@@ -169,7 +181,16 @@ static void irq_work_run_list(struct llist_head *list)
- void irq_work_run(void)
- {
- irq_work_run_list(this_cpu_ptr(&raised_list));
-- irq_work_run_list(this_cpu_ptr(&lazy_list));
-+ if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL)) {
-+ /*
-+ * NOTE: we raise softirq via IPI for safety,
-+ * and execute in irq_work_tick() to move the
-+ * overhead from hard to soft irq context.
-+ */
-+ if (!llist_empty(this_cpu_ptr(&lazy_list)))
-+ raise_softirq(TIMER_SOFTIRQ);
-+ } else
-+ irq_work_run_list(this_cpu_ptr(&lazy_list));
+ }
+ /*
+@@ -4188,7 +4188,7 @@
+ * hierarchy can be throttled
+ */
+ if (!assign_cfs_rq_runtime(cfs_rq) && likely(cfs_rq->curr))
+- resched_curr(rq_of(cfs_rq));
++ resched_curr_lazy(rq_of(cfs_rq));
}
- EXPORT_SYMBOL_GPL(irq_work_run);
-@@ -179,8 +200,17 @@ void irq_work_tick(void)
+ static __always_inline
+@@ -4837,7 +4837,7 @@
- if (!llist_empty(raised) && !arch_irq_work_has_interrupt())
- irq_work_run_list(raised);
-+
-+ if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
-+ irq_work_run_list(this_cpu_ptr(&lazy_list));
-+}
-+
-+#if defined(CONFIG_IRQ_WORK) && defined(CONFIG_PREEMPT_RT_FULL)
-+void irq_work_tick_soft(void)
-+{
- irq_work_run_list(this_cpu_ptr(&lazy_list));
- }
-+#endif
+ if (delta < 0) {
+ if (rq->curr == p)
+- resched_curr(rq);
++ resched_curr_lazy(rq);
+ return;
+ }
+ hrtick_start(rq, delta);
+@@ -5475,7 +5475,7 @@
- /*
- * Synchronize against the irq_work @entry, ensures the entry is not
-diff --git a/kernel/ksysfs.c b/kernel/ksysfs.c
-index ee1bc1bb8feb..ddef07958840 100644
---- a/kernel/ksysfs.c
-+++ b/kernel/ksysfs.c
-@@ -136,6 +136,15 @@ KERNEL_ATTR_RO(vmcoreinfo);
+ /* Skip over this group if it has no CPUs allowed */
+ if (!cpumask_intersects(sched_group_span(group),
+- &p->cpus_allowed))
++ p->cpus_ptr))
+ continue;
- #endif /* CONFIG_KEXEC_CORE */
+ local_group = cpumask_test_cpu(this_cpu,
+@@ -5595,7 +5595,7 @@
+ return cpumask_first(sched_group_span(group));
+
+ /* Traverse only the allowed CPUs */
+- for_each_cpu_and(i, sched_group_span(group), &p->cpus_allowed) {
++ for_each_cpu_and(i, sched_group_span(group), p->cpus_ptr) {
+ if (idle_cpu(i)) {
+ struct rq *rq = cpu_rq(i);
+ struct cpuidle_state *idle = idle_get_state(rq);
+@@ -5698,7 +5698,7 @@
+ if (!test_idle_cores(target, false))
+ return -1;
+
+- cpumask_and(cpus, sched_domain_span(sd), &p->cpus_allowed);
++ cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+
+ for_each_cpu_wrap(core, cpus, target) {
+ bool idle = true;
+@@ -5732,7 +5732,7 @@
+ return -1;
+
+ for_each_cpu(cpu, cpu_smt_mask(target)) {
+- if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ continue;
+ if (idle_cpu(cpu))
+ return cpu;
+@@ -5795,7 +5795,7 @@
+ for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
+ if (!--nr)
+ return -1;
+- if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ continue;
+ if (idle_cpu(cpu))
+ break;
+@@ -5950,7 +5950,7 @@
+ if (sd_flag & SD_BALANCE_WAKE) {
+ record_wakee(p);
+ want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu)
+- && cpumask_test_cpu(cpu, &p->cpus_allowed);
++ && cpumask_test_cpu(cpu, p->cpus_ptr);
+ }
-+#if defined(CONFIG_PREEMPT_RT_FULL)
-+static ssize_t realtime_show(struct kobject *kobj,
-+ struct kobj_attribute *attr, char *buf)
-+{
-+ return sprintf(buf, "%d\n", 1);
-+}
-+KERNEL_ATTR_RO(realtime);
-+#endif
-+
- /* whether file capabilities are enabled */
- static ssize_t fscaps_show(struct kobject *kobj,
- struct kobj_attribute *attr, char *buf)
-@@ -225,6 +234,9 @@ static struct attribute * kernel_attrs[] = {
- &rcu_expedited_attr.attr,
- &rcu_normal_attr.attr,
- #endif
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ &realtime_attr.attr,
-+#endif
- NULL
- };
+ rcu_read_lock();
+@@ -6231,7 +6231,7 @@
+ return;
-diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
-index 6f88e352cd4f..5e27fb1079e7 100644
---- a/kernel/locking/Makefile
-+++ b/kernel/locking/Makefile
-@@ -2,7 +2,7 @@
- # and is generally not a function of system call inputs.
- KCOV_INSTRUMENT := n
+ preempt:
+- resched_curr(rq);
++ resched_curr_lazy(rq);
+ /*
+ * Only set the backward buddy when the current task is still
+ * on the rq. This can happen when a wakeup gets interleaved
+@@ -6699,14 +6699,14 @@
+ /*
+ * We do not migrate tasks that are:
+ * 1) throttled_lb_pair, or
+- * 2) cannot be migrated to this CPU due to cpus_allowed, or
++ * 2) cannot be migrated to this CPU due to cpus_ptr, or
+ * 3) running (obviously), or
+ * 4) are cache-hot on their current CPU.
+ */
+ if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
+ return 0;
--obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o
-+obj-y += semaphore.o percpu-rwsem.o
+- if (!cpumask_test_cpu(env->dst_cpu, &p->cpus_allowed)) {
++ if (!cpumask_test_cpu(env->dst_cpu, p->cpus_ptr)) {
+ int cpu;
- ifdef CONFIG_FUNCTION_TRACER
- CFLAGS_REMOVE_lockdep.o = $(CC_FLAGS_FTRACE)
-@@ -11,7 +11,11 @@ CFLAGS_REMOVE_mutex-debug.o = $(CC_FLAGS_FTRACE)
- CFLAGS_REMOVE_rtmutex-debug.o = $(CC_FLAGS_FTRACE)
- endif
+ schedstat_inc(p->se.statistics.nr_failed_migrations_affine);
+@@ -6726,7 +6726,7 @@
-+ifneq ($(CONFIG_PREEMPT_RT_FULL),y)
-+obj-y += mutex.o
- obj-$(CONFIG_DEBUG_MUTEXES) += mutex-debug.o
-+obj-y += rwsem.o
-+endif
- obj-$(CONFIG_LOCKDEP) += lockdep.o
- ifeq ($(CONFIG_PROC_FS),y)
- obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
-@@ -24,7 +28,10 @@ obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
- obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
- obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
- obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock_debug.o
-+ifneq ($(CONFIG_PREEMPT_RT_FULL),y)
- obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
- obj-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem-xadd.o
-+endif
-+obj-$(CONFIG_PREEMPT_RT_FULL) += rt.o
- obj-$(CONFIG_QUEUED_RWLOCKS) += qrwlock.o
- obj-$(CONFIG_LOCK_TORTURE_TEST) += locktorture.o
-diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
-index 4d7ffc0a0d00..9e52009c192e 100644
---- a/kernel/locking/lockdep.c
-+++ b/kernel/locking/lockdep.c
-@@ -3689,6 +3689,7 @@ static void check_flags(unsigned long flags)
- }
- }
+ /* Prevent to re-select dst_cpu via env's cpus */
+ for_each_cpu_and(cpu, env->dst_grpmask, env->cpus) {
+- if (cpumask_test_cpu(cpu, &p->cpus_allowed)) {
++ if (cpumask_test_cpu(cpu, p->cpus_ptr)) {
+ env->flags |= LBF_DST_PINNED;
+ env->new_dst_cpu = cpu;
+ break;
+@@ -7295,7 +7295,7 @@
-+#ifndef CONFIG_PREEMPT_RT_FULL
+ /*
+ * Group imbalance indicates (and tries to solve) the problem where balancing
+- * groups is inadequate due to ->cpus_allowed constraints.
++ * groups is inadequate due to ->cpus_ptr constraints.
+ *
+ * Imagine a situation of two groups of 4 cpus each and 4 tasks each with a
+ * cpumask covering 1 cpu of the first group and 3 cpus of the second group.
+@@ -7871,7 +7871,7 @@
/*
- * We dont accurately track softirq state in e.g.
- * hardirq contexts (such as on 4KSTACKS), so only
-@@ -3703,6 +3704,7 @@ static void check_flags(unsigned long flags)
- DEBUG_LOCKS_WARN_ON(!current->softirqs_enabled);
- }
+ * If the busiest group is imbalanced the below checks don't
+ * work because they assume all things are equal, which typically
+- * isn't true due to cpus_allowed constraints and the like.
++ * isn't true due to cpus_ptr constraints and the like.
+ */
+ if (busiest->group_type == group_imbalanced)
+ goto force_balance;
+@@ -8263,7 +8263,7 @@
+ * if the curr task on busiest cpu can't be
+ * moved to this_cpu
+ */
+- if (!cpumask_test_cpu(this_cpu, &busiest->curr->cpus_allowed)) {
++ if (!cpumask_test_cpu(this_cpu, busiest->curr->cpus_ptr)) {
+ raw_spin_unlock_irqrestore(&busiest->lock,
+ flags);
+ env.flags |= LBF_ALL_PINNED;
+@@ -9085,7 +9085,7 @@
+ * 'current' within the tree based on its new key value.
+ */
+ swap(curr->vruntime, se->vruntime);
+- resched_curr(rq);
++ resched_curr_lazy(rq);
}
+
+ se->vruntime -= cfs_rq->min_vruntime;
+@@ -9109,7 +9109,7 @@
+ */
+ if (rq->curr == p) {
+ if (p->prio > oldprio)
+- resched_curr(rq);
++ resched_curr_lazy(rq);
+ } else
+ check_preempt_curr(rq, p, 0);
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/features.h linux-4.14/kernel/sched/features.h
+--- linux-4.14.orig/kernel/sched/features.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/sched/features.h 2018-09-05 11:05:07.000000000 +0200
+@@ -46,11 +46,19 @@
+ */
+ SCHED_FEAT(NONTASK_CAPACITY, true)
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++SCHED_FEAT(TTWU_QUEUE, false)
++# ifdef CONFIG_PREEMPT_LAZY
++SCHED_FEAT(PREEMPT_LAZY, true)
++# endif
++#else
++
+ /*
+ * Queue remote wakeups on the target CPU and process them
+ * using the scheduler IPI. Reduces rq->lock contention/bounces.
+ */
+ SCHED_FEAT(TTWU_QUEUE, true)
+#endif
- if (!debug_locks)
- print_irqtrace_events(current);
-diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
-index f8c5af52a131..788068773e61 100644
---- a/kernel/locking/locktorture.c
-+++ b/kernel/locking/locktorture.c
-@@ -26,7 +26,6 @@
- #include <linux/kthread.h>
- #include <linux/sched/rt.h>
- #include <linux/spinlock.h>
--#include <linux/rwlock.h>
- #include <linux/mutex.h>
- #include <linux/rwsem.h>
- #include <linux/smp.h>
-diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
-index ce182599cf2e..2ad3a1e8344c 100644
---- a/kernel/locking/percpu-rwsem.c
-+++ b/kernel/locking/percpu-rwsem.c
-@@ -18,7 +18,7 @@ int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
- /* ->rw_sem represents the whole percpu_rw_semaphore for lockdep */
- rcu_sync_init(&sem->rss, RCU_SCHED_SYNC);
- __init_rwsem(&sem->rw_sem, name, rwsem_key);
-- init_waitqueue_head(&sem->writer);
-+ init_swait_queue_head(&sem->writer);
- sem->readers_block = 0;
- return 0;
- }
-@@ -103,7 +103,7 @@ void __percpu_up_read(struct percpu_rw_semaphore *sem)
- __this_cpu_dec(*sem->read_count);
+ /*
+ * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/Makefile linux-4.14/kernel/sched/Makefile
+--- linux-4.14.orig/kernel/sched/Makefile 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/sched/Makefile 2018-09-05 11:05:07.000000000 +0200
+@@ -18,7 +18,7 @@
+
+ obj-y += core.o loadavg.o clock.o cputime.o
+ obj-y += idle_task.o fair.o rt.o deadline.o
+-obj-y += wait.o wait_bit.o swait.o completion.o idle.o
++obj-y += wait.o wait_bit.o swait.o swork.o completion.o idle.o
+ obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o
+ obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o
+ obj-$(CONFIG_SCHEDSTATS) += stats.o
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/rt.c linux-4.14/kernel/sched/rt.c
+--- linux-4.14.orig/kernel/sched/rt.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/sched/rt.c 2018-09-05 11:05:07.000000000 +0200
+@@ -47,8 +47,8 @@
- /* Prod writer to recheck readers_active */
-- wake_up(&sem->writer);
-+ swake_up(&sem->writer);
- }
- EXPORT_SYMBOL_GPL(__percpu_up_read);
+ raw_spin_lock_init(&rt_b->rt_runtime_lock);
-@@ -160,7 +160,7 @@ void percpu_down_write(struct percpu_rw_semaphore *sem)
- */
+- hrtimer_init(&rt_b->rt_period_timer,
+- CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&rt_b->rt_period_timer, CLOCK_MONOTONIC,
++ HRTIMER_MODE_REL_HARD);
+ rt_b->rt_period_timer.function = sched_rt_period_timer;
+ }
- /* Wait for all now active readers to complete. */
-- wait_event(sem->writer, readers_active_check(sem));
-+ swait_event(sem->writer, readers_active_check(sem));
+@@ -1594,7 +1594,7 @@
+ static int pick_rt_task(struct rq *rq, struct task_struct *p, int cpu)
+ {
+ if (!task_running(rq, p) &&
+- cpumask_test_cpu(cpu, &p->cpus_allowed))
++ cpumask_test_cpu(cpu, p->cpus_ptr))
+ return 1;
+ return 0;
}
- EXPORT_SYMBOL_GPL(percpu_down_write);
+@@ -1729,7 +1729,7 @@
+ * Also make sure that it wasn't scheduled on its rq.
+ */
+ if (unlikely(task_rq(task) != rq ||
+- !cpumask_test_cpu(lowest_rq->cpu, &task->cpus_allowed) ||
++ !cpumask_test_cpu(lowest_rq->cpu, task->cpus_ptr) ||
+ task_running(rq, task) ||
+ !rt_task(task) ||
+ !task_on_rq_queued(task))) {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/sched.h linux-4.14/kernel/sched/sched.h
+--- linux-4.14.orig/kernel/sched/sched.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/sched/sched.h 2018-09-05 11:05:07.000000000 +0200
+@@ -1354,6 +1354,7 @@
+ #define WF_SYNC 0x01 /* waker goes to sleep after wakeup */
+ #define WF_FORK 0x02 /* child wakeup after fork */
+ #define WF_MIGRATED 0x4 /* internal use, task got migrated */
++#define WF_LOCK_SLEEPER 0x08 /* wakeup spinlock "sleeper" */
-diff --git a/kernel/locking/rt.c b/kernel/locking/rt.c
-new file mode 100644
-index 000000000000..665754c00e1e
---- /dev/null
-+++ b/kernel/locking/rt.c
-@@ -0,0 +1,498 @@
-+/*
-+ * kernel/rt.c
-+ *
-+ * Real-Time Preemption Support
-+ *
-+ * started by Ingo Molnar:
-+ *
-+ * Copyright (C) 2004-2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
-+ * Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
-+ *
-+ * historic credit for proving that Linux spinlocks can be implemented via
-+ * RT-aware mutexes goes to many people: The Pmutex project (Dirk Grambow
-+ * and others) who prototyped it on 2.4 and did lots of comparative
-+ * research and analysis; TimeSys, for proving that you can implement a
-+ * fully preemptible kernel via the use of IRQ threading and mutexes;
-+ * Bill Huey for persuasively arguing on lkml that the mutex model is the
-+ * right one; and to MontaVista, who ported pmutexes to 2.6.
-+ *
-+ * This code is a from-scratch implementation and is not based on pmutexes,
-+ * but the idea of converting spinlocks to mutexes is used here too.
-+ *
-+ * lock debugging, locking tree, deadlock detection:
-+ *
-+ * Copyright (C) 2004, LynuxWorks, Inc., Igor Manyilov, Bill Huey
-+ * Released under the General Public License (GPL).
-+ *
-+ * Includes portions of the generic R/W semaphore implementation from:
-+ *
-+ * Copyright (c) 2001 David Howells (dhowells@redhat.com).
-+ * - Derived partially from idea by Andrea Arcangeli <andrea@suse.de>
-+ * - Derived also from comments by Linus
-+ *
-+ * Pending ownership of locks and ownership stealing:
-+ *
-+ * Copyright (C) 2005, Kihon Technologies Inc., Steven Rostedt
-+ *
-+ * (also by Steven Rostedt)
-+ * - Converted single pi_lock to individual task locks.
-+ *
-+ * By Esben Nielsen:
-+ * Doing priority inheritance with help of the scheduler.
-+ *
-+ * Copyright (C) 2006, Timesys Corp., Thomas Gleixner <tglx@timesys.com>
-+ * - major rework based on Esben Nielsens initial patch
-+ * - replaced thread_info references by task_struct refs
-+ * - removed task->pending_owner dependency
-+ * - BKL drop/reacquire for semaphore style locks to avoid deadlocks
-+ * in the scheduler return path as discussed with Steven Rostedt
-+ *
-+ * Copyright (C) 2006, Kihon Technologies Inc.
-+ * Steven Rostedt <rostedt@goodmis.org>
-+ * - debugged and patched Thomas Gleixner's rework.
-+ * - added back the cmpxchg to the rework.
-+ * - turned atomic require back on for SMP.
-+ */
-+
-+#include <linux/spinlock.h>
-+#include <linux/rtmutex.h>
-+#include <linux/sched.h>
-+#include <linux/delay.h>
-+#include <linux/module.h>
-+#include <linux/kallsyms.h>
-+#include <linux/syscalls.h>
-+#include <linux/interrupt.h>
-+#include <linux/plist.h>
-+#include <linux/fs.h>
-+#include <linux/futex.h>
-+#include <linux/hrtimer.h>
-+
-+#include "rtmutex_common.h"
-+
-+/*
-+ * struct mutex functions
-+ */
-+void __mutex_do_init(struct mutex *mutex, const char *name,
-+ struct lock_class_key *key)
-+{
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+ /*
-+ * Make sure we are not reinitializing a held lock:
-+ */
-+ debug_check_no_locks_freed((void *)mutex, sizeof(*mutex));
-+ lockdep_init_map(&mutex->dep_map, name, key, 0);
-+#endif
-+ mutex->lock.save_state = 0;
-+}
-+EXPORT_SYMBOL(__mutex_do_init);
-+
-+void __lockfunc _mutex_lock(struct mutex *lock)
-+{
-+ mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
-+ rt_mutex_lock(&lock->lock);
-+}
-+EXPORT_SYMBOL(_mutex_lock);
-+
-+int __lockfunc _mutex_lock_interruptible(struct mutex *lock)
-+{
-+ int ret;
-+
-+ mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
-+ ret = rt_mutex_lock_interruptible(&lock->lock);
-+ if (ret)
-+ mutex_release(&lock->dep_map, 1, _RET_IP_);
-+ return ret;
-+}
-+EXPORT_SYMBOL(_mutex_lock_interruptible);
-+
-+int __lockfunc _mutex_lock_killable(struct mutex *lock)
-+{
-+ int ret;
-+
-+ mutex_acquire(&lock->dep_map, 0, 0, _RET_IP_);
-+ ret = rt_mutex_lock_killable(&lock->lock);
-+ if (ret)
-+ mutex_release(&lock->dep_map, 1, _RET_IP_);
-+ return ret;
-+}
-+EXPORT_SYMBOL(_mutex_lock_killable);
-+
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+void __lockfunc _mutex_lock_nested(struct mutex *lock, int subclass)
-+{
-+ mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
-+ rt_mutex_lock(&lock->lock);
-+}
-+EXPORT_SYMBOL(_mutex_lock_nested);
-+
-+void __lockfunc _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
-+{
-+ mutex_acquire_nest(&lock->dep_map, 0, 0, nest, _RET_IP_);
-+ rt_mutex_lock(&lock->lock);
-+}
-+EXPORT_SYMBOL(_mutex_lock_nest_lock);
-+
-+int __lockfunc _mutex_lock_interruptible_nested(struct mutex *lock, int subclass)
-+{
-+ int ret;
-+
-+ mutex_acquire_nest(&lock->dep_map, subclass, 0, NULL, _RET_IP_);
-+ ret = rt_mutex_lock_interruptible(&lock->lock);
-+ if (ret)
-+ mutex_release(&lock->dep_map, 1, _RET_IP_);
-+ return ret;
-+}
-+EXPORT_SYMBOL(_mutex_lock_interruptible_nested);
-+
-+int __lockfunc _mutex_lock_killable_nested(struct mutex *lock, int subclass)
+ /*
+ * To aid in avoiding the subversion of "niceness" due to uneven distribution
+@@ -1545,6 +1546,15 @@
+ extern void resched_curr(struct rq *rq);
+ extern void resched_cpu(int cpu);
+
++#ifdef CONFIG_PREEMPT_LAZY
++extern void resched_curr_lazy(struct rq *rq);
++#else
++static inline void resched_curr_lazy(struct rq *rq)
+{
-+ int ret;
-+
-+ mutex_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
-+ ret = rt_mutex_lock_killable(&lock->lock);
-+ if (ret)
-+ mutex_release(&lock->dep_map, 1, _RET_IP_);
-+ return ret;
++ resched_curr(rq);
+}
-+EXPORT_SYMBOL(_mutex_lock_killable_nested);
+#endif
+
-+int __lockfunc _mutex_trylock(struct mutex *lock)
+ extern struct rt_bandwidth def_rt_bandwidth;
+ extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/swait.c linux-4.14/kernel/sched/swait.c
+--- linux-4.14.orig/kernel/sched/swait.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/sched/swait.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/sched/signal.h>
+ #include <linux/swait.h>
++#include <linux/suspend.h>
+
+ void __init_swait_queue_head(struct swait_queue_head *q, const char *name,
+ struct lock_class_key *key)
+@@ -30,6 +31,25 @@
+ }
+ EXPORT_SYMBOL(swake_up_locked);
+
++void swake_up_all_locked(struct swait_queue_head *q)
+{
-+ int ret = rt_mutex_trylock(&lock->lock);
-+
-+ if (ret)
-+ mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
++ struct swait_queue *curr;
++ int wakes = 0;
+
-+ return ret;
-+}
-+EXPORT_SYMBOL(_mutex_trylock);
++ while (!list_empty(&q->task_list)) {
+
-+void __lockfunc _mutex_unlock(struct mutex *lock)
-+{
-+ mutex_release(&lock->dep_map, 1, _RET_IP_);
-+ rt_mutex_unlock(&lock->lock);
++ curr = list_first_entry(&q->task_list, typeof(*curr),
++ task_list);
++ wake_up_process(curr->task);
++ list_del_init(&curr->task_list);
++ wakes++;
++ }
++ if (pm_in_action)
++ return;
++ WARN(wakes > 2, "complete_all() with %d waiters\n", wakes);
+}
-+EXPORT_SYMBOL(_mutex_unlock);
++EXPORT_SYMBOL(swake_up_all_locked);
+
+ void swake_up(struct swait_queue_head *q)
+ {
+ unsigned long flags;
+@@ -49,6 +69,7 @@
+ struct swait_queue *curr;
+ LIST_HEAD(tmp);
+
++ WARN_ON(irqs_disabled());
+ raw_spin_lock_irq(&q->lock);
+ list_splice_init(&q->task_list, &tmp);
+ while (!list_empty(&tmp)) {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/swork.c linux-4.14/kernel/sched/swork.c
+--- linux-4.14.orig/kernel/sched/swork.c 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/kernel/sched/swork.c 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,173 @@
+/*
-+ * rwlock_t functions
++ * Copyright (C) 2014 BMW Car IT GmbH, Daniel Wagner daniel.wagner@bmw-carit.de
++ *
++ * Provides a framework for enqueuing callbacks from irq context in a
++ * PREEMPT_RT_FULL-safe way. The callbacks are executed in kthread context.
+ */
-+int __lockfunc rt_write_trylock(rwlock_t *rwlock)
-+{
-+ int ret;
+
-+ migrate_disable();
-+ ret = rt_mutex_trylock(&rwlock->lock);
-+ if (ret)
-+ rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
-+ else
-+ migrate_enable();
++#include <linux/swait.h>
++#include <linux/swork.h>
++#include <linux/kthread.h>
++#include <linux/slab.h>
++#include <linux/spinlock.h>
++#include <linux/export.h>
++
++#define SWORK_EVENT_PENDING (1 << 0)
++
++static DEFINE_MUTEX(worker_mutex);
++static struct sworker *glob_worker;
++
++struct sworker {
++ struct list_head events;
++ struct swait_queue_head wq;
+
-+ return ret;
-+}
-+EXPORT_SYMBOL(rt_write_trylock);
++ raw_spinlock_t lock;
++
++ struct task_struct *task;
++ int refs;
++};
+
-+int __lockfunc rt_write_trylock_irqsave(rwlock_t *rwlock, unsigned long *flags)
++static bool swork_readable(struct sworker *worker)
+{
-+ int ret;
++ bool r;
+
-+ *flags = 0;
-+ ret = rt_write_trylock(rwlock);
-+ return ret;
++ if (kthread_should_stop())
++ return true;
++
++ raw_spin_lock_irq(&worker->lock);
++ r = !list_empty(&worker->events);
++ raw_spin_unlock_irq(&worker->lock);
++
++ return r;
+}
-+EXPORT_SYMBOL(rt_write_trylock_irqsave);
+
-+int __lockfunc rt_read_trylock(rwlock_t *rwlock)
++static int swork_kthread(void *arg)
+{
-+ struct rt_mutex *lock = &rwlock->lock;
-+ int ret = 1;
++ struct sworker *worker = arg;
+
-+ /*
-+ * recursive read locks succeed when current owns the lock,
-+ * but not when read_depth == 0 which means that the lock is
-+ * write locked.
-+ */
-+ if (rt_mutex_owner(lock) != current) {
-+ migrate_disable();
-+ ret = rt_mutex_trylock(lock);
-+ if (ret)
-+ rwlock_acquire(&rwlock->dep_map, 0, 1, _RET_IP_);
-+ else
-+ migrate_enable();
++ for (;;) {
++ swait_event_interruptible(worker->wq,
++ swork_readable(worker));
++ if (kthread_should_stop())
++ break;
+
-+ } else if (!rwlock->read_depth) {
-+ ret = 0;
-+ }
++ raw_spin_lock_irq(&worker->lock);
++ while (!list_empty(&worker->events)) {
++ struct swork_event *sev;
+
-+ if (ret)
-+ rwlock->read_depth++;
++ sev = list_first_entry(&worker->events,
++ struct swork_event, item);
++ list_del(&sev->item);
++ raw_spin_unlock_irq(&worker->lock);
+
-+ return ret;
++ WARN_ON_ONCE(!test_and_clear_bit(SWORK_EVENT_PENDING,
++ &sev->flags));
++ sev->func(sev);
++ raw_spin_lock_irq(&worker->lock);
++ }
++ raw_spin_unlock_irq(&worker->lock);
++ }
++ return 0;
+}
-+EXPORT_SYMBOL(rt_read_trylock);
+
-+void __lockfunc rt_write_lock(rwlock_t *rwlock)
++static struct sworker *swork_create(void)
+{
-+ rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
-+ __rt_spin_lock(&rwlock->lock);
-+}
-+EXPORT_SYMBOL(rt_write_lock);
++ struct sworker *worker;
+
-+void __lockfunc rt_read_lock(rwlock_t *rwlock)
-+{
-+ struct rt_mutex *lock = &rwlock->lock;
++ worker = kzalloc(sizeof(*worker), GFP_KERNEL);
++ if (!worker)
++ return ERR_PTR(-ENOMEM);
+
++ INIT_LIST_HEAD(&worker->events);
++ raw_spin_lock_init(&worker->lock);
++ init_swait_queue_head(&worker->wq);
+
-+ /*
-+ * recursive read locks succeed when current owns the lock
-+ */
-+ if (rt_mutex_owner(lock) != current) {
-+ rwlock_acquire(&rwlock->dep_map, 0, 0, _RET_IP_);
-+ __rt_spin_lock(lock);
++ worker->task = kthread_run(swork_kthread, worker, "kswork");
++ if (IS_ERR(worker->task)) {
++ kfree(worker);
++ return ERR_PTR(-ENOMEM);
+ }
-+ rwlock->read_depth++;
-+}
-+
-+EXPORT_SYMBOL(rt_read_lock);
+
-+void __lockfunc rt_write_unlock(rwlock_t *rwlock)
-+{
-+ /* NOTE: we always pass in '1' for nested, for simplicity */
-+ rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
-+ __rt_spin_unlock(&rwlock->lock);
-+ migrate_enable();
++ return worker;
+}
-+EXPORT_SYMBOL(rt_write_unlock);
+
-+void __lockfunc rt_read_unlock(rwlock_t *rwlock)
++static void swork_destroy(struct sworker *worker)
+{
-+ /* Release the lock only when read_depth is down to 0 */
-+ if (--rwlock->read_depth == 0) {
-+ rwlock_release(&rwlock->dep_map, 1, _RET_IP_);
-+ __rt_spin_unlock(&rwlock->lock);
-+ migrate_enable();
-+ }
++ kthread_stop(worker->task);
++
++ WARN_ON(!list_empty(&worker->events));
++ kfree(worker);
+}
-+EXPORT_SYMBOL(rt_read_unlock);
+
-+unsigned long __lockfunc rt_write_lock_irqsave(rwlock_t *rwlock)
++/**
++ * swork_queue - queue swork
++ *
++ * Returns %false if @sev was already on a queue, %true otherwise.
++ *
++ * The work is queued and processed on a random CPU.
++ */
++bool swork_queue(struct swork_event *sev)
+{
-+ rt_write_lock(rwlock);
++ unsigned long flags;
+
-+ return 0;
-+}
-+EXPORT_SYMBOL(rt_write_lock_irqsave);
++ if (test_and_set_bit(SWORK_EVENT_PENDING, &sev->flags))
++ return false;
+
-+unsigned long __lockfunc rt_read_lock_irqsave(rwlock_t *rwlock)
-+{
-+ rt_read_lock(rwlock);
++ raw_spin_lock_irqsave(&glob_worker->lock, flags);
++ list_add_tail(&sev->item, &glob_worker->events);
++ raw_spin_unlock_irqrestore(&glob_worker->lock, flags);
+
-+ return 0;
++ swake_up(&glob_worker->wq);
++ return true;
+}
-+EXPORT_SYMBOL(rt_read_lock_irqsave);
++EXPORT_SYMBOL_GPL(swork_queue);
+
-+void __rt_rwlock_init(rwlock_t *rwlock, char *name, struct lock_class_key *key)
++/**
++ * swork_get - get an instance of the sworker
++ *
++ * Returns a negative error code if the initialization of the worker
++ * failed, %0 otherwise.
++ *
++ */
++int swork_get(void)
+{
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+ /*
-+ * Make sure we are not reinitializing a held lock:
-+ */
-+ debug_check_no_locks_freed((void *)rwlock, sizeof(*rwlock));
-+ lockdep_init_map(&rwlock->dep_map, name, key, 0);
-+#endif
-+ rwlock->lock.save_state = 1;
-+ rwlock->read_depth = 0;
-+}
-+EXPORT_SYMBOL(__rt_rwlock_init);
++ struct sworker *worker;
+
-+/*
-+ * rw_semaphores
-+ */
++ mutex_lock(&worker_mutex);
++ if (!glob_worker) {
++ worker = swork_create();
++ if (IS_ERR(worker)) {
++ mutex_unlock(&worker_mutex);
++ return -ENOMEM;
++ }
+
-+void rt_up_write(struct rw_semaphore *rwsem)
-+{
-+ rwsem_release(&rwsem->dep_map, 1, _RET_IP_);
-+ rt_mutex_unlock(&rwsem->lock);
-+}
-+EXPORT_SYMBOL(rt_up_write);
++ glob_worker = worker;
++ }
+
-+void __rt_up_read(struct rw_semaphore *rwsem)
-+{
-+ if (--rwsem->read_depth == 0)
-+ rt_mutex_unlock(&rwsem->lock);
-+}
++ glob_worker->refs++;
++ mutex_unlock(&worker_mutex);
+
-+void rt_up_read(struct rw_semaphore *rwsem)
-+{
-+ rwsem_release(&rwsem->dep_map, 1, _RET_IP_);
-+ __rt_up_read(rwsem);
++ return 0;
+}
-+EXPORT_SYMBOL(rt_up_read);
++EXPORT_SYMBOL_GPL(swork_get);
+
-+/*
-+ * downgrade a write lock into a read lock
-+ * - just wake up any readers at the front of the queue
++/**
++ * swork_put - puts an instance of the sworker
++ *
++ * Will destroy the sworker thread. This function must not be called until all
++ * queued events have been completed.
+ */
-+void rt_downgrade_write(struct rw_semaphore *rwsem)
-+{
-+ BUG_ON(rt_mutex_owner(&rwsem->lock) != current);
-+ rwsem->read_depth = 1;
-+}
-+EXPORT_SYMBOL(rt_downgrade_write);
-+
-+int rt_down_write_trylock(struct rw_semaphore *rwsem)
++void swork_put(void)
+{
-+ int ret = rt_mutex_trylock(&rwsem->lock);
++ mutex_lock(&worker_mutex);
+
-+ if (ret)
-+ rwsem_acquire(&rwsem->dep_map, 0, 1, _RET_IP_);
-+ return ret;
-+}
-+EXPORT_SYMBOL(rt_down_write_trylock);
++ glob_worker->refs--;
++ if (glob_worker->refs > 0)
++ goto out;
+
-+void rt_down_write(struct rw_semaphore *rwsem)
-+{
-+ rwsem_acquire(&rwsem->dep_map, 0, 0, _RET_IP_);
-+ rt_mutex_lock(&rwsem->lock);
++ swork_destroy(glob_worker);
++ glob_worker = NULL;
++out:
++ mutex_unlock(&worker_mutex);
+}
-+EXPORT_SYMBOL(rt_down_write);
-+
-+int rt_down_write_killable(struct rw_semaphore *rwsem)
++EXPORT_SYMBOL_GPL(swork_put);
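For context, a minimal sketch of a client of the swork framework added above. It assumes only what this file exposes: swork_get()/swork_put() for worker lifetime, swork_queue() for submission, and a struct swork_event whose ->func member is invoked from the "kswork" kthread (the structure itself is declared in the accompanying <linux/swork.h>, not shown in this excerpt). All names below are illustrative, not part of the patch.

#include <linux/swork.h>
#include <linux/printk.h>
#include <linux/module.h>

/* Callback runs in the "kswork" kthread, i.e. fully preemptible context. */
static void example_swork_func(struct swork_event *sev)
{
	pr_info("deferred callback executed\n");
}

static struct swork_event example_event = {
	.func = example_swork_func,
};

static int __init example_init(void)
{
	int ret;

	ret = swork_get();		/* create or take a reference on the worker */
	if (ret)
		return ret;

	swork_queue(&example_event);	/* safe to call from hard interrupt context */
	return 0;
}

static void __exit example_exit(void)
{
	swork_put();			/* drop the reference; destroys the worker */
}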
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/sched/topology.c linux-4.14/kernel/sched/topology.c
+--- linux-4.14.orig/kernel/sched/topology.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/sched/topology.c 2018-09-05 11:05:07.000000000 +0200
+@@ -286,6 +286,7 @@
+ rd->rto_cpu = -1;
+ raw_spin_lock_init(&rd->rto_lock);
+ init_irq_work(&rd->rto_push_work, rto_push_irq_work_func);
++ rd->rto_push_work.flags |= IRQ_WORK_HARD_IRQ;
+ #endif
+
+ init_dl_bw(&rd->dl_bw);
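The one-line topology.c change above relies on the IRQ_WORK_HARD_IRQ flag introduced elsewhere in this patch set: on PREEMPT_RT_FULL, ordinary irq_work items are deferred to softirq/thread context (see the irq_work changes earlier in this patch), while the flag keeps a latency-critical callback in hard interrupt context. A hedged sketch of a user, with hypothetical names, could look like this:

#include <linux/irq_work.h>

/* Runs with interrupts disabled; must be short and must not sleep. */
static void example_hard_irq_work(struct irq_work *work)
{
	/* e.g. kick a scheduler push/pull operation */
}

static struct irq_work example_work;

static void example_setup(void)
{
	init_irq_work(&example_work, example_hard_irq_work);
	/* Keep this item in hard irq context even on PREEMPT_RT_FULL
	 * (IRQ_WORK_HARD_IRQ is provided by this patch set). */
	example_work.flags |= IRQ_WORK_HARD_IRQ;
}

static void example_trigger(void)
{
	irq_work_queue(&example_work);
}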
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/signal.c linux-4.14/kernel/signal.c
+--- linux-4.14.orig/kernel/signal.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/signal.c 2018-09-05 11:05:07.000000000 +0200
+@@ -19,6 +19,7 @@
+ #include <linux/sched/task.h>
+ #include <linux/sched/task_stack.h>
+ #include <linux/sched/cputime.h>
++#include <linux/sched/rt.h>
+ #include <linux/fs.h>
+ #include <linux/tty.h>
+ #include <linux/binfmts.h>
+@@ -360,13 +361,30 @@
+ return false;
+ }
+
++static inline struct sigqueue *get_task_cache(struct task_struct *t)
+{
-+ int ret;
++ struct sigqueue *q = t->sigqueue_cache;
+
-+ rwsem_acquire(&rwsem->dep_map, 0, 0, _RET_IP_);
-+ ret = rt_mutex_lock_killable(&rwsem->lock);
-+ if (ret)
-+ rwsem_release(&rwsem->dep_map, 1, _RET_IP_);
-+ return ret;
++ if (cmpxchg(&t->sigqueue_cache, q, NULL) != q)
++ return NULL;
++ return q;
+}
-+EXPORT_SYMBOL(rt_down_write_killable);
+
-+int rt_down_write_killable_nested(struct rw_semaphore *rwsem, int subclass)
++static inline int put_task_cache(struct task_struct *t, struct sigqueue *q)
+{
-+ int ret;
-+
-+ rwsem_acquire(&rwsem->dep_map, subclass, 0, _RET_IP_);
-+ ret = rt_mutex_lock_killable(&rwsem->lock);
-+ if (ret)
-+ rwsem_release(&rwsem->dep_map, 1, _RET_IP_);
-+ return ret;
++ if (cmpxchg(&t->sigqueue_cache, NULL, q) == NULL)
++ return 0;
++ return 1;
+}
-+EXPORT_SYMBOL(rt_down_write_killable_nested);
+
-+void rt_down_write_nested(struct rw_semaphore *rwsem, int subclass)
+ /*
+ * allocate a new signal queue record
+ * - this may be called without locks if and only if t == current, otherwise an
+ * appropriate lock must be held to stop the target task from exiting
+ */
+ static struct sigqueue *
+-__sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimit)
++__sigqueue_do_alloc(int sig, struct task_struct *t, gfp_t flags,
++ int override_rlimit, int fromslab)
+ {
+ struct sigqueue *q = NULL;
+ struct user_struct *user;
+@@ -383,7 +401,10 @@
+ if (override_rlimit ||
+ atomic_read(&user->sigpending) <=
+ task_rlimit(t, RLIMIT_SIGPENDING)) {
+- q = kmem_cache_alloc(sigqueue_cachep, flags);
++ if (!fromslab)
++ q = get_task_cache(t);
++ if (!q)
++ q = kmem_cache_alloc(sigqueue_cachep, flags);
+ } else {
+ print_dropped_signal(sig);
+ }
+@@ -400,6 +421,13 @@
+ return q;
+ }
+
++static struct sigqueue *
++__sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags,
++ int override_rlimit)
+{
-+ rwsem_acquire(&rwsem->dep_map, subclass, 0, _RET_IP_);
-+ rt_mutex_lock(&rwsem->lock);
++ return __sigqueue_do_alloc(sig, t, flags, override_rlimit, 0);
+}
-+EXPORT_SYMBOL(rt_down_write_nested);
+
-+void rt_down_write_nested_lock(struct rw_semaphore *rwsem,
-+ struct lockdep_map *nest)
+ static void __sigqueue_free(struct sigqueue *q)
+ {
+ if (q->flags & SIGQUEUE_PREALLOC)
+@@ -409,6 +437,21 @@
+ kmem_cache_free(sigqueue_cachep, q);
+ }
+
++static void sigqueue_free_current(struct sigqueue *q)
+{
-+ rwsem_acquire_nest(&rwsem->dep_map, 0, 0, nest, _RET_IP_);
-+ rt_mutex_lock(&rwsem->lock);
++ struct user_struct *up;
++
++ if (q->flags & SIGQUEUE_PREALLOC)
++ return;
++
++ up = q->user;
++ if (rt_prio(current->normal_prio) && !put_task_cache(current, q)) {
++ atomic_dec(&up->sigpending);
++ free_uid(up);
++ } else
++ __sigqueue_free(q);
+}
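The helpers above implement a lock-free, single-slot per-task cache: get_task_cache() claims the cached entry with cmpxchg(), put_task_cache() parks a freed entry only if the slot is empty, and sigqueue_free_current() prefers the cache for RT tasks so signal delivery does not have to go back to the slab allocator. The same idiom, reduced to its essentials with a hypothetical object type, looks like this:

struct obj_cache {
	struct obj *slot;	/* single cached object or NULL */
};

/* Claim the cached object, or return NULL if the slot is empty or we raced. */
static struct obj *cache_get(struct obj_cache *c)
{
	struct obj *o = READ_ONCE(c->slot);

	if (!o || cmpxchg(&c->slot, o, NULL) != o)
		return NULL;
	return o;
}

/* Park an object in the slot; return true if it was accepted. */
static bool cache_put(struct obj_cache *c, struct obj *o)
{
	return cmpxchg(&c->slot, NULL, o) == NULL;
}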
-+EXPORT_SYMBOL(rt_down_write_nested_lock);
+
-+int rt__down_read_trylock(struct rw_semaphore *rwsem)
+ void flush_sigqueue(struct sigpending *queue)
+ {
+ struct sigqueue *q;
+@@ -422,6 +465,21 @@
+ }
+
+ /*
++ * Called from __exit_signal. Flush tsk->pending and
++ * tsk->sigqueue_cache
++ */
++void flush_task_sigqueue(struct task_struct *tsk)
+{
-+ struct rt_mutex *lock = &rwsem->lock;
-+ int ret = 1;
-+
-+ /*
-+ * recursive read locks succeed when current owns the rwsem,
-+ * but not when read_depth == 0 which means that the rwsem is
-+ * write locked.
-+ */
-+ if (rt_mutex_owner(lock) != current)
-+ ret = rt_mutex_trylock(&rwsem->lock);
-+ else if (!rwsem->read_depth)
-+ ret = 0;
++ struct sigqueue *q;
+
-+ if (ret)
-+ rwsem->read_depth++;
-+ return ret;
++ flush_sigqueue(&tsk->pending);
+
++ q = get_task_cache(tsk);
++ if (q)
++ kmem_cache_free(sigqueue_cachep, q);
+}
+
-+int rt_down_read_trylock(struct rw_semaphore *rwsem)
++/*
+ * Flush all pending signals for this kthread.
+ */
+ void flush_signals(struct task_struct *t)
+@@ -542,7 +600,7 @@
+ (info->si_code == SI_TIMER) &&
+ (info->si_sys_private);
+
+- __sigqueue_free(first);
++ sigqueue_free_current(first);
+ } else {
+ /*
+ * Ok, it wasn't in the queue. This must be
+@@ -578,6 +636,8 @@
+ bool resched_timer = false;
+ int signr;
+
++ WARN_ON_ONCE(tsk != current);
++
+ /* We only dequeue private signals from ourselves, we don't let
+ * signalfd steal them
+ */
+@@ -1177,8 +1237,8 @@
+ * We don't want to have recursive SIGSEGV's etc, for example,
+ * that is why we also clear SIGNAL_UNKILLABLE.
+ */
+-int
+-force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
++static int
++do_force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
+ {
+ unsigned long int flags;
+ int ret, blocked, ignored;
+@@ -1207,6 +1267,39 @@
+ return ret;
+ }
+
++int force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
+{
-+ int ret;
++/*
++ * On some archs, PREEMPT_RT has to delay sending a signal from a trap
++ * since it can not enable preemption, and the signal code's spin_locks
++ * turn into mutexes. Instead, it must set TIF_NOTIFY_RESUME which will
++ * send the signal on exit of the trap.
++ */
++#ifdef ARCH_RT_DELAYS_SIGNAL_SEND
++ if (in_atomic()) {
++ if (WARN_ON_ONCE(t != current))
++ return 0;
++ if (WARN_ON_ONCE(t->forced_info.si_signo))
++ return 0;
+
-+ ret = rt__down_read_trylock(rwsem);
-+ if (ret)
-+ rwsem_acquire(&rwsem->dep_map, 0, 1, _RET_IP_);
++ if (is_si_special(info)) {
++ WARN_ON_ONCE(info != SEND_SIG_PRIV);
++ t->forced_info.si_signo = sig;
++ t->forced_info.si_errno = 0;
++ t->forced_info.si_code = SI_KERNEL;
++ t->forced_info.si_pid = 0;
++ t->forced_info.si_uid = 0;
++ } else {
++ t->forced_info = *info;
++ }
+
-+ return ret;
++ set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
++ return 0;
++ }
++#endif
++ return do_force_sig_info(sig, info, t);
+}
-+EXPORT_SYMBOL(rt_down_read_trylock);
+
-+void rt__down_read(struct rw_semaphore *rwsem)
-+{
-+ struct rt_mutex *lock = &rwsem->lock;
+ /*
+ * Nuke all other threads in the group.
+ */
+@@ -1241,12 +1334,12 @@
+ * Disable interrupts early to avoid deadlocks.
+ * See rcu_read_unlock() comment header for details.
+ */
+- local_irq_save(*flags);
++ local_irq_save_nort(*flags);
+ rcu_read_lock();
+ sighand = rcu_dereference(tsk->sighand);
+ if (unlikely(sighand == NULL)) {
+ rcu_read_unlock();
+- local_irq_restore(*flags);
++ local_irq_restore_nort(*flags);
+ break;
+ }
+ /*
+@@ -1267,7 +1360,7 @@
+ }
+ spin_unlock(&sighand->siglock);
+ rcu_read_unlock();
+- local_irq_restore(*flags);
++ local_irq_restore_nort(*flags);
+ }
+
+ return sighand;
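local_irq_save_nort()/local_irq_restore_nort(), used in the hunk above, are helpers provided elsewhere in this patch set: on mainline (!PREEMPT_RT_FULL) they behave like local_irq_save()/local_irq_restore(), while on PREEMPT_RT_FULL they compile to essentially nothing so the section stays preemptible and the now-sleeping siglock can be taken. A minimal sketch of the idiom, assuming those semantics:

static void example_nort_section(void)
{
	unsigned long flags;

	/* IRQs off on mainline; left enabled (and preemptible) on RT, where
	 * the data is protected by a sleeping lock instead. */
	local_irq_save_nort(flags);
	/* ... short critical section ... */
	local_irq_restore_nort(flags);
}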
+@@ -1514,7 +1607,8 @@
+ */
+ struct sigqueue *sigqueue_alloc(void)
+ {
+- struct sigqueue *q = __sigqueue_alloc(-1, current, GFP_KERNEL, 0);
++ /* Preallocated sigqueue objects always come from the slabcache! */
++ struct sigqueue *q = __sigqueue_do_alloc(-1, current, GFP_KERNEL, 0, 1);
+
+ if (q)
+ q->flags |= SIGQUEUE_PREALLOC;
+@@ -1888,15 +1982,7 @@
+ if (gstop_done && ptrace_reparented(current))
+ do_notify_parent_cldstop(current, false, why);
+
+- /*
+- * Don't want to allow preemption here, because
+- * sys_ptrace() needs this task to be inactive.
+- *
+- * XXX: implement read_unlock_no_resched().
+- */
+- preempt_disable();
+ read_unlock(&tasklist_lock);
+- preempt_enable_no_resched();
+ freezable_schedule();
+ } else {
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/softirq.c linux-4.14/kernel/softirq.c
+--- linux-4.14.orig/kernel/softirq.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/softirq.c 2018-09-05 11:05:07.000000000 +0200
+@@ -21,11 +21,14 @@
+ #include <linux/freezer.h>
+ #include <linux/kthread.h>
+ #include <linux/rcupdate.h>
++#include <linux/delay.h>
+ #include <linux/ftrace.h>
+ #include <linux/smp.h>
+ #include <linux/smpboot.h>
+ #include <linux/tick.h>
++#include <linux/locallock.h>
+ #include <linux/irq.h>
++#include <linux/sched/types.h>
+
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/irq.h>
+@@ -56,12 +59,108 @@
+ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;
+
+ DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
++#ifdef CONFIG_PREEMPT_RT_FULL
++#define TIMER_SOFTIRQS ((1 << TIMER_SOFTIRQ) | (1 << HRTIMER_SOFTIRQ))
++DEFINE_PER_CPU(struct task_struct *, ktimer_softirqd);
++#endif
+
+ const char * const softirq_to_name[NR_SOFTIRQS] = {
+ "HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "IRQ_POLL",
+ "TASKLET", "SCHED", "HRTIMER", "RCU"
+ };
+
++#ifdef CONFIG_NO_HZ_COMMON
++# ifdef CONFIG_PREEMPT_RT_FULL
+
-+ if (rt_mutex_owner(lock) != current)
-+ rt_mutex_lock(&rwsem->lock);
-+ rwsem->read_depth++;
-+}
-+EXPORT_SYMBOL(rt__down_read);
++struct softirq_runner {
++ struct task_struct *runner[NR_SOFTIRQS];
++};
+
-+static void __rt_down_read(struct rw_semaphore *rwsem, int subclass)
-+{
-+ rwsem_acquire_read(&rwsem->dep_map, subclass, 0, _RET_IP_);
-+ rt__down_read(rwsem);
-+}
++static DEFINE_PER_CPU(struct softirq_runner, softirq_runners);
+
-+void rt_down_read(struct rw_semaphore *rwsem)
++static inline void softirq_set_runner(unsigned int sirq)
+{
-+ __rt_down_read(rwsem, 0);
-+}
-+EXPORT_SYMBOL(rt_down_read);
++ struct softirq_runner *sr = this_cpu_ptr(&softirq_runners);
+
-+void rt_down_read_nested(struct rw_semaphore *rwsem, int subclass)
-+{
-+ __rt_down_read(rwsem, subclass);
++ sr->runner[sirq] = current;
+}
-+EXPORT_SYMBOL(rt_down_read_nested);
+
-+void __rt_rwsem_init(struct rw_semaphore *rwsem, const char *name,
-+ struct lock_class_key *key)
++static inline void softirq_clr_runner(unsigned int sirq)
+{
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+ /*
-+ * Make sure we are not reinitializing a held lock:
-+ */
-+ debug_check_no_locks_freed((void *)rwsem, sizeof(*rwsem));
-+ lockdep_init_map(&rwsem->dep_map, name, key, 0);
-+#endif
-+ rwsem->read_depth = 0;
-+ rwsem->lock.save_state = 0;
++ struct softirq_runner *sr = this_cpu_ptr(&softirq_runners);
++
++ sr->runner[sirq] = NULL;
+}
-+EXPORT_SYMBOL(__rt_rwsem_init);
+
-+/**
-+ * atomic_dec_and_mutex_lock - return holding mutex if we dec to 0
-+ * @cnt: the atomic which we are to dec
-+ * @lock: the mutex to return holding if we dec to 0
++/*
++ * On preempt-rt a softirq running context might be blocked on a
++ * lock. There might be no other runnable task on this CPU because the
++ * lock owner runs on some other CPU. So we have to go into idle with
++ * the pending bit set. Therefore we need to check this, otherwise we
++ * warn about false positives, which confuses users and defeats the
++ * whole purpose of this test.
+ *
-+ * return true and hold lock if we dec to 0, return false otherwise
++ * This code is called with interrupts disabled.
+ */
-+int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock)
++void softirq_check_pending_idle(void)
+{
-+ /* dec if we can't possibly hit 0 */
-+ if (atomic_add_unless(cnt, -1, 1))
-+ return 0;
-+ /* we might hit 0, so take the lock */
-+ mutex_lock(lock);
-+ if (!atomic_dec_and_test(cnt)) {
-+ /* when we actually did the dec, we didn't hit 0 */
-+ mutex_unlock(lock);
-+ return 0;
++ static int rate_limit;
++ struct softirq_runner *sr = this_cpu_ptr(&softirq_runners);
++ u32 warnpending;
++ int i;
++
++ if (rate_limit >= 10)
++ return;
++
++ warnpending = local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK;
++ for (i = 0; i < NR_SOFTIRQS; i++) {
++ struct task_struct *tsk = sr->runner[i];
++
++ /*
++ * The wakeup code in rtmutex.c wakes up the task
++ * _before_ it sets pi_blocked_on to NULL under
++ * tsk->pi_lock. So we need to check for both: state
++ * and pi_blocked_on.
++ */
++ if (tsk) {
++ raw_spin_lock(&tsk->pi_lock);
++ if (tsk->pi_blocked_on || tsk->state == TASK_RUNNING) {
++ /* Clear all bits pending in that task */
++ warnpending &= ~(tsk->softirqs_raised);
++ warnpending &= ~(1 << i);
++ }
++ raw_spin_unlock(&tsk->pi_lock);
++ }
+ }
-+ /* we hit 0, and we hold the lock */
-+ return 1;
-+}
-+EXPORT_SYMBOL(atomic_dec_and_mutex_lock);
-diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
-index 2c49d76f96c3..4f1a7663c34d 100644
---- a/kernel/locking/rtmutex.c
-+++ b/kernel/locking/rtmutex.c
-@@ -7,6 +7,11 @@
- * Copyright (C) 2005-2006 Timesys Corp., Thomas Gleixner <tglx@timesys.com>
- * Copyright (C) 2005 Kihon Technologies Inc., Steven Rostedt
- * Copyright (C) 2006 Esben Nielsen
-+ * Adaptive Spinlocks:
-+ * Copyright (C) 2008 Novell, Inc., Gregory Haskins, Sven Dietrich,
-+ * and Peter Morreale,
-+ * Adaptive Spinlocks simplification:
-+ * Copyright (C) 2008 Red Hat, Inc., Steven Rostedt <srostedt@redhat.com>
- *
- * See Documentation/locking/rt-mutex-design.txt for details.
- */
-@@ -16,6 +21,7 @@
- #include <linux/sched/rt.h>
- #include <linux/sched/deadline.h>
- #include <linux/timer.h>
-+#include <linux/ww_mutex.h>
-
- #include "rtmutex_common.h"
-
-@@ -133,6 +139,12 @@ static void fixup_rt_mutex_waiters(struct rt_mutex *lock)
- WRITE_ONCE(*p, owner & ~RT_MUTEX_HAS_WAITERS);
- }
-
-+static int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter)
++
++ if (warnpending) {
++ printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
++ warnpending);
++ rate_limit++;
++ }
++}
++# else
++/*
++ * On !PREEMPT_RT we just printk rate limited:
++ */
++void softirq_check_pending_idle(void)
+{
-+ return waiter && waiter != PI_WAKEUP_INPROGRESS &&
-+ waiter != PI_REQUEUE_INPROGRESS;
++ static int rate_limit;
++
++ if (rate_limit < 10 &&
++ (local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
++ printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
++ local_softirq_pending());
++ rate_limit++;
++ }
+}
++# endif
++
++#else /* !CONFIG_NO_HZ_COMMON */
++static inline void softirq_set_runner(unsigned int sirq) { }
++static inline void softirq_clr_runner(unsigned int sirq) { }
++#endif
+
/*
- * We can speed up the acquire/release, if there's no debugging state to be
- * set up.
-@@ -414,6 +426,14 @@ static bool rt_mutex_cond_detect_deadlock(struct rt_mutex_waiter *waiter,
- return debug_rt_mutex_detect_deadlock(waiter, chwalk);
+ * we cannot loop indefinitely here to avoid userspace starvation,
+ * but we also don't want to introduce a worst case 1/HZ latency
+@@ -77,6 +176,38 @@
+ wake_up_process(tsk);
}
-+static void rt_mutex_wake_waiter(struct rt_mutex_waiter *waiter)
++#ifdef CONFIG_PREEMPT_RT_FULL
++static void wakeup_timer_softirqd(void)
+{
-+ if (waiter->savestate)
-+ wake_up_lock_sleeper(waiter->task);
-+ else
-+ wake_up_process(waiter->task);
++ /* Interrupts are disabled: no need to stop preemption */
++ struct task_struct *tsk = __this_cpu_read(ktimer_softirqd);
++
++ if (tsk && tsk->state != TASK_RUNNING)
++ wake_up_process(tsk);
+}
++#endif
+
- /*
- * Max number of times we'll walk the boosting chain:
- */
-@@ -421,7 +441,8 @@ int max_lock_depth = 1024;
-
- static inline struct rt_mutex *task_blocked_on_lock(struct task_struct *p)
- {
-- return p->pi_blocked_on ? p->pi_blocked_on->lock : NULL;
-+ return rt_mutex_real_waiter(p->pi_blocked_on) ?
-+ p->pi_blocked_on->lock : NULL;
- }
-
- /*
-@@ -557,7 +578,7 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
- * reached or the state of the chain has changed while we
- * dropped the locks.
- */
-- if (!waiter)
-+ if (!rt_mutex_real_waiter(waiter))
- goto out_unlock_pi;
-
- /*
-@@ -719,13 +740,16 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
- * follow here. This is the end of the chain we are walking.
- */
- if (!rt_mutex_owner(lock)) {
-+ struct rt_mutex_waiter *lock_top_waiter;
++static void handle_softirq(unsigned int vec_nr)
++{
++ struct softirq_action *h = softirq_vec + vec_nr;
++ int prev_count;
+
- /*
- * If the requeue [7] above changed the top waiter,
- * then we need to wake the new top waiter up to try
- * to get the lock.
- */
-- if (prerequeue_top_waiter != rt_mutex_top_waiter(lock))
-- wake_up_process(rt_mutex_top_waiter(lock)->task);
-+ lock_top_waiter = rt_mutex_top_waiter(lock);
-+ if (prerequeue_top_waiter != lock_top_waiter)
-+ rt_mutex_wake_waiter(lock_top_waiter);
- raw_spin_unlock_irq(&lock->wait_lock);
- return 0;
- }
-@@ -818,6 +842,25 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
- return ret;
- }
-
++ prev_count = preempt_count();
+
-+#define STEAL_NORMAL 0
-+#define STEAL_LATERAL 1
++ kstat_incr_softirqs_this_cpu(vec_nr);
+
-+/*
-+ * Note that RT tasks are excluded from lateral-steals to prevent the
-+ * introduction of an unbounded latency
-+ */
-+static inline int lock_is_stealable(struct task_struct *task,
-+ struct task_struct *pendowner, int mode)
-+{
-+ if (mode == STEAL_NORMAL || rt_task(task)) {
-+ if (task->prio >= pendowner->prio)
-+ return 0;
-+ } else if (task->prio > pendowner->prio)
-+ return 0;
-+ return 1;
++ trace_softirq_entry(vec_nr);
++ h->action(h);
++ trace_softirq_exit(vec_nr);
++ if (unlikely(prev_count != preempt_count())) {
++ pr_err("huh, entered softirq %u %s %p with preempt_count %08x, exited with %08x?\n",
++ vec_nr, softirq_to_name[vec_nr], h->action,
++ prev_count, preempt_count());
++ preempt_count_set(prev_count);
++ }
+}
+
++#ifndef CONFIG_PREEMPT_RT_FULL
/*
- * Try to take an rt-mutex
- *
-@@ -828,8 +871,9 @@ static int rt_mutex_adjust_prio_chain(struct task_struct *task,
- * @waiter: The waiter that is queued to the lock's wait tree if the
- * callsite called task_blocked_on_lock(), otherwise NULL
- */
--static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
-- struct rt_mutex_waiter *waiter)
-+static int __try_to_take_rt_mutex(struct rt_mutex *lock,
-+ struct task_struct *task,
-+ struct rt_mutex_waiter *waiter, int mode)
- {
- /*
- * Before testing whether we can acquire @lock, we set the
-@@ -866,8 +910,10 @@ static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
- * If waiter is not the highest priority waiter of
- * @lock, give up.
- */
-- if (waiter != rt_mutex_top_waiter(lock))
-+ if (waiter != rt_mutex_top_waiter(lock)) {
-+ /* XXX lock_is_stealable() ? */
- return 0;
-+ }
-
- /*
- * We can acquire the lock. Remove the waiter from the
-@@ -885,14 +931,10 @@ static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
- * not need to be dequeued.
- */
- if (rt_mutex_has_waiters(lock)) {
-- /*
-- * If @task->prio is greater than or equal to
-- * the top waiter priority (kernel view),
-- * @task lost.
-- */
-- if (task->prio >= rt_mutex_top_waiter(lock)->prio)
-- return 0;
-+ struct task_struct *pown = rt_mutex_top_waiter(lock)->task;
-
-+ if (task != pown && !lock_is_stealable(task, pown, mode))
-+ return 0;
- /*
- * The current top waiter stays enqueued. We
- * don't have to change anything in the lock
-@@ -941,6 +983,433 @@ static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
- return 1;
+ * If ksoftirqd is scheduled, we do not want to process pending softirqs
+ * right now. Let ksoftirqd handle this at its own rate, to get fairness,
+@@ -92,6 +223,47 @@
+ return tsk && (tsk->state == TASK_RUNNING);
}
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+/*
-+ * preemptible spin_lock functions:
-+ */
-+static inline void rt_spin_lock_fastlock(struct rt_mutex *lock,
-+ void (*slowfn)(struct rt_mutex *lock,
-+ bool mg_off),
-+ bool do_mig_dis)
++static inline int ksoftirqd_softirq_pending(void)
+{
-+ might_sleep_no_state_check();
-+
-+ if (do_mig_dis)
-+ migrate_disable();
-+
-+ if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
-+ rt_mutex_deadlock_account_lock(lock, current);
-+ else
-+ slowfn(lock, do_mig_dis);
++ return local_softirq_pending();
+}
+
-+static inline int rt_spin_lock_fastunlock(struct rt_mutex *lock,
-+ int (*slowfn)(struct rt_mutex *lock))
-+{
-+ if (likely(rt_mutex_cmpxchg_release(lock, current, NULL))) {
-+ rt_mutex_deadlock_account_unlock(current);
-+ return 0;
-+ }
-+ return slowfn(lock);
-+}
-+#ifdef CONFIG_SMP
-+/*
-+ * Note that owner is a speculative pointer and dereferencing relies
-+ * on rcu_read_lock() and the check against the lock owner.
-+ */
-+static int adaptive_wait(struct rt_mutex *lock,
-+ struct task_struct *owner)
++static void handle_pending_softirqs(u32 pending)
+{
-+ int res = 0;
++ struct softirq_action *h = softirq_vec;
++ int softirq_bit;
+
-+ rcu_read_lock();
-+ for (;;) {
-+ if (owner != rt_mutex_owner(lock))
-+ break;
-+ /*
-+ * Ensure that owner->on_cpu is dereferenced _after_
-+ * checking the above to be valid.
-+ */
-+ barrier();
-+ if (!owner->on_cpu) {
-+ res = 1;
-+ break;
-+ }
-+ cpu_relax();
++ local_irq_enable();
++
++ h = softirq_vec;
++
++ while ((softirq_bit = ffs(pending))) {
++ unsigned int vec_nr;
++
++ h += softirq_bit - 1;
++ vec_nr = h - softirq_vec;
++ handle_softirq(vec_nr);
++
++ h++;
++ pending >>= softirq_bit;
+ }
-+ rcu_read_unlock();
-+ return res;
++
++ rcu_bh_qs();
++ local_irq_disable();
+}
-+#else
-+static int adaptive_wait(struct rt_mutex *lock,
-+ struct task_struct *orig_owner)
++
++static void run_ksoftirqd(unsigned int cpu)
+{
-+ return 1;
++ local_irq_disable();
++ if (ksoftirqd_softirq_pending()) {
++ __do_softirq();
++ local_irq_enable();
++ cond_resched_rcu_qs();
++ return;
++ }
++ local_irq_enable();
+}
-+#endif
+
-+static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
-+ struct rt_mutex_waiter *waiter,
-+ struct task_struct *task,
-+ enum rtmutex_chainwalk chwalk);
-+/*
-+ * Slow path lock function spin_lock style: this variant is very
-+ * careful not to miss any non-lock wakeups.
-+ *
-+ * We store the current state under p->pi_lock in p->saved_state and
-+ * the try_to_wake_up() code handles this accordingly.
+ /*
+ * preempt_count and SOFTIRQ_OFFSET usage:
+ * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
+@@ -247,10 +419,8 @@
+ unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
+ unsigned long old_flags = current->flags;
+ int max_restart = MAX_SOFTIRQ_RESTART;
+- struct softirq_action *h;
+ bool in_hardirq;
+ __u32 pending;
+- int softirq_bit;
+
+ /*
+ * Mask out PF_MEMALLOC s current task context is borrowed for the
+@@ -269,36 +439,7 @@
+ /* Reset the pending bitmask before enabling irqs */
+ set_softirq_pending(0);
+
+- local_irq_enable();
+-
+- h = softirq_vec;
+-
+- while ((softirq_bit = ffs(pending))) {
+- unsigned int vec_nr;
+- int prev_count;
+-
+- h += softirq_bit - 1;
+-
+- vec_nr = h - softirq_vec;
+- prev_count = preempt_count();
+-
+- kstat_incr_softirqs_this_cpu(vec_nr);
+-
+- trace_softirq_entry(vec_nr);
+- h->action(h);
+- trace_softirq_exit(vec_nr);
+- if (unlikely(prev_count != preempt_count())) {
+- pr_err("huh, entered softirq %u %s %p with preempt_count %08x, exited with %08x?\n",
+- vec_nr, softirq_to_name[vec_nr], h->action,
+- prev_count, preempt_count());
+- preempt_count_set(prev_count);
+- }
+- h++;
+- pending >>= softirq_bit;
+- }
+-
+- rcu_bh_qs();
+- local_irq_disable();
++ handle_pending_softirqs(pending);
+
+ pending = local_softirq_pending();
+ if (pending) {
+@@ -335,6 +476,309 @@
+ }
+
+ /*
++ * This function must run with irqs disabled!
+ */
-+static void noinline __sched rt_spin_lock_slowlock(struct rt_mutex *lock,
-+ bool mg_off)
++void raise_softirq_irqoff(unsigned int nr)
+{
-+ struct task_struct *lock_owner, *self = current;
-+ struct rt_mutex_waiter waiter, *top_waiter;
-+ unsigned long flags;
-+ int ret;
-+
-+ rt_mutex_init_waiter(&waiter, true);
-+
-+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
-+
-+ if (__try_to_take_rt_mutex(lock, self, NULL, STEAL_LATERAL)) {
-+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
-+ return;
-+ }
-+
-+ BUG_ON(rt_mutex_owner(lock) == self);
++ __raise_softirq_irqoff(nr);
+
+ /*
-+ * We save whatever state the task is in and we'll restore it
-+ * after acquiring the lock taking real wakeups into account
-+ * as well. We are serialized via pi_lock against wakeups. See
-+ * try_to_wake_up().
++ * If we're in an interrupt or softirq, we're done
++ * (this also catches softirq-disabled code). We will
++ * actually run the softirq once we return from
++ * the irq or softirq.
++ *
++ * Otherwise we wake up ksoftirqd to make sure we
++ * schedule the softirq soon.
+ */
-+ raw_spin_lock(&self->pi_lock);
-+ self->saved_state = self->state;
-+ __set_current_state_no_track(TASK_UNINTERRUPTIBLE);
-+ raw_spin_unlock(&self->pi_lock);
-+
-+ ret = task_blocks_on_rt_mutex(lock, &waiter, self, RT_MUTEX_MIN_CHAINWALK);
-+ BUG_ON(ret);
-+
-+ for (;;) {
-+ /* Try to acquire the lock again. */
-+ if (__try_to_take_rt_mutex(lock, self, &waiter, STEAL_LATERAL))
-+ break;
-+
-+ top_waiter = rt_mutex_top_waiter(lock);
-+ lock_owner = rt_mutex_owner(lock);
++ if (!in_interrupt())
++ wakeup_softirqd();
++}
+
-+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
++void __raise_softirq_irqoff(unsigned int nr)
++{
++ trace_softirq_raise(nr);
++ or_softirq_pending(1UL << nr);
++}
+
-+ debug_rt_mutex_print_deadlock(&waiter);
++static inline void local_bh_disable_nort(void) { local_bh_disable(); }
++static inline void _local_bh_enable_nort(void) { _local_bh_enable(); }
++static void ksoftirqd_set_sched_params(unsigned int cpu) { }
+
-+ if (top_waiter != &waiter || adaptive_wait(lock, lock_owner)) {
-+ if (mg_off)
-+ migrate_enable();
-+ schedule();
-+ if (mg_off)
-+ migrate_disable();
-+ }
++#else /* !PREEMPT_RT_FULL */
+
-+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
++/*
++ * On RT we serialize softirq execution with a cpu local lock per softirq
++ */
++static DEFINE_PER_CPU(struct local_irq_lock [NR_SOFTIRQS], local_softirq_locks);
+
-+ raw_spin_lock(&self->pi_lock);
-+ __set_current_state_no_track(TASK_UNINTERRUPTIBLE);
-+ raw_spin_unlock(&self->pi_lock);
-+ }
++void __init softirq_early_init(void)
++{
++ int i;
+
-+ /*
-+ * Restore the task state to current->saved_state. We set it
-+ * to the original state above and the try_to_wake_up() code
-+ * has possibly updated it when a real (non-rtmutex) wakeup
-+ * happened while we were blocked. Clear saved_state so
-+ * try_to_wakeup() does not get confused.
-+ */
-+ raw_spin_lock(&self->pi_lock);
-+ __set_current_state_no_track(self->saved_state);
-+ self->saved_state = TASK_RUNNING;
-+ raw_spin_unlock(&self->pi_lock);
++ for (i = 0; i < NR_SOFTIRQS; i++)
++ local_irq_lock_init(local_softirq_locks[i]);
++}
+
-+ /*
-+ * try_to_take_rt_mutex() sets the waiter bit
-+ * unconditionally. We might have to fix that up:
-+ */
-+ fixup_rt_mutex_waiters(lock);
++static void lock_softirq(int which)
++{
++ local_lock(local_softirq_locks[which]);
++}
+
-+ BUG_ON(rt_mutex_has_waiters(lock) && &waiter == rt_mutex_top_waiter(lock));
-+ BUG_ON(!RB_EMPTY_NODE(&waiter.tree_entry));
++static void unlock_softirq(int which)
++{
++ local_unlock(local_softirq_locks[which]);
++}
+
-+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
++static void do_single_softirq(int which)
++{
++ unsigned long old_flags = current->flags;
+
-+ debug_rt_mutex_free_waiter(&waiter);
++ current->flags &= ~PF_MEMALLOC;
++ vtime_account_irq_enter(current);
++ current->flags |= PF_IN_SOFTIRQ;
++ lockdep_softirq_enter();
++ local_irq_enable();
++ handle_softirq(which);
++ local_irq_disable();
++ lockdep_softirq_exit();
++ current->flags &= ~PF_IN_SOFTIRQ;
++ vtime_account_irq_enter(current);
++ current_restore_flags(old_flags, PF_MEMALLOC);
+}
+
-+static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
-+ struct wake_q_head *wake_sleeper_q,
-+ struct rt_mutex *lock);
+/*
-+ * Slow path to release a rt_mutex spin_lock style
++ * Called with interrupts disabled. Process softirqs which were raised
++ * in current context (or on behalf of ksoftirqd).
+ */
-+static int noinline __sched rt_spin_lock_slowunlock(struct rt_mutex *lock)
++static void do_current_softirqs(void)
+{
-+ unsigned long flags;
-+ WAKE_Q(wake_q);
-+ WAKE_Q(wake_sleeper_q);
-+
-+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
-+
-+ debug_rt_mutex_unlock(lock);
++ while (current->softirqs_raised) {
++ int i = __ffs(current->softirqs_raised);
++ unsigned int pending, mask = (1U << i);
+
-+ rt_mutex_deadlock_account_unlock(current);
++ current->softirqs_raised &= ~mask;
++ local_irq_enable();
+
-+ if (!rt_mutex_has_waiters(lock)) {
-+ lock->owner = NULL;
-+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
-+ return 0;
++ /*
++ * If the lock is contended, we boost the owner to
++ * process the softirq or leave the critical section
++ * now.
++ */
++ lock_softirq(i);
++ local_irq_disable();
++ softirq_set_runner(i);
++ /*
++ * Check with the local_softirq_pending() bits,
++ * whether we need to process this still or if someone
++ * else took care of it.
++ */
++ pending = local_softirq_pending();
++ if (pending & mask) {
++ set_softirq_pending(pending & ~mask);
++ do_single_softirq(i);
++ }
++ softirq_clr_runner(i);
++ WARN_ON(current->softirq_nestcnt != 1);
++ local_irq_enable();
++ unlock_softirq(i);
++ local_irq_disable();
+ }
-+
-+ mark_wakeup_next_waiter(&wake_q, &wake_sleeper_q, lock);
-+
-+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
-+ wake_up_q(&wake_q);
-+ wake_up_q_sleeper(&wake_sleeper_q);
-+
-+ /* Undo pi boosting.when necessary */
-+ rt_mutex_adjust_prio(current);
-+ return 0;
+}
+
-+static int noinline __sched rt_spin_lock_slowunlock_no_deboost(struct rt_mutex *lock)
++void __local_bh_disable(void)
+{
-+ unsigned long flags;
-+ WAKE_Q(wake_q);
-+ WAKE_Q(wake_sleeper_q);
-+
-+ raw_spin_lock_irqsave(&lock->wait_lock, flags);
-+
-+ debug_rt_mutex_unlock(lock);
-+
-+ rt_mutex_deadlock_account_unlock(current);
++ if (++current->softirq_nestcnt == 1)
++ migrate_disable();
++}
++EXPORT_SYMBOL(__local_bh_disable);
+
-+ if (!rt_mutex_has_waiters(lock)) {
-+ lock->owner = NULL;
-+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
-+ return 0;
-+ }
++void __local_bh_enable(void)
++{
++ if (WARN_ON(current->softirq_nestcnt == 0))
++ return;
+
-+ mark_wakeup_next_waiter(&wake_q, &wake_sleeper_q, lock);
++ local_irq_disable();
++ if (current->softirq_nestcnt == 1 && current->softirqs_raised)
++ do_current_softirqs();
++ local_irq_enable();
+
-+ raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
-+ wake_up_q(&wake_q);
-+ wake_up_q_sleeper(&wake_sleeper_q);
-+ return 1;
++ if (--current->softirq_nestcnt == 0)
++ migrate_enable();
+}
++EXPORT_SYMBOL(__local_bh_enable);
+
-+void __lockfunc rt_spin_lock__no_mg(spinlock_t *lock)
++void _local_bh_enable(void)
+{
-+ rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock, false);
-+ spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++ if (WARN_ON(current->softirq_nestcnt == 0))
++ return;
++ if (--current->softirq_nestcnt == 0)
++ migrate_enable();
+}
-+EXPORT_SYMBOL(rt_spin_lock__no_mg);
++EXPORT_SYMBOL(_local_bh_enable);
+
-+void __lockfunc rt_spin_lock(spinlock_t *lock)
++int in_serving_softirq(void)
+{
-+ rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock, true);
-+ spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
++ return current->flags & PF_IN_SOFTIRQ;
+}
-+EXPORT_SYMBOL(rt_spin_lock);
++EXPORT_SYMBOL(in_serving_softirq);
+
-+void __lockfunc __rt_spin_lock(struct rt_mutex *lock)
++/* Called with preemption disabled */
++static void run_ksoftirqd(unsigned int cpu)
+{
-+ rt_spin_lock_fastlock(lock, rt_spin_lock_slowlock, true);
++ local_irq_disable();
++ current->softirq_nestcnt++;
++
++ do_current_softirqs();
++ current->softirq_nestcnt--;
++ local_irq_enable();
++ cond_resched_rcu_qs();
+}
-+EXPORT_SYMBOL(__rt_spin_lock);
+
-+void __lockfunc __rt_spin_lock__no_mg(struct rt_mutex *lock)
++/*
++ * Called from netif_rx_ni(). Preemption enabled, but migration
++ * disabled. So the cpu can't go away under us.
++ */
++void thread_do_softirq(void)
+{
-+ rt_spin_lock_fastlock(lock, rt_spin_lock_slowlock, false);
++ if (!in_serving_softirq() && current->softirqs_raised) {
++ current->softirq_nestcnt++;
++ do_current_softirqs();
++ current->softirq_nestcnt--;
++ }
+}
-+EXPORT_SYMBOL(__rt_spin_lock__no_mg);
+
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+void __lockfunc rt_spin_lock_nested(spinlock_t *lock, int subclass)
++static void do_raise_softirq_irqoff(unsigned int nr)
+{
-+ spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
-+ rt_spin_lock_fastlock(&lock->lock, rt_spin_lock_slowlock, true);
++ unsigned int mask;
++
++ mask = 1UL << nr;
++
++ trace_softirq_raise(nr);
++ or_softirq_pending(mask);
++
++ /*
++ * If we are not in a hard interrupt and inside a bh disabled
++ * region, we simply raise the flag on current. local_bh_enable()
++ * will make sure that the softirq is executed. Otherwise we
++ * delegate it to ksoftirqd.
++ */
++ if (!in_irq() && current->softirq_nestcnt)
++ current->softirqs_raised |= mask;
++ else if (!__this_cpu_read(ksoftirqd) || !__this_cpu_read(ktimer_softirqd))
++ return;
++
++ if (mask & TIMER_SOFTIRQS)
++ __this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
++ else
++ __this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
+}
-+EXPORT_SYMBOL(rt_spin_lock_nested);
-+#endif
+
-+void __lockfunc rt_spin_unlock__no_mg(spinlock_t *lock)
++static void wakeup_proper_softirq(unsigned int nr)
+{
-+ /* NOTE: we always pass in '1' for nested, for simplicity */
-+ spin_release(&lock->dep_map, 1, _RET_IP_);
-+ rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock);
++ if ((1UL << nr) & TIMER_SOFTIRQS)
++ wakeup_timer_softirqd();
++ else
++ wakeup_softirqd();
+}
-+EXPORT_SYMBOL(rt_spin_unlock__no_mg);
+
-+void __lockfunc rt_spin_unlock(spinlock_t *lock)
++void __raise_softirq_irqoff(unsigned int nr)
+{
-+ /* NOTE: we always pass in '1' for nested, for simplicity */
-+ spin_release(&lock->dep_map, 1, _RET_IP_);
-+ rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock);
-+ migrate_enable();
++ do_raise_softirq_irqoff(nr);
++ if (!in_irq() && !current->softirq_nestcnt)
++ wakeup_proper_softirq(nr);
+}
-+EXPORT_SYMBOL(rt_spin_unlock);
+
-+int __lockfunc rt_spin_unlock_no_deboost(spinlock_t *lock)
++/*
++ * Same as __raise_softirq_irqoff() but will process them in ksoftirqd
++ */
++void __raise_softirq_irqoff_ksoft(unsigned int nr)
+{
-+ int ret;
++ unsigned int mask;
+
-+ /* NOTE: we always pass in '1' for nested, for simplicity */
-+ spin_release(&lock->dep_map, 1, _RET_IP_);
-+ ret = rt_spin_lock_fastunlock(&lock->lock, rt_spin_lock_slowunlock_no_deboost);
-+ migrate_enable();
-+ return ret;
-+}
++ if (WARN_ON_ONCE(!__this_cpu_read(ksoftirqd) ||
++ !__this_cpu_read(ktimer_softirqd)))
++ return;
++ mask = 1UL << nr;
+
-+void __lockfunc __rt_spin_unlock(struct rt_mutex *lock)
-+{
-+ rt_spin_lock_fastunlock(lock, rt_spin_lock_slowunlock);
++ trace_softirq_raise(nr);
++ or_softirq_pending(mask);
++ if (mask & TIMER_SOFTIRQS)
++ __this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
++ else
++ __this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
++ wakeup_proper_softirq(nr);
+}
-+EXPORT_SYMBOL(__rt_spin_unlock);
+
+/*
-+ * Wait for the lock to get unlocked: instead of polling for an unlock
-+ * (like raw spinlocks do), we lock and unlock, to force the kernel to
-+ * schedule if there's contention:
++ * This function must run with irqs disabled!
+ */
-+void __lockfunc rt_spin_unlock_wait(spinlock_t *lock)
++void raise_softirq_irqoff(unsigned int nr)
+{
-+ spin_lock(lock);
-+ spin_unlock(lock);
++ do_raise_softirq_irqoff(nr);
++
++ /*
++	 * If we're in a hard interrupt we let the irq return code deal
++	 * with the wakeup of ksoftirqd.
++ */
++ if (in_irq())
++ return;
++ /*
++ * If we are in thread context but outside of a bh disabled
++ * region, we need to wake ksoftirqd as well.
++ *
++ * CHECKME: Some of the places which do that could be wrapped
++ * into local_bh_disable/enable pairs. Though it's unclear
++ * whether this is worth the effort. To find those places just
++ * raise a WARN() if the condition is met.
++ */
++ if (!current->softirq_nestcnt)
++ wakeup_proper_softirq(nr);
+}
-+EXPORT_SYMBOL(rt_spin_unlock_wait);
+
-+int __lockfunc rt_spin_trylock__no_mg(spinlock_t *lock)
++static inline int ksoftirqd_softirq_pending(void)
+{
-+ int ret;
++ return current->softirqs_raised;
++}
+
-+ ret = rt_mutex_trylock(&lock->lock);
-+ if (ret)
-+ spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
-+ return ret;
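++/*
++ * The *_nort() variants used by irq_enter() below are no-ops in the
++ * PREEMPT_RT_FULL configuration.
++ */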
++static inline void local_bh_disable_nort(void) { }
++static inline void _local_bh_enable_nort(void) { }
++
++static inline void ksoftirqd_set_sched_params(unsigned int cpu)
++{
++ /* Take over all but timer pending softirqs when starting */
++ local_irq_disable();
++ current->softirqs_raised = local_softirq_pending() & ~TIMER_SOFTIRQS;
++ local_irq_enable();
+}
-+EXPORT_SYMBOL(rt_spin_trylock__no_mg);
+
-+int __lockfunc rt_spin_trylock(spinlock_t *lock)
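++/*
++ * Timer softirqs (TIMER_SOFTIRQS) are handled by the separate
++ * ktimersoftd thread, which runs at SCHED_FIFO priority 1.
++ */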
++static inline void ktimer_softirqd_set_sched_params(unsigned int cpu)
+{
-+ int ret;
++ struct sched_param param = { .sched_priority = 1 };
+
-+ migrate_disable();
-+ ret = rt_mutex_trylock(&lock->lock);
-+ if (ret)
-+ spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
-+ else
-+ migrate_enable();
-+ return ret;
++	sched_setscheduler(current, SCHED_FIFO, &param);
++
++ /* Take over timer pending softirqs when starting */
++ local_irq_disable();
++ current->softirqs_raised = local_softirq_pending() & TIMER_SOFTIRQS;
++ local_irq_enable();
+}
-+EXPORT_SYMBOL(rt_spin_trylock);
+
-+int __lockfunc rt_spin_trylock_bh(spinlock_t *lock)
++static inline void ktimer_softirqd_clr_sched_params(unsigned int cpu,
++ bool online)
+{
-+ int ret;
++ struct sched_param param = { .sched_priority = 0 };
+
-+ local_bh_disable();
-+ ret = rt_mutex_trylock(&lock->lock);
-+ if (ret) {
-+ migrate_disable();
-+ spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
-+ } else
-+ local_bh_enable();
-+ return ret;
++	sched_setscheduler(current, SCHED_NORMAL, &param);
+}
-+EXPORT_SYMBOL(rt_spin_trylock_bh);
+
-+int __lockfunc rt_spin_trylock_irqsave(spinlock_t *lock, unsigned long *flags)
++static int ktimer_softirqd_should_run(unsigned int cpu)
+{
-+ int ret;
-+
-+ *flags = 0;
-+ ret = rt_mutex_trylock(&lock->lock);
-+ if (ret) {
-+ migrate_disable();
-+ spin_acquire(&lock->dep_map, 0, 1, _RET_IP_);
-+ }
-+ return ret;
++ return current->softirqs_raised;
+}
-+EXPORT_SYMBOL(rt_spin_trylock_irqsave);
+
-+int atomic_dec_and_spin_lock(atomic_t *atomic, spinlock_t *lock)
++#endif /* PREEMPT_RT_FULL */
++/*
+ * Enter an interrupt context.
+ */
+ void irq_enter(void)
+@@ -345,9 +789,9 @@
+ * Prevent raise_softirq from needlessly waking up ksoftirqd
+ * here, as softirq will be serviced on return from interrupt.
+ */
+- local_bh_disable();
++ local_bh_disable_nort();
+ tick_irq_enter();
+- _local_bh_enable();
++ _local_bh_enable_nort();
+ }
+
+ __irq_enter();
+@@ -355,6 +799,7 @@
+
+ static inline void invoke_softirq(void)
+ {
++#ifndef CONFIG_PREEMPT_RT_FULL
+ if (ksoftirqd_running(local_softirq_pending()))
+ return;
+
+@@ -377,6 +822,18 @@
+ } else {
+ wakeup_softirqd();
+ }
++#else /* PREEMPT_RT_FULL */
++ unsigned long flags;
++
++ local_irq_save(flags);
++ if (__this_cpu_read(ksoftirqd) &&
++ __this_cpu_read(ksoftirqd)->softirqs_raised)
++ wakeup_softirqd();
++ if (__this_cpu_read(ktimer_softirqd) &&
++ __this_cpu_read(ktimer_softirqd)->softirqs_raised)
++ wakeup_timer_softirqd();
++ local_irq_restore(flags);
++#endif
+ }
+
+ static inline void tick_irq_exit(void)
+@@ -385,7 +842,13 @@
+ int cpu = smp_processor_id();
+
+ /* Make sure that timer wheel updates are propagated */
+- if ((idle_cpu(cpu) && !need_resched()) || tick_nohz_full_cpu(cpu)) {
++#ifdef CONFIG_PREEMPT_RT_BASE
++ if ((idle_cpu(cpu) || tick_nohz_full_cpu(cpu)) &&
++ !need_resched() && !local_softirq_pending())
++#else
++ if ((idle_cpu(cpu) && !need_resched()) || tick_nohz_full_cpu(cpu))
++#endif
++ {
+ if (!in_irq())
+ tick_nohz_irq_exit();
+ }
+@@ -413,26 +876,6 @@
+ trace_hardirq_exit(); /* must be last! */
+ }
+
+-/*
+- * This function must run with irqs disabled!
+- */
+-inline void raise_softirq_irqoff(unsigned int nr)
+-{
+- __raise_softirq_irqoff(nr);
+-
+- /*
+- * If we're in an interrupt or softirq, we're done
+- * (this also catches softirq-disabled code). We will
+- * actually run the softirq once we return from
+- * the irq or softirq.
+- *
+- * Otherwise we wake up ksoftirqd to make sure we
+- * schedule the softirq soon.
+- */
+- if (!in_interrupt())
+- wakeup_softirqd();
+-}
+-
+ void raise_softirq(unsigned int nr)
+ {
+ unsigned long flags;
+@@ -442,12 +885,6 @@
+ local_irq_restore(flags);
+ }
+
+-void __raise_softirq_irqoff(unsigned int nr)
+-{
+- trace_softirq_raise(nr);
+- or_softirq_pending(1UL << nr);
+-}
+-
+ void open_softirq(int nr, void (*action)(struct softirq_action *))
+ {
+ softirq_vec[nr].action = action;
+@@ -464,15 +901,45 @@
+ static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec);
+ static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec);
+
++static void inline
++__tasklet_common_schedule(struct tasklet_struct *t, struct tasklet_head *head, unsigned int nr)
+{
-+ /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
-+ if (atomic_add_unless(atomic, -1, 1))
-+ return 0;
-+ rt_spin_lock(lock);
-+ if (atomic_dec_and_test(atomic))
-+ return 1;
-+ rt_spin_unlock(lock);
-+ return 0;
++ if (tasklet_trylock(t)) {
++again:
++ /* We may have been preempted before tasklet_trylock
++ * and __tasklet_action may have already run.
++		 * So double-check the sched bit while the tasklet
++ * is locked before adding it to the list.
++ */
++ if (test_bit(TASKLET_STATE_SCHED, &t->state)) {
++ t->next = NULL;
++ *head->tail = t;
++ head->tail = &(t->next);
++ raise_softirq_irqoff(nr);
++ tasklet_unlock(t);
++ } else {
++			/* This is subtle. If we hit the corner case above,
++			 * it is possible that we get preempted right here,
++			 * and another task has successfully called
++			 * tasklet_schedule(), then this function, and
++			 * failed on the trylock. Thus we must be sure,
++			 * before releasing the tasklet lock, that the
++			 * SCHED_BIT is clear. Otherwise the tasklet
++			 * may get its SCHED_BIT set, but not added to the
++			 * list.
++ */
++ if (!tasklet_tryunlock(t))
++ goto again;
++ }
++ }
+}
-+EXPORT_SYMBOL(atomic_dec_and_spin_lock);
+
-+ void
-+__rt_spin_lock_init(spinlock_t *lock, char *name, struct lock_class_key *key)
-+{
-+#ifdef CONFIG_DEBUG_LOCK_ALLOC
-+ /*
-+ * Make sure we are not reinitializing a held lock:
-+ */
-+ debug_check_no_locks_freed((void *)lock, sizeof(*lock));
-+ lockdep_init_map(&lock->dep_map, name, key, 0);
-+#endif
+ void __tasklet_schedule(struct tasklet_struct *t)
+ {
+ unsigned long flags;
+
+ local_irq_save(flags);
+- t->next = NULL;
+- *__this_cpu_read(tasklet_vec.tail) = t;
+- __this_cpu_write(tasklet_vec.tail, &(t->next));
+- raise_softirq_irqoff(TASKLET_SOFTIRQ);
++ __tasklet_common_schedule(t, this_cpu_ptr(&tasklet_vec), TASKLET_SOFTIRQ);
+ local_irq_restore(flags);
+ }
+ EXPORT_SYMBOL(__tasklet_schedule);
+@@ -482,50 +949,108 @@
+ unsigned long flags;
+
+ local_irq_save(flags);
+- t->next = NULL;
+- *__this_cpu_read(tasklet_hi_vec.tail) = t;
+- __this_cpu_write(tasklet_hi_vec.tail, &(t->next));
+- raise_softirq_irqoff(HI_SOFTIRQ);
++ __tasklet_common_schedule(t, this_cpu_ptr(&tasklet_hi_vec), HI_SOFTIRQ);
+ local_irq_restore(flags);
+ }
+ EXPORT_SYMBOL(__tasklet_hi_schedule);
+
+-static __latent_entropy void tasklet_action(struct softirq_action *a)
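++/*
++ * A tasklet may be disabled while it is queued or running. If
++ * __tasklet_action() marked it PENDING in the meantime, re-schedule
++ * it once it gets enabled again.
++ */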
++void tasklet_enable(struct tasklet_struct *t)
+ {
+- struct tasklet_struct *list;
++ if (!atomic_dec_and_test(&t->count))
++ return;
++ if (test_and_clear_bit(TASKLET_STATE_PENDING, &t->state))
++ tasklet_schedule(t);
+}
-+EXPORT_SYMBOL(__rt_spin_lock_init);
++EXPORT_SYMBOL(tasklet_enable);
+
+- local_irq_disable();
+- list = __this_cpu_read(tasklet_vec.head);
+- __this_cpu_write(tasklet_vec.head, NULL);
+- __this_cpu_write(tasklet_vec.tail, this_cpu_ptr(&tasklet_vec.head));
+- local_irq_enable();
++static void __tasklet_action(struct softirq_action *a,
++ struct tasklet_struct *list)
++{
++ int loops = 1000000;
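++	/* Bound the tasklet_tryunlock() retry loop below; warn if exhausted. */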
+
+ while (list) {
+ struct tasklet_struct *t = list;
+
+ list = list->next;
+
+- if (tasklet_trylock(t)) {
+- if (!atomic_read(&t->count)) {
+- if (!test_and_clear_bit(TASKLET_STATE_SCHED,
+- &t->state))
+- BUG();
+- t->func(t->data);
+- tasklet_unlock(t);
+- continue;
+- }
+- tasklet_unlock(t);
++ /*
++	 * Should always succeed - after a tasklet got on the
++ * list (after getting the SCHED bit set from 0 to 1),
++ * nothing but the tasklet softirq it got queued to can
++ * lock it:
++ */
++ if (!tasklet_trylock(t)) {
++ WARN_ON(1);
++ continue;
+ }
+
+- local_irq_disable();
+ t->next = NULL;
+- *__this_cpu_read(tasklet_vec.tail) = t;
+- __this_cpu_write(tasklet_vec.tail, &(t->next));
+- __raise_softirq_irqoff(TASKLET_SOFTIRQ);
+- local_irq_enable();
+
-+#endif /* PREEMPT_RT_FULL */
++ /*
++ * If we cannot handle the tasklet because it's disabled,
++ * mark it as pending. tasklet_enable() will later
++ * re-schedule the tasklet.
++ */
++ if (unlikely(atomic_read(&t->count))) {
++out_disabled:
++ /* implicit unlock: */
++ wmb();
++ t->state = TASKLET_STATEF_PENDING;
++ continue;
++ }
+
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ static inline int __sched
-+__mutex_lock_check_stamp(struct rt_mutex *lock, struct ww_acquire_ctx *ctx)
-+{
-+ struct ww_mutex *ww = container_of(lock, struct ww_mutex, base.lock);
-+ struct ww_acquire_ctx *hold_ctx = ACCESS_ONCE(ww->ctx);
++ /*
++		 * From this point on the tasklet might be rescheduled
++		 * on another CPU, but it can only be added to another
++		 * CPU's tasklet list if we unlock the tasklet (which we
++		 * don't do yet).
++ */
++ if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
++ WARN_ON(1);
+
-+ if (!hold_ctx)
-+ return 0;
++again:
++ t->func(t->data);
+
-+ if (unlikely(ctx == hold_ctx))
-+ return -EALREADY;
++ /*
++ * Try to unlock the tasklet. We must use cmpxchg, because
++ * another CPU might have scheduled or disabled the tasklet.
++ * We only allow the STATE_RUN -> 0 transition here.
++ */
++ while (!tasklet_tryunlock(t)) {
++ /*
++ * If it got disabled meanwhile, bail out:
++ */
++ if (atomic_read(&t->count))
++ goto out_disabled;
++ /*
++ * If it got scheduled meanwhile, re-execute
++ * the tasklet function:
++ */
++ if (test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
++ goto again;
++ if (!--loops) {
++ printk("hm, tasklet state: %08lx\n", t->state);
++ WARN_ON(1);
++ tasklet_unlock(t);
++ break;
++ }
++ }
+ }
+ }
+
++static __latent_entropy void tasklet_action(struct softirq_action *a)
++{
++ struct tasklet_struct *list;
+
-+ if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
-+ (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
-+#ifdef CONFIG_DEBUG_MUTEXES
-+ DEBUG_LOCKS_WARN_ON(ctx->contending_lock);
-+ ctx->contending_lock = ww;
-+#endif
-+ return -EDEADLK;
-+ }
++ local_irq_disable();
++ list = __this_cpu_read(tasklet_vec.head);
++ __this_cpu_write(tasklet_vec.head, NULL);
++ __this_cpu_write(tasklet_vec.tail, this_cpu_ptr(&tasklet_vec.head));
++ local_irq_enable();
+
-+ return 0;
++ __tasklet_action(a, list);
+}
++
+ static __latent_entropy void tasklet_hi_action(struct softirq_action *a)
+ {
+ struct tasklet_struct *list;
+@@ -536,30 +1061,7 @@
+ __this_cpu_write(tasklet_hi_vec.tail, this_cpu_ptr(&tasklet_hi_vec.head));
+ local_irq_enable();
+
+- while (list) {
+- struct tasklet_struct *t = list;
+-
+- list = list->next;
+-
+- if (tasklet_trylock(t)) {
+- if (!atomic_read(&t->count)) {
+- if (!test_and_clear_bit(TASKLET_STATE_SCHED,
+- &t->state))
+- BUG();
+- t->func(t->data);
+- tasklet_unlock(t);
+- continue;
+- }
+- tasklet_unlock(t);
+- }
+-
+- local_irq_disable();
+- t->next = NULL;
+- *__this_cpu_read(tasklet_hi_vec.tail) = t;
+- __this_cpu_write(tasklet_hi_vec.tail, &(t->next));
+- __raise_softirq_irqoff(HI_SOFTIRQ);
+- local_irq_enable();
+- }
++ __tasklet_action(a, list);
+ }
+
+ void tasklet_init(struct tasklet_struct *t,
+@@ -580,7 +1082,7 @@
+
+ while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
+ do {
+- yield();
++ msleep(1);
+ } while (test_bit(TASKLET_STATE_SCHED, &t->state));
+ }
+ tasklet_unlock_wait(t);
+@@ -588,57 +1090,6 @@
+ }
+ EXPORT_SYMBOL(tasklet_kill);
+
+-/*
+- * tasklet_hrtimer
+- */
+-
+-/*
+- * The trampoline is called when the hrtimer expires. It schedules a tasklet
+- * to run __tasklet_hrtimer_trampoline() which in turn will call the intended
+- * hrtimer callback, but from softirq context.
+- */
+-static enum hrtimer_restart __hrtimer_tasklet_trampoline(struct hrtimer *timer)
+-{
+- struct tasklet_hrtimer *ttimer =
+- container_of(timer, struct tasklet_hrtimer, timer);
+-
+- tasklet_hi_schedule(&ttimer->tasklet);
+- return HRTIMER_NORESTART;
+-}
+-
+-/*
+- * Helper function which calls the hrtimer callback from
+- * tasklet/softirq context
+- */
+-static void __tasklet_hrtimer_trampoline(unsigned long data)
+-{
+- struct tasklet_hrtimer *ttimer = (void *)data;
+- enum hrtimer_restart restart;
+-
+- restart = ttimer->function(&ttimer->timer);
+- if (restart != HRTIMER_NORESTART)
+- hrtimer_restart(&ttimer->timer);
+-}
+-
+-/**
+- * tasklet_hrtimer_init - Init a tasklet/hrtimer combo for softirq callbacks
+- * @ttimer: tasklet_hrtimer which is initialized
+- * @function: hrtimer callback function which gets called from softirq context
+- * @which_clock: clock id (CLOCK_MONOTONIC/CLOCK_REALTIME)
+- * @mode: hrtimer mode (HRTIMER_MODE_ABS/HRTIMER_MODE_REL)
+- */
+-void tasklet_hrtimer_init(struct tasklet_hrtimer *ttimer,
+- enum hrtimer_restart (*function)(struct hrtimer *),
+- clockid_t which_clock, enum hrtimer_mode mode)
+-{
+- hrtimer_init(&ttimer->timer, which_clock, mode);
+- ttimer->timer.function = __hrtimer_tasklet_trampoline;
+- tasklet_init(&ttimer->tasklet, __tasklet_hrtimer_trampoline,
+- (unsigned long)ttimer);
+- ttimer->function = function;
+-}
+-EXPORT_SYMBOL_GPL(tasklet_hrtimer_init);
+-
+ void __init softirq_init(void)
+ {
+ int cpu;
+@@ -654,25 +1105,26 @@
+ open_softirq(HI_SOFTIRQ, tasklet_hi_action);
+ }
+
+-static int ksoftirqd_should_run(unsigned int cpu)
+-{
+- return local_softirq_pending();
+-}
+-
+-static void run_ksoftirqd(unsigned int cpu)
++#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
++void tasklet_unlock_wait(struct tasklet_struct *t)
+ {
+- local_irq_disable();
+- if (local_softirq_pending()) {
++ while (test_bit(TASKLET_STATE_RUN, &(t)->state)) {
+ /*
+- * We can safely run softirq on inline stack, as we are not deep
+- * in the task stack here.
++ * Hack for now to avoid this busy-loop:
+ */
+- __do_softirq();
+- local_irq_enable();
+- cond_resched_rcu_qs();
+- return;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ msleep(1);
+#else
-+ static inline int __sched
-+__mutex_lock_check_stamp(struct rt_mutex *lock, struct ww_acquire_ctx *ctx)
-+{
-+ BUG();
-+ return 0;
++ barrier();
++#endif
+ }
+- local_irq_enable();
+}
-+
++EXPORT_SYMBOL(tasklet_unlock_wait);
+#endif
+
-+static inline int
-+try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
-+ struct rt_mutex_waiter *waiter)
++static int ksoftirqd_should_run(unsigned int cpu)
+{
-+ return __try_to_take_rt_mutex(lock, task, waiter, STEAL_NORMAL);
-+}
-+
- /*
- * Task blocks on lock.
- *
-@@ -971,6 +1440,23 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
- return -EDEADLK;
++ return ksoftirqd_softirq_pending();
+ }
- raw_spin_lock(&task->pi_lock);
-+
-+ /*
-+ * In the case of futex requeue PI, this will be a proxy
-+ * lock. The task will wake unaware that it is enqueueed on
-+ * this lock. Avoid blocking on two locks and corrupting
-+ * pi_blocked_on via the PI_WAKEUP_INPROGRESS
-+ * flag. futex_wait_requeue_pi() sets this when it wakes up
-+ * before requeue (due to a signal or timeout). Do not enqueue
-+ * the task if PI_WAKEUP_INPROGRESS is set.
-+ */
-+ if (task != current && task->pi_blocked_on == PI_WAKEUP_INPROGRESS) {
-+ raw_spin_unlock(&task->pi_lock);
-+ return -EAGAIN;
-+ }
+ #ifdef CONFIG_HOTPLUG_CPU
+@@ -739,17 +1191,31 @@
+
+ static struct smp_hotplug_thread softirq_threads = {
+ .store = &ksoftirqd,
++ .setup = ksoftirqd_set_sched_params,
+ .thread_should_run = ksoftirqd_should_run,
+ .thread_fn = run_ksoftirqd,
+ .thread_comm = "ksoftirqd/%u",
+ };
+
++#ifdef CONFIG_PREEMPT_RT_FULL
++static struct smp_hotplug_thread softirq_timer_threads = {
++ .store = &ktimer_softirqd,
++ .setup = ktimer_softirqd_set_sched_params,
++ .cleanup = ktimer_softirqd_clr_sched_params,
++ .thread_should_run = ktimer_softirqd_should_run,
++ .thread_fn = run_ksoftirqd,
++ .thread_comm = "ktimersoftd/%u",
++};
++#endif
+
-+ BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on));
+ static __init int spawn_ksoftirqd(void)
+ {
+ cpuhp_setup_state_nocalls(CPUHP_SOFTIRQ_DEAD, "softirq:dead", NULL,
+ takeover_tasklets);
+ BUG_ON(smpboot_register_percpu_thread(&softirq_threads));
+-
++#ifdef CONFIG_PREEMPT_RT_FULL
++ BUG_ON(smpboot_register_percpu_thread(&softirq_timer_threads));
++#endif
+ return 0;
+ }
+ early_initcall(spawn_ksoftirqd);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/stop_machine.c linux-4.14/kernel/stop_machine.c
+--- linux-4.14.orig/kernel/stop_machine.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/stop_machine.c 2018-09-05 11:05:07.000000000 +0200
+@@ -496,6 +496,8 @@
+ struct cpu_stop_done *done = work->done;
+ int ret;
+
++ /* XXX */
+
- __rt_mutex_adjust_prio(task);
- waiter->task = task;
- waiter->lock = lock;
-@@ -994,7 +1480,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
- rt_mutex_enqueue_pi(owner, waiter);
+ /* cpu stop callbacks must not sleep, make in_atomic() == T */
+ preempt_count_inc();
+ ret = fn(arg);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/alarmtimer.c linux-4.14/kernel/time/alarmtimer.c
+--- linux-4.14.orig/kernel/time/alarmtimer.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/time/alarmtimer.c 2018-09-05 11:05:07.000000000 +0200
+@@ -436,7 +436,7 @@
+ int ret = alarm_try_to_cancel(alarm);
+ if (ret >= 0)
+ return ret;
+- cpu_relax();
++ hrtimer_wait_for_timer(&alarm->timer);
+ }
+ }
+ EXPORT_SYMBOL_GPL(alarm_cancel);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/hrtimer.c linux-4.14/kernel/time/hrtimer.c
+--- linux-4.14.orig/kernel/time/hrtimer.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/time/hrtimer.c 2018-09-05 11:05:07.000000000 +0200
+@@ -60,6 +60,15 @@
+ #include "tick-internal.h"
- __rt_mutex_adjust_prio(owner);
-- if (owner->pi_blocked_on)
-+ if (rt_mutex_real_waiter(owner->pi_blocked_on))
- chain_walk = 1;
- } else if (rt_mutex_cond_detect_deadlock(waiter, chwalk)) {
- chain_walk = 1;
-@@ -1036,6 +1522,7 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
- * Called with lock->wait_lock held and interrupts disabled.
- */
- static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
-+ struct wake_q_head *wake_sleeper_q,
- struct rt_mutex *lock)
+ /*
++ * Masks for selecting the soft and hard context timers from
++ * cpu_base->active
++ */
++#define MASK_SHIFT (HRTIMER_BASE_MONOTONIC_SOFT)
++#define HRTIMER_ACTIVE_HARD ((1U << MASK_SHIFT) - 1)
++#define HRTIMER_ACTIVE_SOFT (HRTIMER_ACTIVE_HARD << MASK_SHIFT)
++#define HRTIMER_ACTIVE_ALL (HRTIMER_ACTIVE_SOFT | HRTIMER_ACTIVE_HARD)
++
++/*
+ * The timer bases:
+ *
+ * There are more clockids than hrtimer bases. Thus, we index
+@@ -70,7 +79,6 @@
+ DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
{
- struct rt_mutex_waiter *waiter;
-@@ -1064,7 +1551,10 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
+ .lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock),
+- .seq = SEQCNT_ZERO(hrtimer_bases.seq),
+ .clock_base =
+ {
+ {
+@@ -93,6 +101,26 @@
+ .clockid = CLOCK_TAI,
+ .get_time = &ktime_get_clocktai,
+ },
++ {
++ .index = HRTIMER_BASE_MONOTONIC_SOFT,
++ .clockid = CLOCK_MONOTONIC,
++ .get_time = &ktime_get,
++ },
++ {
++ .index = HRTIMER_BASE_REALTIME_SOFT,
++ .clockid = CLOCK_REALTIME,
++ .get_time = &ktime_get_real,
++ },
++ {
++ .index = HRTIMER_BASE_BOOTTIME_SOFT,
++ .clockid = CLOCK_BOOTTIME,
++ .get_time = &ktime_get_boottime,
++ },
++ {
++ .index = HRTIMER_BASE_TAI_SOFT,
++ .clockid = CLOCK_TAI,
++ .get_time = &ktime_get_clocktai,
++ },
+ }
+ };
-	raw_spin_unlock(&current->pi_lock);
+@@ -118,7 +146,6 @@
+ * timer->base->cpu_base
+ */
+ static struct hrtimer_cpu_base migration_cpu_base = {
+- .seq = SEQCNT_ZERO(migration_cpu_base),
+ .clock_base = { { .cpu_base = &migration_cpu_base, }, },
+ };
-- wake_q_add(wake_q, waiter->task);
-+ if (waiter->savestate)
-+ wake_q_add(wake_sleeper_q, waiter->task);
-+ else
-+ wake_q_add(wake_q, waiter->task);
+@@ -156,45 +183,33 @@
}
/*
-@@ -1078,7 +1568,7 @@ static void remove_waiter(struct rt_mutex *lock,
+- * With HIGHRES=y we do not migrate the timer when it is expiring
+- * before the next event on the target cpu because we cannot reprogram
+- * the target cpu hardware and we would cause it to fire late.
++ * We do not migrate the timer when it is expiring before the next
++ * event on the target cpu. When high resolution is enabled, we cannot
++ * reprogram the target cpu hardware and we would cause it to fire
++ * late. To keep it simple, we handle the high resolution enabled and
++ * disabled cases the same way.
+ *
+ * Called with cpu_base->lock of target cpu held.
+ */
+ static int
+ hrtimer_check_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base)
{
- bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock));
- struct task_struct *owner = rt_mutex_owner(lock);
-- struct rt_mutex *next_lock;
-+ struct rt_mutex *next_lock = NULL;
+-#ifdef CONFIG_HIGH_RES_TIMERS
+ ktime_t expires;
-	raw_spin_lock(&current->pi_lock);
- rt_mutex_dequeue(lock, waiter);
-@@ -1102,7 +1592,8 @@ static void remove_waiter(struct rt_mutex *lock,
- __rt_mutex_adjust_prio(owner);
-
- /* Store the lock on which owner is blocked or NULL */
-- next_lock = task_blocked_on_lock(owner);
-+ if (rt_mutex_real_waiter(owner->pi_blocked_on))
-+ next_lock = task_blocked_on_lock(owner);
+- if (!new_base->cpu_base->hres_active)
+- return 0;
+-
+ expires = ktime_sub(hrtimer_get_expires(timer), new_base->offset);
+- return expires <= new_base->cpu_base->expires_next;
+-#else
+- return 0;
+-#endif
++ return expires < new_base->cpu_base->expires_next;
+ }
- raw_spin_unlock(&owner->pi_lock);
+-#ifdef CONFIG_NO_HZ_COMMON
+-static inline
+-struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base,
+- int pinned)
+-{
+- if (pinned || !base->migration_enabled)
+- return base;
+- return &per_cpu(hrtimer_bases, get_nohz_timer_target());
+-}
+-#else
+ static inline
+ struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base,
+ int pinned)
+ {
++#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
++ if (static_branch_unlikely(&timers_migration_enabled) && !pinned)
++ return &per_cpu(hrtimer_bases, get_nohz_timer_target());
++#endif
+ return base;
+ }
+-#endif
-@@ -1138,17 +1629,17 @@ void rt_mutex_adjust_pi(struct task_struct *task)
- raw_spin_lock_irqsave(&task->pi_lock, flags);
+ /*
+ * We switch the timer base to a power-optimized selected CPU target,
+@@ -396,7 +411,8 @@
+ debug_object_init(timer, &hrtimer_debug_descr);
+ }
- waiter = task->pi_blocked_on;
-- if (!waiter || (waiter->prio == task->prio &&
-+ if (!rt_mutex_real_waiter(waiter) || (waiter->prio == task->prio &&
- !dl_prio(task->prio))) {
- raw_spin_unlock_irqrestore(&task->pi_lock, flags);
- return;
- }
- next_lock = waiter->lock;
-- raw_spin_unlock_irqrestore(&task->pi_lock, flags);
+-static inline void debug_hrtimer_activate(struct hrtimer *timer)
++static inline void debug_hrtimer_activate(struct hrtimer *timer,
++ enum hrtimer_mode mode)
+ {
+ debug_object_activate(timer, &hrtimer_debug_descr);
+ }
+@@ -429,8 +445,10 @@
+ EXPORT_SYMBOL_GPL(destroy_hrtimer_on_stack);
- /* gets dropped in rt_mutex_adjust_prio_chain()! */
- get_task_struct(task);
+ #else
++
+ static inline void debug_hrtimer_init(struct hrtimer *timer) { }
+-static inline void debug_hrtimer_activate(struct hrtimer *timer) { }
++static inline void debug_hrtimer_activate(struct hrtimer *timer,
++ enum hrtimer_mode mode) { }
+ static inline void debug_hrtimer_deactivate(struct hrtimer *timer) { }
+ #endif
-+ raw_spin_unlock_irqrestore(&task->pi_lock, flags);
- rt_mutex_adjust_prio_chain(task, RT_MUTEX_MIN_CHAINWALK, NULL,
- next_lock, NULL, task);
+@@ -442,10 +460,11 @@
+ trace_hrtimer_init(timer, clockid, mode);
}
-@@ -1166,7 +1657,8 @@ void rt_mutex_adjust_pi(struct task_struct *task)
- static int __sched
- __rt_mutex_slowlock(struct rt_mutex *lock, int state,
- struct hrtimer_sleeper *timeout,
-- struct rt_mutex_waiter *waiter)
-+ struct rt_mutex_waiter *waiter,
-+ struct ww_acquire_ctx *ww_ctx)
- {
- int ret = 0;
-@@ -1189,6 +1681,12 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state,
- break;
- }
-
-+ if (ww_ctx && ww_ctx->acquired > 0) {
-+ ret = __mutex_lock_check_stamp(lock, ww_ctx);
-+ if (ret)
-+ break;
-+ }
-+
- raw_spin_unlock_irq(&lock->wait_lock);
+-static inline void debug_activate(struct hrtimer *timer)
++static inline void debug_activate(struct hrtimer *timer,
++ enum hrtimer_mode mode)
+ {
+- debug_hrtimer_activate(timer);
+- trace_hrtimer_start(timer);
++ debug_hrtimer_activate(timer, mode);
++ trace_hrtimer_start(timer, mode);
+ }
- debug_rt_mutex_print_deadlock(waiter);
-@@ -1223,21 +1721,96 @@ static void rt_mutex_handle_deadlock(int res, int detect_deadlock,
- }
+ static inline void debug_deactivate(struct hrtimer *timer)
+@@ -454,35 +473,43 @@
+ trace_hrtimer_cancel(timer);
}
-+static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
-+ struct ww_acquire_ctx *ww_ctx)
-+{
-+#ifdef CONFIG_DEBUG_MUTEXES
-+ /*
-+ * If this WARN_ON triggers, you used ww_mutex_lock to acquire,
-+ * but released with a normal mutex_unlock in this call.
-+ *
-+ * This should never happen, always use ww_mutex_unlock.
-+ */
-+ DEBUG_LOCKS_WARN_ON(ww->ctx);
+-#if defined(CONFIG_NO_HZ_COMMON) || defined(CONFIG_HIGH_RES_TIMERS)
+-static inline void hrtimer_update_next_timer(struct hrtimer_cpu_base *cpu_base,
+- struct hrtimer *timer)
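++/*
++ * Return the clock base for the lowest set bit in @active and clear
++ * that bit, or NULL when no bit is left. Used by the
++ * for_each_active_base() iterator below.
++ */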
++static struct hrtimer_clock_base *
++__next_base(struct hrtimer_cpu_base *cpu_base, unsigned int *active)
+ {
+-#ifdef CONFIG_HIGH_RES_TIMERS
+- cpu_base->next_timer = timer;
+-#endif
++ unsigned int idx;
+
-+ /*
-+ * Not quite done after calling ww_acquire_done() ?
-+ */
-+ DEBUG_LOCKS_WARN_ON(ww_ctx->done_acquire);
++ if (!*active)
++ return NULL;
+
-+ if (ww_ctx->contending_lock) {
-+ /*
-+ * After -EDEADLK you tried to
-+ * acquire a different ww_mutex? Bad!
-+ */
-+ DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock != ww);
++ idx = __ffs(*active);
++ *active &= ~(1U << idx);
+
-+ /*
-+ * You called ww_mutex_lock after receiving -EDEADLK,
-+ * but 'forgot' to unlock everything else first?
-+ */
-+ DEBUG_LOCKS_WARN_ON(ww_ctx->acquired > 0);
-+ ww_ctx->contending_lock = NULL;
-+ }
++ return &cpu_base->clock_base[idx];
+ }
+
+-static ktime_t __hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base)
++#define for_each_active_base(base, cpu_base, active) \
++ while ((base = __next_base((cpu_base), &(active))))
+
-+ /*
-+ * Naughty, using a different class will lead to undefined behavior!
-+ */
-+ DEBUG_LOCKS_WARN_ON(ww_ctx->ww_class != ww->ww_class);
-+#endif
-+ ww_ctx->acquired++;
-+}
++static ktime_t __hrtimer_next_event_base(struct hrtimer_cpu_base *cpu_base,
++ unsigned int active,
++ ktime_t expires_next)
+ {
+- struct hrtimer_clock_base *base = cpu_base->clock_base;
+- unsigned int active = cpu_base->active_bases;
+- ktime_t expires, expires_next = KTIME_MAX;
++ struct hrtimer_clock_base *base;
++ ktime_t expires;
+
+- hrtimer_update_next_timer(cpu_base, NULL);
+- for (; active; base++, active >>= 1) {
++ for_each_active_base(base, cpu_base, active) {
+ struct timerqueue_node *next;
+ struct hrtimer *timer;
+
+- if (!(active & 0x01))
+- continue;
+-
+ next = timerqueue_getnext(&base->active);
+ timer = container_of(next, struct hrtimer, node);
+ expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
+ if (expires < expires_next) {
+ expires_next = expires;
+- hrtimer_update_next_timer(cpu_base, timer);
++ if (timer->is_soft)
++ cpu_base->softirq_next_timer = timer;
++ else
++ cpu_base->next_timer = timer;
+ }
+ }
+ /*
+@@ -494,7 +521,47 @@
+ expires_next = 0;
+ return expires_next;
+ }
+-#endif
+
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+static void ww_mutex_account_lock(struct rt_mutex *lock,
-+ struct ww_acquire_ctx *ww_ctx)
++/*
++ * Recomputes cpu_base::*next_timer and returns the earliest expires_next,
++ * but does not set cpu_base::*expires_next; that is done by hrtimer_reprogram.
++ *
++ * When a softirq is pending, we can ignore the HRTIMER_ACTIVE_SOFT bases;
++ * those timers will get run whenever the softirq gets handled. At the end of
++ * hrtimer_run_softirq(), hrtimer_update_softirq_timer() will re-add these bases.
++ *
++ * Therefore softirq values are those from the HRTIMER_ACTIVE_SOFT clock bases.
++ * The !softirq values are the minima across HRTIMER_ACTIVE_ALL, unless an actual
++ * softirq is pending, in which case they're the minima of HRTIMER_ACTIVE_HARD.
++ *
++ * @active_mask must be one of:
++ * - HRTIMER_ACTIVE_ALL,
++ * - HRTIMER_ACTIVE_SOFT, or
++ * - HRTIMER_ACTIVE_HARD.
++ */
++static ktime_t
++__hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base, unsigned int active_mask)
+{
-+ struct ww_mutex *ww = container_of(lock, struct ww_mutex, base.lock);
-+ struct rt_mutex_waiter *waiter, *n;
-+
-+ /*
-+ * This branch gets optimized out for the common case,
-+ * and is only important for ww_mutex_lock.
-+ */
-+ ww_mutex_lock_acquired(ww, ww_ctx);
-+ ww->ctx = ww_ctx;
-+
-+ /*
-+ * Give any possible sleeping processes the chance to wake up,
-+ * so they can recheck if they have to back off.
-+ */
-+ rbtree_postorder_for_each_entry_safe(waiter, n, &lock->waiters,
-+ tree_entry) {
-+ /* XXX debug rt mutex waiter wakeup */
++ unsigned int active;
++ struct hrtimer *next_timer = NULL;
++ ktime_t expires_next = KTIME_MAX;
+
-+ BUG_ON(waiter->lock != lock);
-+ rt_mutex_wake_waiter(waiter);
++ if (!cpu_base->softirq_activated && (active_mask & HRTIMER_ACTIVE_SOFT)) {
++ active = cpu_base->active_bases & HRTIMER_ACTIVE_SOFT;
++ cpu_base->softirq_next_timer = NULL;
++ expires_next = __hrtimer_next_event_base(cpu_base, active, KTIME_MAX);
++
++ next_timer = cpu_base->softirq_next_timer;
+ }
-+}
+
-+#else
++ if (active_mask & HRTIMER_ACTIVE_HARD) {
++ active = cpu_base->active_bases & HRTIMER_ACTIVE_HARD;
++ cpu_base->next_timer = next_timer;
++ expires_next = __hrtimer_next_event_base(cpu_base, active, expires_next);
++ }
+
-+static void ww_mutex_account_lock(struct rt_mutex *lock,
-+ struct ww_acquire_ctx *ww_ctx)
-+{
-+ BUG();
++ return expires_next;
+}
-+#endif
-+
+
+ static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base)
+ {
+@@ -502,36 +569,14 @@
+ ktime_t *offs_boot = &base->clock_base[HRTIMER_BASE_BOOTTIME].offset;
+ ktime_t *offs_tai = &base->clock_base[HRTIMER_BASE_TAI].offset;
+
+- return ktime_get_update_offsets_now(&base->clock_was_set_seq,
++ ktime_t now = ktime_get_update_offsets_now(&base->clock_was_set_seq,
+ offs_real, offs_boot, offs_tai);
+-}
+-
+-/* High resolution timer related functions */
+-#ifdef CONFIG_HIGH_RES_TIMERS
+-
+-/*
+- * High resolution timer enabled ?
+- */
+-static bool hrtimer_hres_enabled __read_mostly = true;
+-unsigned int hrtimer_resolution __read_mostly = LOW_RES_NSEC;
+-EXPORT_SYMBOL_GPL(hrtimer_resolution);
+-
+-/*
+- * Enable / Disable high resolution mode
+- */
+-static int __init setup_hrtimer_hres(char *str)
+-{
+- return (kstrtobool(str, &hrtimer_hres_enabled) == 0);
+-}
+
+-__setup("highres=", setup_hrtimer_hres);
++ base->clock_base[HRTIMER_BASE_REALTIME_SOFT].offset = *offs_real;
++ base->clock_base[HRTIMER_BASE_BOOTTIME_SOFT].offset = *offs_boot;
++ base->clock_base[HRTIMER_BASE_TAI_SOFT].offset = *offs_tai;
+
+-/*
+- * hrtimer_high_res_enabled - query, if the highres mode is enabled
+- */
+-static inline int hrtimer_is_hres_enabled(void)
+-{
+- return hrtimer_hres_enabled;
++ return now;
+ }
+
/*
- * Slow path lock function:
+@@ -539,7 +584,8 @@
*/
- static int __sched
- rt_mutex_slowlock(struct rt_mutex *lock, int state,
- struct hrtimer_sleeper *timeout,
-- enum rtmutex_chainwalk chwalk)
-+ enum rtmutex_chainwalk chwalk,
-+ struct ww_acquire_ctx *ww_ctx)
+ static inline int __hrtimer_hres_active(struct hrtimer_cpu_base *cpu_base)
{
- struct rt_mutex_waiter waiter;
- unsigned long flags;
- int ret = 0;
-
-- debug_rt_mutex_init_waiter(&waiter);
-- RB_CLEAR_NODE(&waiter.pi_tree_entry);
-- RB_CLEAR_NODE(&waiter.tree_entry);
-+ rt_mutex_init_waiter(&waiter, false);
+- return cpu_base->hres_active;
++ return IS_ENABLED(CONFIG_HIGH_RES_TIMERS) ?
++ cpu_base->hres_active : 0;
+ }
- /*
- * Technically we could use raw_spin_[un]lock_irq() here, but this can
-@@ -1251,6 +1824,8 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state,
+ static inline int hrtimer_hres_active(void)
+@@ -557,10 +603,23 @@
+ {
+ ktime_t expires_next;
- /* Try to acquire the lock again: */
- if (try_to_take_rt_mutex(lock, current, NULL)) {
-+ if (ww_ctx)
-+ ww_mutex_account_lock(lock, ww_ctx);
- raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
- return 0;
- }
-@@ -1265,13 +1840,23 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state,
+- if (!cpu_base->hres_active)
+- return;
++ /*
++ * Find the current next expiration time.
++ */
++ expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
- if (likely(!ret))
- /* sleep on the mutex */
-- ret = __rt_mutex_slowlock(lock, state, timeout, &waiter);
-+ ret = __rt_mutex_slowlock(lock, state, timeout, &waiter,
-+ ww_ctx);
-+ else if (ww_ctx) {
-+ /* ww_mutex received EDEADLK, let it become EALREADY */
-+ ret = __mutex_lock_check_stamp(lock, ww_ctx);
-+ BUG_ON(!ret);
+- expires_next = __hrtimer_get_next_event(cpu_base);
++ if (cpu_base->next_timer && cpu_base->next_timer->is_soft) {
++ /*
++ * When the softirq is activated, hrtimer has to be
++		 * programmed with the first hard hrtimer because the soft
++		 * timer interrupt could occur too late.
++ */
++ if (cpu_base->softirq_activated)
++ expires_next = __hrtimer_get_next_event(cpu_base,
++ HRTIMER_ACTIVE_HARD);
++ else
++ cpu_base->softirq_expires_next = expires_next;
+ }
- if (unlikely(ret)) {
- __set_current_state(TASK_RUNNING);
- if (rt_mutex_has_waiters(lock))
- remove_waiter(lock, &waiter);
-- rt_mutex_handle_deadlock(ret, chwalk, &waiter);
-+ /* ww_mutex want to report EDEADLK/EALREADY, let them */
-+ if (!ww_ctx)
-+ rt_mutex_handle_deadlock(ret, chwalk, &waiter);
-+ } else if (ww_ctx) {
-+ ww_mutex_account_lock(lock, ww_ctx);
- }
+ if (skip_equal && expires_next == cpu_base->expires_next)
+ return;
+@@ -568,6 +627,9 @@
+ cpu_base->expires_next = expires_next;
/*
-@@ -1331,7 +1916,8 @@ static inline int rt_mutex_slowtrylock(struct rt_mutex *lock)
- * Return whether the current task needs to undo a potential priority boosting.
- */
- static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
-- struct wake_q_head *wake_q)
-+ struct wake_q_head *wake_q,
-+ struct wake_q_head *wake_sleeper_q)
- {
- unsigned long flags;
-
-@@ -1387,7 +1973,7 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
- *
- * Queue the next waiter for wakeup once we release the wait_lock.
++ * If hres is not active, hardware does not have to be
++ * reprogrammed yet.
++ *
+ * If a hang was detected in the last timer interrupt then we
+ * leave the hang delay active in the hardware. We want the
+ * system to make progress. That also prevents the following
+@@ -581,83 +643,38 @@
+ * set. So we'd effectivly block all timers until the T2 event
+ * fires.
*/
-- mark_wakeup_next_waiter(wake_q, lock);
-+ mark_wakeup_next_waiter(wake_q, wake_sleeper_q, lock);
+- if (cpu_base->hang_detected)
++ if (!__hrtimer_hres_active(cpu_base) || cpu_base->hang_detected)
+ return;
- raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+ tick_program_event(cpu_base->expires_next, 1);
+ }
-@@ -1403,31 +1989,36 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
++/* High resolution timer related functions */
++#ifdef CONFIG_HIGH_RES_TIMERS
++
+ /*
+- * When a timer is enqueued and expires earlier than the already enqueued
+- * timers, we have to check, whether it expires earlier than the timer for
+- * which the clock event device was armed.
+- *
+- * Called with interrupts disabled and base->cpu_base.lock held
++ * High resolution timer enabled ?
*/
- static inline int
- rt_mutex_fastlock(struct rt_mutex *lock, int state,
-+ struct ww_acquire_ctx *ww_ctx,
- int (*slowfn)(struct rt_mutex *lock, int state,
- struct hrtimer_sleeper *timeout,
-- enum rtmutex_chainwalk chwalk))
-+ enum rtmutex_chainwalk chwalk,
-+ struct ww_acquire_ctx *ww_ctx))
- {
- if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current))) {
- rt_mutex_deadlock_account_lock(lock, current);
- return 0;
- } else
-- return slowfn(lock, state, NULL, RT_MUTEX_MIN_CHAINWALK);
-+ return slowfn(lock, state, NULL, RT_MUTEX_MIN_CHAINWALK,
-+ ww_ctx);
+-static void hrtimer_reprogram(struct hrtimer *timer,
+- struct hrtimer_clock_base *base)
+-{
+- struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
+- ktime_t expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
+-
+- WARN_ON_ONCE(hrtimer_get_expires_tv64(timer) < 0);
+-
+- /*
+- * If the timer is not on the current cpu, we cannot reprogram
+- * the other cpus clock event device.
+- */
+- if (base->cpu_base != cpu_base)
+- return;
+-
+- /*
+- * If the hrtimer interrupt is running, then it will
+- * reevaluate the clock bases and reprogram the clock event
+- * device. The callbacks are always executed in hard interrupt
+- * context so we don't need an extra check for a running
+- * callback.
+- */
+- if (cpu_base->in_hrtirq)
+- return;
+-
+- /*
+- * CLOCK_REALTIME timer might be requested with an absolute
+- * expiry time which is less than base->offset. Set it to 0.
+- */
+- if (expires < 0)
+- expires = 0;
+-
+- if (expires >= cpu_base->expires_next)
+- return;
+-
+- /* Update the pointer to the next expiring timer */
+- cpu_base->next_timer = timer;
+-
+- /*
+- * If a hang was detected in the last timer interrupt then we
+- * do not schedule a timer which is earlier than the expiry
+- * which we enforced in the hang detection. We want the system
+- * to make progress.
+- */
+- if (cpu_base->hang_detected)
+- return;
++static bool hrtimer_hres_enabled __read_mostly = true;
++unsigned int hrtimer_resolution __read_mostly = LOW_RES_NSEC;
++EXPORT_SYMBOL_GPL(hrtimer_resolution);
+
+- /*
+- * Program the timer hardware. We enforce the expiry for
+- * events which are already in the past.
+- */
+- cpu_base->expires_next = expires;
+- tick_program_event(expires, 1);
++/*
++ * Enable / Disable high resolution mode
++ */
++static int __init setup_hrtimer_hres(char *str)
++{
++ return (kstrtobool(str, &hrtimer_hres_enabled) == 0);
}
- static inline int
- rt_mutex_timed_fastlock(struct rt_mutex *lock, int state,
- struct hrtimer_sleeper *timeout,
- enum rtmutex_chainwalk chwalk,
-+ struct ww_acquire_ctx *ww_ctx,
- int (*slowfn)(struct rt_mutex *lock, int state,
- struct hrtimer_sleeper *timeout,
-- enum rtmutex_chainwalk chwalk))
-+ enum rtmutex_chainwalk chwalk,
-+ struct ww_acquire_ctx *ww_ctx))
++__setup("highres=", setup_hrtimer_hres);
++
+ /*
+- * Initialize the high resolution related parts of cpu_base
++ * hrtimer_high_res_enabled - query, if the highres mode is enabled
+ */
+-static inline void hrtimer_init_hres(struct hrtimer_cpu_base *base)
++static inline int hrtimer_is_hres_enabled(void)
{
- if (chwalk == RT_MUTEX_MIN_CHAINWALK &&
- likely(rt_mutex_cmpxchg_acquire(lock, NULL, current))) {
- rt_mutex_deadlock_account_lock(lock, current);
- return 0;
- } else
-- return slowfn(lock, state, timeout, chwalk);
-+ return slowfn(lock, state, timeout, chwalk, ww_ctx);
+- base->expires_next = KTIME_MAX;
+- base->hang_detected = 0;
+- base->hres_active = 0;
+- base->next_timer = NULL;
++ return hrtimer_hres_enabled;
}
- static inline int
-@@ -1444,17 +2035,20 @@ rt_mutex_fasttrylock(struct rt_mutex *lock,
- static inline void
- rt_mutex_fastunlock(struct rt_mutex *lock,
- bool (*slowfn)(struct rt_mutex *lock,
-- struct wake_q_head *wqh))
-+ struct wake_q_head *wqh,
-+ struct wake_q_head *wq_sleeper))
+ /*
+@@ -669,7 +686,7 @@
{
- WAKE_Q(wake_q);
-+ WAKE_Q(wake_sleeper_q);
+ struct hrtimer_cpu_base *base = this_cpu_ptr(&hrtimer_bases);
- if (likely(rt_mutex_cmpxchg_release(lock, current, NULL))) {
- rt_mutex_deadlock_account_unlock(current);
-
- } else {
-- bool deboost = slowfn(lock, &wake_q);
-+ bool deboost = slowfn(lock, &wake_q, &wake_sleeper_q);
+- if (!base->hres_active)
++ if (!__hrtimer_hres_active(base))
+ return;
- wake_up_q(&wake_q);
-+ wake_up_q_sleeper(&wake_sleeper_q);
+ raw_spin_lock(&base->lock);
+@@ -698,6 +715,29 @@
+ retrigger_next_event(NULL);
+ }
- /* Undo pi boosting if necessary: */
- if (deboost)
-@@ -1471,7 +2065,7 @@ void __sched rt_mutex_lock(struct rt_mutex *lock)
++#ifdef CONFIG_PREEMPT_RT_FULL
++
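++/*
++ * RT: defer the delayed clock_was_set() to the simple work queue
++ * (swork) thread instead of the regular workqueue used in the !RT
++ * case below.
++ */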
++static struct swork_event clock_set_delay_work;
++
++static void run_clock_set_delay(struct swork_event *event)
++{
++ clock_was_set();
++}
++
++void clock_was_set_delayed(void)
++{
++ swork_queue(&clock_set_delay_work);
++}
++
++static __init int create_clock_set_delay_thread(void)
++{
++ WARN_ON(swork_get());
++ INIT_SWORK(&clock_set_delay_work, run_clock_set_delay);
++ return 0;
++}
++early_initcall(create_clock_set_delay_thread);
++#else /* PREEMPT_RT_FULL */
++
+ static void clock_was_set_work(struct work_struct *work)
{
- might_sleep();
-
-- rt_mutex_fastlock(lock, TASK_UNINTERRUPTIBLE, rt_mutex_slowlock);
-+ rt_mutex_fastlock(lock, TASK_UNINTERRUPTIBLE, NULL, rt_mutex_slowlock);
+ clock_was_set();
+@@ -713,26 +753,106 @@
+ {
+ schedule_work(&hrtimer_work);
}
- EXPORT_SYMBOL_GPL(rt_mutex_lock);
++#endif
-@@ -1488,7 +2082,7 @@ int __sched rt_mutex_lock_interruptible(struct rt_mutex *lock)
- {
- might_sleep();
+ #else
-- return rt_mutex_fastlock(lock, TASK_INTERRUPTIBLE, rt_mutex_slowlock);
-+ return rt_mutex_fastlock(lock, TASK_INTERRUPTIBLE, NULL, rt_mutex_slowlock);
- }
- EXPORT_SYMBOL_GPL(rt_mutex_lock_interruptible);
+-static inline int __hrtimer_hres_active(struct hrtimer_cpu_base *b) { return 0; }
+-static inline int hrtimer_hres_active(void) { return 0; }
+ static inline int hrtimer_is_hres_enabled(void) { return 0; }
+ static inline void hrtimer_switch_to_hres(void) { }
+-static inline void
+-hrtimer_force_reprogram(struct hrtimer_cpu_base *base, int skip_equal) { }
+-static inline int hrtimer_reprogram(struct hrtimer *timer,
+- struct hrtimer_clock_base *base)
+-{
+- return 0;
+-}
+-static inline void hrtimer_init_hres(struct hrtimer_cpu_base *base) { }
+ static inline void retrigger_next_event(void *arg) { }
-@@ -1501,11 +2095,30 @@ int rt_mutex_timed_futex_lock(struct rt_mutex *lock,
- might_sleep();
+ #endif /* CONFIG_HIGH_RES_TIMERS */
- return rt_mutex_timed_fastlock(lock, TASK_INTERRUPTIBLE, timeout,
-- RT_MUTEX_FULL_CHAINWALK,
-+ RT_MUTEX_FULL_CHAINWALK, NULL,
- rt_mutex_slowlock);
+ /*
++ * When a timer is enqueued and expires earlier than the already enqueued
++ * timers, we have to check, whether it expires earlier than the timer for
++ * which the clock event device was armed.
++ *
++ * Called with interrupts disabled and base->cpu_base.lock held
++ */
++static void hrtimer_reprogram(struct hrtimer *timer, bool reprogram)
++{
++ struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
++ struct hrtimer_clock_base *base = timer->base;
++ ktime_t expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
++
++ WARN_ON_ONCE(hrtimer_get_expires_tv64(timer) < 0);
++
++ /*
++ * CLOCK_REALTIME timer might be requested with an absolute
++ * expiry time which is less than base->offset. Set it to 0.
++ */
++ if (expires < 0)
++ expires = 0;
++
++ if (timer->is_soft) {
++ /*
++		 * A soft hrtimer could be started on a remote CPU. In this
++ * case softirq_expires_next needs to be updated on the
++ * remote CPU. The soft hrtimer will not expire before the
++ * first hard hrtimer on the remote CPU -
++ * hrtimer_check_target() prevents this case.
++ */
++ struct hrtimer_cpu_base *timer_cpu_base = base->cpu_base;
++
++ if (timer_cpu_base->softirq_activated)
++ return;
++
++ if (!ktime_before(expires, timer_cpu_base->softirq_expires_next))
++ return;
++
++ timer_cpu_base->softirq_next_timer = timer;
++ timer_cpu_base->softirq_expires_next = expires;
++
++ if (!ktime_before(expires, timer_cpu_base->expires_next) ||
++ !reprogram)
++ return;
++ }
++
++ /*
++ * If the timer is not on the current cpu, we cannot reprogram
++ * the other cpus clock event device.
++ */
++ if (base->cpu_base != cpu_base)
++ return;
++
++ /*
++ * If the hrtimer interrupt is running, then it will
++ * reevaluate the clock bases and reprogram the clock event
++ * device. The callbacks are always executed in hard interrupt
++ * context so we don't need an extra check for a running
++ * callback.
++ */
++ if (cpu_base->in_hrtirq)
++ return;
++
++ if (expires >= cpu_base->expires_next)
++ return;
++
++ /* Update the pointer to the next expiring timer */
++ cpu_base->next_timer = timer;
++ cpu_base->expires_next = expires;
++
++ /*
++ * If hres is not active, hardware does not have to be
++ * programmed yet.
++ *
++ * If a hang was detected in the last timer interrupt then we
++ * do not schedule a timer which is earlier than the expiry
++ * which we enforced in the hang detection. We want the system
++ * to make progress.
++ */
++ if (!__hrtimer_hres_active(cpu_base) || cpu_base->hang_detected)
++ return;
++
++ /*
++ * Program the timer hardware. We enforce the expiry for
++ * events which are already in the past.
++ */
++ tick_program_event(expires, 1);
++}
++
++/*
+ * Clock realtime was set
+ *
+ * Change the offset of the realtime clock vs. the monotonic
+@@ -830,6 +950,33 @@
}
+ EXPORT_SYMBOL_GPL(hrtimer_forward);
- /**
-+ * rt_mutex_lock_killable - lock a rt_mutex killable
++#ifdef CONFIG_PREEMPT_RT_BASE
++# define wake_up_timer_waiters(b) wake_up(&(b)->wait)
++
++/**
++ * hrtimer_wait_for_timer - Wait for a running timer
+ *
-+ * @lock: the rt_mutex to be locked
-+ * @detect_deadlock: deadlock detection on/off
++ * @timer: timer to wait for
+ *
-+ * Returns:
-+ * 0 on success
-+ * -EINTR when interrupted by a signal
-+ * -EDEADLK when the lock would deadlock (when deadlock detection is on)
++ * The function waits in case the timers callback function is
++ * The function waits on the waitqueue of the timer base in case
++ * the timer's callback function is currently executing. The
++ * waitqueue is woken up after the timer callback function has
++ * finished execution.
-+int __sched rt_mutex_lock_killable(struct rt_mutex *lock)
++void hrtimer_wait_for_timer(const struct hrtimer *timer)
+{
-+ might_sleep();
++ struct hrtimer_clock_base *base = timer->base;
+
-+ return rt_mutex_fastlock(lock, TASK_KILLABLE, NULL, rt_mutex_slowlock);
++ if (base && base->cpu_base &&
++ base->index >= HRTIMER_BASE_MONOTONIC_SOFT)
++ wait_event(base->cpu_base->wait,
++ !(hrtimer_callback_running(timer)));
+}
-+EXPORT_SYMBOL_GPL(rt_mutex_lock_killable);
+
-+/**
- * rt_mutex_timed_lock - lock a rt_mutex interruptible
- * the timeout structure is provided
- * by the caller
-@@ -1525,6 +2138,7 @@ rt_mutex_timed_lock(struct rt_mutex *lock, struct hrtimer_sleeper *timeout)
-
- return rt_mutex_timed_fastlock(lock, TASK_INTERRUPTIBLE, timeout,
- RT_MUTEX_MIN_CHAINWALK,
-+ NULL,
- rt_mutex_slowlock);
- }
- EXPORT_SYMBOL_GPL(rt_mutex_timed_lock);
-@@ -1542,7 +2156,11 @@ EXPORT_SYMBOL_GPL(rt_mutex_timed_lock);
- */
- int __sched rt_mutex_trylock(struct rt_mutex *lock)
- {
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ if (WARN_ON_ONCE(in_irq() || in_nmi()))
+#else
- if (WARN_ON_ONCE(in_irq() || in_nmi() || in_serving_softirq()))
++# define wake_up_timer_waiters(b) do { } while (0)
+#endif
- return 0;
-
- return rt_mutex_fasttrylock(lock, rt_mutex_slowtrylock);
-@@ -1568,13 +2186,14 @@ EXPORT_SYMBOL_GPL(rt_mutex_unlock);
- * required or not.
++
+ /*
+ * enqueue_hrtimer - internal function to (re)start a timer
+ *
+@@ -839,9 +986,10 @@
+ * Returns 1 when the new timer is the leftmost timer in the tree.
*/
- bool __sched rt_mutex_futex_unlock(struct rt_mutex *lock,
-- struct wake_q_head *wqh)
-+ struct wake_q_head *wqh,
-+ struct wake_q_head *wq_sleeper)
+ static int enqueue_hrtimer(struct hrtimer *timer,
+- struct hrtimer_clock_base *base)
++ struct hrtimer_clock_base *base,
++ enum hrtimer_mode mode)
{
- if (likely(rt_mutex_cmpxchg_release(lock, current, NULL))) {
- rt_mutex_deadlock_account_unlock(current);
- return false;
- }
-- return rt_mutex_slowunlock(lock, wqh);
-+ return rt_mutex_slowunlock(lock, wqh, wq_sleeper);
- }
+- debug_activate(timer);
++ debug_activate(timer, mode);
- /**
-@@ -1607,13 +2226,12 @@ EXPORT_SYMBOL_GPL(rt_mutex_destroy);
- void __rt_mutex_init(struct rt_mutex *lock, const char *name)
- {
- lock->owner = NULL;
-- raw_spin_lock_init(&lock->wait_lock);
- lock->waiters = RB_ROOT;
- lock->waiters_leftmost = NULL;
+ base->cpu_base->active_bases |= 1 << base->index;
+
+@@ -874,7 +1022,6 @@
+ if (!timerqueue_del(&base->active, &timer->node))
+ cpu_base->active_bases &= ~(1 << base->index);
+
+-#ifdef CONFIG_HIGH_RES_TIMERS
+ /*
+ * Note: If reprogram is false we do not update
+ * cpu_base->next_timer. This happens when we remove the first
+@@ -885,7 +1032,6 @@
+ */
+ if (reprogram && timer == cpu_base->next_timer)
+ hrtimer_force_reprogram(cpu_base, 1);
+-#endif
+ }
- debug_rt_mutex_init(lock, name);
+ /*
+@@ -934,22 +1080,36 @@
+ return tim;
}
--EXPORT_SYMBOL_GPL(__rt_mutex_init);
-+EXPORT_SYMBOL(__rt_mutex_init);
- /**
- * rt_mutex_init_proxy_locked - initialize and lock a rt_mutex on behalf of a
-@@ -1628,7 +2246,7 @@ EXPORT_SYMBOL_GPL(__rt_mutex_init);
- void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
- struct task_struct *proxy_owner)
+-/**
+- * hrtimer_start_range_ns - (re)start an hrtimer on the current CPU
+- * @timer: the timer to be added
+- * @tim: expiry time
+- * @delta_ns: "slack" range for the timer
+- * @mode: expiry mode: absolute (HRTIMER_MODE_ABS) or
+- * relative (HRTIMER_MODE_REL)
+- */
+-void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+- u64 delta_ns, const enum hrtimer_mode mode)
++static void
++hrtimer_update_softirq_timer(struct hrtimer_cpu_base *cpu_base, bool reprogram)
{
-- __rt_mutex_init(lock, NULL);
-+ rt_mutex_init(lock);
- debug_rt_mutex_proxy_lock(lock, proxy_owner);
- rt_mutex_set_owner(lock, proxy_owner);
- rt_mutex_deadlock_account_lock(lock, proxy_owner);
-@@ -1676,6 +2294,35 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
- return 1;
- }
+- struct hrtimer_clock_base *base, *new_base;
+- unsigned long flags;
+- int leftmost;
++ ktime_t expires;
-+#ifdef CONFIG_PREEMPT_RT_FULL
+- base = lock_hrtimer_base(timer, &flags);
+ /*
-+ * In PREEMPT_RT there's an added race.
-+ * If the task, that we are about to requeue, times out,
-+ * it can set the PI_WAKEUP_INPROGRESS. This tells the requeue
-+ * to skip this task. But right after the task sets
-+ * its pi_blocked_on to PI_WAKEUP_INPROGRESS it can then
-+ * block on the spin_lock(&hb->lock), which in RT is an rtmutex.
-+ * This will replace the PI_WAKEUP_INPROGRESS with the actual
-+ * lock that it blocks on. We *must not* place this task
-+ * on this proxy lock in that case.
-+ *
-+ * To prevent this race, we first take the task's pi_lock
-+ * and check if it has updated its pi_blocked_on. If it has,
-+ * we assume that it woke up and we return -EAGAIN.
-+ * Otherwise, we set the task's pi_blocked_on to
-+ * PI_REQUEUE_INPROGRESS, so that if the task is waking up
-+ * it will know that we are in the process of requeuing it.
++ * Find the next SOFT expiration.
+ */
-+ raw_spin_lock(&task->pi_lock);
-+ if (task->pi_blocked_on) {
-+ raw_spin_unlock(&task->pi_lock);
-+ raw_spin_unlock_irq(&lock->wait_lock);
-+ return -EAGAIN;
-+ }
-+ task->pi_blocked_on = PI_REQUEUE_INPROGRESS;
-+ raw_spin_unlock(&task->pi_lock);
-+#endif
-+
- /* We enforce deadlock detection for futexes */
- ret = task_blocks_on_rt_mutex(lock, waiter, task,
- RT_MUTEX_FULL_CHAINWALK);
-@@ -1690,7 +2337,7 @@ int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
- ret = 0;
- }
-
-- if (unlikely(ret))
-+ if (ret && rt_mutex_has_waiters(lock))
- remove_waiter(lock, waiter);
-
- raw_spin_unlock_irq(&lock->wait_lock);
-@@ -1746,7 +2393,7 @@ int rt_mutex_finish_proxy_lock(struct rt_mutex *lock,
- set_current_state(TASK_INTERRUPTIBLE);
-
- /* sleep on the mutex */
-- ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter);
-+ ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter, NULL);
-
- if (unlikely(ret))
- remove_waiter(lock, waiter);
-@@ -1761,3 +2408,89 @@ int rt_mutex_finish_proxy_lock(struct rt_mutex *lock,
-
- return ret;
- }
-+
-+static inline int
-+ww_mutex_deadlock_injection(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
-+{
-+#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
-+ unsigned tmp;
-+
-+ if (ctx->deadlock_inject_countdown-- == 0) {
-+ tmp = ctx->deadlock_inject_interval;
-+ if (tmp > UINT_MAX/4)
-+ tmp = UINT_MAX;
-+ else
-+ tmp = tmp*2 + tmp + tmp/2;
-+
-+ ctx->deadlock_inject_interval = tmp;
-+ ctx->deadlock_inject_countdown = tmp;
-+ ctx->contending_lock = lock;
-+
-+ ww_mutex_unlock(lock);
-+
-+ return -EDEADLK;
-+ }
-+#endif
-+
-+ return 0;
-+}
++ expires = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_SOFT);
+
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+int __sched
-+__ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx)
-+{
-+ int ret;
-+
-+ might_sleep();
-+
-+ mutex_acquire_nest(&lock->base.dep_map, 0, 0, &ww_ctx->dep_map, _RET_IP_);
-+ ret = rt_mutex_slowlock(&lock->base.lock, TASK_INTERRUPTIBLE, NULL, 0, ww_ctx);
-+ if (ret)
-+ mutex_release(&lock->base.dep_map, 1, _RET_IP_);
-+ else if (!ret && ww_ctx->acquired > 1)
-+ return ww_mutex_deadlock_injection(lock, ww_ctx);
++ /*
++ * Reprogramming needs to be triggered, even if the next soft
++ * hrtimer expires at the same time as the next hard
++ * hrtimer. cpu_base->softirq_expires_next needs to be updated!
++ */
++ if (expires == KTIME_MAX)
++ return;
+
-+ return ret;
++ /*
++ * cpu_base->*next_timer is recomputed by __hrtimer_get_next_event()
++ * cpu_base->*expires_next is only set by hrtimer_reprogram()
++ */
++ hrtimer_reprogram(cpu_base->softirq_next_timer, reprogram);
+}
-+EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
+
-+int __sched
-+__ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ww_ctx)
++static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
++ u64 delta_ns, const enum hrtimer_mode mode,
++ struct hrtimer_clock_base *base)
+{
-+ int ret;
-+
-+ might_sleep();
-+
-+ mutex_acquire_nest(&lock->base.dep_map, 0, 0, &ww_ctx->dep_map, _RET_IP_);
-+ ret = rt_mutex_slowlock(&lock->base.lock, TASK_UNINTERRUPTIBLE, NULL, 0, ww_ctx);
-+ if (ret)
-+ mutex_release(&lock->base.dep_map, 1, _RET_IP_);
-+ else if (!ret && ww_ctx->acquired > 1)
-+ return ww_mutex_deadlock_injection(lock, ww_ctx);
-+
-+ return ret;
++ struct hrtimer_clock_base *new_base;
+
+ /* Remove an active timer from the queue: */
+ remove_hrtimer(timer, base, true);
+@@ -964,21 +1124,37 @@
+ /* Switch the timer base, if necessary: */
+ new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
+
+- leftmost = enqueue_hrtimer(timer, new_base);
+- if (!leftmost)
+- goto unlock;
++ return enqueue_hrtimer(timer, new_base, mode);
+}
-+EXPORT_SYMBOL_GPL(__ww_mutex_lock);
+
-+void __sched ww_mutex_unlock(struct ww_mutex *lock)
++/**
++ * hrtimer_start_range_ns - (re)start an hrtimer
++ * @timer: the timer to be added
++ * @tim: expiry time
++ * @delta_ns: "slack" range for the timer
++ * @mode: timer mode: absolute (HRTIMER_MODE_ABS) or
++ * relative (HRTIMER_MODE_REL), and pinned (HRTIMER_MODE_PINNED);
++ * softirq based mode is considered for debug purposes only!
++ */
++void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
++ u64 delta_ns, const enum hrtimer_mode mode)
+{
-+ int nest = !!lock->ctx;
++ struct hrtimer_clock_base *base;
++ unsigned long flags;
+
+ /*
-+ * The unlocking fastpath is the 0->1 transition from 'locked'
-+ * into 'unlocked' state:
++ * Check whether the HRTIMER_MODE_SOFT bit and hrtimer.is_soft
++ * match.
+ */
-+ if (nest) {
-+#ifdef CONFIG_DEBUG_MUTEXES
-+ DEBUG_LOCKS_WARN_ON(!lock->ctx->acquired);
++#ifndef CONFIG_PREEMPT_RT_BASE
++ WARN_ON_ONCE(!(mode & HRTIMER_MODE_SOFT) ^ !timer->is_soft);
+#endif
-+ if (lock->ctx->acquired > 0)
-+ lock->ctx->acquired--;
-+ lock->ctx = NULL;
-+ }
+
-+ mutex_release(&lock->base.dep_map, nest, _RET_IP_);
-+ rt_mutex_unlock(&lock->base.lock);
-+}
-+EXPORT_SYMBOL(ww_mutex_unlock);
++ base = lock_hrtimer_base(timer, &flags);
++
++ if (__hrtimer_start_range_ns(timer, tim, delta_ns, mode, base))
++ hrtimer_reprogram(timer, true);
+
+- if (!hrtimer_is_hres_active(timer)) {
+- /*
+- * Kick to reschedule the next tick to handle the new timer
+- * on dynticks target.
+- */
+- if (new_base->cpu_base->nohz_active)
+- wake_up_nohz_cpu(new_base->cpu_base->cpu);
+- } else {
+- hrtimer_reprogram(timer, new_base);
+- }
+-unlock:
+ unlock_hrtimer_base(timer, &flags);
+ }
+ EXPORT_SYMBOL_GPL(hrtimer_start_range_ns);
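
The mode argument documented above now also selects the expiry context: plain ABS/REL timers keep firing in hard interrupt context, while the _SOFT variants are queued on the new softirq clock bases. As a hedged illustration (the my_timer* names are invented for this sketch and are not part of the patch), arming a timer whose handler runs from the hrtimer softirq looks like:

#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer my_timer;

static enum hrtimer_restart my_timer_fn(struct hrtimer *t)
{
	/* With a _SOFT mode this callback runs from HRTIMER_SOFTIRQ,
	 * not from hard interrupt context. */
	return HRTIMER_NORESTART;
}

static void my_timer_arm(void)
{
	/* Mode must be consistent between init and start, otherwise the
	 * WARN_ON_ONCE() in hrtimer_start_range_ns() above triggers. */
	hrtimer_init(&my_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
	my_timer.function = my_timer_fn;
	hrtimer_start(&my_timer, ms_to_ktime(10), HRTIMER_MODE_REL_SOFT);
}

Note that on PREEMPT_RT_FULL the __hrtimer_init() hunk above quietly turns timers that request neither _SOFT nor _HARD into soft ones, so only code that genuinely needs hard-interrupt expiry has to pass the _HARD modes.
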
+@@ -1035,7 +1211,7 @@
+
+ if (ret >= 0)
+ return ret;
+- cpu_relax();
++ hrtimer_wait_for_timer(timer);
+ }
+ }
+ EXPORT_SYMBOL_GPL(hrtimer_cancel);
+@@ -1076,7 +1252,7 @@
+ raw_spin_lock_irqsave(&cpu_base->lock, flags);
+
+ if (!__hrtimer_hres_active(cpu_base))
+- expires = __hrtimer_get_next_event(cpu_base);
++ expires = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
+
+ raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
+
+@@ -1099,8 +1275,16 @@
+ static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
+ enum hrtimer_mode mode)
+ {
+- struct hrtimer_cpu_base *cpu_base;
++ bool softtimer;
+ int base;
++ struct hrtimer_cpu_base *cpu_base;
++
++ softtimer = !!(mode & HRTIMER_MODE_SOFT);
++#ifdef CONFIG_PREEMPT_RT_FULL
++ if (!softtimer && !(mode & HRTIMER_MODE_HARD))
++ softtimer = true;
+#endif
-diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h
-index e317e1cbb3eb..f457c7574920 100644
---- a/kernel/locking/rtmutex_common.h
-+++ b/kernel/locking/rtmutex_common.h
-@@ -27,6 +27,7 @@ struct rt_mutex_waiter {
- struct rb_node pi_tree_entry;
- struct task_struct *task;
- struct rt_mutex *lock;
-+ bool savestate;
- #ifdef CONFIG_DEBUG_RT_MUTEXES
- unsigned long ip;
- struct pid *deadlock_task_pid;
-@@ -98,6 +99,9 @@ enum rtmutex_chainwalk {
- /*
- * PI-futex support (proxy locking functions, etc.):
++ base = softtimer ? HRTIMER_MAX_CLOCK_BASES / 2 : 0;
+
+ memset(timer, 0, sizeof(struct hrtimer));
+
+@@ -1114,7 +1298,8 @@
+ if (clock_id == CLOCK_REALTIME && mode & HRTIMER_MODE_REL)
+ clock_id = CLOCK_MONOTONIC;
+
+- base = hrtimer_clockid_to_base(clock_id);
++ base += hrtimer_clockid_to_base(clock_id);
++ timer->is_soft = softtimer;
+ timer->base = &cpu_base->clock_base[base];
+ timerqueue_init(&timer->node);
+ }
+@@ -1123,7 +1308,13 @@
+ * hrtimer_init - initialize a timer to the given clock
+ * @timer: the timer to be initialized
+ * @clock_id: the clock to be used
+- * @mode: timer mode abs/rel
++ * @mode: The modes which are relevant for initialization:
++ * HRTIMER_MODE_ABS, HRTIMER_MODE_REL, HRTIMER_MODE_ABS_SOFT,
++ * HRTIMER_MODE_REL_SOFT
++ *
++ * The PINNED variants of the above can be handed in,
++ * but the PINNED bit is ignored as pinning happens
++ * when the hrtimer is started
*/
-+#define PI_WAKEUP_INPROGRESS ((struct rt_mutex_waiter *) 1)
-+#define PI_REQUEUE_INPROGRESS ((struct rt_mutex_waiter *) 2)
-+
- extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock);
- extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock,
- struct task_struct *proxy_owner);
-@@ -111,7 +115,8 @@ extern int rt_mutex_finish_proxy_lock(struct rt_mutex *lock,
- struct rt_mutex_waiter *waiter);
- extern int rt_mutex_timed_futex_lock(struct rt_mutex *l, struct hrtimer_sleeper *to);
- extern bool rt_mutex_futex_unlock(struct rt_mutex *lock,
-- struct wake_q_head *wqh);
-+ struct wake_q_head *wqh,
-+ struct wake_q_head *wq_sleeper);
- extern void rt_mutex_adjust_prio(struct task_struct *task);
+ void hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
+ enum hrtimer_mode mode)
+@@ -1142,19 +1333,19 @@
+ */
+ bool hrtimer_active(const struct hrtimer *timer)
+ {
+- struct hrtimer_cpu_base *cpu_base;
++ struct hrtimer_clock_base *base;
+ unsigned int seq;
+
+ do {
+- cpu_base = READ_ONCE(timer->base->cpu_base);
+- seq = raw_read_seqcount_begin(&cpu_base->seq);
++ base = READ_ONCE(timer->base);
++ seq = raw_read_seqcount_begin(&base->seq);
+
+ if (timer->state != HRTIMER_STATE_INACTIVE ||
+- cpu_base->running == timer)
++ base->running == timer)
+ return true;
+
+- } while (read_seqcount_retry(&cpu_base->seq, seq) ||
+- cpu_base != READ_ONCE(timer->base->cpu_base));
++ } while (read_seqcount_retry(&base->seq, seq) ||
++ base != READ_ONCE(timer->base));
+
+ return false;
+ }
+@@ -1180,7 +1371,8 @@
+
+ static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base,
+ struct hrtimer_clock_base *base,
+- struct hrtimer *timer, ktime_t *now)
++ struct hrtimer *timer, ktime_t *now,
++ unsigned long flags)
+ {
+ enum hrtimer_restart (*fn)(struct hrtimer *);
+ int restart;
+@@ -1188,16 +1380,16 @@
+ lockdep_assert_held(&cpu_base->lock);
+
+ debug_deactivate(timer);
+- cpu_base->running = timer;
++ base->running = timer;
+
+ /*
+ * Separate the ->running assignment from the ->state assignment.
+ *
+ * As with a regular write barrier, this ensures the read side in
+- * hrtimer_active() cannot observe cpu_base->running == NULL &&
++ * hrtimer_active() cannot observe base->running == NULL &&
+ * timer->state == INACTIVE.
+ */
+- raw_write_seqcount_barrier(&cpu_base->seq);
++ raw_write_seqcount_barrier(&base->seq);
+
+ __remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 0);
+ fn = timer->function;
+@@ -1211,15 +1403,15 @@
+ timer->is_rel = false;
+
+ /*
+- * Because we run timers from hardirq context, there is no chance
+- * they get migrated to another cpu, therefore its safe to unlock
+- * the timer base.
++ * The timer is marked as running in the cpu base, so it is
++ * protected against migration to a different CPU even if the lock
++ * is dropped.
+ */
+- raw_spin_unlock(&cpu_base->lock);
++ raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
+ trace_hrtimer_expire_entry(timer, now);
+ restart = fn(timer);
+ trace_hrtimer_expire_exit(timer);
+- raw_spin_lock(&cpu_base->lock);
++ raw_spin_lock_irq(&cpu_base->lock);
+
+ /*
+ * Note: We clear the running state after enqueue_hrtimer and
+@@ -1232,33 +1424,31 @@
+ */
+ if (restart != HRTIMER_NORESTART &&
+ !(timer->state & HRTIMER_STATE_ENQUEUED))
+- enqueue_hrtimer(timer, base);
++ enqueue_hrtimer(timer, base, HRTIMER_MODE_ABS);
+
+ /*
+ * Separate the ->running assignment from the ->state assignment.
+ *
+ * As with a regular write barrier, this ensures the read side in
+- * hrtimer_active() cannot observe cpu_base->running == NULL &&
++ * hrtimer_active() cannot observe base->running.timer == NULL &&
+ * timer->state == INACTIVE.
+ */
+- raw_write_seqcount_barrier(&cpu_base->seq);
++ raw_write_seqcount_barrier(&base->seq);
- #ifdef CONFIG_DEBUG_RT_MUTEXES
-@@ -120,4 +125,14 @@ extern void rt_mutex_adjust_prio(struct task_struct *task);
- # include "rtmutex.h"
- #endif
+- WARN_ON_ONCE(cpu_base->running != timer);
+- cpu_base->running = NULL;
++ WARN_ON_ONCE(base->running != timer);
++ base->running = NULL;
+ }
-+static inline void
-+rt_mutex_init_waiter(struct rt_mutex_waiter *waiter, bool savestate)
+-static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
++static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now,
++ unsigned long flags, unsigned int active_mask)
+ {
+- struct hrtimer_clock_base *base = cpu_base->clock_base;
+- unsigned int active = cpu_base->active_bases;
++ struct hrtimer_clock_base *base;
++ unsigned int active = cpu_base->active_bases & active_mask;
+
+- for (; active; base++, active >>= 1) {
++ for_each_active_base(base, cpu_base, active) {
+ struct timerqueue_node *node;
+ ktime_t basenow;
+
+- if (!(active & 0x01))
+- continue;
+-
+ basenow = ktime_add(now, base->offset);
+
+ while ((node = timerqueue_getnext(&base->active))) {
+@@ -1281,11 +1471,29 @@
+ if (basenow < hrtimer_get_softexpires_tv64(timer))
+ break;
+
+- __run_hrtimer(cpu_base, base, timer, &basenow);
++ __run_hrtimer(cpu_base, base, timer, &basenow, flags);
+ }
+ }
+ }
+
++static __latent_entropy void hrtimer_run_softirq(struct softirq_action *h)
+{
-+ debug_rt_mutex_init_waiter(waiter);
-+ waiter->task = NULL;
-+ waiter->savestate = savestate;
-+ RB_CLEAR_NODE(&waiter->pi_tree_entry);
-+ RB_CLEAR_NODE(&waiter->tree_entry);
-+}
++ struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
++ unsigned long flags;
++ ktime_t now;
+
- #endif
-diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
-index db3ccb1dd614..909779647bd1 100644
---- a/kernel/locking/spinlock.c
-+++ b/kernel/locking/spinlock.c
-@@ -124,8 +124,11 @@ void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock) \
- * __[spin|read|write]_lock_bh()
- */
- BUILD_LOCK_OPS(spin, raw_spinlock);
++ raw_spin_lock_irqsave(&cpu_base->lock, flags);
+
-+#ifndef CONFIG_PREEMPT_RT_FULL
- BUILD_LOCK_OPS(read, rwlock);
- BUILD_LOCK_OPS(write, rwlock);
-+#endif
++ now = hrtimer_update_base(cpu_base);
++ __hrtimer_run_queues(cpu_base, now, flags, HRTIMER_ACTIVE_SOFT);
++
++ cpu_base->softirq_activated = 0;
++ hrtimer_update_softirq_timer(cpu_base, true);
++
++ raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
++ wake_up_timer_waiters(cpu_base);
++}
++
+ #ifdef CONFIG_HIGH_RES_TIMERS
- #endif
+ /*
+@@ -1296,13 +1504,14 @@
+ {
+ struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
+ ktime_t expires_next, now, entry_time, delta;
++ unsigned long flags;
+ int retries = 0;
-@@ -209,6 +212,8 @@ void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock)
- EXPORT_SYMBOL(_raw_spin_unlock_bh);
- #endif
+ BUG_ON(!cpu_base->hres_active);
+ cpu_base->nr_events++;
+ dev->next_event = KTIME_MAX;
-+#ifndef CONFIG_PREEMPT_RT_FULL
-+
- #ifndef CONFIG_INLINE_READ_TRYLOCK
- int __lockfunc _raw_read_trylock(rwlock_t *lock)
- {
-@@ -353,6 +358,8 @@ void __lockfunc _raw_write_unlock_bh(rwlock_t *lock)
- EXPORT_SYMBOL(_raw_write_unlock_bh);
- #endif
+- raw_spin_lock(&cpu_base->lock);
++ raw_spin_lock_irqsave(&cpu_base->lock, flags);
+ entry_time = now = hrtimer_update_base(cpu_base);
+ retry:
+ cpu_base->in_hrtirq = 1;
+@@ -1315,17 +1524,23 @@
+ */
+ cpu_base->expires_next = KTIME_MAX;
-+#endif /* !PREEMPT_RT_FULL */
+- __hrtimer_run_queues(cpu_base, now);
++ if (!ktime_before(now, cpu_base->softirq_expires_next)) {
++ cpu_base->softirq_expires_next = KTIME_MAX;
++ cpu_base->softirq_activated = 1;
++ raise_softirq_irqoff(HRTIMER_SOFTIRQ);
++ }
+
- #ifdef CONFIG_DEBUG_LOCK_ALLOC
++ __hrtimer_run_queues(cpu_base, now, flags, HRTIMER_ACTIVE_HARD);
- void __lockfunc _raw_spin_lock_nested(raw_spinlock_t *lock, int subclass)
-diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
-index 0374a596cffa..94970338d518 100644
---- a/kernel/locking/spinlock_debug.c
-+++ b/kernel/locking/spinlock_debug.c
-@@ -31,6 +31,7 @@ void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name,
+ /* Reevaluate the clock bases for the next expiry */
+- expires_next = __hrtimer_get_next_event(cpu_base);
++ expires_next = __hrtimer_get_next_event(cpu_base, HRTIMER_ACTIVE_ALL);
+ /*
+ * Store the new expiry value so the migration code can verify
+ * against it.
+ */
+ cpu_base->expires_next = expires_next;
+ cpu_base->in_hrtirq = 0;
+- raw_spin_unlock(&cpu_base->lock);
++ raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
+
+ /* Reprogramming necessary ? */
+ if (!tick_program_event(expires_next, 0)) {
+@@ -1346,7 +1561,7 @@
+ * Acquire base lock for updating the offsets and retrieving
+ * the current time.
+ */
+- raw_spin_lock(&cpu_base->lock);
++ raw_spin_lock_irqsave(&cpu_base->lock, flags);
+ now = hrtimer_update_base(cpu_base);
+ cpu_base->nr_retries++;
+ if (++retries < 3)
+@@ -1359,7 +1574,8 @@
+ */
+ cpu_base->nr_hangs++;
+ cpu_base->hang_detected = 1;
+- raw_spin_unlock(&cpu_base->lock);
++ raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
++
+ delta = ktime_sub(now, entry_time);
+ if ((unsigned int)delta > cpu_base->max_hang_time)
+ cpu_base->max_hang_time = (unsigned int) delta;
+@@ -1401,6 +1617,7 @@
+ void hrtimer_run_queues(void)
+ {
+ struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
++ unsigned long flags;
+ ktime_t now;
- EXPORT_SYMBOL(__raw_spin_lock_init);
+ if (__hrtimer_hres_active(cpu_base))
+@@ -1418,10 +1635,17 @@
+ return;
+ }
-+#ifndef CONFIG_PREEMPT_RT_FULL
- void __rwlock_init(rwlock_t *lock, const char *name,
- struct lock_class_key *key)
- {
-@@ -48,6 +49,7 @@ void __rwlock_init(rwlock_t *lock, const char *name,
+- raw_spin_lock(&cpu_base->lock);
++ raw_spin_lock_irqsave(&cpu_base->lock, flags);
+ now = hrtimer_update_base(cpu_base);
+- __hrtimer_run_queues(cpu_base, now);
+- raw_spin_unlock(&cpu_base->lock);
++
++ if (!ktime_before(now, cpu_base->softirq_expires_next)) {
++ cpu_base->softirq_expires_next = KTIME_MAX;
++ cpu_base->softirq_activated = 1;
++ raise_softirq_irqoff(HRTIMER_SOFTIRQ);
++ }
++
++ __hrtimer_run_queues(cpu_base, now, flags, HRTIMER_ACTIVE_HARD);
++ raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
}
- EXPORT_SYMBOL(__rwlock_init);
-+#endif
-
- static void spin_dump(raw_spinlock_t *lock, const char *msg)
- {
-@@ -159,6 +161,7 @@ void do_raw_spin_unlock(raw_spinlock_t *lock)
- arch_spin_unlock(&lock->raw_lock);
+ /*
+@@ -1440,13 +1664,65 @@
+ return HRTIMER_NORESTART;
}
-+#ifndef CONFIG_PREEMPT_RT_FULL
- static void rwlock_bug(rwlock_t *lock, const char *msg)
+-void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, struct task_struct *task)
++#ifdef CONFIG_PREEMPT_RT_FULL
++static bool task_is_realtime(struct task_struct *tsk)
{
- if (!debug_locks_off())
-@@ -300,3 +303,5 @@ void do_raw_write_unlock(rwlock_t *lock)
- debug_write_unlock(lock);
- arch_write_unlock(&lock->raw_lock);
- }
++ int policy = tsk->policy;
+
++ if (policy == SCHED_FIFO || policy == SCHED_RR)
++ return true;
++ if (policy == SCHED_DEADLINE)
++ return true;
++ return false;
++}
+#endif
-diff --git a/kernel/panic.c b/kernel/panic.c
-index e6480e20379e..7e9c1918a94e 100644
---- a/kernel/panic.c
-+++ b/kernel/panic.c
-@@ -482,9 +482,11 @@ static u64 oops_id;
-
- static int init_oops_id(void)
- {
-+#ifndef CONFIG_PREEMPT_RT_FULL
- if (!oops_id)
- get_random_bytes(&oops_id, sizeof(oops_id));
- else
++
++static void __hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
++ clockid_t clock_id,
++ enum hrtimer_mode mode,
++ struct task_struct *task)
++{
++#ifdef CONFIG_PREEMPT_RT_FULL
++ if (!(mode & (HRTIMER_MODE_SOFT | HRTIMER_MODE_HARD))) {
++ if (task_is_realtime(current) || system_state != SYSTEM_RUNNING)
++ mode |= HRTIMER_MODE_HARD;
++ else
++ mode |= HRTIMER_MODE_SOFT;
++ }
+#endif
- oops_id++;
-
- return 0;
-diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
-index b26dbc48c75b..968255f27a33 100644
---- a/kernel/power/hibernate.c
-+++ b/kernel/power/hibernate.c
-@@ -286,6 +286,8 @@ static int create_image(int platform_mode)
-
- local_irq_disable();
++ __hrtimer_init(&sl->timer, clock_id, mode);
+ sl->timer.function = hrtimer_wakeup;
+ sl->task = task;
+ }
++
++/**
++ * hrtimer_init_sleeper - initialize sleeper to the given clock
++ * @sl: sleeper to be initialized
++ * @clock_id: the clock to be used
++ * @mode: timer mode abs/rel
++ * @task: the task to wake up
++ */
++void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, clockid_t clock_id,
++ enum hrtimer_mode mode, struct task_struct *task)
++{
++ debug_init(&sl->timer, clock_id, mode);
++ __hrtimer_init_sleeper(sl, clock_id, mode, task);
++
++}
+ EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);
-+ system_state = SYSTEM_SUSPEND;
++#ifdef CONFIG_DEBUG_OBJECTS_TIMERS
++void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
++ clockid_t clock_id,
++ enum hrtimer_mode mode,
++ struct task_struct *task)
++{
++ debug_object_init_on_stack(&sl->timer, &hrtimer_debug_descr);
++ __hrtimer_init_sleeper(sl, clock_id, mode, task);
++}
++EXPORT_SYMBOL_GPL(hrtimer_init_sleeper_on_stack);
++#endif
+
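
hrtimer_init_sleeper() and hrtimer_init_sleeper_on_stack() now take the clock and mode themselves so that __hrtimer_init_sleeper() can promote real-time callers to hard expiry on PREEMPT_RT_FULL. A minimal sketch of the resulting call sequence, modelled on do_nanosleep() further down (demo_short_sleep is a made-up name):

static void demo_short_sleep(void)
{
	struct hrtimer_sleeper sl;

	/* Plain HRTIMER_MODE_REL: on RT this becomes hard expiry when
	 * the caller is SCHED_FIFO/RR/DEADLINE, soft otherwise. */
	hrtimer_init_sleeper_on_stack(&sl, CLOCK_MONOTONIC,
				      HRTIMER_MODE_REL, current);
	hrtimer_set_expires(&sl.timer, ms_to_ktime(1));

	set_current_state(TASK_UNINTERRUPTIBLE);
	hrtimer_start_expires(&sl.timer, HRTIMER_MODE_REL);
	if (likely(sl.task))
		schedule();

	hrtimer_cancel(&sl.timer);
	__set_current_state(TASK_RUNNING);
	destroy_hrtimer_on_stack(&sl.timer);
}
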
- error = syscore_suspend();
- if (error) {
- printk(KERN_ERR "PM: Some system devices failed to power down, "
-@@ -317,6 +319,7 @@ static int create_image(int platform_mode)
- syscore_resume();
+ int nanosleep_copyout(struct restart_block *restart, struct timespec64 *ts)
+ {
+ switch(restart->nanosleep.type) {
+@@ -1470,8 +1746,6 @@
+ {
+ struct restart_block *restart;
- Enable_irqs:
-+ system_state = SYSTEM_RUNNING;
- local_irq_enable();
+- hrtimer_init_sleeper(t, current);
+-
+ do {
+ set_current_state(TASK_INTERRUPTIBLE);
+ hrtimer_start_expires(&t->timer, mode);
+@@ -1508,10 +1782,9 @@
+ struct hrtimer_sleeper t;
+ int ret;
- Enable_cpus:
-@@ -446,6 +449,7 @@ static int resume_target_kernel(bool platform_mode)
- goto Enable_cpus;
+- hrtimer_init_on_stack(&t.timer, restart->nanosleep.clockid,
+- HRTIMER_MODE_ABS);
++ hrtimer_init_sleeper_on_stack(&t, restart->nanosleep.clockid,
++ HRTIMER_MODE_ABS, current);
+ hrtimer_set_expires_tv64(&t.timer, restart->nanosleep.expires);
+-
+ ret = do_nanosleep(&t, HRTIMER_MODE_ABS);
+ destroy_hrtimer_on_stack(&t.timer);
+ return ret;
+@@ -1529,7 +1802,7 @@
+ if (dl_task(current) || rt_task(current))
+ slack = 0;
- local_irq_disable();
-+ system_state = SYSTEM_SUSPEND;
+- hrtimer_init_on_stack(&t.timer, clockid, mode);
++ hrtimer_init_sleeper_on_stack(&t, clockid, mode, current);
+ hrtimer_set_expires_range_ns(&t.timer, timespec64_to_ktime(*rqtp), slack);
+ ret = do_nanosleep(&t, mode);
+ if (ret != -ERESTART_RESTARTBLOCK)
+@@ -1585,6 +1858,27 @@
+ }
+ #endif
- error = syscore_suspend();
- if (error)
-@@ -479,6 +483,7 @@ static int resume_target_kernel(bool platform_mode)
- syscore_resume();
++#ifdef CONFIG_PREEMPT_RT_FULL
++/*
++ * Sleep for 1 ms in the hope that whoever holds what we want will let it go.
++ */
++void cpu_chill(void)
++{
++ ktime_t chill_time;
++ unsigned int freeze_flag = current->flags & PF_NOFREEZE;
++
++ chill_time = ktime_set(0, NSEC_PER_MSEC);
++ set_current_state(TASK_UNINTERRUPTIBLE);
++ current->flags |= PF_NOFREEZE;
++ sleeping_lock_inc();
++ schedule_hrtimeout(&chill_time, HRTIMER_MODE_REL_HARD);
++ sleeping_lock_dec();
++ if (!freeze_flag)
++ current->flags &= ~PF_NOFREEZE;
++}
++EXPORT_SYMBOL(cpu_chill);
++#endif
++
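
cpu_chill() is intended as a substitute for cpu_relax() in retry loops that could otherwise live-lock on PREEMPT_RT when the task being waited for has been preempted. A hypothetical retry loop using it (struct demo_res and demo_claim() are invented for this sketch):

#include <linux/atomic.h>
#include <linux/delay.h>	/* cpu_chill() is declared here on RT kernels */

struct demo_res {
	atomic_t busy;
};

static void demo_claim(struct demo_res *res)
{
	/* Spinning with cpu_relax() could starve a preempted owner on
	 * PREEMPT_RT; cpu_chill() sleeps for about 1 ms instead. */
	while (atomic_cmpxchg(&res->busy, 0, 1) != 0)
		cpu_chill();
}
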
+ /*
+ * Functions related to boot-time initialization:
+ */
+@@ -1598,9 +1892,17 @@
+ timerqueue_init_head(&cpu_base->clock_base[i].active);
+ }
- Enable_irqs:
-+ system_state = SYSTEM_RUNNING;
- local_irq_enable();
+- cpu_base->active_bases = 0;
+ cpu_base->cpu = cpu;
+- hrtimer_init_hres(cpu_base);
++ cpu_base->active_bases = 0;
++ cpu_base->hres_active = 0;
++ cpu_base->hang_detected = 0;
++ cpu_base->next_timer = NULL;
++ cpu_base->softirq_next_timer = NULL;
++ cpu_base->expires_next = KTIME_MAX;
++ cpu_base->softirq_expires_next = KTIME_MAX;
++#ifdef CONFIG_PREEMPT_RT_BASE
++ init_waitqueue_head(&cpu_base->wait);
++#endif
+ return 0;
+ }
- Enable_cpus:
-@@ -564,6 +569,7 @@ int hibernation_platform_enter(void)
- goto Enable_cpus;
+@@ -1632,7 +1934,7 @@
+ * sort out already expired timers and reprogram the
+ * event device.
+ */
+- enqueue_hrtimer(timer, new_base);
++ enqueue_hrtimer(timer, new_base, HRTIMER_MODE_ABS);
+ }
+ }
+@@ -1644,6 +1946,12 @@
+ BUG_ON(cpu_online(scpu));
+ tick_cancel_sched_timer(scpu);
+
++ /*
++ * This BH disable ensures that raise_softirq_irqoff() does
++ * not wake up ksoftirqd (and acquire the pi-lock) while
++ * holding the cpu_base lock
++ */
++ local_bh_disable();
local_irq_disable();
-+ system_state = SYSTEM_SUSPEND;
- syscore_suspend();
- if (pm_wakeup_pending()) {
- error = -EAGAIN;
-@@ -576,6 +582,7 @@ int hibernation_platform_enter(void)
+ old_base = &per_cpu(hrtimer_bases, scpu);
+ new_base = this_cpu_ptr(&hrtimer_bases);
+@@ -1659,12 +1967,19 @@
+ &new_base->clock_base[i]);
+ }
- Power_up:
- syscore_resume();
-+ system_state = SYSTEM_RUNNING;
++ /*
++ * The migration might have changed the first expiring softirq
++ * timer on this CPU. Update it.
++ */
++ hrtimer_update_softirq_timer(new_base, false);
++
+ raw_spin_unlock(&old_base->lock);
+ raw_spin_unlock(&new_base->lock);
+
+ /* Check, if we got expired work to do */
+ __hrtimer_peek_ahead_timers();
local_irq_enable();
++ local_bh_enable();
+ return 0;
+ }
- Enable_cpus:
-@@ -676,6 +683,10 @@ static int load_image_and_restore(void)
- return error;
+@@ -1673,18 +1988,19 @@
+ void __init hrtimers_init(void)
+ {
+ hrtimers_prepare_cpu(smp_processor_id());
++ open_softirq(HRTIMER_SOFTIRQ, hrtimer_run_softirq);
}
-+#ifndef CONFIG_SUSPEND
-+bool pm_in_action;
-+#endif
-+
/**
- * hibernate - Carry out system hibernation, including saving the image.
+ * schedule_hrtimeout_range_clock - sleep until timeout
+ * @expires: timeout value (ktime_t)
+ * @delta: slack in expires timeout (ktime_t)
+- * @mode: timer mode, HRTIMER_MODE_ABS or HRTIMER_MODE_REL
+- * @clock: timer clock, CLOCK_MONOTONIC or CLOCK_REALTIME
++ * @mode: timer mode
++ * @clock_id: timer clock to be used
*/
-@@ -689,6 +700,8 @@ int hibernate(void)
- return -EPERM;
+ int __sched
+ schedule_hrtimeout_range_clock(ktime_t *expires, u64 delta,
+- const enum hrtimer_mode mode, int clock)
++ const enum hrtimer_mode mode, clockid_t clock_id)
+ {
+ struct hrtimer_sleeper t;
+
+@@ -1705,11 +2021,9 @@
+ return -EINTR;
}
-+ pm_in_action = true;
-+
- lock_system_sleep();
- /* The snapshot device should not be opened while we're running */
- if (!atomic_add_unless(&snapshot_device_available, -1, 0)) {
-@@ -766,6 +779,7 @@ int hibernate(void)
- atomic_inc(&snapshot_device_available);
- Unlock:
- unlock_system_sleep();
-+ pm_in_action = false;
- return error;
+- hrtimer_init_on_stack(&t.timer, clock, mode);
++ hrtimer_init_sleeper_on_stack(&t, clock_id, mode, current);
+ hrtimer_set_expires_range_ns(&t.timer, *expires, delta);
+
+- hrtimer_init_sleeper(&t, current);
+-
+ hrtimer_start_expires(&t.timer, mode);
+
+ if (likely(t.task))
+@@ -1727,7 +2041,7 @@
+ * schedule_hrtimeout_range - sleep until timeout
+ * @expires: timeout value (ktime_t)
+ * @delta: slack in expires timeout (ktime_t)
+- * @mode: timer mode, HRTIMER_MODE_ABS or HRTIMER_MODE_REL
++ * @mode: timer mode
+ *
+ * Make the current task sleep until the given expiry time has
+ * elapsed. The routine will return immediately unless
+@@ -1766,7 +2080,7 @@
+ /**
+ * schedule_hrtimeout - sleep until timeout
+ * @expires: timeout value (ktime_t)
+- * @mode: timer mode, HRTIMER_MODE_ABS or HRTIMER_MODE_REL
++ * @mode: timer mode
+ *
+ * Make the current task sleep until the given expiry time has
+ * elapsed. The routine will return immediately unless
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/itimer.c linux-4.14/kernel/time/itimer.c
+--- linux-4.14.orig/kernel/time/itimer.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/time/itimer.c 2018-09-05 11:05:07.000000000 +0200
+@@ -214,6 +214,7 @@
+ /* We are sharing ->siglock with it_real_fn() */
+ if (hrtimer_try_to_cancel(timer) < 0) {
+ spin_unlock_irq(&tsk->sighand->siglock);
++ hrtimer_wait_for_timer(&tsk->signal->real_timer);
+ goto again;
+ }
+ expires = timeval_to_ktime(value->it_value);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/jiffies.c linux-4.14/kernel/time/jiffies.c
+--- linux-4.14.orig/kernel/time/jiffies.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/time/jiffies.c 2018-09-05 11:05:07.000000000 +0200
+@@ -74,7 +74,8 @@
+ .max_cycles = 10,
+ };
+
+-__cacheline_aligned_in_smp DEFINE_SEQLOCK(jiffies_lock);
++__cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(jiffies_lock);
++__cacheline_aligned_in_smp seqcount_t jiffies_seq;
+
+ #if (BITS_PER_LONG < 64)
+ u64 get_jiffies_64(void)
+@@ -83,9 +84,9 @@
+ u64 ret;
+
+ do {
+- seq = read_seqbegin(&jiffies_lock);
++ seq = read_seqcount_begin(&jiffies_seq);
+ ret = jiffies_64;
+- } while (read_seqretry(&jiffies_lock, seq));
++ } while (read_seqcount_retry(&jiffies_seq, seq));
+ return ret;
}
+ EXPORT_SYMBOL(get_jiffies_64);
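
The hunk above replaces the jiffies seqlock with a raw spinlock for writer exclusion plus a bare seqcount for lockless readers, which keeps the write side usable from non-preemptible context on RT. A self-contained sketch of the same pattern with placeholder names (demo_lock, demo_seq, demo_value):

#include <linux/seqlock.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_RAW_SPINLOCK(demo_lock);
static seqcount_t demo_seq = SEQCNT_ZERO(demo_seq);
static u64 demo_value;

static void demo_write(u64 val)
{
	raw_spin_lock(&demo_lock);		/* writers serialize here */
	write_seqcount_begin(&demo_seq);	/* readers will retry */
	demo_value = val;
	write_seqcount_end(&demo_seq);
	raw_spin_unlock(&demo_lock);
}

static u64 demo_read(void)
{
	unsigned int seq;
	u64 ret;

	do {
		seq = read_seqcount_begin(&demo_seq);
		ret = demo_value;
	} while (read_seqcount_retry(&demo_seq, seq));

	return ret;
}
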
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/posix-cpu-timers.c linux-4.14/kernel/time/posix-cpu-timers.c
+--- linux-4.14.orig/kernel/time/posix-cpu-timers.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/time/posix-cpu-timers.c 2018-09-05 11:05:07.000000000 +0200
+@@ -3,8 +3,10 @@
+ * Implement CPU time clocks for the POSIX clock interface.
+ */
-diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
-index 6ccb08f57fcb..c8cbb5ed2fe3 100644
---- a/kernel/power/suspend.c
-+++ b/kernel/power/suspend.c
-@@ -369,6 +369,8 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
- arch_suspend_disable_irqs();
- BUG_ON(!irqs_disabled());
++#include <uapi/linux/sched/types.h>
+ #include <linux/sched/signal.h>
+ #include <linux/sched/cputime.h>
++#include <linux/sched/rt.h>
+ #include <linux/posix-timers.h>
+ #include <linux/errno.h>
+ #include <linux/math64.h>
+@@ -14,6 +16,7 @@
+ #include <linux/tick.h>
+ #include <linux/workqueue.h>
+ #include <linux/compat.h>
++#include <linux/smpboot.h>
-+ system_state = SYSTEM_SUSPEND;
-+
- error = syscore_suspend();
- if (!error) {
- *wakeup = pm_wakeup_pending();
-@@ -385,6 +387,8 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
- syscore_resume();
- }
+ #include "posix-timers.h"
-+ system_state = SYSTEM_RUNNING;
-+
- arch_suspend_enable_irqs();
- BUG_ON(irqs_disabled());
+@@ -603,7 +606,7 @@
+ /*
+ * Disarm any old timer after extracting its expiry time.
+ */
+- WARN_ON_ONCE(!irqs_disabled());
++ WARN_ON_ONCE_NONRT(!irqs_disabled());
-@@ -527,6 +531,8 @@ static int enter_state(suspend_state_t state)
- return error;
- }
+ ret = 0;
+ old_incr = timer->it.cpu.incr;
+@@ -1034,7 +1037,7 @@
+ /*
+ * Now re-arm for the new expiry time.
+ */
+- WARN_ON_ONCE(!irqs_disabled());
++ WARN_ON_ONCE_NONRT(!irqs_disabled());
+ arm_timer(timer);
+ unlock:
+ unlock_task_sighand(p, &flags);
+@@ -1119,13 +1122,13 @@
+ * already updated our counts. We need to check if any timers fire now.
+ * Interrupts are disabled.
+ */
+-void run_posix_cpu_timers(struct task_struct *tsk)
++static void __run_posix_cpu_timers(struct task_struct *tsk)
+ {
+ LIST_HEAD(firing);
+ struct k_itimer *timer, *next;
+ unsigned long flags;
-+bool pm_in_action;
-+
- /**
- * pm_suspend - Externally visible function for suspending the system.
- * @state: System sleep state to enter.
-@@ -541,6 +547,8 @@ int pm_suspend(suspend_state_t state)
- if (state <= PM_SUSPEND_ON || state >= PM_SUSPEND_MAX)
- return -EINVAL;
+- WARN_ON_ONCE(!irqs_disabled());
++ WARN_ON_ONCE_NONRT(!irqs_disabled());
-+ pm_in_action = true;
-+
- error = enter_state(state);
- if (error) {
- suspend_stats.fail++;
-@@ -548,6 +556,7 @@ int pm_suspend(suspend_state_t state)
- } else {
- suspend_stats.success++;
+ /*
+ * The fast path checks that there are no expired thread or thread
+@@ -1179,6 +1182,152 @@
}
-+ pm_in_action = false;
- return error;
}
- EXPORT_SYMBOL(pm_suspend);
-diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
-index f7a55e9ff2f7..9277ee033271 100644
---- a/kernel/printk/printk.c
-+++ b/kernel/printk/printk.c
-@@ -351,6 +351,65 @@ __packed __aligned(4)
- */
- DEFINE_RAW_SPINLOCK(logbuf_lock);
-+#ifdef CONFIG_EARLY_PRINTK
-+struct console *early_console;
++#ifdef CONFIG_PREEMPT_RT_BASE
++#include <linux/kthread.h>
++#include <linux/cpu.h>
++DEFINE_PER_CPU(struct task_struct *, posix_timer_task);
++DEFINE_PER_CPU(struct task_struct *, posix_timer_tasklist);
++DEFINE_PER_CPU(bool, posix_timer_th_active);
+
-+static void early_vprintk(const char *fmt, va_list ap)
++static void posix_cpu_kthread_fn(unsigned int cpu)
+{
-+ if (early_console) {
-+ char buf[512];
-+ int n = vscnprintf(buf, sizeof(buf), fmt, ap);
++ struct task_struct *tsk = NULL;
++ struct task_struct *next = NULL;
+
-+ early_console->write(early_console, buf, n);
++ BUG_ON(per_cpu(posix_timer_task, cpu) != current);
++
++ /* grab task list */
++ raw_local_irq_disable();
++ tsk = per_cpu(posix_timer_tasklist, cpu);
++ per_cpu(posix_timer_tasklist, cpu) = NULL;
++ raw_local_irq_enable();
++
++ /* it's possible the list is empty, just return */
++ if (!tsk)
++ return;
++
++ /* Process task list */
++ while (1) {
++ /* save next */
++ next = tsk->posix_timer_list;
++
++ /* run the task timers, clear its ptr and
++ * unreference it
++ */
++ __run_posix_cpu_timers(tsk);
++ tsk->posix_timer_list = NULL;
++ put_task_struct(tsk);
++
++ /* check if this is the last on the list */
++ if (next == tsk)
++ break;
++ tsk = next;
+ }
+}
+
-+asmlinkage void early_printk(const char *fmt, ...)
++static inline int __fastpath_timer_check(struct task_struct *tsk)
+{
-+ va_list ap;
++ /* tsk == current, ensure it is safe to use ->signal/sighand */
++ if (unlikely(tsk->exit_state))
++ return 0;
+
-+ va_start(ap, fmt);
-+ early_vprintk(fmt, ap);
-+ va_end(ap);
++ if (!task_cputime_zero(&tsk->cputime_expires))
++ return 1;
++
++ if (!task_cputime_zero(&tsk->signal->cputime_expires))
++ return 1;
++
++ return 0;
+}
+
-+/*
-+ * This is independent of any log levels - a global
-+ * kill switch that turns off all of printk.
-+ *
-+ * Used by the NMI watchdog if early-printk is enabled.
-+ */
-+static bool __read_mostly printk_killswitch;
++void run_posix_cpu_timers(struct task_struct *tsk)
++{
++ unsigned int cpu = smp_processor_id();
++ struct task_struct *tasklist;
+
-+static int __init force_early_printk_setup(char *str)
++ BUG_ON(!irqs_disabled());
++
++ if (per_cpu(posix_timer_th_active, cpu) != true)
++ return;
++
++ /* get per-cpu references */
++ tasklist = per_cpu(posix_timer_tasklist, cpu);
++
++ /* check to see if we're already queued */
++ if (!tsk->posix_timer_list && __fastpath_timer_check(tsk)) {
++ get_task_struct(tsk);
++ if (tasklist) {
++ tsk->posix_timer_list = tasklist;
++ } else {
++ /*
++ * The list is terminated by a self-pointing
++ * task_struct
++ */
++ tsk->posix_timer_list = tsk;
++ }
++ per_cpu(posix_timer_tasklist, cpu) = tsk;
++
++ wake_up_process(per_cpu(posix_timer_task, cpu));
++ }
++}
++
++static int posix_cpu_kthread_should_run(unsigned int cpu)
+{
-+ printk_killswitch = true;
-+ return 0;
++ return __this_cpu_read(posix_timer_tasklist) != NULL;
+}
-+early_param("force_early_printk", force_early_printk_setup);
+
-+void printk_kill(void)
++static void posix_cpu_kthread_park(unsigned int cpu)
+{
-+ printk_killswitch = true;
++ this_cpu_write(posix_timer_th_active, false);
+}
+
-+#ifdef CONFIG_PRINTK
-+static int forced_early_printk(const char *fmt, va_list ap)
++static void posix_cpu_kthread_unpark(unsigned int cpu)
+{
-+ if (!printk_killswitch)
-+ return 0;
-+ early_vprintk(fmt, ap);
-+ return 1;
++ this_cpu_write(posix_timer_th_active, true);
+}
-+#endif
+
-+#else
-+static inline int forced_early_printk(const char *fmt, va_list ap)
++static void posix_cpu_kthread_setup(unsigned int cpu)
++{
++ struct sched_param sp;
++
++ sp.sched_priority = MAX_RT_PRIO - 1;
++ sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
++ posix_cpu_kthread_unpark(cpu);
++}
++
++static struct smp_hotplug_thread posix_cpu_thread = {
++ .store = &posix_timer_task,
++ .thread_should_run = posix_cpu_kthread_should_run,
++ .thread_fn = posix_cpu_kthread_fn,
++ .thread_comm = "posixcputmr/%u",
++ .setup = posix_cpu_kthread_setup,
++ .park = posix_cpu_kthread_park,
++ .unpark = posix_cpu_kthread_unpark,
++};
++
++static int __init posix_cpu_thread_init(void)
+{
++ /* Start one for boot CPU. */
++ unsigned long cpu;
++ int ret;
++
++ /* init the per-cpu posix_timer_tasklist pointers */
++ for_each_possible_cpu(cpu)
++ per_cpu(posix_timer_tasklist, cpu) = NULL;
++
++ ret = smpboot_register_percpu_thread(&posix_cpu_thread);
++ WARN_ON(ret);
++
+ return 0;
+}
++early_initcall(posix_cpu_thread_init);
++#else /* CONFIG_PREEMPT_RT_BASE */
++void run_posix_cpu_timers(struct task_struct *tsk)
++{
++ __run_posix_cpu_timers(tsk);
++}
++#endif /* CONFIG_PREEMPT_RT_BASE */
++
+ /*
+ * Set one of the process-wide special case CPU timers or RLIMIT_CPU.
+ * The tsk->sighand->siglock must be held by the caller.
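
The per-CPU queue used by the RT path above is deliberately simple: an empty list is a NULL pointer and the final element points at itself, so draining it needs neither a count nor a sentinel field. A standalone sketch of that structure with generic names (struct demo_node, demo_push(), demo_drain(); callers are assumed to serialize externally, as the irq-off sections do above):

struct demo_node {
	struct demo_node *next;
};

static struct demo_node *demo_head;	/* callers serialize access */

static void demo_push(struct demo_node *n)
{
	/* The first element points at itself, later ones at the old head. */
	n->next = demo_head ? demo_head : n;
	demo_head = n;
}

static void demo_drain(void (*fn)(struct demo_node *))
{
	struct demo_node *n = demo_head, *next;

	demo_head = NULL;
	if (!n)
		return;

	for (;;) {
		next = n->next;
		fn(n);
		if (next == n)		/* self-pointer terminates the list */
			break;
		n = next;
	}
}
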
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/posix-timers.c linux-4.14/kernel/time/posix-timers.c
+--- linux-4.14.orig/kernel/time/posix-timers.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/time/posix-timers.c 2018-09-05 11:05:07.000000000 +0200
+@@ -434,6 +434,7 @@
+ static struct pid *good_sigevent(sigevent_t * event)
+ {
+ struct task_struct *rtn = current->group_leader;
++ int sig = event->sigev_signo;
+
+ switch (event->sigev_notify) {
+ case SIGEV_SIGNAL | SIGEV_THREAD_ID:
+@@ -443,7 +444,8 @@
+ /* FALLTHRU */
+ case SIGEV_SIGNAL:
+ case SIGEV_THREAD:
+- if (event->sigev_signo <= 0 || event->sigev_signo > SIGRTMAX)
++ if (sig <= 0 || sig > SIGRTMAX ||
++ sig_kernel_only(sig) || sig_kernel_coredump(sig))
+ return NULL;
+ /* FALLTHRU */
+ case SIGEV_NONE:
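
Seen from user space, the tightened good_sigevent() check means timer_create() now rejects notification signals the kernel could only force-deliver (SIGKILL, SIGSTOP, or a coredump signal). A small illustrative caller (demo_make_timer is a made-up name):

#include <signal.h>
#include <time.h>

/* Returns 0 on success; with the check above, replacing SIGRTMIN with
 * SIGKILL (or e.g. SIGSEGV) makes timer_create() fail with EINVAL. */
int demo_make_timer(timer_t *out)
{
	struct sigevent sev = {
		.sigev_notify = SIGEV_SIGNAL,
		.sigev_signo  = SIGRTMIN,
	};

	return timer_create(CLOCK_MONOTONIC, &sev, out);
}
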
+@@ -469,7 +471,7 @@
+
+ static void k_itimer_rcu_free(struct rcu_head *head)
+ {
+- struct k_itimer *tmr = container_of(head, struct k_itimer, it.rcu);
++ struct k_itimer *tmr = container_of(head, struct k_itimer, rcu);
+
+ kmem_cache_free(posix_timers_cache, tmr);
+ }
+@@ -486,7 +488,7 @@
+ }
+ put_pid(tmr->it_pid);
+ sigqueue_free(tmr->sigq);
+- call_rcu(&tmr->it.rcu, k_itimer_rcu_free);
++ call_rcu(&tmr->rcu, k_itimer_rcu_free);
+ }
+
+ static int common_timer_create(struct k_itimer *new_timer)
+@@ -825,6 +827,22 @@
+ hrtimer_start_expires(timer, HRTIMER_MODE_ABS);
+ }
+
++/*
++ * Protected by RCU!
++ */
++static void timer_wait_for_callback(const struct k_clock *kc, struct k_itimer *timr)
++{
++#ifdef CONFIG_PREEMPT_RT_FULL
++ if (kc->timer_arm == common_hrtimer_arm)
++ hrtimer_wait_for_timer(&timr->it.real.timer);
++ else if (kc == &alarm_clock)
++ hrtimer_wait_for_timer(&timr->it.alarm.alarmtimer.timer);
++ else
++ /* FIXME: Whacky hack for posix-cpu-timers */
++ schedule_timeout(1);
+#endif
++}
+
- #ifdef CONFIG_PRINTK
- DECLARE_WAIT_QUEUE_HEAD(log_wait);
- /* the next printk record to read by syslog(READ) or /proc/kmsg */
-@@ -1337,6 +1396,7 @@ static int syslog_print_all(char __user *buf, int size, bool clear)
+ static int common_hrtimer_try_to_cancel(struct k_itimer *timr)
+ {
+ return hrtimer_try_to_cancel(&timr->it.real.timer);
+@@ -889,6 +907,7 @@
+ if (!timr)
+ return -EINVAL;
+
++ rcu_read_lock();
+ kc = timr->kclock;
+ if (WARN_ON_ONCE(!kc || !kc->timer_set))
+ error = -EINVAL;
+@@ -897,9 +916,12 @@
+
+ unlock_timer(timr, flag);
+ if (error == TIMER_RETRY) {
++ timer_wait_for_callback(kc, timr);
+ old_spec64 = NULL; // We already got the old time...
++ rcu_read_unlock();
+ goto retry;
+ }
++ rcu_read_unlock();
+
+ return error;
+ }
+@@ -981,10 +1003,15 @@
+ if (!timer)
+ return -EINVAL;
+
++ rcu_read_lock();
+ if (timer_delete_hook(timer) == TIMER_RETRY) {
+ unlock_timer(timer, flags);
++ timer_wait_for_callback(clockid_to_kclock(timer->it_clock),
++ timer);
++ rcu_read_unlock();
+ goto retry_delete;
+ }
++ rcu_read_unlock();
+
+ spin_lock(&current->sighand->siglock);
+ list_del(&timer->list);
+@@ -1010,8 +1037,18 @@
+ retry_delete:
+ spin_lock_irqsave(&timer->it_lock, flags);
+
++ /* On RT we can race with a deletion */
++ if (!timer->it_signal) {
++ unlock_timer(timer, flags);
++ return;
++ }
++
+ if (timer_delete_hook(timer) == TIMER_RETRY) {
++ rcu_read_lock();
+ unlock_timer(timer, flags);
++ timer_wait_for_callback(clockid_to_kclock(timer->it_clock),
++ timer);
++ rcu_read_unlock();
+ goto retry_delete;
+ }
+ list_del(&timer->list);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/tick-broadcast-hrtimer.c linux-4.14/kernel/time/tick-broadcast-hrtimer.c
+--- linux-4.14.orig/kernel/time/tick-broadcast-hrtimer.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/time/tick-broadcast-hrtimer.c 2018-09-05 11:05:07.000000000 +0200
+@@ -106,7 +106,7 @@
+
+ void tick_setup_hrtimer_broadcast(void)
{
- char *text;
- int len = 0;
-+ int attempts = 0;
+- hrtimer_init(&bctimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
++ hrtimer_init(&bctimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
+ bctimer.function = bc_handler;
+ clockevents_register_device(&ce_broadcast_hrtimer);
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/tick-common.c linux-4.14/kernel/time/tick-common.c
+--- linux-4.14.orig/kernel/time/tick-common.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/time/tick-common.c 2018-09-05 11:05:07.000000000 +0200
+@@ -79,13 +79,15 @@
+ static void tick_periodic(int cpu)
+ {
+ if (tick_do_timer_cpu == cpu) {
+- write_seqlock(&jiffies_lock);
++ raw_spin_lock(&jiffies_lock);
++ write_seqcount_begin(&jiffies_seq);
- text = kmalloc(LOG_LINE_MAX + PREFIX_MAX, GFP_KERNEL);
- if (!text)
-@@ -1348,6 +1408,14 @@ static int syslog_print_all(char __user *buf, int size, bool clear)
- u64 seq;
- u32 idx;
- enum log_flags prev;
-+ int num_msg;
-+try_again:
-+ attempts++;
-+ if (attempts > 10) {
-+ len = -EBUSY;
-+ goto out;
-+ }
-+ num_msg = 0;
+ /* Keep track of the next tick event */
+ tick_next_period = ktime_add(tick_next_period, tick_period);
- /*
- * Find first record that fits, including all following records,
-@@ -1363,6 +1431,14 @@ static int syslog_print_all(char __user *buf, int size, bool clear)
- prev = msg->flags;
- idx = log_next(idx);
- seq++;
-+ num_msg++;
-+ if (num_msg > 5) {
-+ num_msg = 0;
-+ raw_spin_unlock_irq(&logbuf_lock);
-+ raw_spin_lock_irq(&logbuf_lock);
-+ if (clear_seq < log_first_seq)
-+ goto try_again;
-+ }
- }
+ do_timer(1);
+- write_sequnlock(&jiffies_lock);
++ write_seqcount_end(&jiffies_seq);
++ raw_spin_unlock(&jiffies_lock);
+ update_wall_time();
+ }
- /* move first record forward until length fits into the buffer */
-@@ -1376,6 +1452,14 @@ static int syslog_print_all(char __user *buf, int size, bool clear)
- prev = msg->flags;
- idx = log_next(idx);
- seq++;
-+ num_msg++;
-+ if (num_msg > 5) {
-+ num_msg = 0;
-+ raw_spin_unlock_irq(&logbuf_lock);
-+ raw_spin_lock_irq(&logbuf_lock);
-+ if (clear_seq < log_first_seq)
-+ goto try_again;
-+ }
- }
+@@ -157,9 +159,9 @@
+ ktime_t next;
- /* last message fitting into this dump */
-@@ -1416,6 +1500,7 @@ static int syslog_print_all(char __user *buf, int size, bool clear)
- clear_seq = log_next_seq;
- clear_idx = log_next_idx;
- }
-+out:
- raw_spin_unlock_irq(&logbuf_lock);
+ do {
+- seq = read_seqbegin(&jiffies_lock);
++ seq = read_seqcount_begin(&jiffies_seq);
+ next = tick_next_period;
+- } while (read_seqretry(&jiffies_lock, seq));
++ } while (read_seqcount_retry(&jiffies_seq, seq));
- kfree(text);
-@@ -1569,6 +1654,12 @@ static void call_console_drivers(int level,
- if (!console_drivers)
+ clockevents_switch_state(dev, CLOCK_EVT_STATE_ONESHOT);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/tick-internal.h linux-4.14/kernel/time/tick-internal.h
+--- linux-4.14.orig/kernel/time/tick-internal.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/time/tick-internal.h 2018-09-05 11:05:07.000000000 +0200
+@@ -150,16 +150,15 @@
+
+ #ifdef CONFIG_NO_HZ_COMMON
+ extern unsigned long tick_nohz_active;
+-#else
++extern void timers_update_nohz(void);
++# ifdef CONFIG_SMP
++extern struct static_key_false timers_migration_enabled;
++# endif
++#else /* CONFIG_NO_HZ_COMMON */
++static inline void timers_update_nohz(void) { }
+ #define tick_nohz_active (0)
+ #endif
+
+-#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+-extern void timers_update_migration(bool update_nohz);
+-#else
+-static inline void timers_update_migration(bool update_nohz) { }
+-#endif
+-
+ DECLARE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases);
+
+ extern u64 get_next_timer_interrupt(unsigned long basej, u64 basem);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/tick-sched.c linux-4.14/kernel/time/tick-sched.c
+--- linux-4.14.orig/kernel/time/tick-sched.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/time/tick-sched.c 2018-09-05 11:05:07.000000000 +0200
+@@ -66,7 +66,8 @@
return;
-+ if (IS_ENABLED(CONFIG_PREEMPT_RT_BASE)) {
-+ if (in_irq() || in_nmi())
-+ return;
-+ }
-+
-+ migrate_disable();
- for_each_console(con) {
- if (exclusive_console && con != exclusive_console)
- continue;
-@@ -1584,6 +1675,7 @@ static void call_console_drivers(int level,
- else
- con->write(con, text, len);
+ /* Reevaluate with jiffies_lock held */
+- write_seqlock(&jiffies_lock);
++ raw_spin_lock(&jiffies_lock);
++ write_seqcount_begin(&jiffies_seq);
+
+ delta = ktime_sub(now, last_jiffies_update);
+ if (delta >= tick_period) {
+@@ -89,10 +90,12 @@
+ /* Keep the tick_next_period variable up to date */
+ tick_next_period = ktime_add(last_jiffies_update, tick_period);
+ } else {
+- write_sequnlock(&jiffies_lock);
++ write_seqcount_end(&jiffies_seq);
++ raw_spin_unlock(&jiffies_lock);
+ return;
}
-+ migrate_enable();
+- write_sequnlock(&jiffies_lock);
++ write_seqcount_end(&jiffies_seq);
++ raw_spin_unlock(&jiffies_lock);
+ update_wall_time();
+ }
+
+@@ -103,12 +106,14 @@
+ {
+ ktime_t period;
+
+- write_seqlock(&jiffies_lock);
++ raw_spin_lock(&jiffies_lock);
++ write_seqcount_begin(&jiffies_seq);
+ /* Did we start the jiffies update yet ? */
+ if (last_jiffies_update == 0)
+ last_jiffies_update = tick_next_period;
+ period = last_jiffies_update;
+- write_sequnlock(&jiffies_lock);
++ write_seqcount_end(&jiffies_seq);
++ raw_spin_unlock(&jiffies_lock);
+ return period;
}
+@@ -225,6 +230,7 @@
+
+ static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) = {
+ .func = nohz_full_kick_func,
++ .flags = IRQ_WORK_HARD_IRQ,
+ };
+
/*
-@@ -1781,6 +1873,13 @@ asmlinkage int vprintk_emit(int facility, int level,
- /* cpu currently holding logbuf_lock in this function */
- static unsigned int logbuf_cpu = UINT_MAX;
+@@ -689,10 +695,10 @@
-+ /*
-+ * Fall back to early_printk if a debugging subsystem has
-+ * killed printk output
-+ */
-+ if (unlikely(forced_early_printk(fmt, args)))
-+ return 1;
-+
- if (level == LOGLEVEL_SCHED) {
- level = LOGLEVEL_DEFAULT;
- in_sched = true;
-@@ -1885,13 +1984,23 @@ asmlinkage int vprintk_emit(int facility, int level,
+ /* Read jiffies and the time when jiffies were updated last */
+ do {
+- seq = read_seqbegin(&jiffies_lock);
++ seq = read_seqcount_begin(&jiffies_seq);
+ basemono = last_jiffies_update;
+ basejiff = jiffies;
+- } while (read_seqretry(&jiffies_lock, seq));
++ } while (read_seqcount_retry(&jiffies_seq, seq));
+ ts->last_jiffies = basejiff;
- /* If called from the scheduler, we can not call up(). */
- if (!in_sched) {
-+ int may_trylock = 1;
-+
- lockdep_off();
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ /*
-+ * we can't take a sleeping lock with IRQs or preeption disabled
-+ * so we can't print in these contexts
-+ */
-+ if (!(preempt_count() == 0 && !irqs_disabled()))
-+ may_trylock = 0;
-+#endif
- /*
- * Try to acquire and then immediately release the console
- * semaphore. The release will print out buffers and wake up
- * /dev/kmsg and syslog() users.
- */
-- if (console_trylock())
-+ if (may_trylock && console_trylock())
- console_unlock();
- lockdep_on();
+ /*
+@@ -906,14 +912,7 @@
+ return false;
+
+ if (unlikely(local_softirq_pending() && cpu_online(cpu))) {
+- static int ratelimit;
+-
+- if (ratelimit < 10 &&
+- (local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
+- pr_warn("NOHZ: local_softirq_pending %02x\n",
+- (unsigned int) local_softirq_pending());
+- ratelimit++;
+- }
++ softirq_check_pending_idle();
+ return false;
}
-@@ -2014,26 +2123,6 @@ DEFINE_PER_CPU(printk_func_t, printk_func);
- #endif /* CONFIG_PRINTK */
+@@ -1132,7 +1131,7 @@
+ ts->nohz_mode = mode;
+ /* One update is enough */
+ if (!test_and_set_bit(0, &tick_nohz_active))
+- timers_update_migration(true);
++ timers_update_nohz();
+ }
--#ifdef CONFIG_EARLY_PRINTK
--struct console *early_console;
--
--asmlinkage __visible void early_printk(const char *fmt, ...)
--{
-- va_list ap;
-- char buf[512];
-- int n;
--
-- if (!early_console)
-- return;
--
-- va_start(ap, fmt);
-- n = vscnprintf(buf, sizeof(buf), fmt, ap);
-- va_end(ap);
--
-- early_console->write(early_console, buf, n);
--}
--#endif
--
- static int __add_preferred_console(char *name, int idx, char *options,
- char *brl_options)
+ /**
+@@ -1250,7 +1249,7 @@
+ /*
+ * Emulate tick processing via per-CPU hrtimers:
+ */
+- hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
++ hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
+ ts->sched_timer.function = tick_sched_timer;
+
+ /* Get the next period (per-CPU) */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/timekeeping.c linux-4.14/kernel/time/timekeeping.c
+--- linux-4.14.orig/kernel/time/timekeeping.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/time/timekeeping.c 2018-09-05 11:05:07.000000000 +0200
+@@ -2326,8 +2326,10 @@
+ */
+ void xtime_update(unsigned long ticks)
{
-@@ -2303,11 +2392,16 @@ static void console_cont_flush(char *text, size_t size)
- goto out;
+- write_seqlock(&jiffies_lock);
++ raw_spin_lock(&jiffies_lock);
++ write_seqcount_begin(&jiffies_seq);
+ do_timer(ticks);
+- write_sequnlock(&jiffies_lock);
++ write_seqcount_end(&jiffies_seq);
++ raw_spin_unlock(&jiffies_lock);
+ update_wall_time();
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/timekeeping.h linux-4.14/kernel/time/timekeeping.h
+--- linux-4.14.orig/kernel/time/timekeeping.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/time/timekeeping.h 2018-09-05 11:05:07.000000000 +0200
+@@ -18,7 +18,8 @@
+ extern void do_timer(unsigned long ticks);
+ extern void update_wall_time(void);
- len = cont_print_text(text, size);
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ raw_spin_unlock_irqrestore(&logbuf_lock, flags);
-+ call_console_drivers(cont.level, NULL, 0, text, len);
-+#else
- raw_spin_unlock(&logbuf_lock);
- stop_critical_timings();
- call_console_drivers(cont.level, NULL, 0, text, len);
- start_critical_timings();
- local_irq_restore(flags);
-+#endif
- return;
- out:
- raw_spin_unlock_irqrestore(&logbuf_lock, flags);
-@@ -2431,13 +2525,17 @@ void console_unlock(void)
- console_idx = log_next(console_idx);
- console_seq++;
- console_prev = msg->flags;
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ raw_spin_unlock_irqrestore(&logbuf_lock, flags);
-+ call_console_drivers(level, ext_text, ext_len, text, len);
-+#else
- raw_spin_unlock(&logbuf_lock);
+-extern seqlock_t jiffies_lock;
++extern raw_spinlock_t jiffies_lock;
++extern seqcount_t jiffies_seq;
- stop_critical_timings(); /* don't trace print latency */
- call_console_drivers(level, ext_text, ext_len, text, len);
- start_critical_timings();
- local_irq_restore(flags);
--
+ #define CS_NAME_LEN 32
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/time/timer.c linux-4.14/kernel/time/timer.c
+--- linux-4.14.orig/kernel/time/timer.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/time/timer.c 2018-09-05 11:05:07.000000000 +0200
+@@ -44,6 +44,7 @@
+ #include <linux/sched/debug.h>
+ #include <linux/slab.h>
+ #include <linux/compat.h>
++#include <linux/swait.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/unistd.h>
+@@ -197,11 +198,12 @@
+ struct timer_base {
+ raw_spinlock_t lock;
+ struct timer_list *running_timer;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ struct swait_queue_head wait_for_running_timer;
+#endif
- if (do_cond_resched)
- cond_resched();
- }
-@@ -2489,6 +2587,11 @@ void console_unblank(void)
- {
- struct console *c;
+ unsigned long clk;
+ unsigned long next_expiry;
+ unsigned int cpu;
+- bool migration_enabled;
+- bool nohz_active;
+ bool is_idle;
+ bool must_forward_clk;
+ DECLARE_BITMAP(pending_map, WHEEL_SIZE);
+@@ -210,45 +212,73 @@
-+ if (IS_ENABLED(CONFIG_PREEMPT_RT_BASE)) {
-+ if (in_irq() || in_nmi())
-+ return;
-+ }
-+
- /*
- * console_unblank can no longer be called in interrupt context unless
- * oops_in_progress is set to 1..
-diff --git a/kernel/ptrace.c b/kernel/ptrace.c
-index 49ba7c1ade9d..44f44b47ec07 100644
---- a/kernel/ptrace.c
-+++ b/kernel/ptrace.c
-@@ -166,7 +166,14 @@ static bool ptrace_freeze_traced(struct task_struct *task)
+ static DEFINE_PER_CPU(struct timer_base, timer_bases[NR_BASES]);
- spin_lock_irq(&task->sighand->siglock);
- if (task_is_traced(task) && !__fatal_signal_pending(task)) {
-- task->state = __TASK_TRACED;
-+ unsigned long flags;
+-#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
++#ifdef CONFIG_NO_HZ_COMMON
+
-+ raw_spin_lock_irqsave(&task->pi_lock, flags);
-+ if (task->state & __TASK_TRACED)
-+ task->state = __TASK_TRACED;
-+ else
-+ task->saved_state = __TASK_TRACED;
-+ raw_spin_unlock_irqrestore(&task->pi_lock, flags);
- ret = true;
- }
- spin_unlock_irq(&task->sighand->siglock);
-diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
-index bf08fee53dc7..eeb8ce4ad7b6 100644
---- a/kernel/rcu/rcutorture.c
-+++ b/kernel/rcu/rcutorture.c
-@@ -404,6 +404,7 @@ static struct rcu_torture_ops rcu_ops = {
- .name = "rcu"
- };
-
-+#ifndef CONFIG_PREEMPT_RT_FULL
- /*
- * Definitions for rcu_bh torture testing.
- */
-@@ -443,6 +444,12 @@ static struct rcu_torture_ops rcu_bh_ops = {
- .name = "rcu_bh"
- };
++static DEFINE_STATIC_KEY_FALSE(timers_nohz_active);
++static DEFINE_MUTEX(timer_keys_mutex);
++
++static struct swork_event timer_update_swork;
++
++#ifdef CONFIG_SMP
+ unsigned int sysctl_timer_migration = 1;
-+#else
-+static struct rcu_torture_ops rcu_bh_ops = {
-+ .ttype = INVALID_RCU_FLAVOR,
-+};
-+#endif
+-void timers_update_migration(bool update_nohz)
++DEFINE_STATIC_KEY_FALSE(timers_migration_enabled);
+
- /*
- * Don't even think about trying any of these in real life!!!
- * The names includes "busted", and they really means it!
-diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
-index 10f62c6f48e7..dbee19478f09 100644
---- a/kernel/rcu/tree.c
-+++ b/kernel/rcu/tree.c
-@@ -55,6 +55,11 @@
- #include <linux/random.h>
- #include <linux/trace_events.h>
- #include <linux/suspend.h>
-+#include <linux/delay.h>
-+#include <linux/gfp.h>
-+#include <linux/oom.h>
-+#include <linux/smpboot.h>
-+#include "../time/tick-internal.h"
++static void timers_update_migration(void)
+ {
+ bool on = sysctl_timer_migration && tick_nohz_active;
+- unsigned int cpu;
- #include "tree.h"
- #include "rcu.h"
-@@ -260,6 +265,19 @@ void rcu_sched_qs(void)
- this_cpu_ptr(&rcu_sched_data), true);
+- /* Avoid the loop, if nothing to update */
+- if (this_cpu_read(timer_bases[BASE_STD].migration_enabled) == on)
+- return;
++ if (on)
++ static_branch_enable(&timers_migration_enabled);
++ else
++ static_branch_disable(&timers_migration_enabled);
++}
++#else
++static inline void timers_update_migration(void) { }
++#endif /* !CONFIG_SMP */
+
+- for_each_possible_cpu(cpu) {
+- per_cpu(timer_bases[BASE_STD].migration_enabled, cpu) = on;
+- per_cpu(timer_bases[BASE_DEF].migration_enabled, cpu) = on;
+- per_cpu(hrtimer_bases.migration_enabled, cpu) = on;
+- if (!update_nohz)
+- continue;
+- per_cpu(timer_bases[BASE_STD].nohz_active, cpu) = true;
+- per_cpu(timer_bases[BASE_DEF].nohz_active, cpu) = true;
+- per_cpu(hrtimer_bases.nohz_active, cpu) = true;
+- }
++static void timer_update_keys(struct swork_event *event)
++{
++ mutex_lock(&timer_keys_mutex);
++ timers_update_migration();
++ static_branch_enable(&timers_nohz_active);
++ mutex_unlock(&timer_keys_mutex);
}
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+static void rcu_preempt_qs(void);
++void timers_update_nohz(void)
++{
++ swork_queue(&timer_update_swork);
++}
+
-+void rcu_bh_qs(void)
++static __init int hrtimer_init_thread(void)
+{
-+ unsigned long flags;
++ WARN_ON(swork_get());
++ INIT_SWORK(&timer_update_swork, timer_update_keys);
++ return 0;
++}
++early_initcall(hrtimer_init_thread);
+
-+ /* Callers to this function, rcu_preempt_qs(), must disable irqs. */
-+ local_irq_save(flags);
-+ rcu_preempt_qs();
-+ local_irq_restore(flags);
+ int timer_migration_handler(struct ctl_table *table, int write,
+ void __user *buffer, size_t *lenp,
+ loff_t *ppos)
+ {
+- static DEFINE_MUTEX(mutex);
+ int ret;
+
+- mutex_lock(&mutex);
++ mutex_lock(&timer_keys_mutex);
+ ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ if (!ret && write)
+- timers_update_migration(false);
+- mutex_unlock(&mutex);
++ timers_update_migration();
++ mutex_unlock(&timer_keys_mutex);
+ return ret;
+ }
+-#endif
++
++static inline bool is_timers_nohz_active(void)
++{
++ return static_branch_unlikely(&timers_nohz_active);
+}
+#else
- void rcu_bh_qs(void)
++static inline bool is_timers_nohz_active(void) { return false; }
++#endif /* NO_HZ_COMMON */
+
+ static unsigned long round_jiffies_common(unsigned long j, int cpu,
+ bool force_up)
+@@ -534,7 +564,7 @@
+ static void
+ trigger_dyntick_cpu(struct timer_base *base, struct timer_list *timer)
{
- if (__this_cpu_read(rcu_bh_data.cpu_no_qs.s)) {
-@@ -269,6 +287,7 @@ void rcu_bh_qs(void)
- __this_cpu_write(rcu_bh_data.cpu_no_qs.b.norm, false);
- }
- }
-+#endif
+- if (!IS_ENABLED(CONFIG_NO_HZ_COMMON) || !base->nohz_active)
++ if (!is_timers_nohz_active())
+ return;
- static DEFINE_PER_CPU(int, rcu_sched_qs_mask);
+ /*
+@@ -840,21 +870,20 @@
+ return get_timer_cpu_base(tflags, tflags & TIMER_CPUMASK);
+ }
-@@ -449,11 +468,13 @@ EXPORT_SYMBOL_GPL(rcu_batches_started_sched);
- /*
- * Return the number of RCU BH batches started thus far for debug & stats.
- */
-+#ifndef CONFIG_PREEMPT_RT_FULL
- unsigned long rcu_batches_started_bh(void)
+-#ifdef CONFIG_NO_HZ_COMMON
+ static inline struct timer_base *
+ get_target_base(struct timer_base *base, unsigned tflags)
{
- return rcu_bh_state.gpnum;
+-#ifdef CONFIG_SMP
+- if ((tflags & TIMER_PINNED) || !base->migration_enabled)
+- return get_timer_this_cpu_base(tflags);
+- return get_timer_cpu_base(tflags, get_nohz_timer_target());
+-#else
+- return get_timer_this_cpu_base(tflags);
++#if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
++ if (static_branch_unlikely(&timers_migration_enabled) &&
++ !(tflags & TIMER_PINNED))
++ return get_timer_cpu_base(tflags, get_nohz_timer_target());
+ #endif
++ return get_timer_this_cpu_base(tflags);
}
- EXPORT_SYMBOL_GPL(rcu_batches_started_bh);
-+#endif
- /*
- * Return the number of RCU batches completed thus far for debug & stats.
-@@ -473,6 +494,7 @@ unsigned long rcu_batches_completed_sched(void)
- }
- EXPORT_SYMBOL_GPL(rcu_batches_completed_sched);
+ static inline void forward_timer_base(struct timer_base *base)
+ {
++#ifdef CONFIG_NO_HZ_COMMON
+ unsigned long jnow;
-+#ifndef CONFIG_PREEMPT_RT_FULL
- /*
- * Return the number of RCU BH batches completed thus far for debug & stats.
- */
-@@ -481,6 +503,7 @@ unsigned long rcu_batches_completed_bh(void)
- return rcu_bh_state.completed;
- }
- EXPORT_SYMBOL_GPL(rcu_batches_completed_bh);
-+#endif
+ /*
+@@ -878,16 +907,8 @@
+ base->clk = jnow;
+ else
+ base->clk = base->next_expiry;
+-}
+-#else
+-static inline struct timer_base *
+-get_target_base(struct timer_base *base, unsigned tflags)
+-{
+- return get_timer_this_cpu_base(tflags);
+-}
+-
+-static inline void forward_timer_base(struct timer_base *base) { }
+ #endif
++}
- /*
- * Return the number of RCU expedited batches completed thus far for
-@@ -504,6 +527,7 @@ unsigned long rcu_exp_batches_completed_sched(void)
- }
- EXPORT_SYMBOL_GPL(rcu_exp_batches_completed_sched);
-+#ifndef CONFIG_PREEMPT_RT_FULL
/*
- * Force a quiescent state.
- */
-@@ -522,6 +546,13 @@ void rcu_bh_force_quiescent_state(void)
+@@ -1130,6 +1151,33 @@
}
- EXPORT_SYMBOL_GPL(rcu_bh_force_quiescent_state);
+ EXPORT_SYMBOL_GPL(add_timer_on);
++#ifdef CONFIG_PREEMPT_RT_FULL
++/*
++ * Wait for a running timer
++ */
++static void wait_for_running_timer(struct timer_list *timer)
++{
++ struct timer_base *base;
++ u32 tf = timer->flags;
++
++ if (tf & TIMER_MIGRATING)
++ return;
++
++ base = get_timer_base(tf);
++ swait_event(base->wait_for_running_timer,
++ base->running_timer != timer);
++}
++
++# define wakeup_timer_waiters(b) swake_up_all(&(b)->wait_for_running_timer)
+#else
-+void rcu_force_quiescent_state(void)
++static inline void wait_for_running_timer(struct timer_list *timer)
+{
++ cpu_relax();
+}
-+EXPORT_SYMBOL_GPL(rcu_force_quiescent_state);
++
++# define wakeup_timer_waiters(b) do { } while (0)
+#endif
+
+ /**
+ * del_timer - deactivate a timer.
+ * @timer: the timer to be deactivated
+@@ -1185,7 +1233,7 @@
+ }
+ EXPORT_SYMBOL(try_to_del_timer_sync);
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
+ /**
+ * del_timer_sync - deactivate a timer and wait for the handler to finish.
+ * @timer: the timer to be deactivated
+@@ -1245,7 +1293,7 @@
+ int ret = try_to_del_timer_sync(timer);
+ if (ret >= 0)
+ return ret;
+- cpu_relax();
++ wait_for_running_timer(timer);
+ }
+ }
+ EXPORT_SYMBOL(del_timer_sync);
+@@ -1309,13 +1357,16 @@
+ fn = timer->function;
+ data = timer->data;
+
+- if (timer->flags & TIMER_IRQSAFE) {
++ if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL) &&
++ timer->flags & TIMER_IRQSAFE) {
+ raw_spin_unlock(&base->lock);
+ call_timer_fn(timer, fn, data);
++ base->running_timer = NULL;
+ raw_spin_lock(&base->lock);
+ } else {
+ raw_spin_unlock_irq(&base->lock);
+ call_timer_fn(timer, fn, data);
++ base->running_timer = NULL;
+ raw_spin_lock_irq(&base->lock);
+ }
+ }
+@@ -1584,13 +1635,13 @@
+
+ /* Note: this timer irq context must be accounted for as well. */
+ account_process_tick(p, user_tick);
++ scheduler_tick();
+ run_local_timers();
+ rcu_check_callbacks(user_tick);
+-#ifdef CONFIG_IRQ_WORK
++#if defined(CONFIG_IRQ_WORK)
+ if (in_irq())
+ irq_work_tick();
+ #endif
+- scheduler_tick();
+ if (IS_ENABLED(CONFIG_POSIX_TIMERS))
+ run_posix_cpu_timers(p);
+ }
+@@ -1617,8 +1668,8 @@
+ while (levels--)
+ expire_timers(base, heads + levels);
+ }
+- base->running_timer = NULL;
+ raw_spin_unlock_irq(&base->lock);
++ wakeup_timer_waiters(base);
+ }
+
/*
- * Force a quiescent state for RCU-sched.
- */
-@@ -572,9 +603,11 @@ void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
- case RCU_FLAVOR:
- rsp = rcu_state_p;
- break;
-+#ifndef CONFIG_PREEMPT_RT_FULL
- case RCU_BH_FLAVOR:
- rsp = &rcu_bh_state;
- break;
+@@ -1628,6 +1679,7 @@
+ {
+ struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
+
++ irq_work_tick_soft();
+ /*
+ * must_forward_clk must be cleared before running timers so that any
+ * timer functions that call mod_timer will not try to forward the
+@@ -1864,6 +1916,9 @@
+ base->cpu = cpu;
+ raw_spin_lock_init(&base->lock);
+ base->clk = jiffies;
++#ifdef CONFIG_PREEMPT_RT_FULL
++ init_swait_queue_head(&base->wait_for_running_timer);
+#endif
- case RCU_SCHED_FLAVOR:
- rsp = &rcu_sched_state;
- break;
-@@ -3016,18 +3049,17 @@ __rcu_process_callbacks(struct rcu_state *rsp)
- /*
- * Do RCU core processing for the current CPU.
- */
--static __latent_entropy void rcu_process_callbacks(struct softirq_action *unused)
-+static __latent_entropy void rcu_process_callbacks(void)
+ }
+ }
+
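
The timer.c hunks above replace the per-CPU migration_enabled/nohz_active flags with static keys, so the hot paths in trigger_dyntick_cpu() and get_target_base() test a runtime-patched branch instead of loading per-CPU state. A minimal sketch of that static-key idiom, with a hypothetical key and helper names (not the actual timer code):

    #include <linux/jump_label.h>

    static DEFINE_STATIC_KEY_FALSE(example_feature_enabled);

    /* Fast path: a patched branch, false until the key is enabled. */
    static bool example_feature_active(void)
    {
            return static_branch_unlikely(&example_feature_enabled);
    }

    /* Slow path: rewrites the branch on all CPUs; may sleep. */
    static void example_feature_set(bool on)
    {
            if (on)
                    static_branch_enable(&example_feature_enabled);
            else
                    static_branch_disable(&example_feature_enabled);
    }

Because enabling a static key can sleep, the nohz/migration updates are pushed through the swork item (timer_update_swork) rather than done from the tick path. On PREEMPT_RT_FULL the same file also stops del_timer_sync() from spinning: instead of cpu_relax(), it sleeps on the new per-base wait_for_running_timer swait queue until the running callback has finished.
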
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/Kconfig linux-4.14/kernel/trace/Kconfig
+--- linux-4.14.orig/kernel/trace/Kconfig 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -585,7 +585,10 @@
+ event activity as an initial guide for further investigation
+ using more advanced tools.
+
+- See Documentation/trace/events.txt.
++ Inter-event tracing of quantities such as latencies is also
++ supported using hist triggers under this option.
++
++ See Documentation/trace/histogram.txt.
+ If in doubt, say N.
+
+ config MMIOTRACE_TEST
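
The hist triggers referred to in this help text evaluate each event through a small tree of field operations; the trace_events_hist.c changes further down add hist_field_plus/minus/unary_minus nodes so an expression such as a latency (the difference of two timestamps) can be computed per event. A self-contained sketch of that operand-tree evaluation, with simplified types rather than the real trace_events_hist.c structures:

    #include <stdint.h>

    struct example_field;
    typedef uint64_t (*example_fn_t)(struct example_field *f, void *rec);

    struct example_field {
            example_fn_t fn;                /* how to produce this node's value */
            struct example_field *operands[2];
            unsigned int offset;            /* used by leaf nodes only */
    };

    /* Leaf: read a u64 straight out of the traced record. */
    static uint64_t example_leaf(struct example_field *f, void *rec)
    {
            return *(uint64_t *)((char *)rec + f->offset);
    }

    /* Interior node: "a - b", e.g. end_timestamp - start_timestamp. */
    static uint64_t example_minus(struct example_field *f, void *rec)
    {
            uint64_t a = f->operands[0]->fn(f->operands[0], rec);
            uint64_t b = f->operands[1]->fn(f->operands[1], rec);

            return a - b;
    }

The real callbacks additionally receive the tracing_map element and the ring-buffer event, which is what lets them reference variables saved by earlier events and the new common_timestamp.
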
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/ring_buffer.c linux-4.14/kernel/trace/ring_buffer.c
+--- linux-4.14.orig/kernel/trace/ring_buffer.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/ring_buffer.c 2018-09-05 11:05:07.000000000 +0200
+@@ -41,6 +41,8 @@
+ RINGBUF_TYPE_PADDING);
+ trace_seq_printf(s, "\ttime_extend : type == %d\n",
+ RINGBUF_TYPE_TIME_EXTEND);
++ trace_seq_printf(s, "\ttime_stamp : type == %d\n",
++ RINGBUF_TYPE_TIME_STAMP);
+ trace_seq_printf(s, "\tdata max type_len == %d\n",
+ RINGBUF_TYPE_DATA_TYPE_LEN_MAX);
+
+@@ -140,12 +142,15 @@
+
+ enum {
+ RB_LEN_TIME_EXTEND = 8,
+- RB_LEN_TIME_STAMP = 16,
++ RB_LEN_TIME_STAMP = 8,
+ };
+
+ #define skip_time_extend(event) \
+ ((struct ring_buffer_event *)((char *)event + RB_LEN_TIME_EXTEND))
+
++#define extended_time(event) \
++ (event->type_len >= RINGBUF_TYPE_TIME_EXTEND)
++
+ static inline int rb_null_event(struct ring_buffer_event *event)
{
- struct rcu_state *rsp;
+ return event->type_len == RINGBUF_TYPE_PADDING && !event->time_delta;
+@@ -209,7 +214,7 @@
+ {
+ unsigned len = 0;
- if (cpu_is_offline(smp_processor_id()))
- return;
-- trace_rcu_utilization(TPS("Start RCU core"));
- for_each_rcu_flavor(rsp)
- __rcu_process_callbacks(rsp);
-- trace_rcu_utilization(TPS("End RCU core"));
- }
+- if (event->type_len == RINGBUF_TYPE_TIME_EXTEND) {
++ if (extended_time(event)) {
+ /* time extends include the data event after it */
+ len = RB_LEN_TIME_EXTEND;
+ event = skip_time_extend(event);
+@@ -231,7 +236,7 @@
+ {
+ unsigned length;
-+static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);
- /*
- * Schedule RCU callback invocation. If the specified type of RCU
- * does not support RCU priority boosting, just do a direct call,
-@@ -3039,19 +3071,106 @@ static void invoke_rcu_callbacks(struct rcu_state *rsp, struct rcu_data *rdp)
+- if (event->type_len == RINGBUF_TYPE_TIME_EXTEND)
++ if (extended_time(event))
+ event = skip_time_extend(event);
+
+ length = rb_event_length(event);
+@@ -248,7 +253,7 @@
+ static __always_inline void *
+ rb_event_data(struct ring_buffer_event *event)
{
- if (unlikely(!READ_ONCE(rcu_scheduler_fully_active)))
- return;
-- if (likely(!rsp->boost)) {
-- rcu_do_batch(rsp, rdp);
-- return;
-- }
-- invoke_rcu_callbacks_kthread();
-+ rcu_do_batch(rsp, rdp);
- }
+- if (event->type_len == RINGBUF_TYPE_TIME_EXTEND)
++ if (extended_time(event))
+ event = skip_time_extend(event);
+ BUG_ON(event->type_len > RINGBUF_TYPE_DATA_TYPE_LEN_MAX);
+ /* If length is in len field, then array[0] has the data */
+@@ -275,6 +280,27 @@
+ #define TS_MASK ((1ULL << TS_SHIFT) - 1)
+ #define TS_DELTA_TEST (~TS_MASK)
-+static void rcu_wake_cond(struct task_struct *t, int status)
++/**
++ * ring_buffer_event_time_stamp - return the event's extended timestamp
++ * @event: the event to get the timestamp of
++ *
++ * Returns the extended timestamp associated with a data event.
++ * An extended time_stamp is a 64-bit timestamp represented
++ * internally in a special way that makes the best use of space
++ * contained within a ring buffer event. This function decodes
++ * it and maps it to a straight u64 value.
++ */
++u64 ring_buffer_event_time_stamp(struct ring_buffer_event *event)
+{
-+ /*
-+ * If the thread is yielding, only wake it when this
-+ * is invoked from idle
-+ */
-+ if (t && (status != RCU_KTHREAD_YIELDING || is_idle_task(current)))
-+ wake_up_process(t);
-+}
++ u64 ts;
+
-+/*
-+ * Wake up this CPU's rcuc kthread to do RCU core processing.
-+ */
- static void invoke_rcu_core(void)
- {
-- if (cpu_online(smp_processor_id()))
-- raise_softirq(RCU_SOFTIRQ);
-+ unsigned long flags;
-+ struct task_struct *t;
++ ts = event->array[0];
++ ts <<= TS_SHIFT;
++ ts += event->time_delta;
+
-+ if (!cpu_online(smp_processor_id()))
-+ return;
-+ local_irq_save(flags);
-+ __this_cpu_write(rcu_cpu_has_work, 1);
-+ t = __this_cpu_read(rcu_cpu_kthread_task);
-+ if (t != NULL && current != t)
-+ rcu_wake_cond(t, __this_cpu_read(rcu_cpu_kthread_status));
-+ local_irq_restore(flags);
++ return ts;
++}
++
+ /* Flag when events were overwritten */
+ #define RB_MISSED_EVENTS (1 << 31)
+ /* Missed count stored at end */
+@@ -451,6 +477,7 @@
+ struct buffer_page *reader_page;
+ unsigned long lost_events;
+ unsigned long last_overrun;
++ unsigned long nest;
+ local_t entries_bytes;
+ local_t entries;
+ local_t overrun;
+@@ -488,6 +515,7 @@
+ u64 (*clock)(void);
+
+ struct rb_irq_work irq_work;
++ bool time_stamp_abs;
+ };
+
+ struct ring_buffer_iter {
+@@ -1387,6 +1415,16 @@
+ buffer->clock = clock;
}
-+static void rcu_cpu_kthread_park(unsigned int cpu)
++void ring_buffer_set_time_stamp_abs(struct ring_buffer *buffer, bool abs)
+{
-+ per_cpu(rcu_cpu_kthread_status, cpu) = RCU_KTHREAD_OFFCPU;
++ buffer->time_stamp_abs = abs;
+}
+
-+static int rcu_cpu_kthread_should_run(unsigned int cpu)
++bool ring_buffer_time_stamp_abs(struct ring_buffer *buffer)
+{
-+ return __this_cpu_read(rcu_cpu_has_work);
++ return buffer->time_stamp_abs;
+}
+
-+/*
-+ * Per-CPU kernel thread that invokes RCU callbacks. This replaces the
-+ * RCU softirq used in flavors and configurations of RCU that do not
-+ * support RCU priority boosting.
+ static void rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer);
+
+ static inline unsigned long rb_page_entries(struct buffer_page *bpage)
+@@ -2217,12 +2255,15 @@
+
+ /* Slow path, do not inline */
+ static noinline struct ring_buffer_event *
+-rb_add_time_stamp(struct ring_buffer_event *event, u64 delta)
++rb_add_time_stamp(struct ring_buffer_event *event, u64 delta, bool abs)
+ {
+- event->type_len = RINGBUF_TYPE_TIME_EXTEND;
++ if (abs)
++ event->type_len = RINGBUF_TYPE_TIME_STAMP;
++ else
++ event->type_len = RINGBUF_TYPE_TIME_EXTEND;
+
+- /* Not the first event on the page? */
+- if (rb_event_index(event)) {
++ /* Not the first event on the page, or not delta? */
++ if (abs || rb_event_index(event)) {
+ event->time_delta = delta & TS_MASK;
+ event->array[0] = delta >> TS_SHIFT;
+ } else {
+@@ -2265,7 +2306,9 @@
+ * add it to the start of the reserved space.
+ */
+ if (unlikely(info->add_timestamp)) {
+- event = rb_add_time_stamp(event, delta);
++ bool abs = ring_buffer_time_stamp_abs(cpu_buffer->buffer);
++
++ event = rb_add_time_stamp(event, info->delta, abs);
+ length -= RB_LEN_TIME_EXTEND;
+ delta = 0;
+ }
+@@ -2453,7 +2496,7 @@
+
+ static inline void rb_event_discard(struct ring_buffer_event *event)
+ {
+- if (event->type_len == RINGBUF_TYPE_TIME_EXTEND)
++ if (extended_time(event))
+ event = skip_time_extend(event);
+
+ /* array[0] holds the actual length for the discarded event */
+@@ -2497,10 +2540,11 @@
+ cpu_buffer->write_stamp =
+ cpu_buffer->commit_page->page->time_stamp;
+ else if (event->type_len == RINGBUF_TYPE_TIME_EXTEND) {
+- delta = event->array[0];
+- delta <<= TS_SHIFT;
+- delta += event->time_delta;
++ delta = ring_buffer_event_time_stamp(event);
+ cpu_buffer->write_stamp += delta;
++ } else if (event->type_len == RINGBUF_TYPE_TIME_STAMP) {
++ delta = ring_buffer_event_time_stamp(event);
++ cpu_buffer->write_stamp = delta;
+ } else
+ cpu_buffer->write_stamp += event->time_delta;
+ }
+@@ -2583,22 +2627,19 @@
+ trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
+ {
+ unsigned int val = cpu_buffer->current_context;
++ unsigned long pc = preempt_count();
+ int bit;
+
+- if (in_interrupt()) {
+- if (in_nmi())
+- bit = RB_CTX_NMI;
+- else if (in_irq())
+- bit = RB_CTX_IRQ;
+- else
+- bit = RB_CTX_SOFTIRQ;
+- } else
++ if (!(pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
+ bit = RB_CTX_NORMAL;
++ else
++ bit = pc & NMI_MASK ? RB_CTX_NMI :
++ pc & HARDIRQ_MASK ? RB_CTX_IRQ : RB_CTX_SOFTIRQ;
+
+- if (unlikely(val & (1 << bit)))
++ if (unlikely(val & (1 << (bit + cpu_buffer->nest))))
+ return 1;
+
+- val |= (1 << bit);
++ val |= (1 << (bit + cpu_buffer->nest));
+ cpu_buffer->current_context = val;
+
+ return 0;
+@@ -2607,7 +2648,57 @@
+ static __always_inline void
+ trace_recursive_unlock(struct ring_buffer_per_cpu *cpu_buffer)
+ {
+- cpu_buffer->current_context &= cpu_buffer->current_context - 1;
++ cpu_buffer->current_context &=
++ cpu_buffer->current_context - (1 << cpu_buffer->nest);
++}
++
++/* The recursive locking above uses 4 bits */
++#define NESTED_BITS 4
++
++/**
++ * ring_buffer_nest_start - Allow to trace while nested
++ * @buffer: The ring buffer to modify
++ *
++ * The ring buffer has a safety mechanism to prevent recursion.
++ * But there may be a case where a trace needs to be done while
++ * tracing something else. In this case, calling this function
++ * will allow that trace to nest within a currently active
++ * ring_buffer_lock_reserve().
++ *
++ * Call this function before calling another ring_buffer_lock_reserve() and
++ * call ring_buffer_nest_end() after the nested ring_buffer_unlock_commit().
+ */
-+static void rcu_cpu_kthread(unsigned int cpu)
++void ring_buffer_nest_start(struct ring_buffer *buffer)
+{
-+ unsigned int *statusp = this_cpu_ptr(&rcu_cpu_kthread_status);
-+ char work, *workp = this_cpu_ptr(&rcu_cpu_has_work);
-+ int spincnt;
++ struct ring_buffer_per_cpu *cpu_buffer;
++ int cpu;
+
-+ for (spincnt = 0; spincnt < 10; spincnt++) {
-+ trace_rcu_utilization(TPS("Start CPU kthread@rcu_wait"));
-+ local_bh_disable();
-+ *statusp = RCU_KTHREAD_RUNNING;
-+ this_cpu_inc(rcu_cpu_kthread_loops);
-+ local_irq_disable();
-+ work = *workp;
-+ *workp = 0;
-+ local_irq_enable();
-+ if (work)
-+ rcu_process_callbacks();
-+ local_bh_enable();
-+ if (*workp == 0) {
-+ trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
-+ *statusp = RCU_KTHREAD_WAITING;
-+ return;
-+ }
-+ }
-+ *statusp = RCU_KTHREAD_YIELDING;
-+ trace_rcu_utilization(TPS("Start CPU kthread@rcu_yield"));
-+ schedule_timeout_interruptible(2);
-+ trace_rcu_utilization(TPS("End CPU kthread@rcu_yield"));
-+ *statusp = RCU_KTHREAD_WAITING;
++ /* Enabled by ring_buffer_nest_end() */
++ preempt_disable_notrace();
++ cpu = raw_smp_processor_id();
++ cpu_buffer = buffer->buffers[cpu];
++ /* This is the shift value for the above recursive locking */
++ cpu_buffer->nest += NESTED_BITS;
+}
+
-+static struct smp_hotplug_thread rcu_cpu_thread_spec = {
-+ .store = &rcu_cpu_kthread_task,
-+ .thread_should_run = rcu_cpu_kthread_should_run,
-+ .thread_fn = rcu_cpu_kthread,
-+ .thread_comm = "rcuc/%u",
-+ .setup = rcu_cpu_kthread_setup,
-+ .park = rcu_cpu_kthread_park,
-+};
-+
-+/*
-+ * Spawn per-CPU RCU core processing kthreads.
++/**
++ * ring_buffer_nest_end - Allow to trace while nested
++ * @buffer: The ring buffer to modify
++ *
++ * Must be called after ring_buffer_nest_start() and after the
++ * ring_buffer_unlock_commit().
+ */
-+static int __init rcu_spawn_core_kthreads(void)
++void ring_buffer_nest_end(struct ring_buffer *buffer)
+{
++ struct ring_buffer_per_cpu *cpu_buffer;
+ int cpu;
+
-+ for_each_possible_cpu(cpu)
-+ per_cpu(rcu_cpu_has_work, cpu) = 0;
-+ BUG_ON(smpboot_register_percpu_thread(&rcu_cpu_thread_spec));
-+ return 0;
++ /* disabled by ring_buffer_nest_start() */
++ cpu = raw_smp_processor_id();
++ cpu_buffer = buffer->buffers[cpu];
++ /* This is the shift value for the above recursive locking */
++ cpu_buffer->nest -= NESTED_BITS;
++ preempt_enable_notrace();
+ }
+
+ /**
+@@ -2683,7 +2774,7 @@
+ * If this is the first commit on the page, then it has the same
+ * timestamp as the page itself.
+ */
+- if (!tail)
++ if (!tail && !ring_buffer_time_stamp_abs(cpu_buffer->buffer))
+ info->delta = 0;
+
+ /* See if we shot pass the end of this buffer page */
+@@ -2760,8 +2851,11 @@
+ /* make sure this diff is calculated here */
+ barrier();
+
+- /* Did the write stamp get updated already? */
+- if (likely(info.ts >= cpu_buffer->write_stamp)) {
++ if (ring_buffer_time_stamp_abs(buffer)) {
++ info.delta = info.ts;
++ rb_handle_timestamp(cpu_buffer, &info);
++ } else /* Did the write stamp get updated already? */
++ if (likely(info.ts >= cpu_buffer->write_stamp)) {
+ info.delta = diff;
+ if (unlikely(test_time_stamp(info.delta)))
+ rb_handle_timestamp(cpu_buffer, &info);
+@@ -3459,14 +3553,13 @@
+ return;
+
+ case RINGBUF_TYPE_TIME_EXTEND:
+- delta = event->array[0];
+- delta <<= TS_SHIFT;
+- delta += event->time_delta;
++ delta = ring_buffer_event_time_stamp(event);
+ cpu_buffer->read_stamp += delta;
+ return;
+
+ case RINGBUF_TYPE_TIME_STAMP:
+- /* FIXME: not implemented */
++ delta = ring_buffer_event_time_stamp(event);
++ cpu_buffer->read_stamp = delta;
+ return;
+
+ case RINGBUF_TYPE_DATA:
+@@ -3490,14 +3583,13 @@
+ return;
+
+ case RINGBUF_TYPE_TIME_EXTEND:
+- delta = event->array[0];
+- delta <<= TS_SHIFT;
+- delta += event->time_delta;
++ delta = ring_buffer_event_time_stamp(event);
+ iter->read_stamp += delta;
+ return;
+
+ case RINGBUF_TYPE_TIME_STAMP:
+- /* FIXME: not implemented */
++ delta = ring_buffer_event_time_stamp(event);
++ iter->read_stamp = delta;
+ return;
+
+ case RINGBUF_TYPE_DATA:
+@@ -3721,6 +3813,8 @@
+ struct buffer_page *reader;
+ int nr_loops = 0;
+
++ if (ts)
++ *ts = 0;
+ again:
+ /*
+ * We repeat when a time extend is encountered.
+@@ -3757,12 +3851,17 @@
+ goto again;
+
+ case RINGBUF_TYPE_TIME_STAMP:
+- /* FIXME: not implemented */
++ if (ts) {
++ *ts = ring_buffer_event_time_stamp(event);
++ ring_buffer_normalize_time_stamp(cpu_buffer->buffer,
++ cpu_buffer->cpu, ts);
++ }
++ /* Internal data, OK to advance */
+ rb_advance_reader(cpu_buffer);
+ goto again;
+
+ case RINGBUF_TYPE_DATA:
+- if (ts) {
++ if (ts && !(*ts)) {
+ *ts = cpu_buffer->read_stamp + event->time_delta;
+ ring_buffer_normalize_time_stamp(cpu_buffer->buffer,
+ cpu_buffer->cpu, ts);
+@@ -3787,6 +3886,9 @@
+ struct ring_buffer_event *event;
+ int nr_loops = 0;
+
++ if (ts)
++ *ts = 0;
++
+ cpu_buffer = iter->cpu_buffer;
+ buffer = cpu_buffer->buffer;
+
+@@ -3839,12 +3941,17 @@
+ goto again;
+
+ case RINGBUF_TYPE_TIME_STAMP:
+- /* FIXME: not implemented */
++ if (ts) {
++ *ts = ring_buffer_event_time_stamp(event);
++ ring_buffer_normalize_time_stamp(cpu_buffer->buffer,
++ cpu_buffer->cpu, ts);
++ }
++ /* Internal data, OK to advance */
+ rb_advance_iter(iter);
+ goto again;
+
+ case RINGBUF_TYPE_DATA:
+- if (ts) {
++ if (ts && !(*ts)) {
+ *ts = iter->read_stamp + event->time_delta;
+ ring_buffer_normalize_time_stamp(buffer,
+ cpu_buffer->cpu, ts);
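
The ring_buffer.c hunks above turn RINGBUF_TYPE_TIME_STAMP into a real, absolute timestamp record (previously a "FIXME: not implemented" case) and route every decode through ring_buffer_event_time_stamp(), which reassembles the value from the two halves stored in the event. A standalone sketch of that split; TS_SHIFT is 27 in this file, and the struct and helper names below are illustrative only:

    #include <stdint.h>

    #define TS_SHIFT 27
    #define TS_MASK  ((1ULL << TS_SHIFT) - 1)

    struct example_event {
            uint32_t time_delta;            /* low TS_SHIFT bits */
            uint64_t array0;                /* stands in for event->array[0] */
    };

    static void example_encode(struct example_event *e, uint64_t ts)
    {
            e->time_delta = ts & TS_MASK;   /* as in rb_add_time_stamp() */
            e->array0 = ts >> TS_SHIFT;
    }

    static uint64_t example_decode(const struct example_event *e)
    {
            /* Same reassembly as ring_buffer_event_time_stamp(). */
            return (e->array0 << TS_SHIFT) + e->time_delta;
    }

The other addition here, ring_buffer_nest_start()/ring_buffer_nest_end(), shifts the per-context recursion bits by NESTED_BITS so that one nested ring_buffer_lock_reserve() issued from within an active reserve is not rejected as recursion; the two calls must bracket the nested reserve/commit pair.
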
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace.c linux-4.14/kernel/trace/trace.c
+--- linux-4.14.orig/kernel/trace/trace.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1170,6 +1170,14 @@
+ ARCH_TRACE_CLOCKS
+ };
+
++bool trace_clock_in_ns(struct trace_array *tr)
++{
++ if (trace_clocks[tr->clock_id].in_ns)
++ return true;
++
++ return false;
+}
-+early_initcall(rcu_spawn_core_kthreads);
+
/*
- * Handle any core-RCU processing required by a call_rcu() invocation.
+ * trace_parser_get_init - gets the buffer for trace parser
*/
-@@ -3195,6 +3314,7 @@ void call_rcu_sched(struct rcu_head *head, rcu_callback_t func)
+@@ -2127,6 +2135,7 @@
+ struct task_struct *tsk = current;
+
+ entry->preempt_count = pc & 0xff;
++ entry->preempt_lazy_count = preempt_lazy_count();
+ entry->pid = (tsk) ? tsk->pid : 0;
+ entry->flags =
+ #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT
+@@ -2137,8 +2146,11 @@
+ ((pc & NMI_MASK ) ? TRACE_FLAG_NMI : 0) |
+ ((pc & HARDIRQ_MASK) ? TRACE_FLAG_HARDIRQ : 0) |
+ ((pc & SOFTIRQ_OFFSET) ? TRACE_FLAG_SOFTIRQ : 0) |
+- (tif_need_resched() ? TRACE_FLAG_NEED_RESCHED : 0) |
++ (tif_need_resched_now() ? TRACE_FLAG_NEED_RESCHED : 0) |
++ (need_resched_lazy() ? TRACE_FLAG_NEED_RESCHED_LAZY : 0) |
+ (test_preempt_need_resched() ? TRACE_FLAG_PREEMPT_RESCHED : 0);
++
++ entry->migrate_disable = (tsk) ? __migrate_disabled(tsk) & 0xFF : 0;
}
- EXPORT_SYMBOL_GPL(call_rcu_sched);
+ EXPORT_SYMBOL_GPL(tracing_generic_entry_update);
-+#ifndef CONFIG_PREEMPT_RT_FULL
- /*
- * Queue an RCU callback for invocation after a quicker grace period.
- */
-@@ -3203,6 +3323,7 @@ void call_rcu_bh(struct rcu_head *head, rcu_callback_t func)
- __call_rcu(head, func, &rcu_bh_state, -1, 0);
+@@ -2275,7 +2287,7 @@
+
+ *current_rb = trace_file->tr->trace_buffer.buffer;
+
+- if ((trace_file->flags &
++ if (!ring_buffer_time_stamp_abs(*current_rb) && (trace_file->flags &
+ (EVENT_FILE_FL_SOFT_DISABLED | EVENT_FILE_FL_FILTERED)) &&
+ (entry = this_cpu_read(trace_buffered_event))) {
+ /* Try to use the per cpu buffer first */
+@@ -3342,14 +3354,17 @@
+
+ static void print_lat_help_header(struct seq_file *m)
+ {
+- seq_puts(m, "# _------=> CPU# \n"
+- "# / _-----=> irqs-off \n"
+- "# | / _----=> need-resched \n"
+- "# || / _---=> hardirq/softirq \n"
+- "# ||| / _--=> preempt-depth \n"
+- "# |||| / delay \n"
+- "# cmd pid ||||| time | caller \n"
+- "# \\ / ||||| \\ | / \n");
++ seq_puts(m, "# _--------=> CPU# \n"
++ "# / _-------=> irqs-off \n"
++ "# | / _------=> need-resched \n"
++ "# || / _-----=> need-resched_lazy \n"
++ "# ||| / _----=> hardirq/softirq \n"
++ "# |||| / _---=> preempt-depth \n"
++ "# ||||| / _--=> preempt-lazy-depth\n"
++ "# |||||| / _-=> migrate-disable \n"
++ "# ||||||| / delay \n"
++ "# cmd pid |||||||| time | caller \n"
++ "# \\ / |||||||| \\ | / \n");
}
- EXPORT_SYMBOL_GPL(call_rcu_bh);
-+#endif
- /*
- * Queue an RCU callback for lazy invocation after a grace period.
-@@ -3294,6 +3415,7 @@ void synchronize_sched(void)
+ static void print_event_info(struct trace_buffer *buf, struct seq_file *m)
+@@ -3385,15 +3400,17 @@
+ tgid ? tgid_space : space);
+ seq_printf(m, "# %s / _----=> need-resched\n",
+ tgid ? tgid_space : space);
+- seq_printf(m, "# %s| / _---=> hardirq/softirq\n",
++ seq_printf(m, "# %s| / _----=> need-resched_lazy\n",
+ tgid ? tgid_space : space);
+- seq_printf(m, "# %s|| / _--=> preempt-depth\n",
++ seq_printf(m, "# %s|| / _---=> hardirq/softirq\n",
+ tgid ? tgid_space : space);
+- seq_printf(m, "# %s||| / delay\n",
++ seq_printf(m, "# %s||| / _--=> preempt-depth\n",
+ tgid ? tgid_space : space);
+- seq_printf(m, "# TASK-PID %sCPU# |||| TIMESTAMP FUNCTION\n",
++ seq_printf(m, "# %s|||| / delay\n",
++ tgid ? tgid_space : space);
++ seq_printf(m, "# TASK-PID %sCPU# ||||| TIMESTAMP FUNCTION\n",
+ tgid ? " TGID " : space);
+- seq_printf(m, "# | | %s | |||| | |\n",
++ seq_printf(m, "# | | %s | ||||| | |\n",
+ tgid ? " | " : space);
+ }
+
+@@ -4531,6 +4548,9 @@
+ #ifdef CONFIG_X86_64
+ " x86-tsc: TSC cycle counter\n"
+ #endif
++ "\n timestamp_mode\t-view the mode used to timestamp events\n"
++ " delta: Delta difference against a buffer-wide timestamp\n"
++ " absolute: Absolute (standalone) timestamp\n"
+ "\n trace_marker\t\t- Writes into this file writes into the kernel buffer\n"
+ "\n trace_marker_raw\t\t- Writes into this file writes binary data into the kernel buffer\n"
+ " tracing_cpumask\t- Limit which CPUs to trace\n"
+@@ -4707,8 +4727,9 @@
+ "\t .sym display an address as a symbol\n"
+ "\t .sym-offset display an address as a symbol and offset\n"
+ "\t .execname display a common_pid as a program name\n"
+- "\t .syscall display a syscall id as a syscall name\n\n"
+- "\t .log2 display log2 value rather than raw number\n\n"
++ "\t .syscall display a syscall id as a syscall name\n"
++ "\t .log2 display log2 value rather than raw number\n"
++ "\t .usecs display a common_timestamp in microseconds\n\n"
+ "\t The 'pause' parameter can be used to pause an existing hist\n"
+ "\t trigger or to start a hist trigger but not log any events\n"
+ "\t until told to do so. 'continue' can be used to start or\n"
+@@ -6218,7 +6239,7 @@
+ return 0;
}
- EXPORT_SYMBOL_GPL(synchronize_sched);
-+#ifndef CONFIG_PREEMPT_RT_FULL
- /**
- * synchronize_rcu_bh - wait until an rcu_bh grace period has elapsed.
- *
-@@ -3320,6 +3442,7 @@ void synchronize_rcu_bh(void)
- wait_rcu_gp(call_rcu_bh);
+-static int tracing_set_clock(struct trace_array *tr, const char *clockstr)
++int tracing_set_clock(struct trace_array *tr, const char *clockstr)
+ {
+ int i;
+
+@@ -6298,6 +6319,71 @@
+ return ret;
}
- EXPORT_SYMBOL_GPL(synchronize_rcu_bh);
+
++static int tracing_time_stamp_mode_show(struct seq_file *m, void *v)
++{
++ struct trace_array *tr = m->private;
++
++ mutex_lock(&trace_types_lock);
++
++ if (ring_buffer_time_stamp_abs(tr->trace_buffer.buffer))
++ seq_puts(m, "delta [absolute]\n");
++ else
++ seq_puts(m, "[delta] absolute\n");
++
++ mutex_unlock(&trace_types_lock);
++
++ return 0;
++}
++
++static int tracing_time_stamp_mode_open(struct inode *inode, struct file *file)
++{
++ struct trace_array *tr = inode->i_private;
++ int ret;
++
++ if (tracing_disabled)
++ return -ENODEV;
++
++ if (trace_array_get(tr))
++ return -ENODEV;
++
++ ret = single_open(file, tracing_time_stamp_mode_show, inode->i_private);
++ if (ret < 0)
++ trace_array_put(tr);
++
++ return ret;
++}
++
++int tracing_set_time_stamp_abs(struct trace_array *tr, bool abs)
++{
++ int ret = 0;
++
++ mutex_lock(&trace_types_lock);
++
++ if (abs && tr->time_stamp_abs_ref++)
++ goto out;
++
++ if (!abs) {
++ if (WARN_ON_ONCE(!tr->time_stamp_abs_ref)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ if (--tr->time_stamp_abs_ref)
++ goto out;
++ }
++
++ ring_buffer_set_time_stamp_abs(tr->trace_buffer.buffer, abs);
++
++#ifdef CONFIG_TRACER_MAX_TRACE
++ if (tr->max_buffer.buffer)
++ ring_buffer_set_time_stamp_abs(tr->max_buffer.buffer, abs);
+#endif
++ out:
++ mutex_unlock(&trace_types_lock);
++
++ return ret;
++}
++
+ struct ftrace_buffer_info {
+ struct trace_iterator iter;
+ void *spare;
+@@ -6545,6 +6631,13 @@
+ .write = tracing_clock_write,
+ };
- /**
- * get_state_synchronize_rcu - Snapshot current RCU state
-@@ -3698,6 +3821,7 @@ static void _rcu_barrier(struct rcu_state *rsp)
- mutex_unlock(&rsp->barrier_mutex);
- }
++static const struct file_operations trace_time_stamp_mode_fops = {
++ .open = tracing_time_stamp_mode_open,
++ .read = seq_read,
++ .llseek = seq_lseek,
++ .release = tracing_single_release_tr,
++};
++
+ #ifdef CONFIG_TRACER_SNAPSHOT
+ static const struct file_operations snapshot_fops = {
+ .open = tracing_snapshot_open,
+@@ -7682,6 +7775,7 @@
+ struct trace_array *tr;
+ int ret;
-+#ifndef CONFIG_PREEMPT_RT_FULL
- /**
- * rcu_barrier_bh - Wait until all in-flight call_rcu_bh() callbacks complete.
- */
-@@ -3706,6 +3830,7 @@ void rcu_barrier_bh(void)
- _rcu_barrier(&rcu_bh_state);
- }
- EXPORT_SYMBOL_GPL(rcu_barrier_bh);
-+#endif
++ mutex_lock(&event_mutex);
+ mutex_lock(&trace_types_lock);
- /**
- * rcu_barrier_sched - Wait for in-flight call_rcu_sched() callbacks.
-@@ -4227,12 +4352,13 @@ void __init rcu_init(void)
+ ret = -EEXIST;
+@@ -7714,6 +7808,7 @@
- rcu_bootup_announce();
- rcu_init_geometry();
-+#ifndef CONFIG_PREEMPT_RT_FULL
- rcu_init_one(&rcu_bh_state);
-+#endif
- rcu_init_one(&rcu_sched_state);
- if (dump_tree)
- rcu_dump_rcu_node_tree(&rcu_sched_state);
- __rcu_init_preempt();
-- open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
+ INIT_LIST_HEAD(&tr->systems);
+ INIT_LIST_HEAD(&tr->events);
++ INIT_LIST_HEAD(&tr->hist_vars);
- /*
- * We don't need protection against CPU-hotplug here because
-diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
-index e99a5234d9ed..958ac107062c 100644
---- a/kernel/rcu/tree.h
-+++ b/kernel/rcu/tree.h
-@@ -588,18 +588,18 @@ extern struct list_head rcu_struct_flavors;
- */
- extern struct rcu_state rcu_sched_state;
+ if (allocate_trace_buffers(tr, trace_buf_size) < 0)
+ goto out_free_tr;
+@@ -7737,6 +7832,7 @@
+ list_add(&tr->list, &ftrace_trace_arrays);
-+#ifndef CONFIG_PREEMPT_RT_FULL
- extern struct rcu_state rcu_bh_state;
-+#endif
+ mutex_unlock(&trace_types_lock);
++ mutex_unlock(&event_mutex);
- #ifdef CONFIG_PREEMPT_RCU
- extern struct rcu_state rcu_preempt_state;
- #endif /* #ifdef CONFIG_PREEMPT_RCU */
+ return 0;
--#ifdef CONFIG_RCU_BOOST
- DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
- DECLARE_PER_CPU(int, rcu_cpu_kthread_cpu);
- DECLARE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
- DECLARE_PER_CPU(char, rcu_cpu_has_work);
--#endif /* #ifdef CONFIG_RCU_BOOST */
+@@ -7748,6 +7844,7 @@
- #ifndef RCU_TREE_NONCORE
+ out_unlock:
+ mutex_unlock(&trace_types_lock);
++ mutex_unlock(&event_mutex);
-@@ -619,10 +619,9 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func);
- static void __init __rcu_init_preempt(void);
- static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags);
- static void rcu_preempt_boost_start_gp(struct rcu_node *rnp);
--static void invoke_rcu_callbacks_kthread(void);
- static bool rcu_is_callbacks_kthread(void);
-+static void rcu_cpu_kthread_setup(unsigned int cpu);
- #ifdef CONFIG_RCU_BOOST
--static void rcu_preempt_do_callbacks(void);
- static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
- struct rcu_node *rnp);
- #endif /* #ifdef CONFIG_RCU_BOOST */
-diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
-index 56583e764ebf..7c656f8e192f 100644
---- a/kernel/rcu/tree_plugin.h
-+++ b/kernel/rcu/tree_plugin.h
-@@ -24,25 +24,10 @@
- * Paul E. McKenney <paulmck@linux.vnet.ibm.com>
- */
+ return ret;
--#include <linux/delay.h>
--#include <linux/gfp.h>
--#include <linux/oom.h>
--#include <linux/smpboot.h>
--#include "../time/tick-internal.h"
--
- #ifdef CONFIG_RCU_BOOST
+@@ -7760,6 +7857,7 @@
+ int ret;
+ int i;
- #include "../locking/rtmutex_common.h"
++ mutex_lock(&event_mutex);
+ mutex_lock(&trace_types_lock);
--/*
-- * Control variables for per-CPU and per-rcu_node kthreads. These
-- * handle all flavors of RCU.
-- */
--static DEFINE_PER_CPU(struct task_struct *, rcu_cpu_kthread_task);
--DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
--DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
--DEFINE_PER_CPU(char, rcu_cpu_has_work);
--
- #else /* #ifdef CONFIG_RCU_BOOST */
+ ret = -ENODEV;
+@@ -7805,6 +7903,7 @@
- /*
-@@ -55,6 +40,14 @@ DEFINE_PER_CPU(char, rcu_cpu_has_work);
+ out_unlock:
+ mutex_unlock(&trace_types_lock);
++ mutex_unlock(&event_mutex);
- #endif /* #else #ifdef CONFIG_RCU_BOOST */
+ return ret;
+ }
+@@ -7862,6 +7961,9 @@
+ trace_create_file("tracing_on", 0644, d_tracer,
+ tr, &rb_simple_fops);
-+/*
-+ * Control variables for per-CPU and per-rcu_node kthreads. These
-+ * handle all flavors of RCU.
-+ */
-+DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_status);
-+DEFINE_PER_CPU(unsigned int, rcu_cpu_kthread_loops);
-+DEFINE_PER_CPU(char, rcu_cpu_has_work);
++ trace_create_file("timestamp_mode", 0444, d_tracer, tr,
++ &trace_time_stamp_mode_fops);
+
- #ifdef CONFIG_RCU_NOCB_CPU
- static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */
- static bool have_rcu_nocb_mask; /* Was rcu_nocb_mask allocated? */
-@@ -426,7 +419,7 @@ void rcu_read_unlock_special(struct task_struct *t)
- }
+ create_trace_options_dir(tr);
- /* Hardware IRQ handlers cannot block, complain if they get here. */
-- if (in_irq() || in_serving_softirq()) {
-+ if (preempt_count() & (HARDIRQ_MASK | SOFTIRQ_OFFSET)) {
- lockdep_rcu_suspicious(__FILE__, __LINE__,
- "rcu_read_unlock() from irq or softirq with blocking in critical section!!!\n");
- pr_alert("->rcu_read_unlock_special: %#x (b: %d, enq: %d nq: %d)\n",
-@@ -632,15 +625,6 @@ static void rcu_preempt_check_callbacks(void)
- t->rcu_read_unlock_special.b.need_qs = true;
+ #if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
+@@ -8271,6 +8373,92 @@
}
+ EXPORT_SYMBOL_GPL(ftrace_dump);
--#ifdef CONFIG_RCU_BOOST
--
--static void rcu_preempt_do_callbacks(void)
--{
-- rcu_do_batch(rcu_state_p, this_cpu_ptr(rcu_data_p));
--}
--
--#endif /* #ifdef CONFIG_RCU_BOOST */
--
- /*
- * Queue a preemptible-RCU callback for invocation after a grace period.
- */
-@@ -829,6 +813,19 @@ void exit_rcu(void)
-
- #endif /* #else #ifdef CONFIG_PREEMPT_RCU */
-
-+/*
-+ * If boosting, set rcuc kthreads to realtime priority.
-+ */
-+static void rcu_cpu_kthread_setup(unsigned int cpu)
++int trace_run_command(const char *buf, int (*createfn)(int, char **))
+{
-+#ifdef CONFIG_RCU_BOOST
-+ struct sched_param sp;
++ char **argv;
++ int argc, ret;
+
-+ sp.sched_priority = kthread_prio;
-+ sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
-+#endif /* #ifdef CONFIG_RCU_BOOST */
++ argc = 0;
++ ret = 0;
++ argv = argv_split(GFP_KERNEL, buf, &argc);
++ if (!argv)
++ return -ENOMEM;
++
++ if (argc)
++ ret = createfn(argc, argv);
++
++ argv_free(argv);
++
++ return ret;
+}
+
- #ifdef CONFIG_RCU_BOOST
-
- #include "../locking/rtmutex_common.h"
-@@ -860,16 +857,6 @@ static void rcu_initiate_boost_trace(struct rcu_node *rnp)
-
- #endif /* #else #ifdef CONFIG_RCU_TRACE */
-
--static void rcu_wake_cond(struct task_struct *t, int status)
--{
-- /*
-- * If the thread is yielding, only wake it when this
-- * is invoked from idle
-- */
-- if (status != RCU_KTHREAD_YIELDING || is_idle_task(current))
-- wake_up_process(t);
--}
--
- /*
- * Carry out RCU priority boosting on the task indicated by ->exp_tasks
- * or ->boost_tasks, advancing the pointer to the next task in the
-@@ -1013,23 +1000,6 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
- }
-
- /*
-- * Wake up the per-CPU kthread to invoke RCU callbacks.
-- */
--static void invoke_rcu_callbacks_kthread(void)
--{
-- unsigned long flags;
--
-- local_irq_save(flags);
-- __this_cpu_write(rcu_cpu_has_work, 1);
-- if (__this_cpu_read(rcu_cpu_kthread_task) != NULL &&
-- current != __this_cpu_read(rcu_cpu_kthread_task)) {
-- rcu_wake_cond(__this_cpu_read(rcu_cpu_kthread_task),
-- __this_cpu_read(rcu_cpu_kthread_status));
-- }
-- local_irq_restore(flags);
--}
--
--/*
- * Is the current CPU running the RCU-callbacks kthread?
- * Caller must have preemption disabled.
- */
-@@ -1083,67 +1053,6 @@ static int rcu_spawn_one_boost_kthread(struct rcu_state *rsp,
- return 0;
- }
-
--static void rcu_kthread_do_work(void)
--{
-- rcu_do_batch(&rcu_sched_state, this_cpu_ptr(&rcu_sched_data));
-- rcu_do_batch(&rcu_bh_state, this_cpu_ptr(&rcu_bh_data));
-- rcu_preempt_do_callbacks();
--}
--
--static void rcu_cpu_kthread_setup(unsigned int cpu)
--{
-- struct sched_param sp;
--
-- sp.sched_priority = kthread_prio;
-- sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
--}
--
--static void rcu_cpu_kthread_park(unsigned int cpu)
--{
-- per_cpu(rcu_cpu_kthread_status, cpu) = RCU_KTHREAD_OFFCPU;
--}
--
--static int rcu_cpu_kthread_should_run(unsigned int cpu)
--{
-- return __this_cpu_read(rcu_cpu_has_work);
--}
--
--/*
-- * Per-CPU kernel thread that invokes RCU callbacks. This replaces the
-- * RCU softirq used in flavors and configurations of RCU that do not
-- * support RCU priority boosting.
-- */
--static void rcu_cpu_kthread(unsigned int cpu)
--{
-- unsigned int *statusp = this_cpu_ptr(&rcu_cpu_kthread_status);
-- char work, *workp = this_cpu_ptr(&rcu_cpu_has_work);
-- int spincnt;
--
-- for (spincnt = 0; spincnt < 10; spincnt++) {
-- trace_rcu_utilization(TPS("Start CPU kthread@rcu_wait"));
-- local_bh_disable();
-- *statusp = RCU_KTHREAD_RUNNING;
-- this_cpu_inc(rcu_cpu_kthread_loops);
-- local_irq_disable();
-- work = *workp;
-- *workp = 0;
-- local_irq_enable();
-- if (work)
-- rcu_kthread_do_work();
-- local_bh_enable();
-- if (*workp == 0) {
-- trace_rcu_utilization(TPS("End CPU kthread@rcu_wait"));
-- *statusp = RCU_KTHREAD_WAITING;
-- return;
-- }
-- }
-- *statusp = RCU_KTHREAD_YIELDING;
-- trace_rcu_utilization(TPS("Start CPU kthread@rcu_yield"));
-- schedule_timeout_interruptible(2);
-- trace_rcu_utilization(TPS("End CPU kthread@rcu_yield"));
-- *statusp = RCU_KTHREAD_WAITING;
--}
--
- /*
- * Set the per-rcu_node kthread's affinity to cover all CPUs that are
- * served by the rcu_node in question. The CPU hotplug lock is still
-@@ -1174,26 +1083,12 @@ static void rcu_boost_kthread_setaffinity(struct rcu_node *rnp, int outgoingcpu)
- free_cpumask_var(cm);
++#define WRITE_BUFSIZE 4096
++
++ssize_t trace_parse_run_command(struct file *file, const char __user *buffer,
++ size_t count, loff_t *ppos,
++ int (*createfn)(int, char **))
++{
++ char *kbuf, *buf, *tmp;
++ int ret = 0;
++ size_t done = 0;
++ size_t size;
++
++ kbuf = kmalloc(WRITE_BUFSIZE, GFP_KERNEL);
++ if (!kbuf)
++ return -ENOMEM;
++
++ while (done < count) {
++ size = count - done;
++
++ if (size >= WRITE_BUFSIZE)
++ size = WRITE_BUFSIZE - 1;
++
++ if (copy_from_user(kbuf, buffer + done, size)) {
++ ret = -EFAULT;
++ goto out;
++ }
++ kbuf[size] = '\0';
++ buf = kbuf;
++ do {
++ tmp = strchr(buf, '\n');
++ if (tmp) {
++ *tmp = '\0';
++ size = tmp - buf + 1;
++ } else {
++ size = strlen(buf);
++ if (done + size < count) {
++ if (buf != kbuf)
++ break;
++ /* This can accept WRITE_BUFSIZE - 2 ('\n' + '\0') */
++ pr_warn("Line length is too long: Should be less than %d\n",
++ WRITE_BUFSIZE - 2);
++ ret = -EINVAL;
++ goto out;
++ }
++ }
++ done += size;
++
++ /* Remove comments */
++ tmp = strchr(buf, '#');
++
++ if (tmp)
++ *tmp = '\0';
++
++ ret = trace_run_command(buf, createfn);
++ if (ret)
++ goto out;
++ buf += size;
++
++ } while (done < count);
++ }
++ ret = done;
++
++out:
++ kfree(kbuf);
++
++ return ret;
++}
++
+ __init static int tracer_alloc_buffers(void)
+ {
+ int ring_buf_size;
+@@ -8371,6 +8559,7 @@
+
+ INIT_LIST_HEAD(&global_trace.systems);
+ INIT_LIST_HEAD(&global_trace.events);
++ INIT_LIST_HEAD(&global_trace.hist_vars);
+ list_add(&global_trace.list, &ftrace_trace_arrays);
+
+ apply_trace_boot_options();
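
tracing_set_time_stamp_abs() above reference-counts the absolute-timestamp mode so that several hist triggers can request it independently; only the first enable and the last disable actually switch the ring buffer, and the new timestamp_mode file (read-only) reports which mode is active. A minimal sketch of that on/off refcount pattern, with hypothetical names and the locking and ring-buffer call reduced to a flag:

    #include <stdbool.h>

    static unsigned int abs_ref;    /* callers currently holding the mode enabled */
    static bool abs_mode;           /* stands in for ring_buffer_set_time_stamp_abs() */

    /* Returns 0 on success, -1 on an unbalanced disable (the -EINVAL path). */
    static int example_set_abs(bool abs)
    {
            if (abs && abs_ref++)
                    return 0;               /* someone else already enabled it */

            if (!abs) {
                    if (abs_ref == 0)
                            return -1;      /* disable without a matching enable */
                    if (--abs_ref)
                            return 0;       /* other users remain */
            }

            abs_mode = abs;                 /* first enable or last disable */
            return 0;
    }

trace_run_command() and trace_parse_run_command(), also added above, are generic line-oriented command parsers: they split the input on newlines, strip '#' comments, and hand each line to a caller-supplied createfn.
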
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace_events.c linux-4.14/kernel/trace/trace_events.c
+--- linux-4.14.orig/kernel/trace/trace_events.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace_events.c 2018-09-05 11:05:07.000000000 +0200
+@@ -187,6 +187,8 @@
+ __common_field(unsigned char, flags);
+ __common_field(unsigned char, preempt_count);
+ __common_field(int, pid);
++ __common_field(unsigned short, migrate_disable);
++ __common_field(unsigned short, padding);
+
+ return ret;
}
+@@ -1406,8 +1408,8 @@
+ return -ENODEV;
--static struct smp_hotplug_thread rcu_cpu_thread_spec = {
-- .store = &rcu_cpu_kthread_task,
-- .thread_should_run = rcu_cpu_kthread_should_run,
-- .thread_fn = rcu_cpu_kthread,
-- .thread_comm = "rcuc/%u",
-- .setup = rcu_cpu_kthread_setup,
-- .park = rcu_cpu_kthread_park,
--};
--
- /*
- * Spawn boost kthreads -- called as soon as the scheduler is running.
- */
- static void __init rcu_spawn_boost_kthreads(void)
+ /* Make sure the system still exists */
+- mutex_lock(&trace_types_lock);
+ mutex_lock(&event_mutex);
++ mutex_lock(&trace_types_lock);
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+ list_for_each_entry(dir, &tr->systems, list) {
+ if (dir == inode->i_private) {
+@@ -1421,8 +1423,8 @@
+ }
+ }
+ exit_loop:
+- mutex_unlock(&event_mutex);
+ mutex_unlock(&trace_types_lock);
++ mutex_unlock(&event_mutex);
+
+ if (!system)
+ return -ENODEV;
+@@ -2308,15 +2310,15 @@
+ int trace_add_event_call(struct trace_event_call *call)
{
- struct rcu_node *rnp;
-- int cpu;
--
-- for_each_possible_cpu(cpu)
-- per_cpu(rcu_cpu_has_work, cpu) = 0;
-- BUG_ON(smpboot_register_percpu_thread(&rcu_cpu_thread_spec));
- rcu_for_each_leaf_node(rcu_state_p, rnp)
- (void)rcu_spawn_one_boost_kthread(rcu_state_p, rnp);
- }
-@@ -1216,11 +1111,6 @@ static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
- raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ int ret;
+- mutex_lock(&trace_types_lock);
+ mutex_lock(&event_mutex);
++ mutex_lock(&trace_types_lock);
+
+ ret = __register_event(call, NULL);
+ if (ret >= 0)
+ __add_event_to_tracers(call);
+
+- mutex_unlock(&event_mutex);
+ mutex_unlock(&trace_types_lock);
++ mutex_unlock(&event_mutex);
+ return ret;
}
--static void invoke_rcu_callbacks_kthread(void)
--{
-- WARN_ON_ONCE(1);
--}
--
- static bool rcu_is_callbacks_kthread(void)
+@@ -2370,13 +2372,13 @@
{
- return false;
-@@ -1244,7 +1134,7 @@ static void rcu_prepare_kthreads(int cpu)
-
- #endif /* #else #ifdef CONFIG_RCU_BOOST */
+ int ret;
--#if !defined(CONFIG_RCU_FAST_NO_HZ)
-+#if !defined(CONFIG_RCU_FAST_NO_HZ) || defined(CONFIG_PREEMPT_RT_FULL)
+- mutex_lock(&trace_types_lock);
+ mutex_lock(&event_mutex);
++ mutex_lock(&trace_types_lock);
+ down_write(&trace_event_sem);
+ ret = probe_remove_event_call(call);
+ up_write(&trace_event_sem);
+- mutex_unlock(&event_mutex);
+ mutex_unlock(&trace_types_lock);
++ mutex_unlock(&event_mutex);
- /*
- * Check to see if any future RCU-related work will need to be done
-@@ -1261,7 +1151,9 @@ int rcu_needs_cpu(u64 basemono, u64 *nextevt)
- return IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL)
- ? 0 : rcu_cpu_has_callbacks(NULL);
+ return ret;
}
-+#endif /* !defined(CONFIG_RCU_FAST_NO_HZ) || defined(CONFIG_PREEMPT_RT_FULL) */
+@@ -2438,8 +2440,8 @@
+ {
+ struct module *mod = data;
-+#if !defined(CONFIG_RCU_FAST_NO_HZ)
- /*
- * Because we do not have RCU_FAST_NO_HZ, don't bother cleaning up
- * after it.
-@@ -1357,6 +1249,8 @@ static bool __maybe_unused rcu_try_advance_all_cbs(void)
- return cbs_ready;
- }
+- mutex_lock(&trace_types_lock);
+ mutex_lock(&event_mutex);
++ mutex_lock(&trace_types_lock);
+ switch (val) {
+ case MODULE_STATE_COMING:
+ trace_module_add_events(mod);
+@@ -2448,8 +2450,8 @@
+ trace_module_remove_events(mod);
+ break;
+ }
+- mutex_unlock(&event_mutex);
+ mutex_unlock(&trace_types_lock);
++ mutex_unlock(&event_mutex);
-+#ifndef CONFIG_PREEMPT_RT_FULL
-+
- /*
- * Allow the CPU to enter dyntick-idle mode unless it has callbacks ready
- * to invoke. If the CPU has callbacks, try to advance them. Tell the
-@@ -1402,6 +1296,7 @@ int rcu_needs_cpu(u64 basemono, u64 *nextevt)
- *nextevt = basemono + dj * TICK_NSEC;
return 0;
}
-+#endif /* #ifndef CONFIG_PREEMPT_RT_FULL */
+@@ -2964,24 +2966,24 @@
+ * creates the event hierarchy in the @parent/events directory.
+ *
+ * Returns 0 on success.
++ *
++ * Must be called with event_mutex held.
+ */
+ int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr)
+ {
+ int ret;
- /*
- * Prepare a CPU for idle from an RCU perspective. The first major task
-diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
-index 4f6db7e6a117..ee02e1e1b3e5 100644
---- a/kernel/rcu/update.c
-+++ b/kernel/rcu/update.c
-@@ -62,7 +62,7 @@
- #ifndef CONFIG_TINY_RCU
- module_param(rcu_expedited, int, 0);
- module_param(rcu_normal, int, 0);
--static int rcu_normal_after_boot;
-+static int rcu_normal_after_boot = IS_ENABLED(CONFIG_PREEMPT_RT_FULL);
- module_param(rcu_normal_after_boot, int, 0);
- #endif /* #ifndef CONFIG_TINY_RCU */
+- mutex_lock(&event_mutex);
++ lockdep_assert_held(&event_mutex);
-@@ -132,8 +132,7 @@ bool rcu_gp_is_normal(void)
- }
- EXPORT_SYMBOL_GPL(rcu_gp_is_normal);
+ ret = create_event_toplevel_files(parent, tr);
+ if (ret)
+- goto out_unlock;
++ goto out;
--static atomic_t rcu_expedited_nesting =
-- ATOMIC_INIT(IS_ENABLED(CONFIG_RCU_EXPEDITE_BOOT) ? 1 : 0);
-+static atomic_t rcu_expedited_nesting = ATOMIC_INIT(1);
+ down_write(&trace_event_sem);
+ __trace_add_event_dirs(tr);
+ up_write(&trace_event_sem);
- /*
- * Should normal grace-period primitives be expedited? Intended for
-@@ -182,8 +181,7 @@ EXPORT_SYMBOL_GPL(rcu_unexpedite_gp);
- */
- void rcu_end_inkernel_boot(void)
- {
-- if (IS_ENABLED(CONFIG_RCU_EXPEDITE_BOOT))
-- rcu_unexpedite_gp();
-+ rcu_unexpedite_gp();
- if (rcu_normal_after_boot)
- WRITE_ONCE(rcu_normal, 1);
- }
-@@ -298,6 +296,7 @@ int rcu_read_lock_held(void)
+- out_unlock:
+- mutex_unlock(&event_mutex);
+-
++ out:
+ return ret;
}
- EXPORT_SYMBOL_GPL(rcu_read_lock_held);
-+#ifndef CONFIG_PREEMPT_RT_FULL
- /**
- * rcu_read_lock_bh_held() - might we be in RCU-bh read-side critical section?
- *
-@@ -324,6 +323,7 @@ int rcu_read_lock_bh_held(void)
- return in_softirq() || irqs_disabled();
+@@ -3010,9 +3012,10 @@
+ return ret;
}
- EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
-+#endif
- #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
++/* Must be called with event_mutex held */
+ int event_trace_del_tracer(struct trace_array *tr)
+ {
+- mutex_lock(&event_mutex);
++ lockdep_assert_held(&event_mutex);
-diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
-index 5e59b832ae2b..7337a7f60e3f 100644
---- a/kernel/sched/Makefile
-+++ b/kernel/sched/Makefile
-@@ -17,7 +17,7 @@ endif
+ /* Disable any event triggers and associated soft-disabled events */
+ clear_event_triggers(tr);
+@@ -3033,8 +3036,6 @@
- obj-y += core.o loadavg.o clock.o cputime.o
- obj-y += idle_task.o fair.o rt.o deadline.o stop_task.o
--obj-y += wait.o swait.o completion.o idle.o
-+obj-y += wait.o swait.o swork.o completion.o idle.o
- obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o
- obj-$(CONFIG_SCHED_AUTOGROUP) += auto_group.o
- obj-$(CONFIG_SCHEDSTATS) += stats.o
-diff --git a/kernel/sched/completion.c b/kernel/sched/completion.c
-index 8d0f35debf35..b62cf6400fe0 100644
---- a/kernel/sched/completion.c
-+++ b/kernel/sched/completion.c
-@@ -30,10 +30,10 @@ void complete(struct completion *x)
- {
- unsigned long flags;
+ tr->event_dir = NULL;
-- spin_lock_irqsave(&x->wait.lock, flags);
-+ raw_spin_lock_irqsave(&x->wait.lock, flags);
- x->done++;
-- __wake_up_locked(&x->wait, TASK_NORMAL, 1);
-- spin_unlock_irqrestore(&x->wait.lock, flags);
-+ swake_up_locked(&x->wait);
-+ raw_spin_unlock_irqrestore(&x->wait.lock, flags);
+- mutex_unlock(&event_mutex);
+-
+ return 0;
}
- EXPORT_SYMBOL(complete);
-@@ -50,10 +50,10 @@ void complete_all(struct completion *x)
- {
- unsigned long flags;
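
The trace_events.c changes above are all about lock ordering: every path now takes event_mutex before trace_types_lock, and event_trace_add_tracer()/event_trace_del_tracer() stop taking event_mutex themselves and instead assert that the caller already holds it. A toy illustration of the pattern, with hypothetical locks rather than the tracing ones:

    #include <linux/mutex.h>
    #include <linux/lockdep.h>

    static DEFINE_MUTEX(outer_lock);        /* plays the role of event_mutex */
    static DEFINE_MUTEX(inner_lock);        /* plays the role of trace_types_lock */

    /* Every path nests the two locks the same way: outer first, inner second. */
    static void example_update(void (*op)(void))
    {
            mutex_lock(&outer_lock);
            mutex_lock(&inner_lock);
            op();
            mutex_unlock(&inner_lock);
            mutex_unlock(&outer_lock);
    }

    /* Helpers called with the outer lock already held document that for lockdep. */
    static void example_helper(void)
    {
            lockdep_assert_held(&outer_lock);
            /* ... work that relies on the caller's locking ... */
    }

A single consistent order avoids ABBA deadlocks between the two mutexes, which is why the lock/unlock pairs above were reordered.
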
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace_events_hist.c linux-4.14/kernel/trace/trace_events_hist.c
+--- linux-4.14.orig/kernel/trace/trace_events_hist.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace_events_hist.c 2018-09-05 11:05:07.000000000 +0200
+@@ -20,13 +20,39 @@
+ #include <linux/slab.h>
+ #include <linux/stacktrace.h>
+ #include <linux/rculist.h>
++#include <linux/tracefs.h>
-- spin_lock_irqsave(&x->wait.lock, flags);
-+ raw_spin_lock_irqsave(&x->wait.lock, flags);
- x->done += UINT_MAX/2;
-- __wake_up_locked(&x->wait, TASK_NORMAL, 0);
-- spin_unlock_irqrestore(&x->wait.lock, flags);
-+ swake_up_all_locked(&x->wait);
-+ raw_spin_unlock_irqrestore(&x->wait.lock, flags);
- }
- EXPORT_SYMBOL(complete_all);
+ #include "tracing_map.h"
+ #include "trace.h"
-@@ -62,20 +62,20 @@ do_wait_for_common(struct completion *x,
- long (*action)(long), long timeout, int state)
- {
- if (!x->done) {
-- DECLARE_WAITQUEUE(wait, current);
-+ DECLARE_SWAITQUEUE(wait);
++#define SYNTH_SYSTEM "synthetic"
++#define SYNTH_FIELDS_MAX 16
++
++#define STR_VAR_LEN_MAX 32 /* must be multiple of sizeof(u64) */
++
+ struct hist_field;
-- __add_wait_queue_tail_exclusive(&x->wait, &wait);
-+ __prepare_to_swait(&x->wait, &wait);
- do {
- if (signal_pending_state(state, current)) {
- timeout = -ERESTARTSYS;
- break;
- }
- __set_current_state(state);
-- spin_unlock_irq(&x->wait.lock);
-+ raw_spin_unlock_irq(&x->wait.lock);
- timeout = action(timeout);
-- spin_lock_irq(&x->wait.lock);
-+ raw_spin_lock_irq(&x->wait.lock);
- } while (!x->done && timeout);
-- __remove_wait_queue(&x->wait, &wait);
-+ __finish_swait(&x->wait, &wait);
- if (!x->done)
- return timeout;
- }
-@@ -89,9 +89,9 @@ __wait_for_common(struct completion *x,
+-typedef u64 (*hist_field_fn_t) (struct hist_field *field, void *event);
++typedef u64 (*hist_field_fn_t) (struct hist_field *field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event);
++
++#define HIST_FIELD_OPERANDS_MAX 2
++#define HIST_FIELDS_MAX (TRACING_MAP_FIELDS_MAX + TRACING_MAP_VARS_MAX)
++#define HIST_ACTIONS_MAX 8
++
++enum field_op_id {
++ FIELD_OP_NONE,
++ FIELD_OP_PLUS,
++ FIELD_OP_MINUS,
++ FIELD_OP_UNARY_MINUS,
++};
++
++struct hist_var {
++ char *name;
++ struct hist_trigger_data *hist_data;
++ unsigned int idx;
++};
+
+ struct hist_field {
+ struct ftrace_event_field *field;
+@@ -34,26 +60,50 @@
+ hist_field_fn_t fn;
+ unsigned int size;
+ unsigned int offset;
++ unsigned int is_signed;
++ const char *type;
++ struct hist_field *operands[HIST_FIELD_OPERANDS_MAX];
++ struct hist_trigger_data *hist_data;
++ struct hist_var var;
++ enum field_op_id operator;
++ char *system;
++ char *event_name;
++ char *name;
++ unsigned int var_idx;
++ unsigned int var_ref_idx;
++ bool read_once;
+ };
+
+-static u64 hist_field_none(struct hist_field *field, void *event)
++static u64 hist_field_none(struct hist_field *field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
{
- might_sleep();
+ return 0;
+ }
-- spin_lock_irq(&x->wait.lock);
-+ raw_spin_lock_irq(&x->wait.lock);
- timeout = do_wait_for_common(x, action, timeout, state);
-- spin_unlock_irq(&x->wait.lock);
-+ raw_spin_unlock_irq(&x->wait.lock);
- return timeout;
+-static u64 hist_field_counter(struct hist_field *field, void *event)
++static u64 hist_field_counter(struct hist_field *field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
+ {
+ return 1;
}
-@@ -277,12 +277,12 @@ bool try_wait_for_completion(struct completion *x)
- if (!READ_ONCE(x->done))
- return 0;
+-static u64 hist_field_string(struct hist_field *hist_field, void *event)
++static u64 hist_field_string(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
+ {
+ char *addr = (char *)(event + hist_field->field->offset);
-- spin_lock_irqsave(&x->wait.lock, flags);
-+ raw_spin_lock_irqsave(&x->wait.lock, flags);
- if (!x->done)
- ret = 0;
- else
- x->done--;
-- spin_unlock_irqrestore(&x->wait.lock, flags);
-+ raw_spin_unlock_irqrestore(&x->wait.lock, flags);
- return ret;
+ return (u64)(unsigned long)addr;
}
- EXPORT_SYMBOL(try_wait_for_completion);
-@@ -311,7 +311,7 @@ bool completion_done(struct completion *x)
- * after it's acquired the lock.
- */
- smp_rmb();
-- spin_unlock_wait(&x->wait.lock);
-+ raw_spin_unlock_wait(&x->wait.lock);
- return true;
+
+-static u64 hist_field_dynstring(struct hist_field *hist_field, void *event)
++static u64 hist_field_dynstring(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
+ {
+ u32 str_item = *(u32 *)(event + hist_field->field->offset);
+ int str_loc = str_item & 0xffff;
+@@ -62,22 +112,74 @@
+ return (u64)(unsigned long)addr;
}
- EXPORT_SYMBOL(completion_done);
-diff --git a/kernel/sched/core.c b/kernel/sched/core.c
-index 154fd689fe02..a6aa5801b21e 100644
---- a/kernel/sched/core.c
-+++ b/kernel/sched/core.c
-@@ -129,7 +129,11 @@ const_debug unsigned int sysctl_sched_features =
- * Number of tasks to iterate in a single balance run.
- * Limited because this is done with IRQs disabled.
- */
-+#ifndef CONFIG_PREEMPT_RT_FULL
- const_debug unsigned int sysctl_sched_nr_migrate = 32;
-+#else
-+const_debug unsigned int sysctl_sched_nr_migrate = 8;
-+#endif
- /*
- * period over which we average the RT time consumption, measured
-@@ -345,6 +349,7 @@ static void init_rq_hrtick(struct rq *rq)
+-static u64 hist_field_pstring(struct hist_field *hist_field, void *event)
++static u64 hist_field_pstring(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
+ {
+ char **addr = (char **)(event + hist_field->field->offset);
- hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
- rq->hrtick_timer.function = hrtick;
-+ rq->hrtick_timer.irqsafe = 1;
- }
- #else /* CONFIG_SCHED_HRTICK */
- static inline void hrtick_clear(struct rq *rq)
-@@ -449,7 +454,7 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
- head->lastp = &node->next;
+ return (u64)(unsigned long)*addr;
}
--void wake_up_q(struct wake_q_head *head)
-+void __wake_up_q(struct wake_q_head *head, bool sleeper)
+-static u64 hist_field_log2(struct hist_field *hist_field, void *event)
++static u64 hist_field_log2(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
{
- struct wake_q_node *node = head->first;
+- u64 val = *(u64 *)(event + hist_field->field->offset);
++ struct hist_field *operand = hist_field->operands[0];
++
++ u64 val = operand->fn(operand, elt, rbe, event);
-@@ -466,7 +471,10 @@ void wake_up_q(struct wake_q_head *head)
- * wake_up_process() implies a wmb() to pair with the queueing
- * in wake_q_add() so as not to miss wakeups.
- */
-- wake_up_process(task);
-+ if (sleeper)
-+ wake_up_lock_sleeper(task);
-+ else
-+ wake_up_process(task);
- put_task_struct(task);
- }
- }
-@@ -502,6 +510,38 @@ void resched_curr(struct rq *rq)
- trace_sched_wake_idle_without_ipi(cpu);
+ return (u64) ilog2(roundup_pow_of_two(val));
}
-+#ifdef CONFIG_PREEMPT_LAZY
-+void resched_curr_lazy(struct rq *rq)
++static u64 hist_field_plus(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
+{
-+ struct task_struct *curr = rq->curr;
-+ int cpu;
++ struct hist_field *operand1 = hist_field->operands[0];
++ struct hist_field *operand2 = hist_field->operands[1];
++
++ u64 val1 = operand1->fn(operand1, elt, rbe, event);
++ u64 val2 = operand2->fn(operand2, elt, rbe, event);
++
++ return val1 + val2;
++}
++
++static u64 hist_field_minus(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
++{
++ struct hist_field *operand1 = hist_field->operands[0];
++ struct hist_field *operand2 = hist_field->operands[1];
++
++ u64 val1 = operand1->fn(operand1, elt, rbe, event);
++ u64 val2 = operand2->fn(operand2, elt, rbe, event);
++
++ return val1 - val2;
++}
++
++static u64 hist_field_unary_minus(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
++{
++ struct hist_field *operand = hist_field->operands[0];
++
++ s64 sval = (s64)operand->fn(operand, elt, rbe, event);
++ u64 val = (u64)-sval;
++
++ return val;
++}
++
+ #define DEFINE_HIST_FIELD_FN(type) \
+-static u64 hist_field_##type(struct hist_field *hist_field, void *event)\
++ static u64 hist_field_##type(struct hist_field *hist_field, \
++ struct tracing_map_elt *elt, \
++ struct ring_buffer_event *rbe, \
++ void *event) \
+ { \
+ type *addr = (type *)(event + hist_field->field->offset); \
+ \
+@@ -110,16 +212,29 @@
+ #define HIST_KEY_SIZE_MAX (MAX_FILTER_STR_VAL + HIST_STACKTRACE_SIZE)
+
+ enum hist_field_flags {
+- HIST_FIELD_FL_HITCOUNT = 1,
+- HIST_FIELD_FL_KEY = 2,
+- HIST_FIELD_FL_STRING = 4,
+- HIST_FIELD_FL_HEX = 8,
+- HIST_FIELD_FL_SYM = 16,
+- HIST_FIELD_FL_SYM_OFFSET = 32,
+- HIST_FIELD_FL_EXECNAME = 64,
+- HIST_FIELD_FL_SYSCALL = 128,
+- HIST_FIELD_FL_STACKTRACE = 256,
+- HIST_FIELD_FL_LOG2 = 512,
++ HIST_FIELD_FL_HITCOUNT = 1 << 0,
++ HIST_FIELD_FL_KEY = 1 << 1,
++ HIST_FIELD_FL_STRING = 1 << 2,
++ HIST_FIELD_FL_HEX = 1 << 3,
++ HIST_FIELD_FL_SYM = 1 << 4,
++ HIST_FIELD_FL_SYM_OFFSET = 1 << 5,
++ HIST_FIELD_FL_EXECNAME = 1 << 6,
++ HIST_FIELD_FL_SYSCALL = 1 << 7,
++ HIST_FIELD_FL_STACKTRACE = 1 << 8,
++ HIST_FIELD_FL_LOG2 = 1 << 9,
++ HIST_FIELD_FL_TIMESTAMP = 1 << 10,
++ HIST_FIELD_FL_TIMESTAMP_USECS = 1 << 11,
++ HIST_FIELD_FL_VAR = 1 << 12,
++ HIST_FIELD_FL_EXPR = 1 << 13,
++ HIST_FIELD_FL_VAR_REF = 1 << 14,
++ HIST_FIELD_FL_CPU = 1 << 15,
++ HIST_FIELD_FL_ALIAS = 1 << 16,
++};
++
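++/*
++ * A single field usually carries several of these flags; e.g. a key
++ * specified as "keys=call_site.hex" would have HIST_FIELD_FL_KEY and
++ * HIST_FIELD_FL_HEX set, while "keys=stacktrace" adds
++ * HIST_FIELD_FL_STACKTRACE (field names chosen only as an example).
++ */
++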
++struct var_defs {
++ unsigned int n_vars;
++ char *name[TRACING_MAP_VARS_MAX];
++ char *expr[TRACING_MAP_VARS_MAX];
+ };
+
+ struct hist_trigger_attrs {
+@@ -127,25 +242,1474 @@
+ char *vals_str;
+ char *sort_key_str;
+ char *name;
++ char *clock;
+ bool pause;
+ bool cont;
+ bool clear;
++ bool ts_in_usecs;
+ unsigned int map_bits;
++
++ char *assignment_str[TRACING_MAP_VARS_MAX];
++ unsigned int n_assignments;
++
++ char *action_str[HIST_ACTIONS_MAX];
++ unsigned int n_actions;
++
++ struct var_defs var_defs;
++};
++
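++/*
++ * As a rough, made-up example, a trigger string such as
++ *
++ *   hist:keys=pid:ts0=common_timestamp.usecs:onmax($lat).save(comm):clock=global
++ *
++ * would be split so that "ts0=common_timestamp.usecs" lands in
++ * assignment_str[], "onmax($lat).save(comm)" in action_str[], and
++ * "global" in clock, before any further parsing takes place.
++ */
++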
++struct field_var {
++ struct hist_field *var;
++ struct hist_field *val;
++};
+
-+ if (!sched_feat(PREEMPT_LAZY)) {
-+ resched_curr(rq);
-+ return;
-+ }
++struct field_var_hist {
++ struct hist_trigger_data *hist_data;
++ char *cmd;
+ };
+
+ struct hist_trigger_data {
+- struct hist_field *fields[TRACING_MAP_FIELDS_MAX];
++ struct hist_field *fields[HIST_FIELDS_MAX];
+ unsigned int n_vals;
+ unsigned int n_keys;
+ unsigned int n_fields;
++ unsigned int n_vars;
+ unsigned int key_size;
+ struct tracing_map_sort_key sort_keys[TRACING_MAP_SORT_KEYS_MAX];
+ unsigned int n_sort_keys;
+ struct trace_event_file *event_file;
+ struct hist_trigger_attrs *attrs;
+ struct tracing_map *map;
++ bool enable_timestamps;
++ bool remove;
++ struct hist_field *var_refs[TRACING_MAP_VARS_MAX];
++ unsigned int n_var_refs;
++
++ struct action_data *actions[HIST_ACTIONS_MAX];
++ unsigned int n_actions;
++
++ struct hist_field *synth_var_refs[SYNTH_FIELDS_MAX];
++ unsigned int n_synth_var_refs;
++ struct field_var *field_vars[SYNTH_FIELDS_MAX];
++ unsigned int n_field_vars;
++ unsigned int n_field_var_str;
++ struct field_var_hist *field_var_hists[SYNTH_FIELDS_MAX];
++ unsigned int n_field_var_hists;
++
++ struct field_var *max_vars[SYNTH_FIELDS_MAX];
++ unsigned int n_max_vars;
++ unsigned int n_max_var_str;
++};
+
-+ lockdep_assert_held(&rq->lock);
++struct synth_field {
++ char *type;
++ char *name;
++ size_t size;
++ bool is_signed;
++ bool is_string;
++};
+
-+ if (test_tsk_need_resched(curr))
-+ return;
++struct synth_event {
++ struct list_head list;
++ int ref;
++ char *name;
++ struct synth_field **fields;
++ unsigned int n_fields;
++ unsigned int n_u64;
++ struct trace_event_class class;
++ struct trace_event_call call;
++ struct tracepoint *tp;
++};
+
-+ if (test_tsk_need_resched_lazy(curr))
-+ return;
++struct action_data;
+
-+ set_tsk_need_resched_lazy(curr);
++typedef void (*action_fn_t) (struct hist_trigger_data *hist_data,
++ struct tracing_map_elt *elt, void *rec,
++ struct ring_buffer_event *rbe,
++ struct action_data *data, u64 *var_ref_vals);
+
-+ cpu = cpu_of(rq);
-+ if (cpu == smp_processor_id())
++struct action_data {
++ action_fn_t fn;
++ unsigned int n_params;
++ char *params[SYNTH_FIELDS_MAX];
++
++ union {
++ struct {
++ unsigned int var_ref_idx;
++ char *match_event;
++ char *match_event_system;
++ char *synth_event_name;
++ struct synth_event *synth_event;
++ } onmatch;
++
++ struct {
++ char *var_str;
++ char *fn_name;
++ unsigned int max_var_ref_idx;
++ struct hist_field *max_var;
++ struct hist_field *var;
++ } onmax;
++ };
++};
++
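++/*
++ * The onmatch/onmax members of struct action_data above roughly
++ * correspond to hypothetical trigger clauses such as
++ *
++ *   onmatch(sched.sched_waking).wakeup_latency($lat,next_pid)
++ *   onmax($lat).save(next_comm,prev_pid)
++ *
++ * where onmatch() generates the named synthetic event when the keys
++ * match and onmax() saves the listed fields whenever the variable
++ * reaches a new maximum.
++ */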
++
++static char last_hist_cmd[MAX_FILTER_STR_VAL];
++static char hist_err_str[MAX_FILTER_STR_VAL];
++
++static void last_cmd_set(char *str)
++{
++ if (!str)
+ return;
+
-+ /* NEED_RESCHED_LAZY must be visible before we test polling */
-+ smp_mb();
-+ if (!tsk_is_polling(curr))
-+ smp_send_reschedule(cpu);
++ strncpy(last_hist_cmd, str, MAX_FILTER_STR_VAL - 1);
+}
-+#endif
+
- void resched_cpu(int cpu)
- {
- struct rq *rq = cpu_rq(cpu);
-@@ -525,11 +565,14 @@ void resched_cpu(int cpu)
- */
- int get_nohz_timer_target(void)
- {
-- int i, cpu = smp_processor_id();
-+ int i, cpu;
- struct sched_domain *sd;
-
-+ preempt_disable_rt();
-+ cpu = smp_processor_id();
++static void hist_err(char *str, char *var)
++{
++ int maxlen = MAX_FILTER_STR_VAL - 1;
+
- if (!idle_cpu(cpu) && is_housekeeping_cpu(cpu))
-- return cpu;
-+ goto preempt_en_rt;
-
- rcu_read_lock();
- for_each_domain(cpu, sd) {
-@@ -548,6 +591,8 @@ int get_nohz_timer_target(void)
- cpu = housekeeping_any_cpu();
- unlock:
- rcu_read_unlock();
-+preempt_en_rt:
-+ preempt_enable_rt();
- return cpu;
- }
- /*
-@@ -1100,6 +1145,11 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
-
- lockdep_assert_held(&p->pi_lock);
-
-+ if (__migrate_disabled(p)) {
-+ cpumask_copy(&p->cpus_allowed, new_mask);
++ if (!str)
+ return;
-+ }
+
- queued = task_on_rq_queued(p);
- running = task_current(rq, p);
-
-@@ -1122,6 +1172,84 @@ void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
- set_curr_task(rq, p);
- }
-
-+static DEFINE_PER_CPU(struct cpumask, sched_cpumasks);
-+static DEFINE_MUTEX(sched_down_mutex);
-+static cpumask_t sched_down_cpumask;
++ if (strlen(hist_err_str))
++ return;
++
++ if (!var)
++ var = "";
++
++ if (strlen(hist_err_str) + strlen(str) + strlen(var) > maxlen)
++ return;
+
-+void tell_sched_cpu_down_begin(int cpu)
++ strcat(hist_err_str, str);
++ strcat(hist_err_str, var);
++}
++
++static void hist_err_event(char *str, char *system, char *event, char *var)
+{
-+ mutex_lock(&sched_down_mutex);
-+ cpumask_set_cpu(cpu, &sched_down_cpumask);
-+ mutex_unlock(&sched_down_mutex);
++ char err[MAX_FILTER_STR_VAL];
++
++ if (system && var)
++ snprintf(err, MAX_FILTER_STR_VAL, "%s.%s.%s", system, event, var);
++ else if (system)
++ snprintf(err, MAX_FILTER_STR_VAL, "%s.%s", system, event);
++ else
++ strncpy(err, var, MAX_FILTER_STR_VAL);
++
++ hist_err(str, err);
+}
+
-+void tell_sched_cpu_down_done(int cpu)
++static void hist_err_clear(void)
+{
-+ mutex_lock(&sched_down_mutex);
-+ cpumask_clear_cpu(cpu, &sched_down_cpumask);
-+ mutex_unlock(&sched_down_mutex);
++ hist_err_str[0] = '\0';
+}
+
-+/**
-+ * migrate_me - try to move the current task off this cpu
-+ *
-+ * Used by the pin_current_cpu() code to try to get tasks
-+ * to move off the current CPU as it is going down.
-+ * It will only move the task if the task isn't pinned to
-+ * the CPU (with migrate_disable, affinity or NO_SETAFFINITY)
-+ * and the task is in a RUNNING state. Otherwise moving the
-+ * task would wake it up (change its state to running) when
-+ * the task did not expect it.
-+ *
-+ * Returns 1 if it succeeded in moving the current task
-+ * 0 otherwise.
-+ */
-+int migrate_me(void)
++static bool have_hist_err(void)
+{
-+ struct task_struct *p = current;
-+ struct migration_arg arg;
-+ struct cpumask *cpumask;
-+ struct cpumask *mask;
-+ unsigned int dest_cpu;
-+ struct rq_flags rf;
-+ struct rq *rq;
++ if (strlen(hist_err_str))
++ return true;
+
-+ /*
-+ * We cannot migrate tasks bound to a CPU or tasks that are not
-+ * running. The movement of the task will wake it up.
-+ */
-+ if (p->flags & PF_NO_SETAFFINITY || p->state)
-+ return 0;
++ return false;
++}
+
-+ mutex_lock(&sched_down_mutex);
-+ rq = task_rq_lock(p, &rf);
++static LIST_HEAD(synth_event_list);
++static DEFINE_MUTEX(synth_event_mutex);
+
-+ cpumask = this_cpu_ptr(&sched_cpumasks);
-+ mask = &p->cpus_allowed;
++struct synth_trace_event {
++ struct trace_entry ent;
++ u64 fields[];
++};
+
-+ cpumask_andnot(cpumask, mask, &sched_down_cpumask);
++static int synth_event_define_fields(struct trace_event_call *call)
++{
++ struct synth_trace_event trace;
++ int offset = offsetof(typeof(trace), fields);
++ struct synth_event *event = call->data;
++ unsigned int i, size, n_u64;
++ char *name, *type;
++ bool is_signed;
++ int ret = 0;
++
++ for (i = 0, n_u64 = 0; i < event->n_fields; i++) {
++ size = event->fields[i]->size;
++ is_signed = event->fields[i]->is_signed;
++ type = event->fields[i]->type;
++ name = event->fields[i]->name;
++ ret = trace_define_field(call, type, name, offset, size,
++ is_signed, FILTER_OTHER);
++ if (ret)
++ break;
+
-+ if (!cpumask_weight(cpumask)) {
-+ /* It's only on this CPU? */
-+ task_rq_unlock(rq, p, &rf);
-+ mutex_unlock(&sched_down_mutex);
-+ return 0;
++ if (event->fields[i]->is_string) {
++ offset += STR_VAR_LEN_MAX;
++ n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
++ } else {
++ offset += sizeof(u64);
++ n_u64++;
++ }
+ }
+
-+ dest_cpu = cpumask_any_and(cpu_active_mask, cpumask);
++ event->n_u64 = n_u64;
++
++ return ret;
++}
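++
++/*
++ * As a worked example: for a hypothetical synthetic event declared with
++ * the fields "u64 lat; char comm[16]", the loop above reserves one u64
++ * slot for 'lat' and STR_VAR_LEN_MAX / sizeof(u64) slots for the string,
++ * so n_u64 ends up as 1 + STR_VAR_LEN_MAX / 8.
++ */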
+
-+ arg.task = p;
-+ arg.dest_cpu = dest_cpu;
++static bool synth_field_signed(char *type)
++{
++ if (strncmp(type, "u", 1) == 0)
++ return false;
+
-+ task_rq_unlock(rq, p, &rf);
++ return true;
++}
+
-+ stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
-+ tlb_migrate_finish(p->mm);
-+ mutex_unlock(&sched_down_mutex);
++static int synth_field_is_string(char *type)
++{
++ if (strstr(type, "char[") != NULL)
++ return true;
+
-+ return 1;
++ return false;
+}
+
- /*
- * Change a given task's CPU affinity. Migrate the thread to a
- * proper CPU and schedule it away if the CPU it's executing on
-@@ -1179,7 +1307,7 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
- }
-
- /* Can the task run on the task's current CPU? If so, we're done */
-- if (cpumask_test_cpu(task_cpu(p), new_mask))
-+ if (cpumask_test_cpu(task_cpu(p), new_mask) || __migrate_disabled(p))
- goto out;
-
- dest_cpu = cpumask_any_and(cpu_valid_mask, new_mask);
-@@ -1366,6 +1494,18 @@ int migrate_swap(struct task_struct *cur, struct task_struct *p)
- return ret;
- }
-
-+static bool check_task_state(struct task_struct *p, long match_state)
++static int synth_field_string_size(char *type)
+{
-+ bool match = false;
++ char buf[4], *end, *start;
++ unsigned int len;
++ int size, err;
+
-+ raw_spin_lock_irq(&p->pi_lock);
-+ if (p->state == match_state || p->saved_state == match_state)
-+ match = true;
-+ raw_spin_unlock_irq(&p->pi_lock);
++ start = strstr(type, "char[");
++ if (start == NULL)
++ return -EINVAL;
++ start += strlen("char[");
+
-+ return match;
++ end = strchr(type, ']');
++ if (!end || end < start)
++ return -EINVAL;
++
++ len = end - start;
++ if (len > 3)
++ return -EINVAL;
++
++ strncpy(buf, start, len);
++ buf[len] = '\0';
++
++ err = kstrtouint(buf, 0, &size);
++ if (err)
++ return err;
++
++ if (size > STR_VAR_LEN_MAX)
++ return -EINVAL;
++
++ return size;
+}
+
- /*
- * wait_task_inactive - wait for a thread to unschedule.
- *
-@@ -1410,7 +1550,7 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state)
- * is actually now running somewhere else!
- */
- while (task_running(rq, p)) {
-- if (match_state && unlikely(p->state != match_state))
-+ if (match_state && !check_task_state(p, match_state))
- return 0;
- cpu_relax();
- }
-@@ -1425,7 +1565,8 @@ unsigned long wait_task_inactive(struct task_struct *p, long match_state)
- running = task_running(rq, p);
- queued = task_on_rq_queued(p);
- ncsw = 0;
-- if (!match_state || p->state == match_state)
-+ if (!match_state || p->state == match_state ||
-+ p->saved_state == match_state)
- ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
- task_rq_unlock(rq, p, &rf);
-
-@@ -1680,10 +1821,6 @@ static inline void ttwu_activate(struct rq *rq, struct task_struct *p, int en_fl
- {
- activate_task(rq, p, en_flags);
- p->on_rq = TASK_ON_RQ_QUEUED;
--
-- /* if a worker is waking up, notify workqueue */
-- if (p->flags & PF_WQ_WORKER)
-- wq_worker_waking_up(p, cpu_of(rq));
- }
-
- /*
-@@ -2018,8 +2155,27 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
- */
- smp_mb__before_spinlock();
- raw_spin_lock_irqsave(&p->pi_lock, flags);
-- if (!(p->state & state))
-+ if (!(p->state & state)) {
-+ /*
-+ * The task might be running due to a spinlock sleeper
-+ * wakeup. Check the saved state and set it to running
-+ * if the wakeup condition is true.
-+ */
-+ if (!(wake_flags & WF_LOCK_SLEEPER)) {
-+ if (p->saved_state & state) {
-+ p->saved_state = TASK_RUNNING;
-+ success = 1;
-+ }
-+ }
- goto out;
-+ }
++static int synth_field_size(char *type)
++{
++ int size = 0;
++
++ if (strcmp(type, "s64") == 0)
++ size = sizeof(s64);
++ else if (strcmp(type, "u64") == 0)
++ size = sizeof(u64);
++ else if (strcmp(type, "s32") == 0)
++ size = sizeof(s32);
++ else if (strcmp(type, "u32") == 0)
++ size = sizeof(u32);
++ else if (strcmp(type, "s16") == 0)
++ size = sizeof(s16);
++ else if (strcmp(type, "u16") == 0)
++ size = sizeof(u16);
++ else if (strcmp(type, "s8") == 0)
++ size = sizeof(s8);
++ else if (strcmp(type, "u8") == 0)
++ size = sizeof(u8);
++ else if (strcmp(type, "char") == 0)
++ size = sizeof(char);
++ else if (strcmp(type, "unsigned char") == 0)
++ size = sizeof(unsigned char);
++ else if (strcmp(type, "int") == 0)
++ size = sizeof(int);
++ else if (strcmp(type, "unsigned int") == 0)
++ size = sizeof(unsigned int);
++ else if (strcmp(type, "long") == 0)
++ size = sizeof(long);
++ else if (strcmp(type, "unsigned long") == 0)
++ size = sizeof(unsigned long);
++ else if (strcmp(type, "pid_t") == 0)
++ size = sizeof(pid_t);
++ else if (synth_field_is_string(type))
++ size = synth_field_string_size(type);
+
-+ /*
-+ * If this is a regular wakeup, then we can unconditionally
-+ * clear the saved state of a "lock sleeper".
-+ */
-+ if (!(wake_flags & WF_LOCK_SLEEPER))
-+ p->saved_state = TASK_RUNNING;
-
- trace_sched_waking(p);
-
-@@ -2102,53 +2258,6 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
- }
-
- /**
-- * try_to_wake_up_local - try to wake up a local task with rq lock held
-- * @p: the thread to be awakened
-- * @cookie: context's cookie for pinning
-- *
-- * Put @p on the run-queue if it's not already there. The caller must
-- * ensure that this_rq() is locked, @p is bound to this_rq() and not
-- * the current task.
-- */
--static void try_to_wake_up_local(struct task_struct *p, struct pin_cookie cookie)
--{
-- struct rq *rq = task_rq(p);
--
-- if (WARN_ON_ONCE(rq != this_rq()) ||
-- WARN_ON_ONCE(p == current))
-- return;
--
-- lockdep_assert_held(&rq->lock);
--
-- if (!raw_spin_trylock(&p->pi_lock)) {
-- /*
-- * This is OK, because current is on_cpu, which avoids it being
-- * picked for load-balance and preemption/IRQs are still
-- * disabled avoiding further scheduler activity on it and we've
-- * not yet picked a replacement task.
-- */
-- lockdep_unpin_lock(&rq->lock, cookie);
-- raw_spin_unlock(&rq->lock);
-- raw_spin_lock(&p->pi_lock);
-- raw_spin_lock(&rq->lock);
-- lockdep_repin_lock(&rq->lock, cookie);
-- }
--
-- if (!(p->state & TASK_NORMAL))
-- goto out;
--
-- trace_sched_waking(p);
--
-- if (!task_on_rq_queued(p))
-- ttwu_activate(rq, p, ENQUEUE_WAKEUP);
--
-- ttwu_do_wakeup(rq, p, 0, cookie);
-- ttwu_stat(p, smp_processor_id(), 0);
--out:
-- raw_spin_unlock(&p->pi_lock);
--}
--
--/**
- * wake_up_process - Wake up a specific process
- * @p: The process to be woken up.
- *
-@@ -2166,6 +2275,18 @@ int wake_up_process(struct task_struct *p)
- }
- EXPORT_SYMBOL(wake_up_process);
-
-+/**
-+ * wake_up_lock_sleeper - Wake up a specific process blocked on a "sleeping lock"
-+ * @p: The process to be woken up.
-+ *
-+ * Same as wake_up_process() above, but wake_flags=WF_LOCK_SLEEPER to indicate
-+ * the nature of the wakeup.
-+ */
-+int wake_up_lock_sleeper(struct task_struct *p)
-+{
-+ return try_to_wake_up(p, TASK_ALL, WF_LOCK_SLEEPER);
++ return size;
++}
++
++static const char *synth_field_fmt(char *type)
++{
++ const char *fmt = "%llu";
++
++ if (strcmp(type, "s64") == 0)
++ fmt = "%lld";
++ else if (strcmp(type, "u64") == 0)
++ fmt = "%llu";
++ else if (strcmp(type, "s32") == 0)
++ fmt = "%d";
++ else if (strcmp(type, "u32") == 0)
++ fmt = "%u";
++ else if (strcmp(type, "s16") == 0)
++ fmt = "%d";
++ else if (strcmp(type, "u16") == 0)
++ fmt = "%u";
++ else if (strcmp(type, "s8") == 0)
++ fmt = "%d";
++ else if (strcmp(type, "u8") == 0)
++ fmt = "%u";
++ else if (strcmp(type, "char") == 0)
++ fmt = "%d";
++ else if (strcmp(type, "unsigned char") == 0)
++ fmt = "%u";
++ else if (strcmp(type, "int") == 0)
++ fmt = "%d";
++ else if (strcmp(type, "unsigned int") == 0)
++ fmt = "%u";
++ else if (strcmp(type, "long") == 0)
++ fmt = "%ld";
++ else if (strcmp(type, "unsigned long") == 0)
++ fmt = "%lu";
++ else if (strcmp(type, "pid_t") == 0)
++ fmt = "%d";
++ else if (synth_field_is_string(type))
++ fmt = "%s";
++
++ return fmt;
++}
++
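++/*
++ * Taken together, the two lookup helpers above mean that, for example, a
++ * field of type "u64" is sized at 8 bytes and printed with "%llu", while
++ * a "char[16]" field is treated as a 16-byte string and printed with "%s"
++ * (types picked purely as examples).
++ */
++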
++static enum print_line_t print_synth_event(struct trace_iterator *iter,
++ int flags,
++ struct trace_event *event)
++{
++ struct trace_array *tr = iter->tr;
++ struct trace_seq *s = &iter->seq;
++ struct synth_trace_event *entry;
++ struct synth_event *se;
++ unsigned int i, n_u64;
++ char print_fmt[32];
++ const char *fmt;
++
++ entry = (struct synth_trace_event *)iter->ent;
++ se = container_of(event, struct synth_event, call.event);
++
++ trace_seq_printf(s, "%s: ", se->name);
++
++ for (i = 0, n_u64 = 0; i < se->n_fields; i++) {
++ if (trace_seq_has_overflowed(s))
++ goto end;
++
++ fmt = synth_field_fmt(se->fields[i]->type);
++
++ /* parameter types */
++ if (tr->trace_flags & TRACE_ITER_VERBOSE)
++ trace_seq_printf(s, "%s ", fmt);
++
++ snprintf(print_fmt, sizeof(print_fmt), "%%s=%s%%s", fmt);
++
++ /* parameter values */
++ if (se->fields[i]->is_string) {
++ trace_seq_printf(s, print_fmt, se->fields[i]->name,
++ (char *)&entry->fields[n_u64],
++ i == se->n_fields - 1 ? "" : " ");
++ n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
++ } else {
++ trace_seq_printf(s, print_fmt, se->fields[i]->name,
++ entry->fields[n_u64],
++ i == se->n_fields - 1 ? "" : " ");
++ n_u64++;
++ }
++ }
++end:
++ trace_seq_putc(s, '\n');
++
++ return trace_handle_return(s);
+}
+
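++/*
++ * With the formatting above, a synthetic event record would appear in the
++ * trace output roughly as (event name, fields and values are only an
++ * example):
++ *
++ *   <...>-2085  [001] d..3  1807.098512: wakeup_latency: lat=183 pid=2085
++ */
++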
- int wake_up_state(struct task_struct *p, unsigned int state)
- {
- return try_to_wake_up(p, state, 0);
-@@ -2442,6 +2563,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
- p->on_cpu = 0;
- #endif
- init_task_preempt_count(p);
-+#ifdef CONFIG_HAVE_PREEMPT_LAZY
-+ task_thread_info(p)->preempt_lazy_count = 0;
-+#endif
- #ifdef CONFIG_SMP
- plist_node_init(&p->pushable_tasks, MAX_PRIO);
- RB_CLEAR_NODE(&p->pushable_dl_tasks);
-@@ -2770,21 +2894,16 @@ static struct rq *finish_task_switch(struct task_struct *prev)
- finish_arch_post_lock_switch();
-
- fire_sched_in_preempt_notifiers(current);
-+ /*
-+ * We use mmdrop_delayed() here so we don't have to do the
-+ * full __mmdrop() when we are the last user.
-+ */
- if (mm)
-- mmdrop(mm);
-+ mmdrop_delayed(mm);
- if (unlikely(prev_state == TASK_DEAD)) {
- if (prev->sched_class->task_dead)
- prev->sched_class->task_dead(prev);
-
-- /*
-- * Remove function-return probe instances associated with this
-- * task and put them back on the free list.
-- */
-- kprobe_flush_task(prev);
--
-- /* Task is done with its stack. */
-- put_task_stack(prev);
--
- put_task_struct(prev);
- }
-
-@@ -3252,6 +3371,77 @@ static inline void schedule_debug(struct task_struct *prev)
- schedstat_inc(this_rq()->sched_count);
- }
-
-+#if defined(CONFIG_PREEMPT_RT_FULL) && defined(CONFIG_SMP)
++static struct trace_event_functions synth_event_funcs = {
++ .trace = print_synth_event
++};
+
-+void migrate_disable(void)
++static notrace void trace_event_raw_event_synth(void *__data,
++ u64 *var_ref_vals,
++ unsigned int var_ref_idx)
+{
-+ struct task_struct *p = current;
++ struct trace_event_file *trace_file = __data;
++ struct synth_trace_event *entry;
++ struct trace_event_buffer fbuffer;
++ struct ring_buffer *buffer;
++ struct synth_event *event;
++ unsigned int i, n_u64;
++ int fields_size = 0;
+
-+ if (in_atomic() || irqs_disabled()) {
-+#ifdef CONFIG_SCHED_DEBUG
-+ p->migrate_disable_atomic++;
-+#endif
++ event = trace_file->event_call->data;
++
++ if (trace_trigger_soft_disabled(trace_file))
+ return;
-+ }
+
-+#ifdef CONFIG_SCHED_DEBUG
-+ if (unlikely(p->migrate_disable_atomic)) {
-+ tracing_off();
-+ WARN_ON_ONCE(1);
-+ }
-+#endif
++ fields_size = event->n_u64 * sizeof(u64);
+
-+ if (p->migrate_disable) {
-+ p->migrate_disable++;
-+ return;
++ /*
++ * Avoid ring buffer recursion detection, as this event
++ * is being performed within another event.
++ */
++ buffer = trace_file->tr->trace_buffer.buffer;
++ ring_buffer_nest_start(buffer);
++
++ entry = trace_event_buffer_reserve(&fbuffer, trace_file,
++ sizeof(*entry) + fields_size);
++ if (!entry)
++ goto out;
++
++ for (i = 0, n_u64 = 0; i < event->n_fields; i++) {
++ if (event->fields[i]->is_string) {
++ char *str_val = (char *)(long)var_ref_vals[var_ref_idx + i];
++ char *str_field = (char *)&entry->fields[n_u64];
++
++ strscpy(str_field, str_val, STR_VAR_LEN_MAX);
++ n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
++ } else {
++ entry->fields[n_u64] = var_ref_vals[var_ref_idx + i];
++ n_u64++;
++ }
+ }
+
-+ preempt_disable();
-+ preempt_lazy_disable();
-+ pin_current_cpu();
-+ p->migrate_disable = 1;
-+ preempt_enable();
++ trace_event_buffer_commit(&fbuffer);
++out:
++ ring_buffer_nest_end(buffer);
+}
-+EXPORT_SYMBOL(migrate_disable);
+
-+void migrate_enable(void)
++static void free_synth_event_print_fmt(struct trace_event_call *call)
+{
-+ struct task_struct *p = current;
-+
-+ if (in_atomic() || irqs_disabled()) {
-+#ifdef CONFIG_SCHED_DEBUG
-+ p->migrate_disable_atomic--;
-+#endif
-+ return;
++ if (call) {
++ kfree(call->print_fmt);
++ call->print_fmt = NULL;
+ }
++}
+
-+#ifdef CONFIG_SCHED_DEBUG
-+ if (unlikely(p->migrate_disable_atomic)) {
-+ tracing_off();
-+ WARN_ON_ONCE(1);
++static int __set_synth_event_print_fmt(struct synth_event *event,
++ char *buf, int len)
++{
++ const char *fmt;
++ int pos = 0;
++ int i;
++
++ /* When len=0, we just calculate the needed length */
++#define LEN_OR_ZERO (len ? len - pos : 0)
++
++ pos += snprintf(buf + pos, LEN_OR_ZERO, "\"");
++ for (i = 0; i < event->n_fields; i++) {
++ fmt = synth_field_fmt(event->fields[i]->type);
++ pos += snprintf(buf + pos, LEN_OR_ZERO, "%s=%s%s",
++ event->fields[i]->name, fmt,
++ i == event->n_fields - 1 ? "" : ", ");
+ }
-+#endif
-+ WARN_ON_ONCE(p->migrate_disable <= 0);
++ pos += snprintf(buf + pos, LEN_OR_ZERO, "\"");
+
-+ if (p->migrate_disable > 1) {
-+ p->migrate_disable--;
-+ return;
++ for (i = 0; i < event->n_fields; i++) {
++ pos += snprintf(buf + pos, LEN_OR_ZERO,
++ ", REC->%s", event->fields[i]->name);
+ }
+
-+ preempt_disable();
-+ /*
-+ * Clearing migrate_disable causes tsk_cpus_allowed to
-+ * show the tasks original cpu affinity.
-+ */
-+ p->migrate_disable = 0;
++#undef LEN_OR_ZERO
+
-+ unpin_current_cpu();
-+ preempt_enable();
-+ preempt_lazy_enable();
++ /* return the length of print_fmt */
++ return pos;
+}
-+EXPORT_SYMBOL(migrate_enable);
-+#endif
+
- /*
- * Pick up the highest-prio task:
- */
-@@ -3368,19 +3558,6 @@ static void __sched notrace __schedule(bool preempt)
- } else {
- deactivate_task(rq, prev, DEQUEUE_SLEEP);
- prev->on_rq = 0;
--
-- /*
-- * If a worker went to sleep, notify and ask workqueue
-- * whether it wants to wake up a task to maintain
-- * concurrency.
-- */
-- if (prev->flags & PF_WQ_WORKER) {
-- struct task_struct *to_wakeup;
--
-- to_wakeup = wq_worker_sleeping(prev);
-- if (to_wakeup)
-- try_to_wake_up_local(to_wakeup, cookie);
-- }
- }
- switch_count = &prev->nvcsw;
- }
-@@ -3390,6 +3567,7 @@ static void __sched notrace __schedule(bool preempt)
-
- next = pick_next_task(rq, prev, cookie);
- clear_tsk_need_resched(prev);
-+ clear_tsk_need_resched_lazy(prev);
- clear_preempt_need_resched();
- rq->clock_skip_update = 0;
-
-@@ -3437,9 +3615,20 @@ void __noreturn do_task_dead(void)
-
- static inline void sched_submit_work(struct task_struct *tsk)
- {
-- if (!tsk->state || tsk_is_pi_blocked(tsk))
-+ if (!tsk->state)
- return;
- /*
-+ * If a worker went to sleep, notify and ask workqueue whether
-+ * it wants to wake up a task to maintain concurrency.
-+ */
-+ if (tsk->flags & PF_WQ_WORKER)
-+ wq_worker_sleeping(tsk);
++static int set_synth_event_print_fmt(struct trace_event_call *call)
++{
++ struct synth_event *event = call->data;
++ char *print_fmt;
++ int len;
+
++ /* First: called with 0 length to calculate the needed length */
++ len = __set_synth_event_print_fmt(event, NULL, 0);
+
-+ if (tsk_is_pi_blocked(tsk))
-+ return;
++ print_fmt = kmalloc(len + 1, GFP_KERNEL);
++ if (!print_fmt)
++ return -ENOMEM;
+
-+ /*
- * If we are going to sleep and we have plugged IO queued,
- * make sure to submit it to avoid deadlocks.
- */
-@@ -3447,6 +3636,12 @@ static inline void sched_submit_work(struct task_struct *tsk)
- blk_schedule_flush_plug(tsk);
- }
-
-+static void sched_update_worker(struct task_struct *tsk)
-+{
-+ if (tsk->flags & PF_WQ_WORKER)
-+ wq_worker_running(tsk);
++ /* Second: actually write the @print_fmt */
++ __set_synth_event_print_fmt(event, print_fmt, len + 1);
++ call->print_fmt = print_fmt;
++
++ return 0;
+}
+
- asmlinkage __visible void __sched schedule(void)
- {
- struct task_struct *tsk = current;
-@@ -3457,6 +3652,7 @@ asmlinkage __visible void __sched schedule(void)
- __schedule(false);
- sched_preempt_enable_no_resched();
- } while (need_resched());
-+ sched_update_worker(tsk);
- }
- EXPORT_SYMBOL(schedule);
-
-@@ -3520,6 +3716,30 @@ static void __sched notrace preempt_schedule_common(void)
- } while (need_resched());
- }
-
-+#ifdef CONFIG_PREEMPT_LAZY
-+/*
-+ * If TIF_NEED_RESCHED is set, we allow being scheduled away, since that
-+ * flag is set by an RT task. Otherwise we try to avoid being scheduled
-+ * out as long as the preempt_lazy_count counter is > 0.
-+ */
-+static __always_inline int preemptible_lazy(void)
++static void free_synth_field(struct synth_field *field)
+{
-+ if (test_thread_flag(TIF_NEED_RESCHED))
-+ return 1;
-+ if (current_thread_info()->preempt_lazy_count)
-+ return 0;
-+ return 1;
++ kfree(field->type);
++ kfree(field->name);
++ kfree(field);
+}
+
-+#else
-+
-+static inline int preemptible_lazy(void)
++static struct synth_field *parse_synth_field(char *field_type,
++ char *field_name)
+{
-+ return 1;
-+}
++ struct synth_field *field;
++ int len, ret = 0;
++ char *array;
+
-+#endif
++ if (field_type[0] == ';')
++ field_type++;
+
- #ifdef CONFIG_PREEMPT
- /*
- * this is the entry point to schedule() from in-kernel preemption
-@@ -3534,7 +3754,8 @@ asmlinkage __visible void __sched notrace preempt_schedule(void)
- */
- if (likely(!preemptible()))
- return;
--
-+ if (!preemptible_lazy())
-+ return;
- preempt_schedule_common();
- }
- NOKPROBE_SYMBOL(preempt_schedule);
-@@ -3561,6 +3782,9 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
- if (likely(!preemptible()))
- return;
-
-+ if (!preemptible_lazy())
-+ return;
++ len = strlen(field_name);
++ if (field_name[len - 1] == ';')
++ field_name[len - 1] = '\0';
+
- do {
- /*
- * Because the function tracer can trace preempt_count_sub()
-@@ -3583,7 +3807,16 @@ asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
- * an infinite recursion.
- */
- prev_ctx = exception_enter();
-+ /*
-+ * The add/subtract must not be traced by the function
-+ * tracer. But we still want to account for the
-+ * preempt off latency tracer. Since the _notrace versions
-+ * of add/subtract skip the accounting for the latency tracer,
-+ * we must force it manually.
-+ */
-+ start_critical_timings();
- __schedule(true);
-+ stop_critical_timings();
- exception_exit(prev_ctx);
-
- preempt_latency_stop(1);
-@@ -4939,6 +5172,7 @@ int __cond_resched_lock(spinlock_t *lock)
- }
- EXPORT_SYMBOL(__cond_resched_lock);
-
-+#ifndef CONFIG_PREEMPT_RT_FULL
- int __sched __cond_resched_softirq(void)
- {
- BUG_ON(!in_softirq());
-@@ -4952,6 +5186,7 @@ int __sched __cond_resched_softirq(void)
- return 0;
- }
- EXPORT_SYMBOL(__cond_resched_softirq);
-+#endif
-
- /**
- * yield - yield the current processor to other threads.
-@@ -5315,7 +5550,9 @@ void init_idle(struct task_struct *idle, int cpu)
-
- /* Set the preempt count _outside_ the spinlocks! */
- init_idle_preempt_count(idle, cpu);
--
-+#ifdef CONFIG_HAVE_PREEMPT_LAZY
-+ task_thread_info(idle)->preempt_lazy_count = 0;
-+#endif
- /*
- * The idle tasks have their own, simple scheduling class:
- */
-@@ -5458,6 +5695,8 @@ void sched_setnuma(struct task_struct *p, int nid)
- #endif /* CONFIG_NUMA_BALANCING */
-
- #ifdef CONFIG_HOTPLUG_CPU
-+static DEFINE_PER_CPU(struct mm_struct *, idle_last_mm);
++ field = kzalloc(sizeof(*field), GFP_KERNEL);
++ if (!field)
++ return ERR_PTR(-ENOMEM);
+
- /*
- * Ensures that the idle task is using init_mm right before its cpu goes
- * offline.
-@@ -5472,7 +5711,12 @@ void idle_task_exit(void)
- switch_mm_irqs_off(mm, &init_mm, current);
- finish_arch_post_lock_switch();
- }
-- mmdrop(mm);
-+ /*
-+ * Defer the cleanup to an alive cpu. On RT we can neither
-+ * call mmdrop() nor mmdrop_delayed() from here.
-+ */
-+ per_cpu(idle_last_mm, smp_processor_id()) = mm;
++ len = strlen(field_type) + 1;
++ array = strchr(field_name, '[');
++ if (array)
++ len += strlen(array);
++ field->type = kzalloc(len, GFP_KERNEL);
++ if (!field->type) {
++ ret = -ENOMEM;
++ goto free;
++ }
++ strcat(field->type, field_type);
++ if (array) {
++ strcat(field->type, array);
++ *array = '\0';
++ }
+
- }
-
- /*
-@@ -7418,6 +7662,10 @@ int sched_cpu_dying(unsigned int cpu)
- update_max_interval();
- nohz_balance_exit_idle(cpu);
- hrtick_clear(rq);
-+ if (per_cpu(idle_last_mm, cpu)) {
-+ mmdrop_delayed(per_cpu(idle_last_mm, cpu));
-+ per_cpu(idle_last_mm, cpu) = NULL;
++ field->size = synth_field_size(field->type);
++ if (!field->size) {
++ ret = -EINVAL;
++ goto free;
+ }
- return 0;
- }
- #endif
-@@ -7698,7 +7946,7 @@ void __init sched_init(void)
- #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
- static inline int preempt_count_equals(int preempt_offset)
- {
-- int nested = preempt_count() + rcu_preempt_depth();
-+ int nested = preempt_count() + sched_rcu_preempt_depth();
-
- return (nested == preempt_offset);
- }
-diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
-index 37e2449186c4..26dcaabde8b3 100644
---- a/kernel/sched/deadline.c
-+++ b/kernel/sched/deadline.c
-@@ -687,6 +687,7 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
-
- hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
- timer->function = dl_task_timer;
-+ timer->irqsafe = 1;
- }
-
- static
-diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
-index fa178b62ea79..935224123441 100644
---- a/kernel/sched/debug.c
-+++ b/kernel/sched/debug.c
-@@ -558,6 +558,9 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
- P(rt_throttled);
- PN(rt_time);
- PN(rt_runtime);
-+#ifdef CONFIG_SMP
-+ P(rt_nr_migratory);
-+#endif
-
- #undef PN
- #undef P
-@@ -953,6 +956,10 @@ void proc_sched_show_task(struct task_struct *p, struct seq_file *m)
- #endif
- P(policy);
- P(prio);
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ P(migrate_disable);
-+#endif
-+ P(nr_cpus_allowed);
- #undef PN_SCHEDSTAT
- #undef PN
- #undef __PN
-diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
-index c242944f5cbd..4aeb2e2e41bc 100644
---- a/kernel/sched/fair.c
-+++ b/kernel/sched/fair.c
-@@ -3518,7 +3518,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
- ideal_runtime = sched_slice(cfs_rq, curr);
- delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
- if (delta_exec > ideal_runtime) {
-- resched_curr(rq_of(cfs_rq));
-+ resched_curr_lazy(rq_of(cfs_rq));
- /*
- * The current task ran long enough, ensure it doesn't get
- * re-elected due to buddy favours.
-@@ -3542,7 +3542,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
- return;
-
- if (delta > ideal_runtime)
-- resched_curr(rq_of(cfs_rq));
-+ resched_curr_lazy(rq_of(cfs_rq));
- }
-
- static void
-@@ -3684,7 +3684,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
- * validating it and just reschedule.
- */
- if (queued) {
-- resched_curr(rq_of(cfs_rq));
-+ resched_curr_lazy(rq_of(cfs_rq));
- return;
- }
- /*
-@@ -3866,7 +3866,7 @@ static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
- * hierarchy can be throttled
- */
- if (!assign_cfs_rq_runtime(cfs_rq) && likely(cfs_rq->curr))
-- resched_curr(rq_of(cfs_rq));
-+ resched_curr_lazy(rq_of(cfs_rq));
- }
-
- static __always_inline
-@@ -4494,7 +4494,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
-
- if (delta < 0) {
- if (rq->curr == p)
-- resched_curr(rq);
-+ resched_curr_lazy(rq);
- return;
- }
- hrtick_start(rq, delta);
-@@ -5905,7 +5905,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
- return;
-
- preempt:
-- resched_curr(rq);
-+ resched_curr_lazy(rq);
- /*
- * Only set the backward buddy when the current task is still
- * on the rq. This can happen when a wakeup gets interleaved
-@@ -8631,7 +8631,7 @@ static void task_fork_fair(struct task_struct *p)
- * 'current' within the tree based on its new key value.
- */
- swap(curr->vruntime, se->vruntime);
-- resched_curr(rq);
-+ resched_curr_lazy(rq);
- }
-
- se->vruntime -= cfs_rq->min_vruntime;
-@@ -8655,7 +8655,7 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
- */
- if (rq->curr == p) {
- if (p->prio > oldprio)
-- resched_curr(rq);
-+ resched_curr_lazy(rq);
- } else
- check_preempt_curr(rq, p, 0);
- }
-diff --git a/kernel/sched/features.h b/kernel/sched/features.h
-index 69631fa46c2f..6d28fcd08872 100644
---- a/kernel/sched/features.h
-+++ b/kernel/sched/features.h
-@@ -45,11 +45,19 @@ SCHED_FEAT(LB_BIAS, true)
- */
- SCHED_FEAT(NONTASK_CAPACITY, true)
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+SCHED_FEAT(TTWU_QUEUE, false)
-+# ifdef CONFIG_PREEMPT_LAZY
-+SCHED_FEAT(PREEMPT_LAZY, true)
-+# endif
-+#else
+
- /*
- * Queue remote wakeups on the target CPU and process them
- * using the scheduler IPI. Reduces rq->lock contention/bounces.
- */
- SCHED_FEAT(TTWU_QUEUE, true)
-+#endif
-
- #ifdef HAVE_RT_PUSH_IPI
- /*
-diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
-index 2516b8df6dbb..2556baa0a97e 100644
---- a/kernel/sched/rt.c
-+++ b/kernel/sched/rt.c
-@@ -47,6 +47,7 @@ void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime)
-
- hrtimer_init(&rt_b->rt_period_timer,
- CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-+ rt_b->rt_period_timer.irqsafe = 1;
- rt_b->rt_period_timer.function = sched_rt_period_timer;
- }
-
-@@ -101,6 +102,7 @@ void init_rt_rq(struct rt_rq *rt_rq)
- rt_rq->push_cpu = nr_cpu_ids;
- raw_spin_lock_init(&rt_rq->push_lock);
- init_irq_work(&rt_rq->push_work, push_irq_work_func);
-+ rt_rq->push_work.flags |= IRQ_WORK_HARD_IRQ;
- #endif
- #endif /* CONFIG_SMP */
- /* We start is dequeued state, because no RT tasks are queued */
-diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
-index 055f935d4421..19324ac27026 100644
---- a/kernel/sched/sched.h
-+++ b/kernel/sched/sched.h
-@@ -1163,6 +1163,7 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
- #define WF_SYNC 0x01 /* waker goes to sleep after wakeup */
- #define WF_FORK 0x02 /* child wakeup after fork */
- #define WF_MIGRATED 0x4 /* internal use, task got migrated */
-+#define WF_LOCK_SLEEPER 0x08 /* wakeup spinlock "sleeper" */
-
- /*
- * To aid in avoiding the subversion of "niceness" due to uneven distribution
-@@ -1346,6 +1347,15 @@ extern void init_sched_fair_class(void);
- extern void resched_curr(struct rq *rq);
- extern void resched_cpu(int cpu);
-
-+#ifdef CONFIG_PREEMPT_LAZY
-+extern void resched_curr_lazy(struct rq *rq);
-+#else
-+static inline void resched_curr_lazy(struct rq *rq)
++ if (synth_field_is_string(field->type))
++ field->is_string = true;
++
++ field->is_signed = synth_field_signed(field->type);
++
++ field->name = kstrdup(field_name, GFP_KERNEL);
++ if (!field->name) {
++ ret = -ENOMEM;
++ goto free;
++ }
++ out:
++ return field;
++ free:
++ free_synth_field(field);
++ field = ERR_PTR(ret);
++ goto out;
++}
++
++static void free_synth_tracepoint(struct tracepoint *tp)
+{
-+ resched_curr(rq);
++ if (!tp)
++ return;
++
++ kfree(tp->name);
++ kfree(tp);
+}
-+#endif
+
- extern struct rt_bandwidth def_rt_bandwidth;
- extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
-
-diff --git a/kernel/sched/swait.c b/kernel/sched/swait.c
-index 82f0dff90030..ef027ff3250a 100644
---- a/kernel/sched/swait.c
-+++ b/kernel/sched/swait.c
-@@ -1,5 +1,6 @@
- #include <linux/sched.h>
- #include <linux/swait.h>
-+#include <linux/suspend.h>
-
- void __init_swait_queue_head(struct swait_queue_head *q, const char *name,
- struct lock_class_key *key)
-@@ -29,6 +30,25 @@ void swake_up_locked(struct swait_queue_head *q)
- }
- EXPORT_SYMBOL(swake_up_locked);
-
-+void swake_up_all_locked(struct swait_queue_head *q)
++static struct tracepoint *alloc_synth_tracepoint(char *name)
+{
-+ struct swait_queue *curr;
-+ int wakes = 0;
++ struct tracepoint *tp;
+
-+ while (!list_empty(&q->task_list)) {
++ tp = kzalloc(sizeof(*tp), GFP_KERNEL);
++ if (!tp)
++ return ERR_PTR(-ENOMEM);
+
-+ curr = list_first_entry(&q->task_list, typeof(*curr),
-+ task_list);
-+ wake_up_process(curr->task);
-+ list_del_init(&curr->task_list);
-+ wakes++;
++ tp->name = kstrdup(name, GFP_KERNEL);
++ if (!tp->name) {
++ kfree(tp);
++ return ERR_PTR(-ENOMEM);
+ }
-+ if (pm_in_action)
-+ return;
-+ WARN(wakes > 2, "complete_all() with %d waiters\n", wakes);
++
++ return tp;
+}
-+EXPORT_SYMBOL(swake_up_all_locked);
+
- void swake_up(struct swait_queue_head *q)
- {
- unsigned long flags;
-@@ -54,6 +74,7 @@ void swake_up_all(struct swait_queue_head *q)
- if (!swait_active(q))
- return;
-
-+ WARN_ON(irqs_disabled());
- raw_spin_lock_irq(&q->lock);
- list_splice_init(&q->task_list, &tmp);
- while (!list_empty(&tmp)) {
-diff --git a/kernel/sched/swork.c b/kernel/sched/swork.c
-new file mode 100644
-index 000000000000..1950f40ca725
---- /dev/null
-+++ b/kernel/sched/swork.c
-@@ -0,0 +1,173 @@
-+/*
-+ * Copyright (C) 2014 BMW Car IT GmbH, Daniel Wagner daniel.wagner@bmw-carit.de
-+ *
-+ * Provides a framework for enqueuing callbacks from irq context in a
-+ * PREEMPT_RT_FULL-safe way. The callbacks are executed in kthread context.
-+ */
++typedef void (*synth_probe_func_t) (void *__data, u64 *var_ref_vals,
++ unsigned int var_ref_idx);
+
-+#include <linux/swait.h>
-+#include <linux/swork.h>
-+#include <linux/kthread.h>
-+#include <linux/slab.h>
-+#include <linux/spinlock.h>
-+#include <linux/export.h>
++static inline void trace_synth(struct synth_event *event, u64 *var_ref_vals,
++ unsigned int var_ref_idx)
++{
++ struct tracepoint *tp = event->tp;
++
++ if (unlikely(atomic_read(&tp->key.enabled) > 0)) {
++ struct tracepoint_func *probe_func_ptr;
++ synth_probe_func_t probe_func;
++ void *__data;
++
++ if (!(cpu_online(raw_smp_processor_id())))
++ return;
++
++ probe_func_ptr = rcu_dereference_sched((tp)->funcs);
++ if (probe_func_ptr) {
++ do {
++ probe_func = probe_func_ptr->func;
++ __data = probe_func_ptr->data;
++ probe_func(__data, var_ref_vals, var_ref_idx);
++ } while ((++probe_func_ptr)->func);
++ }
++ }
++}
++
++static struct synth_event *find_synth_event(const char *name)
++{
++ struct synth_event *event;
++
++ list_for_each_entry(event, &synth_event_list, list) {
++ if (strcmp(event->name, name) == 0)
++ return event;
++ }
++
++ return NULL;
++}
++
++static int register_synth_event(struct synth_event *event)
++{
++ struct trace_event_call *call = &event->call;
++ int ret = 0;
++
++ event->call.class = &event->class;
++ event->class.system = kstrdup(SYNTH_SYSTEM, GFP_KERNEL);
++ if (!event->class.system) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ event->tp = alloc_synth_tracepoint(event->name);
++ if (IS_ERR(event->tp)) {
++ ret = PTR_ERR(event->tp);
++ event->tp = NULL;
++ goto out;
++ }
++
++ INIT_LIST_HEAD(&call->class->fields);
++ call->event.funcs = &synth_event_funcs;
++ call->class->define_fields = synth_event_define_fields;
++
++ ret = register_trace_event(&call->event);
++ if (!ret) {
++ ret = -ENODEV;
++ goto out;
++ }
++ call->flags = TRACE_EVENT_FL_TRACEPOINT;
++ call->class->reg = trace_event_reg;
++ call->class->probe = trace_event_raw_event_synth;
++ call->data = event;
++ call->tp = event->tp;
++
++ ret = trace_add_event_call(call);
++ if (ret) {
++ pr_warn("Failed to register synthetic event: %s\n",
++ trace_event_name(call));
++ goto err;
++ }
++
++ ret = set_synth_event_print_fmt(call);
++ if (ret < 0) {
++ trace_remove_event_call(call);
++ goto err;
++ }
++ out:
++ return ret;
++ err:
++ unregister_trace_event(&call->event);
++ goto out;
++}
++
++static int unregister_synth_event(struct synth_event *event)
++{
++ struct trace_event_call *call = &event->call;
++ int ret;
++
++ ret = trace_remove_event_call(call);
++
++ return ret;
++}
++
++static void free_synth_event(struct synth_event *event)
++{
++ unsigned int i;
++
++ if (!event)
++ return;
++
++ for (i = 0; i < event->n_fields; i++)
++ free_synth_field(event->fields[i]);
++
++ kfree(event->fields);
++ kfree(event->name);
++ kfree(event->class.system);
++ free_synth_tracepoint(event->tp);
++ free_synth_event_print_fmt(&event->call);
++ kfree(event);
++}
++
++static struct synth_event *alloc_synth_event(char *event_name, int n_fields,
++ struct synth_field **fields)
++{
++ struct synth_event *event;
++ unsigned int i;
+
-+#define SWORK_EVENT_PENDING (1 << 0)
++ event = kzalloc(sizeof(*event), GFP_KERNEL);
++ if (!event) {
++ event = ERR_PTR(-ENOMEM);
++ goto out;
++ }
+
-+static DEFINE_MUTEX(worker_mutex);
-+static struct sworker *glob_worker;
++ event->name = kstrdup(event_name, GFP_KERNEL);
++ if (!event->name) {
++ kfree(event);
++ event = ERR_PTR(-ENOMEM);
++ goto out;
++ }
+
-+struct sworker {
-+ struct list_head events;
-+ struct swait_queue_head wq;
++ event->fields = kcalloc(n_fields, sizeof(*event->fields), GFP_KERNEL);
++ if (!event->fields) {
++ free_synth_event(event);
++ event = ERR_PTR(-ENOMEM);
++ goto out;
++ }
+
-+ raw_spinlock_t lock;
++ for (i = 0; i < n_fields; i++)
++ event->fields[i] = fields[i];
+
-+ struct task_struct *task;
-+ int refs;
-+};
++ event->n_fields = n_fields;
++ out:
++ return event;
++}
+
-+static bool swork_readable(struct sworker *worker)
++static void action_trace(struct hist_trigger_data *hist_data,
++ struct tracing_map_elt *elt, void *rec,
++ struct ring_buffer_event *rbe,
++ struct action_data *data, u64 *var_ref_vals)
+{
-+ bool r;
++ struct synth_event *event = data->onmatch.synth_event;
+
-+ if (kthread_should_stop())
-+ return true;
++ trace_synth(event, var_ref_vals, data->onmatch.var_ref_idx);
++}
+
-+ raw_spin_lock_irq(&worker->lock);
-+ r = !list_empty(&worker->events);
-+ raw_spin_unlock_irq(&worker->lock);
++struct hist_var_data {
++ struct list_head list;
++ struct hist_trigger_data *hist_data;
++};
+
-+ return r;
++static void add_or_delete_synth_event(struct synth_event *event, int delete)
++{
++ if (delete)
++ free_synth_event(event);
++ else {
++ mutex_lock(&synth_event_mutex);
++ if (!find_synth_event(event->name))
++ list_add(&event->list, &synth_event_list);
++ else
++ free_synth_event(event);
++ mutex_unlock(&synth_event_mutex);
++ }
+}
+
-+static int swork_kthread(void *arg)
++static int create_synth_event(int argc, char **argv)
+{
-+ struct sworker *worker = arg;
++ struct synth_field *field, *fields[SYNTH_FIELDS_MAX];
++ struct synth_event *event = NULL;
++ bool delete_event = false;
++ int i, n_fields = 0, ret = 0;
++ char *name;
+
-+ for (;;) {
-+ swait_event_interruptible(worker->wq,
-+ swork_readable(worker));
-+ if (kthread_should_stop())
-+ break;
++ mutex_lock(&synth_event_mutex);
+
-+ raw_spin_lock_irq(&worker->lock);
-+ while (!list_empty(&worker->events)) {
-+ struct swork_event *sev;
++ /*
++ * Argument syntax:
++ * - Add synthetic event: <event_name> field[;field] ...
++ * - Remove synthetic event: !<event_name> field[;field] ...
++ * where 'field' = type field_name
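++ *
++ *   e.g. (event name and fields purely illustrative):
++ *     echo 'wakeup_latency u64 lat; pid_t pid' >> synthetic_events
++ *     echo '!wakeup_latency' >> synthetic_events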
++ */
++ if (argc < 1) {
++ ret = -EINVAL;
++ goto out;
++ }
+
-+ sev = list_first_entry(&worker->events,
-+ struct swork_event, item);
-+ list_del(&sev->item);
-+ raw_spin_unlock_irq(&worker->lock);
++ name = argv[0];
++ if (name[0] == '!') {
++ delete_event = true;
++ name++;
++ }
+
-+ WARN_ON_ONCE(!test_and_clear_bit(SWORK_EVENT_PENDING,
-+ &sev->flags));
-+ sev->func(sev);
-+ raw_spin_lock_irq(&worker->lock);
++ event = find_synth_event(name);
++ if (event) {
++ if (delete_event) {
++ if (event->ref) {
++ event = NULL;
++ ret = -EBUSY;
++ goto out;
++ }
++ list_del(&event->list);
++ goto out;
+ }
-+ raw_spin_unlock_irq(&worker->lock);
++ event = NULL;
++ ret = -EEXIST;
++ goto out;
++ } else if (delete_event)
++ goto out;
++
++ if (argc < 2) {
++ ret = -EINVAL;
++ goto out;
+ }
-+ return 0;
-+}
+
-+static struct sworker *swork_create(void)
-+{
-+ struct sworker *worker;
++ for (i = 1; i < argc - 1; i++) {
++ if (strcmp(argv[i], ";") == 0)
++ continue;
++ if (n_fields == SYNTH_FIELDS_MAX) {
++ ret = -EINVAL;
++ goto err;
++ }
+
-+ worker = kzalloc(sizeof(*worker), GFP_KERNEL);
-+ if (!worker)
-+ return ERR_PTR(-ENOMEM);
++ field = parse_synth_field(argv[i], argv[i + 1]);
++ if (IS_ERR(field)) {
++ ret = PTR_ERR(field);
++ goto err;
++ }
++ fields[n_fields] = field;
++ i++; n_fields++;
++ }
+
-+ INIT_LIST_HEAD(&worker->events);
-+ raw_spin_lock_init(&worker->lock);
-+ init_swait_queue_head(&worker->wq);
++ if (i < argc) {
++ ret = -EINVAL;
++ goto err;
++ }
+
-+ worker->task = kthread_run(swork_kthread, worker, "kswork");
-+ if (IS_ERR(worker->task)) {
-+ kfree(worker);
-+ return ERR_PTR(-ENOMEM);
++ event = alloc_synth_event(name, n_fields, fields);
++ if (IS_ERR(event)) {
++ ret = PTR_ERR(event);
++ event = NULL;
++ goto err;
+ }
++ out:
++ mutex_unlock(&synth_event_mutex);
+
-+ return worker;
++ if (event) {
++ if (delete_event) {
++ ret = unregister_synth_event(event);
++ add_or_delete_synth_event(event, !ret);
++ } else {
++ ret = register_synth_event(event);
++ add_or_delete_synth_event(event, ret);
++ }
++ }
++
++ return ret;
++ err:
++ mutex_unlock(&synth_event_mutex);
++
++ for (i = 0; i < n_fields; i++)
++ free_synth_field(fields[i]);
++ free_synth_event(event);
++
++ return ret;
+}
+
-+static void swork_destroy(struct sworker *worker)
++static int release_all_synth_events(void)
+{
-+ kthread_stop(worker->task);
++ struct list_head release_events;
++ struct synth_event *event, *e;
++ int ret = 0;
+
-+ WARN_ON(!list_empty(&worker->events));
-+ kfree(worker);
++ INIT_LIST_HEAD(&release_events);
++
++ mutex_lock(&synth_event_mutex);
++
++ list_for_each_entry(event, &synth_event_list, list) {
++ if (event->ref) {
++ mutex_unlock(&synth_event_mutex);
++ return -EBUSY;
++ }
++ }
++
++ list_splice_init(&event->list, &release_events);
++
++ mutex_unlock(&synth_event_mutex);
++
++ list_for_each_entry_safe(event, e, &release_events, list) {
++ list_del(&event->list);
++
++ ret = unregister_synth_event(event);
++ add_or_delete_synth_event(event, !ret);
++ }
++
++ return ret;
+}
+
-+/**
-+ * swork_queue - queue swork
-+ *
-+ * Returns %false if @sev was already on a queue, %true otherwise.
-+ *
-+ * The work is queued and processed on a random CPU
-+ */
-+bool swork_queue(struct swork_event *sev)
++
++static void *synth_events_seq_start(struct seq_file *m, loff_t *pos)
+{
-+ unsigned long flags;
++ mutex_lock(&synth_event_mutex);
+
-+ if (test_and_set_bit(SWORK_EVENT_PENDING, &sev->flags))
-+ return false;
++ return seq_list_start(&synth_event_list, *pos);
++}
+
-+ raw_spin_lock_irqsave(&glob_worker->lock, flags);
-+ list_add_tail(&sev->item, &glob_worker->events);
-+ raw_spin_unlock_irqrestore(&glob_worker->lock, flags);
++static void *synth_events_seq_next(struct seq_file *m, void *v, loff_t *pos)
++{
++ return seq_list_next(v, &synth_event_list, pos);
++}
+
-+ swake_up(&glob_worker->wq);
-+ return true;
++static void synth_events_seq_stop(struct seq_file *m, void *v)
++{
++ mutex_unlock(&synth_event_mutex);
+}
-+EXPORT_SYMBOL_GPL(swork_queue);
+
-+/**
-+ * swork_get - get an instance of the sworker
-+ *
-+ * Returns a negative error code if the initialization of the worker
-+ * failed, %0 otherwise.
-+ *
-+ */
-+int swork_get(void)
++static int synth_events_seq_show(struct seq_file *m, void *v)
+{
-+ struct sworker *worker;
++ struct synth_field *field;
++ struct synth_event *event = v;
++ unsigned int i;
+
-+ mutex_lock(&worker_mutex);
-+ if (!glob_worker) {
-+ worker = swork_create();
-+ if (IS_ERR(worker)) {
-+ mutex_unlock(&worker_mutex);
-+ return -ENOMEM;
-+ }
++ seq_printf(m, "%s\t", event->name);
+
-+ glob_worker = worker;
++ for (i = 0; i < event->n_fields; i++) {
++ field = event->fields[i];
++
++ /* parameter values */
++ seq_printf(m, "%s %s%s", field->type, field->name,
++ i == event->n_fields - 1 ? "" : "; ");
+ }
+
-+ glob_worker->refs++;
-+ mutex_unlock(&worker_mutex);
++ seq_putc(m, '\n');
+
+ return 0;
+}
-+EXPORT_SYMBOL_GPL(swork_get);
+
-+/**
-+ * swork_put - puts an instance of the sworker
-+ *
-+ * Will destroy the sworker thread. This function must not be called until all
-+ * queued events have been completed.
-+ */
-+void swork_put(void)
++static const struct seq_operations synth_events_seq_op = {
++ .start = synth_events_seq_start,
++ .next = synth_events_seq_next,
++ .stop = synth_events_seq_stop,
++ .show = synth_events_seq_show
++};
++
++static int synth_events_open(struct inode *inode, struct file *file)
+{
-+ mutex_lock(&worker_mutex);
++ int ret;
+
-+ glob_worker->refs--;
-+ if (glob_worker->refs > 0)
-+ goto out;
++ if ((file->f_mode & FMODE_WRITE) && (file->f_flags & O_TRUNC)) {
++ ret = release_all_synth_events();
++ if (ret < 0)
++ return ret;
++ }
+
-+ swork_destroy(glob_worker);
-+ glob_worker = NULL;
-+out:
-+ mutex_unlock(&worker_mutex);
++ return seq_open(file, &synth_events_seq_op);
+}
-+EXPORT_SYMBOL_GPL(swork_put);
-diff --git a/kernel/signal.c b/kernel/signal.c
-index 75761acc77cf..ae0773c76bb0 100644
---- a/kernel/signal.c
-+++ b/kernel/signal.c
-@@ -14,6 +14,7 @@
- #include <linux/export.h>
- #include <linux/init.h>
- #include <linux/sched.h>
-+#include <linux/sched/rt.h>
- #include <linux/fs.h>
- #include <linux/tty.h>
- #include <linux/binfmts.h>
-@@ -352,13 +353,30 @@ static bool task_participate_group_stop(struct task_struct *task)
- return false;
- }
-
-+static inline struct sigqueue *get_task_cache(struct task_struct *t)
++
++static ssize_t synth_events_write(struct file *file,
++ const char __user *buffer,
++ size_t count, loff_t *ppos)
+{
-+ struct sigqueue *q = t->sigqueue_cache;
++ return trace_parse_run_command(file, buffer, count, ppos,
++ create_synth_event);
++}
+
-+ if (cmpxchg(&t->sigqueue_cache, q, NULL) != q)
-+ return NULL;
-+ return q;
++static const struct file_operations synth_events_fops = {
++ .open = synth_events_open,
++ .write = synth_events_write,
++ .read = seq_read,
++ .llseek = seq_lseek,
++ .release = seq_release,
++};
++
++static u64 hist_field_timestamp(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
++{
++ struct hist_trigger_data *hist_data = hist_field->hist_data;
++ struct trace_array *tr = hist_data->event_file->tr;
++
++ u64 ts = ring_buffer_event_time_stamp(rbe);
++
++ if (hist_data->attrs->ts_in_usecs && trace_clock_in_ns(tr))
++ ts = ns2usecs(ts);
++
++ return ts;
+}
+
-+static inline int put_task_cache(struct task_struct *t, struct sigqueue *q)
++static u64 hist_field_cpu(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
+{
-+ if (cmpxchg(&t->sigqueue_cache, NULL, q) == NULL)
-+ return 0;
-+ return 1;
++ int cpu = smp_processor_id();
++
++ return cpu;
+}
+
- /*
- * allocate a new signal queue record
- * - this may be called without locks if and only if t == current, otherwise an
- * appropriate lock must be held to stop the target task from exiting
- */
- static struct sigqueue *
--__sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimit)
-+__sigqueue_do_alloc(int sig, struct task_struct *t, gfp_t flags,
-+ int override_rlimit, int fromslab)
- {
- struct sigqueue *q = NULL;
- struct user_struct *user;
-@@ -375,7 +393,10 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimi
- if (override_rlimit ||
- atomic_read(&user->sigpending) <=
- task_rlimit(t, RLIMIT_SIGPENDING)) {
-- q = kmem_cache_alloc(sigqueue_cachep, flags);
-+ if (!fromslab)
-+ q = get_task_cache(t);
-+ if (!q)
-+ q = kmem_cache_alloc(sigqueue_cachep, flags);
- } else {
- print_dropped_signal(sig);
- }
-@@ -392,6 +413,13 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimi
- return q;
- }
-
-+static struct sigqueue *
-+__sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags,
-+ int override_rlimit)
++static struct hist_field *
++check_field_for_var_ref(struct hist_field *hist_field,
++ struct hist_trigger_data *var_data,
++ unsigned int var_idx)
+{
-+ return __sigqueue_do_alloc(sig, t, flags, override_rlimit, 0);
++ struct hist_field *found = NULL;
++
++ if (hist_field && hist_field->flags & HIST_FIELD_FL_VAR_REF) {
++ if (hist_field->var.idx == var_idx &&
++ hist_field->var.hist_data == var_data) {
++ found = hist_field;
++ }
++ }
++
++ return found;
+}
+
- static void __sigqueue_free(struct sigqueue *q)
- {
- if (q->flags & SIGQUEUE_PREALLOC)
-@@ -401,6 +429,21 @@ static void __sigqueue_free(struct sigqueue *q)
- kmem_cache_free(sigqueue_cachep, q);
- }
-
-+static void sigqueue_free_current(struct sigqueue *q)
++static struct hist_field *
++check_field_for_var_refs(struct hist_trigger_data *hist_data,
++ struct hist_field *hist_field,
++ struct hist_trigger_data *var_data,
++ unsigned int var_idx,
++ unsigned int level)
++{
++ struct hist_field *found = NULL;
++ unsigned int i;
++
++ if (level > 3)
++ return found;
++
++ if (!hist_field)
++ return found;
++
++ found = check_field_for_var_ref(hist_field, var_data, var_idx);
++ if (found)
++ return found;
++
++ for (i = 0; i < HIST_FIELD_OPERANDS_MAX; i++) {
++ struct hist_field *operand;
++
++ operand = hist_field->operands[i];
++ found = check_field_for_var_refs(hist_data, operand, var_data,
++ var_idx, level + 1);
++ if (found)
++ return found;
++ }
++
++ return found;
++}
++
++static struct hist_field *find_var_ref(struct hist_trigger_data *hist_data,
++ struct hist_trigger_data *var_data,
++ unsigned int var_idx)
++{
++ struct hist_field *hist_field, *found = NULL;
++ unsigned int i;
++
++ for_each_hist_field(i, hist_data) {
++ hist_field = hist_data->fields[i];
++ found = check_field_for_var_refs(hist_data, hist_field,
++ var_data, var_idx, 0);
++ if (found)
++ return found;
++ }
++
++ for (i = 0; i < hist_data->n_synth_var_refs; i++) {
++ hist_field = hist_data->synth_var_refs[i];
++ found = check_field_for_var_refs(hist_data, hist_field,
++ var_data, var_idx, 0);
++ if (found)
++ return found;
++ }
++
++ return found;
++}
++
++static struct hist_field *find_any_var_ref(struct hist_trigger_data *hist_data,
++ unsigned int var_idx)
+{
-+ struct user_struct *up;
++ struct trace_array *tr = hist_data->event_file->tr;
++ struct hist_field *found = NULL;
++ struct hist_var_data *var_data;
+
-+ if (q->flags & SIGQUEUE_PREALLOC)
-+ return;
++ list_for_each_entry(var_data, &tr->hist_vars, list) {
++ if (var_data->hist_data == hist_data)
++ continue;
++ found = find_var_ref(var_data->hist_data, hist_data, var_idx);
++ if (found)
++ break;
++ }
+
-+ up = q->user;
-+ if (rt_prio(current->normal_prio) && !put_task_cache(current, q)) {
-+ atomic_dec(&up->sigpending);
-+ free_uid(up);
-+ } else
-+ __sigqueue_free(q);
++ return found;
+}
+
- void flush_sigqueue(struct sigpending *queue)
- {
- struct sigqueue *q;
-@@ -414,6 +457,21 @@ void flush_sigqueue(struct sigpending *queue)
- }
-
- /*
-+ * Called from __exit_signal. Flush tsk->pending and
-+ * tsk->sigqueue_cache
-+ */
-+void flush_task_sigqueue(struct task_struct *tsk)
++static bool check_var_refs(struct hist_trigger_data *hist_data)
+{
-+ struct sigqueue *q;
++ struct hist_field *field;
++ bool found = false;
++ int i;
+
-+ flush_sigqueue(&tsk->pending);
++ for_each_hist_field(i, hist_data) {
++ field = hist_data->fields[i];
++ if (field && field->flags & HIST_FIELD_FL_VAR) {
++ if (find_any_var_ref(hist_data, field->var.idx)) {
++ found = true;
++ break;
++ }
++ }
++ }
+
-+ q = get_task_cache(tsk);
-+ if (q)
-+ kmem_cache_free(sigqueue_cachep, q);
++ return found;
+}
+
-+/*
- * Flush all pending signals for this kthread.
- */
- void flush_signals(struct task_struct *t)
-@@ -525,7 +583,7 @@ static void collect_signal(int sig, struct sigpending *list, siginfo_t *info)
- still_pending:
- list_del_init(&first->list);
- copy_siginfo(info, &first->info);
-- __sigqueue_free(first);
-+ sigqueue_free_current(first);
- } else {
- /*
- * Ok, it wasn't in the queue. This must be
-@@ -560,6 +618,8 @@ int dequeue_signal(struct task_struct *tsk, sigset_t *mask, siginfo_t *info)
- {
- int signr;
-
-+ WARN_ON_ONCE(tsk != current);
-+
- /* We only dequeue private signals from ourselves, we don't let
- * signalfd steal them
- */
-@@ -1156,8 +1216,8 @@ int do_send_sig_info(int sig, struct siginfo *info, struct task_struct *p,
- * We don't want to have recursive SIGSEGV's etc, for example,
- * that is why we also clear SIGNAL_UNKILLABLE.
- */
--int
--force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
-+static int
-+do_force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
- {
- unsigned long int flags;
- int ret, blocked, ignored;
-@@ -1182,6 +1242,39 @@ force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
- return ret;
- }
-
-+int force_sig_info(int sig, struct siginfo *info, struct task_struct *t)
++static struct hist_var_data *find_hist_vars(struct hist_trigger_data *hist_data)
+{
-+/*
-+ * On some archs, PREEMPT_RT has to delay sending a signal from a trap
-+ * since it can not enable preemption, and the signal code's spin_locks
-+ * turn into mutexes. Instead, it must set TIF_NOTIFY_RESUME which will
-+ * send the signal on exit of the trap.
-+ */
-+#ifdef ARCH_RT_DELAYS_SIGNAL_SEND
-+ if (in_atomic()) {
-+ if (WARN_ON_ONCE(t != current))
-+ return 0;
-+ if (WARN_ON_ONCE(t->forced_info.si_signo))
-+ return 0;
++ struct trace_array *tr = hist_data->event_file->tr;
++ struct hist_var_data *var_data, *found = NULL;
+
-+ if (is_si_special(info)) {
-+ WARN_ON_ONCE(info != SEND_SIG_PRIV);
-+ t->forced_info.si_signo = sig;
-+ t->forced_info.si_errno = 0;
-+ t->forced_info.si_code = SI_KERNEL;
-+ t->forced_info.si_pid = 0;
-+ t->forced_info.si_uid = 0;
-+ } else {
-+ t->forced_info = *info;
++ list_for_each_entry(var_data, &tr->hist_vars, list) {
++ if (var_data->hist_data == hist_data) {
++ found = var_data;
++ break;
+ }
++ }
+
-+ set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
-+ return 0;
++ return found;
++}
++
++static bool field_has_hist_vars(struct hist_field *hist_field,
++ unsigned int level)
++{
++ int i;
++
++ if (level > 3)
++ return false;
++
++ if (!hist_field)
++ return false;
++
++ if (hist_field->flags & HIST_FIELD_FL_VAR ||
++ hist_field->flags & HIST_FIELD_FL_VAR_REF)
++ return true;
++
++ for (i = 0; i < HIST_FIELD_OPERANDS_MAX; i++) {
++ struct hist_field *operand;
++
++ operand = hist_field->operands[i];
++ if (field_has_hist_vars(operand, level + 1))
++ return true;
+ }
-+#endif
-+ return do_force_sig_info(sig, info, t);
++
++ return false;
+}
+
- /*
- * Nuke all other threads in the group.
- */
-@@ -1216,12 +1309,12 @@ struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
- * Disable interrupts early to avoid deadlocks.
- * See rcu_read_unlock() comment header for details.
- */
-- local_irq_save(*flags);
-+ local_irq_save_nort(*flags);
- rcu_read_lock();
- sighand = rcu_dereference(tsk->sighand);
- if (unlikely(sighand == NULL)) {
- rcu_read_unlock();
-- local_irq_restore(*flags);
-+ local_irq_restore_nort(*flags);
- break;
- }
- /*
-@@ -1242,7 +1335,7 @@ struct sighand_struct *__lock_task_sighand(struct task_struct *tsk,
- }
- spin_unlock(&sighand->siglock);
- rcu_read_unlock();
-- local_irq_restore(*flags);
-+ local_irq_restore_nort(*flags);
- }
-
- return sighand;
-@@ -1485,7 +1578,8 @@ EXPORT_SYMBOL(kill_pid);
- */
- struct sigqueue *sigqueue_alloc(void)
- {
-- struct sigqueue *q = __sigqueue_alloc(-1, current, GFP_KERNEL, 0);
-+ /* Preallocated sigqueue objects always from the slabcache ! */
-+ struct sigqueue *q = __sigqueue_do_alloc(-1, current, GFP_KERNEL, 0, 1);
-
- if (q)
- q->flags |= SIGQUEUE_PREALLOC;
-@@ -1846,15 +1940,7 @@ static void ptrace_stop(int exit_code, int why, int clear_code, siginfo_t *info)
- if (gstop_done && ptrace_reparented(current))
- do_notify_parent_cldstop(current, false, why);
-
-- /*
-- * Don't want to allow preemption here, because
-- * sys_ptrace() needs this task to be inactive.
-- *
-- * XXX: implement read_unlock_no_resched().
-- */
-- preempt_disable();
- read_unlock(&tasklist_lock);
-- preempt_enable_no_resched();
- freezable_schedule();
- } else {
- /*
-diff --git a/kernel/softirq.c b/kernel/softirq.c
-index 744fa611cae0..819bd7cf5ad0 100644
---- a/kernel/softirq.c
-+++ b/kernel/softirq.c
-@@ -21,10 +21,12 @@
- #include <linux/freezer.h>
- #include <linux/kthread.h>
- #include <linux/rcupdate.h>
-+#include <linux/delay.h>
- #include <linux/ftrace.h>
- #include <linux/smp.h>
- #include <linux/smpboot.h>
- #include <linux/tick.h>
-+#include <linux/locallock.h>
- #include <linux/irq.h>
-
- #define CREATE_TRACE_POINTS
-@@ -56,12 +58,108 @@ EXPORT_SYMBOL(irq_stat);
- static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp;
-
- DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+#define TIMER_SOFTIRQS ((1 << TIMER_SOFTIRQ) | (1 << HRTIMER_SOFTIRQ))
-+DEFINE_PER_CPU(struct task_struct *, ktimer_softirqd);
-+#endif
-
- const char * const softirq_to_name[NR_SOFTIRQS] = {
- "HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "IRQ_POLL",
- "TASKLET", "SCHED", "HRTIMER", "RCU"
- };
-
-+#ifdef CONFIG_NO_HZ_COMMON
-+# ifdef CONFIG_PREEMPT_RT_FULL
++static bool has_hist_vars(struct hist_trigger_data *hist_data)
++{
++ struct hist_field *hist_field;
++ int i;
+
-+struct softirq_runner {
-+ struct task_struct *runner[NR_SOFTIRQS];
-+};
++ for_each_hist_field(i, hist_data) {
++ hist_field = hist_data->fields[i];
++ if (field_has_hist_vars(hist_field, 0))
++ return true;
++ }
+
-+static DEFINE_PER_CPU(struct softirq_runner, softirq_runners);
++ return false;
++}
+
-+static inline void softirq_set_runner(unsigned int sirq)
++static int save_hist_vars(struct hist_trigger_data *hist_data)
+{
-+ struct softirq_runner *sr = this_cpu_ptr(&softirq_runners);
++ struct trace_array *tr = hist_data->event_file->tr;
++ struct hist_var_data *var_data;
+
-+ sr->runner[sirq] = current;
++ var_data = find_hist_vars(hist_data);
++ if (var_data)
++ return 0;
++
++ if (trace_array_get(tr) < 0)
++ return -ENODEV;
++
++ var_data = kzalloc(sizeof(*var_data), GFP_KERNEL);
++ if (!var_data) {
++ trace_array_put(tr);
++ return -ENOMEM;
++ }
++
++ var_data->hist_data = hist_data;
++ list_add(&var_data->list, &tr->hist_vars);
++
++ return 0;
+}
+
-+static inline void softirq_clr_runner(unsigned int sirq)
++static void remove_hist_vars(struct hist_trigger_data *hist_data)
+{
-+ struct softirq_runner *sr = this_cpu_ptr(&softirq_runners);
++ struct trace_array *tr = hist_data->event_file->tr;
++ struct hist_var_data *var_data;
+
-+ sr->runner[sirq] = NULL;
++ var_data = find_hist_vars(hist_data);
++ if (!var_data)
++ return;
++
++ if (WARN_ON(check_var_refs(hist_data)))
++ return;
++
++ list_del(&var_data->list);
++
++ kfree(var_data);
++
++ trace_array_put(tr);
+}
+
-+/*
-+ * On preempt-rt a softirq running context might be blocked on a
-+ * lock. There might be no other runnable task on this CPU because the
-+ * lock owner runs on some other CPU. So we have to go into idle with
-+ * the pending bit set. Therefore we need to check this, otherwise we
-+ * warn about false positives which confuse users and defeat the
-+ * whole purpose of this test.
-+ *
-+ * This code is called with interrupts disabled.
-+ */
-+void softirq_check_pending_idle(void)
++static struct hist_field *find_var_field(struct hist_trigger_data *hist_data,
++ const char *var_name)
+{
-+ static int rate_limit;
-+ struct softirq_runner *sr = this_cpu_ptr(&softirq_runners);
-+ u32 warnpending;
++ struct hist_field *hist_field, *found = NULL;
+ int i;
+
-+ if (rate_limit >= 10)
-+ return;
++ for_each_hist_field(i, hist_data) {
++ hist_field = hist_data->fields[i];
++ if (hist_field && hist_field->flags & HIST_FIELD_FL_VAR &&
++ strcmp(hist_field->var.name, var_name) == 0) {
++ found = hist_field;
++ break;
++ }
++ }
+
-+ warnpending = local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK;
-+ for (i = 0; i < NR_SOFTIRQS; i++) {
-+ struct task_struct *tsk = sr->runner[i];
++ return found;
++}
+
-+ /*
-+ * The wakeup code in rtmutex.c wakes up the task
-+ * _before_ it sets pi_blocked_on to NULL under
-+ * tsk->pi_lock. So we need to check for both: state
-+ * and pi_blocked_on.
-+ */
-+ if (tsk) {
-+ raw_spin_lock(&tsk->pi_lock);
-+ if (tsk->pi_blocked_on || tsk->state == TASK_RUNNING) {
-+ /* Clear all bits pending in that task */
-+ warnpending &= ~(tsk->softirqs_raised);
-+ warnpending &= ~(1 << i);
++static struct hist_field *find_var(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file,
++ const char *var_name)
++{
++ struct hist_trigger_data *test_data;
++ struct event_trigger_data *test;
++ struct hist_field *hist_field;
++
++ hist_field = find_var_field(hist_data, var_name);
++ if (hist_field)
++ return hist_field;
++
++ list_for_each_entry_rcu(test, &file->triggers, list) {
++ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
++ test_data = test->private_data;
++ hist_field = find_var_field(test_data, var_name);
++ if (hist_field)
++ return hist_field;
++ }
++ }
++
++ return NULL;
++}
++
++static struct trace_event_file *find_var_file(struct trace_array *tr,
++ char *system,
++ char *event_name,
++ char *var_name)
++{
++ struct hist_trigger_data *var_hist_data;
++ struct hist_var_data *var_data;
++ struct trace_event_file *file, *found = NULL;
++
++ if (system)
++ return find_event_file(tr, system, event_name);
++
++ list_for_each_entry(var_data, &tr->hist_vars, list) {
++ var_hist_data = var_data->hist_data;
++ file = var_hist_data->event_file;
++ if (file == found)
++ continue;
++
++ if (find_var_field(var_hist_data, var_name)) {
++ if (found) {
++ hist_err_event("Variable name not unique, need to use fully qualified name (subsys.event.var) for variable: ", system, event_name, var_name);
++ return NULL;
+ }
-+ raw_spin_unlock(&tsk->pi_lock);
++
++ found = file;
+ }
+ }
+
-+ if (warnpending) {
-+ printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
-+ warnpending);
-+ rate_limit++;
++ return found;
++}
++
++static struct hist_field *find_file_var(struct trace_event_file *file,
++ const char *var_name)
++{
++ struct hist_trigger_data *test_data;
++ struct event_trigger_data *test;
++ struct hist_field *hist_field;
++
++ list_for_each_entry_rcu(test, &file->triggers, list) {
++ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
++ test_data = test->private_data;
++ hist_field = find_var_field(test_data, var_name);
++ if (hist_field)
++ return hist_field;
++ }
+ }
++
++ return NULL;
+}
-+# else
-+/*
-+ * On !PREEMPT_RT we just printk rate limited:
-+ */
-+void softirq_check_pending_idle(void)
++
++static struct hist_field *
++find_match_var(struct hist_trigger_data *hist_data, char *var_name)
+{
-+ static int rate_limit;
++ struct trace_array *tr = hist_data->event_file->tr;
++ struct hist_field *hist_field, *found = NULL;
++ struct trace_event_file *file;
++ unsigned int i;
+
-+ if (rate_limit < 10 &&
-+ (local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
-+ printk(KERN_ERR "NOHZ: local_softirq_pending %02x\n",
-+ local_softirq_pending());
-+ rate_limit++;
++ for (i = 0; i < hist_data->n_actions; i++) {
++ struct action_data *data = hist_data->actions[i];
++
++ if (data->fn == action_trace) {
++ char *system = data->onmatch.match_event_system;
++ char *event_name = data->onmatch.match_event;
++
++ file = find_var_file(tr, system, event_name, var_name);
++ if (!file)
++ continue;
++ hist_field = find_file_var(file, var_name);
++ if (hist_field) {
++ if (found) {
++ hist_err_event("Variable name not unique, need to use fully qualified name (subsys.event.var) for variable: ", system, event_name, var_name);
++ return ERR_PTR(-EINVAL);
++ }
++
++ found = hist_field;
++ }
++ }
+ }
++ return found;
+}
-+# endif
+
-+#else /* !CONFIG_NO_HZ_COMMON */
-+static inline void softirq_set_runner(unsigned int sirq) { }
-+static inline void softirq_clr_runner(unsigned int sirq) { }
-+#endif
++static struct hist_field *find_event_var(struct hist_trigger_data *hist_data,
++ char *system,
++ char *event_name,
++ char *var_name)
++{
++ struct trace_array *tr = hist_data->event_file->tr;
++ struct hist_field *hist_field = NULL;
++ struct trace_event_file *file;
+
- /*
- * we cannot loop indefinitely here to avoid userspace starvation,
- * but we also don't want to introduce a worst case 1/HZ latency
-@@ -77,6 +175,38 @@ static void wakeup_softirqd(void)
- wake_up_process(tsk);
- }
++ if (!system || !event_name) {
++ hist_field = find_match_var(hist_data, var_name);
++ if (IS_ERR(hist_field))
++ return NULL;
++ if (hist_field)
++ return hist_field;
++ }
++
++ file = find_var_file(tr, system, event_name, var_name);
++ if (!file)
++ return NULL;
++
++ hist_field = find_file_var(file, var_name);
++
++ return hist_field;
++}
++
++struct hist_elt_data {
++ char *comm;
++ u64 *var_ref_vals;
++ char *field_var_str[SYNTH_FIELDS_MAX];
+ };
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+static void wakeup_timer_softirqd(void)
++static u64 hist_field_var_ref(struct hist_field *hist_field,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *event)
+{
-+ /* Interrupts are disabled: no need to stop preemption */
-+ struct task_struct *tsk = __this_cpu_read(ktimer_softirqd);
++ struct hist_elt_data *elt_data;
++ u64 var_val = 0;
+
-+ if (tsk && tsk->state != TASK_RUNNING)
-+ wake_up_process(tsk);
++ elt_data = elt->private_data;
++ var_val = elt_data->var_ref_vals[hist_field->var_ref_idx];
++
++ return var_val;
+}
-+#endif
+
-+static void handle_softirq(unsigned int vec_nr)
++static bool resolve_var_refs(struct hist_trigger_data *hist_data, void *key,
++ u64 *var_ref_vals, bool self)
+{
-+ struct softirq_action *h = softirq_vec + vec_nr;
-+ int prev_count;
++ struct hist_trigger_data *var_data;
++ struct tracing_map_elt *var_elt;
++ struct hist_field *hist_field;
++ unsigned int i, var_idx;
++ bool resolved = true;
++ u64 var_val = 0;
+
-+ prev_count = preempt_count();
++ for (i = 0; i < hist_data->n_var_refs; i++) {
++ hist_field = hist_data->var_refs[i];
++ var_idx = hist_field->var.idx;
++ var_data = hist_field->var.hist_data;
+
-+ kstat_incr_softirqs_this_cpu(vec_nr);
++ if (var_data == NULL) {
++ resolved = false;
++ break;
++ }
+
-+ trace_softirq_entry(vec_nr);
-+ h->action(h);
-+ trace_softirq_exit(vec_nr);
-+ if (unlikely(prev_count != preempt_count())) {
-+ pr_err("huh, entered softirq %u %s %p with preempt_count %08x, exited with %08x?\n",
-+ vec_nr, softirq_to_name[vec_nr], h->action,
-+ prev_count, preempt_count());
-+ preempt_count_set(prev_count);
++ if ((self && var_data != hist_data) ||
++ (!self && var_data == hist_data))
++ continue;
++
++ var_elt = tracing_map_lookup(var_data->map, key);
++ if (!var_elt) {
++ resolved = false;
++ break;
++ }
++
++ if (!tracing_map_var_set(var_elt, var_idx)) {
++ resolved = false;
++ break;
++ }
++
++ if (self || !hist_field->read_once)
++ var_val = tracing_map_read_var(var_elt, var_idx);
++ else
++ var_val = tracing_map_read_var_once(var_elt, var_idx);
++
++ var_ref_vals[i] = var_val;
+ }
++
++ return resolved;
+}
+
-+#ifndef CONFIG_PREEMPT_RT_FULL
- /*
- * If ksoftirqd is scheduled, we do not want to process pending softirqs
- * right now. Let ksoftirqd handle this at its own rate, to get fairness.
-@@ -88,6 +218,47 @@ static bool ksoftirqd_running(void)
- return tsk && (tsk->state == TASK_RUNNING);
- }
-
-+static inline int ksoftirqd_softirq_pending(void)
++static const char *hist_field_name(struct hist_field *field,
++ unsigned int level)
+{
-+ return local_softirq_pending();
++ const char *field_name = "";
++
++ if (level > 1)
++ return field_name;
++
++ if (field->field)
++ field_name = field->field->name;
++ else if (field->flags & HIST_FIELD_FL_LOG2 ||
++ field->flags & HIST_FIELD_FL_ALIAS)
++ field_name = hist_field_name(field->operands[0], ++level);
++ else if (field->flags & HIST_FIELD_FL_CPU)
++ field_name = "cpu";
++ else if (field->flags & HIST_FIELD_FL_EXPR ||
++ field->flags & HIST_FIELD_FL_VAR_REF) {
++ if (field->system) {
++ static char full_name[MAX_FILTER_STR_VAL];
++
++ strcat(full_name, field->system);
++ strcat(full_name, ".");
++ strcat(full_name, field->event_name);
++ strcat(full_name, ".");
++ strcat(full_name, field->name);
++ field_name = full_name;
++ } else
++ field_name = field->name;
++ } else if (field->flags & HIST_FIELD_FL_TIMESTAMP)
++ field_name = "common_timestamp";
++
++ if (field_name == NULL)
++ field_name = "";
++
++ return field_name;
+}
+
-+static void handle_pending_softirqs(u32 pending)
-+{
-+ struct softirq_action *h = softirq_vec;
-+ int softirq_bit;
+ static hist_field_fn_t select_value_fn(int field_size, int field_is_signed)
+ {
+ hist_field_fn_t fn = NULL;
+@@ -207,16 +1771,119 @@
+
+ static void destroy_hist_trigger_attrs(struct hist_trigger_attrs *attrs)
+ {
++ unsigned int i;
+
-+ local_irq_enable();
+ if (!attrs)
+ return;
+
++ for (i = 0; i < attrs->n_assignments; i++)
++ kfree(attrs->assignment_str[i]);
+
-+ h = softirq_vec;
++ for (i = 0; i < attrs->n_actions; i++)
++ kfree(attrs->action_str[i]);
+
-+ while ((softirq_bit = ffs(pending))) {
-+ unsigned int vec_nr;
+ kfree(attrs->name);
+ kfree(attrs->sort_key_str);
+ kfree(attrs->keys_str);
+ kfree(attrs->vals_str);
++ kfree(attrs->clock);
+ kfree(attrs);
+ }
+
++static int parse_action(char *str, struct hist_trigger_attrs *attrs)
++{
++ int ret = -EINVAL;
+
-+ h += softirq_bit - 1;
-+ vec_nr = h - softirq_vec;
-+ handle_softirq(vec_nr);
++ if (attrs->n_actions >= HIST_ACTIONS_MAX)
++ return ret;
+
-+ h++;
-+ pending >>= softirq_bit;
++ if ((strncmp(str, "onmatch(", strlen("onmatch(")) == 0) ||
++ (strncmp(str, "onmax(", strlen("onmax(")) == 0)) {
++ attrs->action_str[attrs->n_actions] = kstrdup(str, GFP_KERNEL);
++ if (!attrs->action_str[attrs->n_actions]) {
++ ret = -ENOMEM;
++ return ret;
++ }
++ attrs->n_actions++;
++ ret = 0;
+ }
+
-+ rcu_bh_qs();
-+ local_irq_disable();
++ return ret;
+}
+
-+static void run_ksoftirqd(unsigned int cpu)
++static int parse_assignment(char *str, struct hist_trigger_attrs *attrs)
+{
-+ local_irq_disable();
-+ if (ksoftirqd_softirq_pending()) {
-+ __do_softirq();
-+ local_irq_enable();
-+ cond_resched_rcu_qs();
-+ return;
++ int ret = 0;
++
++ if ((strncmp(str, "key=", strlen("key=")) == 0) ||
++ (strncmp(str, "keys=", strlen("keys=")) == 0)) {
++ attrs->keys_str = kstrdup(str, GFP_KERNEL);
++ if (!attrs->keys_str) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ } else if ((strncmp(str, "val=", strlen("val=")) == 0) ||
++ (strncmp(str, "vals=", strlen("vals=")) == 0) ||
++ (strncmp(str, "values=", strlen("values=")) == 0)) {
++ attrs->vals_str = kstrdup(str, GFP_KERNEL);
++ if (!attrs->vals_str) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ } else if (strncmp(str, "sort=", strlen("sort=")) == 0) {
++ attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
++ if (!attrs->sort_key_str) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ } else if (strncmp(str, "name=", strlen("name=")) == 0) {
++ attrs->name = kstrdup(str, GFP_KERNEL);
++ if (!attrs->name) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ } else if (strncmp(str, "clock=", strlen("clock=")) == 0) {
++ strsep(&str, "=");
++ if (!str) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ str = strstrip(str);
++ attrs->clock = kstrdup(str, GFP_KERNEL);
++ if (!attrs->clock) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ } else if (strncmp(str, "size=", strlen("size=")) == 0) {
++ int map_bits = parse_map_size(str);
++
++ if (map_bits < 0) {
++ ret = map_bits;
++ goto out;
++ }
++ attrs->map_bits = map_bits;
++ } else {
++ char *assignment;
++
++ if (attrs->n_assignments == TRACING_MAP_VARS_MAX) {
++ hist_err("Too many variables defined: ", str);
++ ret = -EINVAL;
++ goto out;
++ }
++
++ assignment = kstrdup(str, GFP_KERNEL);
++ if (!assignment) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ attrs->assignment_str[attrs->n_assignments++] = assignment;
+ }
-+ local_irq_enable();
++ out:
++ return ret;
+}
+
- /*
- * preempt_count and SOFTIRQ_OFFSET usage:
- * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving
-@@ -243,10 +414,8 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
- unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
- unsigned long old_flags = current->flags;
- int max_restart = MAX_SOFTIRQ_RESTART;
-- struct softirq_action *h;
- bool in_hardirq;
- __u32 pending;
-- int softirq_bit;
+ static struct hist_trigger_attrs *parse_hist_trigger_attrs(char *trigger_str)
+ {
+ struct hist_trigger_attrs *attrs;
+@@ -229,35 +1896,21 @@
+ while (trigger_str) {
+ char *str = strsep(&trigger_str, ":");
+
+- if ((strncmp(str, "key=", strlen("key=")) == 0) ||
+- (strncmp(str, "keys=", strlen("keys=")) == 0))
+- attrs->keys_str = kstrdup(str, GFP_KERNEL);
+- else if ((strncmp(str, "val=", strlen("val=")) == 0) ||
+- (strncmp(str, "vals=", strlen("vals=")) == 0) ||
+- (strncmp(str, "values=", strlen("values=")) == 0))
+- attrs->vals_str = kstrdup(str, GFP_KERNEL);
+- else if (strncmp(str, "sort=", strlen("sort=")) == 0)
+- attrs->sort_key_str = kstrdup(str, GFP_KERNEL);
+- else if (strncmp(str, "name=", strlen("name=")) == 0)
+- attrs->name = kstrdup(str, GFP_KERNEL);
+- else if (strcmp(str, "pause") == 0)
++ if (strchr(str, '=')) {
++ ret = parse_assignment(str, attrs);
++ if (ret)
++ goto free;
++ } else if (strcmp(str, "pause") == 0)
+ attrs->pause = true;
+ else if ((strcmp(str, "cont") == 0) ||
+ (strcmp(str, "continue") == 0))
+ attrs->cont = true;
+ else if (strcmp(str, "clear") == 0)
+ attrs->clear = true;
+- else if (strncmp(str, "size=", strlen("size=")) == 0) {
+- int map_bits = parse_map_size(str);
+-
+- if (map_bits < 0) {
+- ret = map_bits;
++ else {
++ ret = parse_action(str, attrs);
++ if (ret)
+ goto free;
+- }
+- attrs->map_bits = map_bits;
+- } else {
+- ret = -EINVAL;
+- goto free;
+ }
+ }
- /*
- * Mask out PF_MEMALLOC s current task context is borrowed for the
-@@ -265,36 +434,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
- /* Reset the pending bitmask before enabling irqs */
- set_softirq_pending(0);
+@@ -266,6 +1919,14 @@
+ goto free;
+ }
-- local_irq_enable();
--
-- h = softirq_vec;
--
-- while ((softirq_bit = ffs(pending))) {
-- unsigned int vec_nr;
-- int prev_count;
--
-- h += softirq_bit - 1;
--
-- vec_nr = h - softirq_vec;
-- prev_count = preempt_count();
--
-- kstat_incr_softirqs_this_cpu(vec_nr);
--
-- trace_softirq_entry(vec_nr);
-- h->action(h);
-- trace_softirq_exit(vec_nr);
-- if (unlikely(prev_count != preempt_count())) {
-- pr_err("huh, entered softirq %u %s %p with preempt_count %08x, exited with %08x?\n",
-- vec_nr, softirq_to_name[vec_nr], h->action,
-- prev_count, preempt_count());
-- preempt_count_set(prev_count);
-- }
-- h++;
-- pending >>= softirq_bit;
-- }
--
-- rcu_bh_qs();
-- local_irq_disable();
-+ handle_pending_softirqs(pending);
++ if (!attrs->clock) {
++ attrs->clock = kstrdup("global", GFP_KERNEL);
++ if (!attrs->clock) {
++ ret = -ENOMEM;
++ goto free;
++ }
++ }
++
+ return attrs;
+ free:
+ destroy_hist_trigger_attrs(attrs);
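As a sketch of how the pieces above fit together (the event path and field names are assumed, not taken from this patch): parse_hist_trigger_attrs() splits the trigger on ':', anything containing '=' goes to parse_assignment(), anything starting with onmatch(/onmax( goes to parse_action(), and a missing clock= defaults to "global". A trigger exercising most of those branches might read:

   # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:sort=wakeup_lat:size=4096:clock=global' >> events/sched/sched_switch/trigger

Here keys=, sort=, size= and clock= hit their dedicated branches in parse_assignment(), while the wakeup_lat=... piece falls through to the variable-assignment branch.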
+@@ -288,65 +1949,222 @@
+ memcpy(comm, task->comm, TASK_COMM_LEN);
+ }
- pending = local_softirq_pending();
- if (pending) {
-@@ -331,6 +471,309 @@ asmlinkage __visible void do_softirq(void)
+-static void hist_trigger_elt_comm_free(struct tracing_map_elt *elt)
++static void hist_elt_data_free(struct hist_elt_data *elt_data)
+ {
+- kfree((char *)elt->private_data);
++ unsigned int i;
++
++ for (i = 0; i < SYNTH_FIELDS_MAX; i++)
++ kfree(elt_data->field_var_str[i]);
++
++ kfree(elt_data->comm);
++ kfree(elt_data);
}
- /*
-+ * This function must run with irqs disabled!
-+ */
-+void raise_softirq_irqoff(unsigned int nr)
+-static int hist_trigger_elt_comm_alloc(struct tracing_map_elt *elt)
++static void hist_trigger_elt_data_free(struct tracing_map_elt *elt)
+{
-+ __raise_softirq_irqoff(nr);
++ struct hist_elt_data *elt_data = elt->private_data;
+
-+ /*
-+ * If we're in an interrupt or softirq, we're done
-+ * (this also catches softirq-disabled code). We will
-+ * actually run the softirq once we return from
-+ * the irq or softirq.
-+ *
-+ * Otherwise we wake up ksoftirqd to make sure we
-+ * schedule the softirq soon.
-+ */
-+ if (!in_interrupt())
-+ wakeup_softirqd();
++ hist_elt_data_free(elt_data);
+}
+
-+void __raise_softirq_irqoff(unsigned int nr)
-+{
-+ trace_softirq_raise(nr);
-+ or_softirq_pending(1UL << nr);
-+}
++static int hist_trigger_elt_data_alloc(struct tracing_map_elt *elt)
+ {
+ struct hist_trigger_data *hist_data = elt->map->private_data;
++ unsigned int size = TASK_COMM_LEN;
++ struct hist_elt_data *elt_data;
+ struct hist_field *key_field;
+- unsigned int i;
++ unsigned int i, n_str;
+
-+static inline void local_bh_disable_nort(void) { local_bh_disable(); }
-+static inline void _local_bh_enable_nort(void) { _local_bh_enable(); }
-+static void ksoftirqd_set_sched_params(unsigned int cpu) { }
++ elt_data = kzalloc(sizeof(*elt_data), GFP_KERNEL);
++ if (!elt_data)
++ return -ENOMEM;
+
+ for_each_hist_key_field(i, hist_data) {
+ key_field = hist_data->fields[i];
+
+ if (key_field->flags & HIST_FIELD_FL_EXECNAME) {
+- unsigned int size = TASK_COMM_LEN + 1;
+-
+- elt->private_data = kzalloc(size, GFP_KERNEL);
+- if (!elt->private_data)
++ elt_data->comm = kzalloc(size, GFP_KERNEL);
++ if (!elt_data->comm) {
++ kfree(elt_data);
+ return -ENOMEM;
++ }
+ break;
+ }
+ }
+
++ n_str = hist_data->n_field_var_str + hist_data->n_max_var_str;
++
++ size = STR_VAR_LEN_MAX;
+
-+#else /* !PREEMPT_RT_FULL */
++ for (i = 0; i < n_str; i++) {
++ elt_data->field_var_str[i] = kzalloc(size, GFP_KERNEL);
++ if (!elt_data->field_var_str[i]) {
++ hist_elt_data_free(elt_data);
++ return -ENOMEM;
++ }
++ }
+
-+/*
-+ * On RT we serialize softirq execution with a cpu local lock per softirq
-+ */
-+static DEFINE_PER_CPU(struct local_irq_lock [NR_SOFTIRQS], local_softirq_locks);
++ elt->private_data = elt_data;
+
-+void __init softirq_early_init(void)
+ return 0;
+ }
+
+-static void hist_trigger_elt_comm_copy(struct tracing_map_elt *to,
+- struct tracing_map_elt *from)
++static void hist_trigger_elt_data_init(struct tracing_map_elt *elt)
+ {
+- char *comm_from = from->private_data;
+- char *comm_to = to->private_data;
++ struct hist_elt_data *elt_data = elt->private_data;
+
+- if (comm_from)
+- memcpy(comm_to, comm_from, TASK_COMM_LEN + 1);
++ if (elt_data->comm)
++ save_comm(elt_data->comm, current);
+ }
+
+-static void hist_trigger_elt_comm_init(struct tracing_map_elt *elt)
++static const struct tracing_map_ops hist_trigger_elt_data_ops = {
++ .elt_alloc = hist_trigger_elt_data_alloc,
++ .elt_free = hist_trigger_elt_data_free,
++ .elt_init = hist_trigger_elt_data_init,
++};
++
++static const char *get_hist_field_flags(struct hist_field *hist_field)
+ {
+- char *comm = elt->private_data;
++ const char *flags_str = NULL;
+
+- if (comm)
+- save_comm(comm, current);
++ if (hist_field->flags & HIST_FIELD_FL_HEX)
++ flags_str = "hex";
++ else if (hist_field->flags & HIST_FIELD_FL_SYM)
++ flags_str = "sym";
++ else if (hist_field->flags & HIST_FIELD_FL_SYM_OFFSET)
++ flags_str = "sym-offset";
++ else if (hist_field->flags & HIST_FIELD_FL_EXECNAME)
++ flags_str = "execname";
++ else if (hist_field->flags & HIST_FIELD_FL_SYSCALL)
++ flags_str = "syscall";
++ else if (hist_field->flags & HIST_FIELD_FL_LOG2)
++ flags_str = "log2";
++ else if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP_USECS)
++ flags_str = "usecs";
++
++ return flags_str;
+ }
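get_hist_field_flags() maps a field's flags back to the modifier suffix written in the trigger string; as an assumed, illustrative example, a key specification can carry these modifiers directly:

   # echo 'hist:keys=common_pid.execname,call_site.sym-offset:vals=bytes_req' >> events/kmem/kmalloc/trigger

and a common_timestamp.usecs reference is what selects the "usecs" (HIST_FIELD_FL_TIMESTAMP_USECS) case above.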
+
+-static const struct tracing_map_ops hist_trigger_elt_comm_ops = {
+- .elt_alloc = hist_trigger_elt_comm_alloc,
+- .elt_copy = hist_trigger_elt_comm_copy,
+- .elt_free = hist_trigger_elt_comm_free,
+- .elt_init = hist_trigger_elt_comm_init,
+-};
++static void expr_field_str(struct hist_field *field, char *expr)
+{
-+ int i;
++ if (field->flags & HIST_FIELD_FL_VAR_REF)
++ strcat(expr, "$");
+
+-static void destroy_hist_field(struct hist_field *hist_field)
++ strcat(expr, hist_field_name(field, 0));
+
-+ for (i = 0; i < NR_SOFTIRQS; i++)
-+ local_irq_lock_init(local_softirq_locks[i]);
-+}
++ if (field->flags && !(field->flags & HIST_FIELD_FL_VAR_REF)) {
++ const char *flags_str = get_hist_field_flags(field);
+
-+static void lock_softirq(int which)
-+{
-+ local_lock(local_softirq_locks[which]);
++ if (flags_str) {
++ strcat(expr, ".");
++ strcat(expr, flags_str);
++ }
++ }
+}
+
-+static void unlock_softirq(int which)
++static char *expr_str(struct hist_field *field, unsigned int level)
+{
-+ local_unlock(local_softirq_locks[which]);
-+}
++ char *expr;
+
-+static void do_single_softirq(int which)
-+{
-+ unsigned long old_flags = current->flags;
++ if (level > 1)
++ return NULL;
+
-+ current->flags &= ~PF_MEMALLOC;
-+ vtime_account_irq_enter(current);
-+ current->flags |= PF_IN_SOFTIRQ;
-+ lockdep_softirq_enter();
-+ local_irq_enable();
-+ handle_softirq(which);
-+ local_irq_disable();
-+ lockdep_softirq_exit();
-+ current->flags &= ~PF_IN_SOFTIRQ;
-+ vtime_account_irq_enter(current);
-+ tsk_restore_flags(current, old_flags, PF_MEMALLOC);
-+}
++ expr = kzalloc(MAX_FILTER_STR_VAL, GFP_KERNEL);
++ if (!expr)
++ return NULL;
+
-+/*
-+ * Called with interrupts disabled. Process softirqs which were raised
-+ * in current context (or on behalf of ksoftirqd).
-+ */
-+static void do_current_softirqs(void)
-+{
-+ while (current->softirqs_raised) {
-+ int i = __ffs(current->softirqs_raised);
-+ unsigned int pending, mask = (1U << i);
++ if (!field->operands[0]) {
++ expr_field_str(field, expr);
++ return expr;
++ }
+
-+ current->softirqs_raised &= ~mask;
-+ local_irq_enable();
++ if (field->operator == FIELD_OP_UNARY_MINUS) {
++ char *subexpr;
+
-+ /*
-+ * If the lock is contended, we boost the owner to
-+ * process the softirq or leave the critical section
-+ * now.
-+ */
-+ lock_softirq(i);
-+ local_irq_disable();
-+ softirq_set_runner(i);
-+ /*
-+ * Check with the local_softirq_pending() bits,
-+ * whether we need to process this still or if someone
-+ * else took care of it.
-+ */
-+ pending = local_softirq_pending();
-+ if (pending & mask) {
-+ set_softirq_pending(pending & ~mask);
-+ do_single_softirq(i);
++ strcat(expr, "-(");
++ subexpr = expr_str(field->operands[0], ++level);
++ if (!subexpr) {
++ kfree(expr);
++ return NULL;
+ }
-+ softirq_clr_runner(i);
-+ WARN_ON(current->softirq_nestcnt != 1);
-+ local_irq_enable();
-+ unlock_softirq(i);
-+ local_irq_disable();
-+ }
-+}
++ strcat(expr, subexpr);
++ strcat(expr, ")");
+
-+void __local_bh_disable(void)
-+{
-+ if (++current->softirq_nestcnt == 1)
-+ migrate_disable();
-+}
-+EXPORT_SYMBOL(__local_bh_disable);
++ kfree(subexpr);
+
-+void __local_bh_enable(void)
-+{
-+ if (WARN_ON(current->softirq_nestcnt == 0))
-+ return;
++ return expr;
++ }
+
-+ local_irq_disable();
-+ if (current->softirq_nestcnt == 1 && current->softirqs_raised)
-+ do_current_softirqs();
-+ local_irq_enable();
++ expr_field_str(field->operands[0], expr);
+
-+ if (--current->softirq_nestcnt == 0)
-+ migrate_enable();
-+}
-+EXPORT_SYMBOL(__local_bh_enable);
++ switch (field->operator) {
++ case FIELD_OP_MINUS:
++ strcat(expr, "-");
++ break;
++ case FIELD_OP_PLUS:
++ strcat(expr, "+");
++ break;
++ default:
++ kfree(expr);
++ return NULL;
++ }
+
-+void _local_bh_enable(void)
-+{
-+ if (WARN_ON(current->softirq_nestcnt == 0))
-+ return;
-+ if (--current->softirq_nestcnt == 0)
-+ migrate_enable();
-+}
-+EXPORT_SYMBOL(_local_bh_enable);
++ expr_field_str(field->operands[1], expr);
+
-+int in_serving_softirq(void)
-+{
-+ return current->flags & PF_IN_SOFTIRQ;
++ return expr;
+}
-+EXPORT_SYMBOL(in_serving_softirq);
+
-+/* Called with preemption disabled */
-+static void run_ksoftirqd(unsigned int cpu)
++static int contains_operator(char *str)
+{
-+ local_irq_disable();
-+ current->softirq_nestcnt++;
++ enum field_op_id field_op = FIELD_OP_NONE;
++ char *op;
+
-+ do_current_softirqs();
-+ current->softirq_nestcnt--;
-+ local_irq_enable();
-+ cond_resched_rcu_qs();
-+}
++ op = strpbrk(str, "+-");
++ if (!op)
++ return FIELD_OP_NONE;
+
-+/*
-+ * Called from netif_rx_ni(). Preemption enabled, but migration
-+ * disabled. So the cpu can't go away under us.
-+ */
-+void thread_do_softirq(void)
-+{
-+ if (!in_serving_softirq() && current->softirqs_raised) {
-+ current->softirq_nestcnt++;
-+ do_current_softirqs();
-+ current->softirq_nestcnt--;
++ switch (*op) {
++ case '-':
++ if (*str == '-')
++ field_op = FIELD_OP_UNARY_MINUS;
++ else
++ field_op = FIELD_OP_MINUS;
++ break;
++ case '+':
++ field_op = FIELD_OP_PLUS;
++ break;
++ default:
++ break;
+ }
-+}
+
-+static void do_raise_softirq_irqoff(unsigned int nr)
-+{
-+ unsigned int mask;
++ return field_op;
++}
+
-+ mask = 1UL << nr;
++static void destroy_hist_field(struct hist_field *hist_field,
++ unsigned int level)
+ {
++ unsigned int i;
+
-+ trace_softirq_raise(nr);
-+ or_softirq_pending(mask);
++ if (level > 3)
++ return;
+
-+ /*
-+ * If we are not in a hard interrupt and inside a bh disabled
-+ * region, we simply raise the flag on current. local_bh_enable()
-+ * will make sure that the softirq is executed. Otherwise we
-+ * delegate it to ksoftirqd.
-+ */
-+ if (!in_irq() && current->softirq_nestcnt)
-+ current->softirqs_raised |= mask;
-+ else if (!__this_cpu_read(ksoftirqd) || !__this_cpu_read(ktimer_softirqd))
++ if (!hist_field)
+ return;
+
-+ if (mask & TIMER_SOFTIRQS)
-+ __this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
-+ else
-+ __this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
-+}
++ for (i = 0; i < HIST_FIELD_OPERANDS_MAX; i++)
++ destroy_hist_field(hist_field->operands[i], level + 1);
+
-+static void wakeup_proper_softirq(unsigned int nr)
-+{
-+ if ((1UL << nr) & TIMER_SOFTIRQS)
-+ wakeup_timer_softirqd();
-+ else
-+ wakeup_softirqd();
-+}
++ kfree(hist_field->var.name);
++ kfree(hist_field->name);
++ kfree(hist_field->type);
+
-+void __raise_softirq_irqoff(unsigned int nr)
-+{
-+ do_raise_softirq_irqoff(nr);
-+ if (!in_irq() && !current->softirq_nestcnt)
-+ wakeup_proper_softirq(nr);
-+}
+ kfree(hist_field);
+ }
+
+-static struct hist_field *create_hist_field(struct ftrace_event_field *field,
+- unsigned long flags)
++static struct hist_field *create_hist_field(struct hist_trigger_data *hist_data,
++ struct ftrace_event_field *field,
++ unsigned long flags,
++ char *var_name)
+ {
+ struct hist_field *hist_field;
+
+@@ -357,8 +2175,22 @@
+ if (!hist_field)
+ return NULL;
+
++ hist_field->hist_data = hist_data;
+
-+/*
-+ * Same as __raise_softirq_irqoff() but will process them in ksoftirqd
-+ */
-+void __raise_softirq_irqoff_ksoft(unsigned int nr)
-+{
-+ unsigned int mask;
++ if (flags & HIST_FIELD_FL_EXPR || flags & HIST_FIELD_FL_ALIAS)
++ goto out; /* caller will populate */
+
-+ if (WARN_ON_ONCE(!__this_cpu_read(ksoftirqd) ||
-+ !__this_cpu_read(ktimer_softirqd)))
-+ return;
-+ mask = 1UL << nr;
++ if (flags & HIST_FIELD_FL_VAR_REF) {
++ hist_field->fn = hist_field_var_ref;
++ goto out;
++ }
+
-+ trace_softirq_raise(nr);
-+ or_softirq_pending(mask);
-+ if (mask & TIMER_SOFTIRQS)
-+ __this_cpu_read(ktimer_softirqd)->softirqs_raised |= mask;
-+ else
-+ __this_cpu_read(ksoftirqd)->softirqs_raised |= mask;
-+ wakeup_proper_softirq(nr);
-+}
+ if (flags & HIST_FIELD_FL_HITCOUNT) {
+ hist_field->fn = hist_field_counter;
++ hist_field->size = sizeof(u64);
++ hist_field->type = kstrdup("u64", GFP_KERNEL);
++ if (!hist_field->type)
++ goto free;
+ goto out;
+ }
+
+@@ -368,7 +2200,31 @@
+ }
+
+ if (flags & HIST_FIELD_FL_LOG2) {
++ unsigned long fl = flags & ~HIST_FIELD_FL_LOG2;
+ hist_field->fn = hist_field_log2;
++ hist_field->operands[0] = create_hist_field(hist_data, field, fl, NULL);
++ hist_field->size = hist_field->operands[0]->size;
++ hist_field->type = kstrdup(hist_field->operands[0]->type, GFP_KERNEL);
++ if (!hist_field->type)
++ goto free;
++ goto out;
++ }
+
-+/*
-+ * This function must run with irqs disabled!
-+ */
-+void raise_softirq_irqoff(unsigned int nr)
-+{
-+ do_raise_softirq_irqoff(nr);
++ if (flags & HIST_FIELD_FL_TIMESTAMP) {
++ hist_field->fn = hist_field_timestamp;
++ hist_field->size = sizeof(u64);
++ hist_field->type = kstrdup("u64", GFP_KERNEL);
++ if (!hist_field->type)
++ goto free;
++ goto out;
++ }
+
-+ /*
-+ * If we're in a hard interrupt we let the irq return code deal
-+ * with the wakeup of ksoftirqd.
-+ */
-+ if (in_irq())
-+ return;
-+ /*
-+ * If we are in thread context but outside of a bh disabled
-+ * region, we need to wake ksoftirqd as well.
-+ *
-+ * CHECKME: Some of the places which do that could be wrapped
-+ * into local_bh_disable/enable pairs. Though it's unclear
-+ * whether this is worth the effort. To find those places just
-+ * raise a WARN() if the condition is met.
-+ */
-+ if (!current->softirq_nestcnt)
-+ wakeup_proper_softirq(nr);
-+}
++ if (flags & HIST_FIELD_FL_CPU) {
++ hist_field->fn = hist_field_cpu;
++ hist_field->size = sizeof(int);
++ hist_field->type = kstrdup("unsigned int", GFP_KERNEL);
++ if (!hist_field->type)
++ goto free;
+ goto out;
+ }
+
+@@ -378,6 +2234,11 @@
+ if (is_string_field(field)) {
+ flags |= HIST_FIELD_FL_STRING;
+
++ hist_field->size = MAX_FILTER_STR_VAL;
++ hist_field->type = kstrdup(field->type, GFP_KERNEL);
++ if (!hist_field->type)
++ goto free;
++
+ if (field->filter_type == FILTER_STATIC_STRING)
+ hist_field->fn = hist_field_string;
+ else if (field->filter_type == FILTER_DYN_STRING)
+@@ -385,10 +2246,16 @@
+ else
+ hist_field->fn = hist_field_pstring;
+ } else {
++ hist_field->size = field->size;
++ hist_field->is_signed = field->is_signed;
++ hist_field->type = kstrdup(field->type, GFP_KERNEL);
++ if (!hist_field->type)
++ goto free;
++
+ hist_field->fn = select_value_fn(field->size,
+ field->is_signed);
+ if (!hist_field->fn) {
+- destroy_hist_field(hist_field);
++ destroy_hist_field(hist_field, 0);
+ return NULL;
+ }
+ }
+@@ -396,84 +2263,1636 @@
+ hist_field->field = field;
+ hist_field->flags = flags;
+
++ if (var_name) {
++ hist_field->var.name = kstrdup(var_name, GFP_KERNEL);
++ if (!hist_field->var.name)
++ goto free;
++ }
++
+ return hist_field;
++ free:
++ destroy_hist_field(hist_field, 0);
++ return NULL;
+ }
+
+ static void destroy_hist_fields(struct hist_trigger_data *hist_data)
+ {
+ unsigned int i;
+
+- for (i = 0; i < TRACING_MAP_FIELDS_MAX; i++) {
++ for (i = 0; i < HIST_FIELDS_MAX; i++) {
+ if (hist_data->fields[i]) {
+- destroy_hist_field(hist_data->fields[i]);
++ destroy_hist_field(hist_data->fields[i], 0);
+ hist_data->fields[i] = NULL;
+ }
+ }
+ }
+
+-static int create_hitcount_val(struct hist_trigger_data *hist_data)
++static int init_var_ref(struct hist_field *ref_field,
++ struct hist_field *var_field,
++ char *system, char *event_name)
+ {
+- hist_data->fields[HITCOUNT_IDX] =
+- create_hist_field(NULL, HIST_FIELD_FL_HITCOUNT);
+- if (!hist_data->fields[HITCOUNT_IDX])
+- return -ENOMEM;
++ int err = 0;
+
+- hist_data->n_vals++;
++ ref_field->var.idx = var_field->var.idx;
++ ref_field->var.hist_data = var_field->hist_data;
++ ref_field->size = var_field->size;
++ ref_field->is_signed = var_field->is_signed;
++ ref_field->flags |= var_field->flags &
++ (HIST_FIELD_FL_TIMESTAMP | HIST_FIELD_FL_TIMESTAMP_USECS);
+
+- if (WARN_ON(hist_data->n_vals > TRACING_MAP_VALS_MAX))
++ if (system) {
++ ref_field->system = kstrdup(system, GFP_KERNEL);
++ if (!ref_field->system)
++ return -ENOMEM;
++ }
+
-+static inline int ksoftirqd_softirq_pending(void)
-+{
-+ return current->softirqs_raised;
-+}
++ if (event_name) {
++ ref_field->event_name = kstrdup(event_name, GFP_KERNEL);
++ if (!ref_field->event_name) {
++ err = -ENOMEM;
++ goto free;
++ }
++ }
+
-+static inline void local_bh_disable_nort(void) { }
-+static inline void _local_bh_enable_nort(void) { }
++ if (var_field->var.name) {
++ ref_field->name = kstrdup(var_field->var.name, GFP_KERNEL);
++ if (!ref_field->name) {
++ err = -ENOMEM;
++ goto free;
++ }
++ } else if (var_field->name) {
++ ref_field->name = kstrdup(var_field->name, GFP_KERNEL);
++ if (!ref_field->name) {
++ err = -ENOMEM;
++ goto free;
++ }
++ }
+
-+static inline void ksoftirqd_set_sched_params(unsigned int cpu)
-+{
-+ /* Take over all but timer pending softirqs when starting */
-+ local_irq_disable();
-+ current->softirqs_raised = local_softirq_pending() & ~TIMER_SOFTIRQS;
-+ local_irq_enable();
++ ref_field->type = kstrdup(var_field->type, GFP_KERNEL);
++ if (!ref_field->type) {
++ err = -ENOMEM;
++ goto free;
++ }
++ out:
++ return err;
++ free:
++ kfree(ref_field->system);
++ kfree(ref_field->event_name);
++ kfree(ref_field->name);
++
++ goto out;
+}
+
-+static inline void ktimer_softirqd_set_sched_params(unsigned int cpu)
++static struct hist_field *create_var_ref(struct hist_field *var_field,
++ char *system, char *event_name)
+{
-+ struct sched_param param = { .sched_priority = 1 };
++ unsigned long flags = HIST_FIELD_FL_VAR_REF;
++ struct hist_field *ref_field;
+
-+ sched_setscheduler(current, SCHED_FIFO, &param);
++ ref_field = create_hist_field(var_field->hist_data, NULL, flags, NULL);
++ if (ref_field) {
++ if (init_var_ref(ref_field, var_field, system, event_name)) {
++ destroy_hist_field(ref_field, 0);
++ return NULL;
++ }
++ }
+
-+ /* Take over timer pending softirqs when starting */
-+ local_irq_disable();
-+ current->softirqs_raised = local_softirq_pending() & TIMER_SOFTIRQS;
-+ local_irq_enable();
++ return ref_field;
+}
+
-+static inline void ktimer_softirqd_clr_sched_params(unsigned int cpu,
-+ bool online)
++static bool is_var_ref(char *var_name)
+{
-+ struct sched_param param = { .sched_priority = 0 };
++ if (!var_name || strlen(var_name) < 2 || var_name[0] != '$')
++ return false;
+
-+ sched_setscheduler(current, SCHED_NORMAL, &param);
++ return true;
+}
+
-+static int ktimer_softirqd_should_run(unsigned int cpu)
++static char *field_name_from_var(struct hist_trigger_data *hist_data,
++ char *var_name)
+{
-+ return current->softirqs_raised;
-+}
++ char *name, *field;
++ unsigned int i;
+
-+#endif /* PREEMPT_RT_FULL */
-+/*
- * Enter an interrupt context.
- */
- void irq_enter(void)
-@@ -341,9 +784,9 @@ void irq_enter(void)
- * Prevent raise_softirq from needlessly waking up ksoftirqd
- * here, as softirq will be serviced on return from interrupt.
- */
-- local_bh_disable();
-+ local_bh_disable_nort();
- tick_irq_enter();
-- _local_bh_enable();
-+ _local_bh_enable_nort();
- }
-
- __irq_enter();
-@@ -351,6 +794,7 @@ void irq_enter(void)
-
- static inline void invoke_softirq(void)
- {
-+#ifndef CONFIG_PREEMPT_RT_FULL
- if (ksoftirqd_running())
- return;
-
-@@ -373,6 +817,18 @@ static inline void invoke_softirq(void)
- } else {
- wakeup_softirqd();
- }
-+#else /* PREEMPT_RT_FULL */
-+ unsigned long flags;
++ for (i = 0; i < hist_data->attrs->var_defs.n_vars; i++) {
++ name = hist_data->attrs->var_defs.name[i];
+
-+ local_irq_save(flags);
-+ if (__this_cpu_read(ksoftirqd) &&
-+ __this_cpu_read(ksoftirqd)->softirqs_raised)
-+ wakeup_softirqd();
-+ if (__this_cpu_read(ktimer_softirqd) &&
-+ __this_cpu_read(ktimer_softirqd)->softirqs_raised)
-+ wakeup_timer_softirqd();
-+ local_irq_restore(flags);
-+#endif
- }
-
- static inline void tick_irq_exit(void)
-@@ -409,26 +865,6 @@ void irq_exit(void)
- trace_hardirq_exit(); /* must be last! */
- }
-
--/*
-- * This function must run with irqs disabled!
-- */
--inline void raise_softirq_irqoff(unsigned int nr)
--{
-- __raise_softirq_irqoff(nr);
--
-- /*
-- * If we're in an interrupt or softirq, we're done
-- * (this also catches softirq-disabled code). We will
-- * actually run the softirq once we return from
-- * the irq or softirq.
-- *
-- * Otherwise we wake up ksoftirqd to make sure we
-- * schedule the softirq soon.
-- */
-- if (!in_interrupt())
-- wakeup_softirqd();
--}
--
- void raise_softirq(unsigned int nr)
- {
- unsigned long flags;
-@@ -438,12 +874,6 @@ void raise_softirq(unsigned int nr)
- local_irq_restore(flags);
- }
-
--void __raise_softirq_irqoff(unsigned int nr)
--{
-- trace_softirq_raise(nr);
-- or_softirq_pending(1UL << nr);
--}
--
- void open_softirq(int nr, void (*action)(struct softirq_action *))
- {
- softirq_vec[nr].action = action;
-@@ -460,15 +890,45 @@ struct tasklet_head {
- static DEFINE_PER_CPU(struct tasklet_head, tasklet_vec);
- static DEFINE_PER_CPU(struct tasklet_head, tasklet_hi_vec);
-
-+static void inline
-+__tasklet_common_schedule(struct tasklet_struct *t, struct tasklet_head *head, unsigned int nr)
-+{
-+ if (tasklet_trylock(t)) {
-+again:
-+ /* We may have been preempted before tasklet_trylock
-+ * and __tasklet_action may have already run.
-+ * So double check the sched bit while the tasklet
-+ * is locked before adding it to the list.
-+ */
-+ if (test_bit(TASKLET_STATE_SCHED, &t->state)) {
-+ t->next = NULL;
-+ *head->tail = t;
-+ head->tail = &(t->next);
-+ raise_softirq_irqoff(nr);
-+ tasklet_unlock(t);
-+ } else {
-+ /* This is subtle. If we hit the corner case above,
-+ * it is possible that we get preempted right here,
-+ * and another task has successfully called
-+ * tasklet_schedule(), then this function, and
-+ * failed on the trylock. Thus we must be sure
-+ * before releasing the tasklet lock, that the
-+ * SCHED_BIT is clear. Otherwise the tasklet
-+ * may get its SCHED_BIT set, but not added to the
-+ * list
-+ */
-+ if (!tasklet_tryunlock(t))
-+ goto again;
++ if (strcmp(var_name, name) == 0) {
++ field = hist_data->attrs->var_defs.expr[i];
++ if (contains_operator(field) || is_var_ref(field))
++ continue;
++ return field;
+ }
+ }
++
++ return NULL;
+}
+
- void __tasklet_schedule(struct tasklet_struct *t)
- {
- unsigned long flags;
-
- local_irq_save(flags);
-- t->next = NULL;
-- *__this_cpu_read(tasklet_vec.tail) = t;
-- __this_cpu_write(tasklet_vec.tail, &(t->next));
-- raise_softirq_irqoff(TASKLET_SOFTIRQ);
-+ __tasklet_common_schedule(t, this_cpu_ptr(&tasklet_vec), TASKLET_SOFTIRQ);
- local_irq_restore(flags);
- }
- EXPORT_SYMBOL(__tasklet_schedule);
-@@ -478,10 +938,7 @@ void __tasklet_hi_schedule(struct tasklet_struct *t)
- unsigned long flags;
-
- local_irq_save(flags);
-- t->next = NULL;
-- *__this_cpu_read(tasklet_hi_vec.tail) = t;
-- __this_cpu_write(tasklet_hi_vec.tail, &(t->next));
-- raise_softirq_irqoff(HI_SOFTIRQ);
-+ __tasklet_common_schedule(t, this_cpu_ptr(&tasklet_hi_vec), HI_SOFTIRQ);
- local_irq_restore(flags);
- }
- EXPORT_SYMBOL(__tasklet_hi_schedule);
-@@ -490,82 +947,122 @@ void __tasklet_hi_schedule_first(struct tasklet_struct *t)
- {
- BUG_ON(!irqs_disabled());
-
-- t->next = __this_cpu_read(tasklet_hi_vec.head);
-- __this_cpu_write(tasklet_hi_vec.head, t);
-- __raise_softirq_irqoff(HI_SOFTIRQ);
-+ __tasklet_hi_schedule(t);
- }
- EXPORT_SYMBOL(__tasklet_hi_schedule_first);
-
--static __latent_entropy void tasklet_action(struct softirq_action *a)
-+void tasklet_enable(struct tasklet_struct *t)
- {
-- struct tasklet_struct *list;
-+ if (!atomic_dec_and_test(&t->count))
-+ return;
-+ if (test_and_clear_bit(TASKLET_STATE_PENDING, &t->state))
-+ tasklet_schedule(t);
++static char *local_field_var_ref(struct hist_trigger_data *hist_data,
++ char *system, char *event_name,
++ char *var_name)
++{
++ struct trace_event_call *call;
++
++ if (system && event_name) {
++ call = hist_data->event_file->event_call;
++
++ if (strcmp(system, call->class->system) != 0)
++ return NULL;
++
++ if (strcmp(event_name, trace_event_name(call)) != 0)
++ return NULL;
++ }
++
++ if (!!system != !!event_name)
++ return NULL;
++
++ if (!is_var_ref(var_name))
++ return NULL;
++
++ var_name++;
++
++ return field_name_from_var(hist_data, var_name);
+}
-+EXPORT_SYMBOL(tasklet_enable);
-
-- local_irq_disable();
-- list = __this_cpu_read(tasklet_vec.head);
-- __this_cpu_write(tasklet_vec.head, NULL);
-- __this_cpu_write(tasklet_vec.tail, this_cpu_ptr(&tasklet_vec.head));
-- local_irq_enable();
-+static void __tasklet_action(struct softirq_action *a,
-+ struct tasklet_struct *list)
++
++static struct hist_field *parse_var_ref(struct hist_trigger_data *hist_data,
++ char *system, char *event_name,
++ char *var_name)
+{
-+ int loops = 1000000;
-
- while (list) {
- struct tasklet_struct *t = list;
-
- list = list->next;
-
-- if (tasklet_trylock(t)) {
-- if (!atomic_read(&t->count)) {
-- if (!test_and_clear_bit(TASKLET_STATE_SCHED,
-- &t->state))
-- BUG();
-- t->func(t->data);
-- tasklet_unlock(t);
-- continue;
-- }
-- tasklet_unlock(t);
-+ /*
-+ * Should always succeed - after a tasklist got on the
-+ * list (after getting the SCHED bit set from 0 to 1),
-+ * nothing but the tasklet softirq it got queued to can
-+ * lock it:
-+ */
-+ if (!tasklet_trylock(t)) {
-+ WARN_ON(1);
-+ continue;
- }
-
-- local_irq_disable();
- t->next = NULL;
-- *__this_cpu_read(tasklet_vec.tail) = t;
-- __this_cpu_write(tasklet_vec.tail, &(t->next));
-- __raise_softirq_irqoff(TASKLET_SOFTIRQ);
-- local_irq_enable();
++ struct hist_field *var_field = NULL, *ref_field = NULL;
++
++ if (!is_var_ref(var_name))
++ return NULL;
++
++ var_name++;
++
++ var_field = find_event_var(hist_data, system, event_name, var_name);
++ if (var_field)
++ ref_field = create_var_ref(var_field, system, event_name);
++
++ if (!ref_field)
++ hist_err_event("Couldn't find variable: $",
++ system, event_name, var_name);
++
++ return ref_field;
++}
++
++static struct ftrace_event_field *
++parse_field(struct hist_trigger_data *hist_data, struct trace_event_file *file,
++ char *field_str, unsigned long *flags)
++{
++ struct ftrace_event_field *field = NULL;
++ char *field_name, *modifier, *str;
++
++ modifier = str = kstrdup(field_str, GFP_KERNEL);
++ if (!modifier)
++ return ERR_PTR(-ENOMEM);
+
-+ /*
-+ * If we cannot handle the tasklet because it's disabled,
-+ * mark it as pending. tasklet_enable() will later
-+ * re-schedule the tasklet.
-+ */
-+ if (unlikely(atomic_read(&t->count))) {
-+out_disabled:
-+ /* implicit unlock: */
-+ wmb();
-+ t->state = TASKLET_STATEF_PENDING;
-+ continue;
++ field_name = strsep(&modifier, ".");
++ if (modifier) {
++ if (strcmp(modifier, "hex") == 0)
++ *flags |= HIST_FIELD_FL_HEX;
++ else if (strcmp(modifier, "sym") == 0)
++ *flags |= HIST_FIELD_FL_SYM;
++ else if (strcmp(modifier, "sym-offset") == 0)
++ *flags |= HIST_FIELD_FL_SYM_OFFSET;
++ else if ((strcmp(modifier, "execname") == 0) &&
++ (strcmp(field_name, "common_pid") == 0))
++ *flags |= HIST_FIELD_FL_EXECNAME;
++ else if (strcmp(modifier, "syscall") == 0)
++ *flags |= HIST_FIELD_FL_SYSCALL;
++ else if (strcmp(modifier, "log2") == 0)
++ *flags |= HIST_FIELD_FL_LOG2;
++ else if (strcmp(modifier, "usecs") == 0)
++ *flags |= HIST_FIELD_FL_TIMESTAMP_USECS;
++ else {
++ hist_err("Invalid field modifier: ", modifier);
++ field = ERR_PTR(-EINVAL);
++ goto out;
+ }
++ }
+
-+ /*
-+ * After this point on the tasklet might be rescheduled
-+ * on another CPU, but it can only be added to another
-+ * CPU's tasklet list if we unlock the tasklet (which we
-+ * dont do yet).
-+ */
-+ if (!test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
-+ WARN_ON(1);
++ if (strcmp(field_name, "common_timestamp") == 0) {
++ *flags |= HIST_FIELD_FL_TIMESTAMP;
++ hist_data->enable_timestamps = true;
++ if (*flags & HIST_FIELD_FL_TIMESTAMP_USECS)
++ hist_data->attrs->ts_in_usecs = true;
++ } else if (strcmp(field_name, "cpu") == 0)
++ *flags |= HIST_FIELD_FL_CPU;
++ else {
++ field = trace_find_event_field(file->event_call, field_name);
++ if (!field || !field->size) {
++ hist_err("Couldn't find field: ", field_name);
++ field = ERR_PTR(-EINVAL);
++ goto out;
++ }
++ }
++ out:
++ kfree(str);
+
-+again:
-+ t->func(t->data);
++ return field;
++}
+
-+ /*
-+ * Try to unlock the tasklet. We must use cmpxchg, because
-+ * another CPU might have scheduled or disabled the tasklet.
-+ * We only allow the STATE_RUN -> 0 transition here.
-+ */
-+ while (!tasklet_tryunlock(t)) {
-+ /*
-+ * If it got disabled meanwhile, bail out:
-+ */
-+ if (atomic_read(&t->count))
-+ goto out_disabled;
-+ /*
-+ * If it got scheduled meanwhile, re-execute
-+ * the tasklet function:
-+ */
-+ if (test_and_clear_bit(TASKLET_STATE_SCHED, &t->state))
-+ goto again;
-+ if (!--loops) {
-+ printk("hm, tasklet state: %08lx\n", t->state);
-+ WARN_ON(1);
-+ tasklet_unlock(t);
-+ break;
-+ }
-+ }
- }
- }
-
-+static void tasklet_action(struct softirq_action *a)
++static struct hist_field *create_alias(struct hist_trigger_data *hist_data,
++ struct hist_field *var_ref,
++ char *var_name)
+{
-+ struct tasklet_struct *list;
++ struct hist_field *alias = NULL;
++ unsigned long flags = HIST_FIELD_FL_ALIAS | HIST_FIELD_FL_VAR;
+
-+ local_irq_disable();
++ alias = create_hist_field(hist_data, NULL, flags, var_name);
++ if (!alias)
++ return NULL;
+
-+ list = __this_cpu_read(tasklet_vec.head);
-+ __this_cpu_write(tasklet_vec.head, NULL);
-+ __this_cpu_write(tasklet_vec.tail, this_cpu_ptr(&tasklet_vec.head));
++ alias->fn = var_ref->fn;
++ alias->operands[0] = var_ref;
+
-+ local_irq_enable();
++ if (init_var_ref(alias, var_ref, var_ref->system, var_ref->event_name)) {
++ destroy_hist_field(alias, 0);
++ return NULL;
++ }
+
-+ __tasklet_action(a, list);
++ return alias;
+}
+
- static __latent_entropy void tasklet_hi_action(struct softirq_action *a)
- {
- struct tasklet_struct *list;
-
- local_irq_disable();
++static struct hist_field *parse_atom(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file, char *str,
++ unsigned long *flags, char *var_name)
++{
++ char *s, *ref_system = NULL, *ref_event = NULL, *ref_var = str;
++ struct ftrace_event_field *field = NULL;
++ struct hist_field *hist_field = NULL;
++ int ret = 0;
+
- list = __this_cpu_read(tasklet_hi_vec.head);
- __this_cpu_write(tasklet_hi_vec.head, NULL);
- __this_cpu_write(tasklet_hi_vec.tail, this_cpu_ptr(&tasklet_hi_vec.head));
++ s = strchr(str, '.');
++ if (s) {
++ s = strchr(++s, '.');
++ if (s) {
++ ref_system = strsep(&str, ".");
++ if (!str) {
++ ret = -EINVAL;
++ goto out;
++ }
++ ref_event = strsep(&str, ".");
++ if (!str) {
++ ret = -EINVAL;
++ goto out;
++ }
++ ref_var = str;
++ }
++ }
+
- local_irq_enable();
-
-- while (list) {
-- struct tasklet_struct *t = list;
--
-- list = list->next;
--
-- if (tasklet_trylock(t)) {
-- if (!atomic_read(&t->count)) {
-- if (!test_and_clear_bit(TASKLET_STATE_SCHED,
-- &t->state))
-- BUG();
-- t->func(t->data);
-- tasklet_unlock(t);
-- continue;
-- }
-- tasklet_unlock(t);
-- }
--
-- local_irq_disable();
-- t->next = NULL;
-- *__this_cpu_read(tasklet_hi_vec.tail) = t;
-- __this_cpu_write(tasklet_hi_vec.tail, &(t->next));
-- __raise_softirq_irqoff(HI_SOFTIRQ);
-- local_irq_enable();
-- }
-+ __tasklet_action(a, list);
- }
-
- void tasklet_init(struct tasklet_struct *t,
-@@ -586,7 +1083,7 @@ void tasklet_kill(struct tasklet_struct *t)
-
- while (test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) {
- do {
-- yield();
-+ msleep(1);
- } while (test_bit(TASKLET_STATE_SCHED, &t->state));
- }
- tasklet_unlock_wait(t);
-@@ -660,25 +1157,26 @@ void __init softirq_init(void)
- open_softirq(HI_SOFTIRQ, tasklet_hi_action);
- }
-
-+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
-+void tasklet_unlock_wait(struct tasklet_struct *t)
++ s = local_field_var_ref(hist_data, ref_system, ref_event, ref_var);
++ if (!s) {
++ hist_field = parse_var_ref(hist_data, ref_system, ref_event, ref_var);
++ if (hist_field) {
++ hist_data->var_refs[hist_data->n_var_refs] = hist_field;
++ hist_field->var_ref_idx = hist_data->n_var_refs++;
++ if (var_name) {
++ hist_field = create_alias(hist_data, hist_field, var_name);
++ if (!hist_field) {
++ ret = -ENOMEM;
++ goto out;
++ }
++ }
++ return hist_field;
++ }
++ } else
++ str = s;
++
++ field = parse_field(hist_data, file, str, flags);
++ if (IS_ERR(field)) {
++ ret = PTR_ERR(field);
++ goto out;
++ }
++
++ hist_field = create_hist_field(hist_data, field, *flags, var_name);
++ if (!hist_field) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ return hist_field;
++ out:
++ return ERR_PTR(ret);
++}
++
++static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file,
++ char *str, unsigned long flags,
++ char *var_name, unsigned int level);
++
++static struct hist_field *parse_unary(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file,
++ char *str, unsigned long flags,
++ char *var_name, unsigned int level)
+{
-+ while (test_bit(TASKLET_STATE_RUN, &(t)->state)) {
-+ /*
-+ * Hack for now to avoid this busy-loop:
-+ */
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ msleep(1);
-+#else
-+ barrier();
-+#endif
++ struct hist_field *operand1, *expr = NULL;
++ unsigned long operand_flags;
++ int ret = 0;
++ char *s;
++
++ // we support only -(xxx) i.e. explicit parens required
++
++ if (level > 3) {
++ hist_err("Too many subexpressions (3 max): ", str);
++ ret = -EINVAL;
++ goto free;
+ }
++
++ str++; // skip leading '-'
++
++ s = strchr(str, '(');
++ if (s)
++ str++;
++ else {
++ ret = -EINVAL;
++ goto free;
++ }
++
++ s = strrchr(str, ')');
++ if (s)
++ *s = '\0';
++ else {
++ ret = -EINVAL; // no closing ')'
++ goto free;
++ }
++
++ flags |= HIST_FIELD_FL_EXPR;
++ expr = create_hist_field(hist_data, NULL, flags, var_name);
++ if (!expr) {
++ ret = -ENOMEM;
++ goto free;
++ }
++
++ operand_flags = 0;
++ operand1 = parse_expr(hist_data, file, str, operand_flags, NULL, ++level);
++ if (IS_ERR(operand1)) {
++ ret = PTR_ERR(operand1);
++ goto free;
++ }
++
++ expr->flags |= operand1->flags &
++ (HIST_FIELD_FL_TIMESTAMP | HIST_FIELD_FL_TIMESTAMP_USECS);
++ expr->fn = hist_field_unary_minus;
++ expr->operands[0] = operand1;
++ expr->operator = FIELD_OP_UNARY_MINUS;
++ expr->name = expr_str(expr, 0);
++ expr->type = kstrdup(operand1->type, GFP_KERNEL);
++ if (!expr->type) {
++ ret = -ENOMEM;
++ goto free;
++ }
++
++ return expr;
++ free:
++ destroy_hist_field(expr, 0);
++ return ERR_PTR(ret);
+}
-+EXPORT_SYMBOL(tasklet_unlock_wait);
-+#endif
+
- static int ksoftirqd_should_run(unsigned int cpu)
- {
-- return local_softirq_pending();
--}
--
--static void run_ksoftirqd(unsigned int cpu)
--{
-- local_irq_disable();
-- if (local_softirq_pending()) {
-- /*
-- * We can safely run softirq on inline stack, as we are not deep
-- * in the task stack here.
-- */
-- __do_softirq();
-- local_irq_enable();
-- cond_resched_rcu_qs();
-- return;
-- }
-- local_irq_enable();
-+ return ksoftirqd_softirq_pending();
- }
-
- #ifdef CONFIG_HOTPLUG_CPU
-@@ -745,17 +1243,31 @@ static int takeover_tasklets(unsigned int cpu)
-
- static struct smp_hotplug_thread softirq_threads = {
- .store = &ksoftirqd,
-+ .setup = ksoftirqd_set_sched_params,
- .thread_should_run = ksoftirqd_should_run,
- .thread_fn = run_ksoftirqd,
- .thread_comm = "ksoftirqd/%u",
- };
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+static struct smp_hotplug_thread softirq_timer_threads = {
-+ .store = &ktimer_softirqd,
-+ .setup = ktimer_softirqd_set_sched_params,
-+ .cleanup = ktimer_softirqd_clr_sched_params,
-+ .thread_should_run = ktimer_softirqd_should_run,
-+ .thread_fn = run_ksoftirqd,
-+ .thread_comm = "ktimersoftd/%u",
-+};
-+#endif
++static int check_expr_operands(struct hist_field *operand1,
++ struct hist_field *operand2)
++{
++ unsigned long operand1_flags = operand1->flags;
++ unsigned long operand2_flags = operand2->flags;
+
- static __init int spawn_ksoftirqd(void)
- {
- cpuhp_setup_state_nocalls(CPUHP_SOFTIRQ_DEAD, "softirq:dead", NULL,
- takeover_tasklets);
- BUG_ON(smpboot_register_percpu_thread(&softirq_threads));
--
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ BUG_ON(smpboot_register_percpu_thread(&softirq_timer_threads));
-+#endif
- return 0;
- }
- early_initcall(spawn_ksoftirqd);
-diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
-index ec9ab2f01489..8b89dbedeaff 100644
---- a/kernel/stop_machine.c
-+++ b/kernel/stop_machine.c
-@@ -36,7 +36,7 @@ struct cpu_stop_done {
- struct cpu_stopper {
- struct task_struct *thread;
-
-- spinlock_t lock;
-+ raw_spinlock_t lock;
- bool enabled; /* is this stopper enabled? */
- struct list_head works; /* list of pending works */
-
-@@ -78,14 +78,14 @@ static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work)
- unsigned long flags;
- bool enabled;
-
-- spin_lock_irqsave(&stopper->lock, flags);
-+ raw_spin_lock_irqsave(&stopper->lock, flags);
- enabled = stopper->enabled;
- if (enabled)
- __cpu_stop_queue_work(stopper, work);
- else if (work->done)
- cpu_stop_signal_done(work->done);
-- spin_unlock_irqrestore(&stopper->lock, flags);
-
-+ raw_spin_unlock_irqrestore(&stopper->lock, flags);
- return enabled;
- }
-
-@@ -231,8 +231,8 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
- struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2);
- int err;
- retry:
-- spin_lock_irq(&stopper1->lock);
-- spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
-+ raw_spin_lock_irq(&stopper1->lock);
-+ raw_spin_lock_nested(&stopper2->lock, SINGLE_DEPTH_NESTING);
-
- err = -ENOENT;
- if (!stopper1->enabled || !stopper2->enabled)
-@@ -255,8 +255,8 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
- __cpu_stop_queue_work(stopper1, work1);
- __cpu_stop_queue_work(stopper2, work2);
- unlock:
-- spin_unlock(&stopper2->lock);
-- spin_unlock_irq(&stopper1->lock);
-+ raw_spin_unlock(&stopper2->lock);
-+ raw_spin_unlock_irq(&stopper1->lock);
-
- if (unlikely(err == -EDEADLK)) {
- while (stop_cpus_in_progress)
-@@ -448,9 +448,9 @@ static int cpu_stop_should_run(unsigned int cpu)
- unsigned long flags;
- int run;
++ if ((operand1_flags & HIST_FIELD_FL_VAR_REF) ||
++ (operand1_flags & HIST_FIELD_FL_ALIAS)) {
++ struct hist_field *var;
++
++ var = find_var_field(operand1->var.hist_data, operand1->name);
++ if (!var)
++ return -EINVAL;
++ operand1_flags = var->flags;
++ }
++
++ if ((operand2_flags & HIST_FIELD_FL_VAR_REF) ||
++ (operand2_flags & HIST_FIELD_FL_ALIAS)) {
++ struct hist_field *var;
++
++ var = find_var_field(operand2->var.hist_data, operand2->name);
++ if (!var)
++ return -EINVAL;
++ operand2_flags = var->flags;
++ }
++
++ if ((operand1_flags & HIST_FIELD_FL_TIMESTAMP_USECS) !=
++ (operand2_flags & HIST_FIELD_FL_TIMESTAMP_USECS)) {
++ hist_err("Timestamp units in expression don't match", NULL);
+ return -EINVAL;
++ }
-- spin_lock_irqsave(&stopper->lock, flags);
-+ raw_spin_lock_irqsave(&stopper->lock, flags);
- run = !list_empty(&stopper->works);
-- spin_unlock_irqrestore(&stopper->lock, flags);
-+ raw_spin_unlock_irqrestore(&stopper->lock, flags);
- return run;
+ return 0;
}
-@@ -461,13 +461,13 @@ static void cpu_stopper_thread(unsigned int cpu)
-
- repeat:
- work = NULL;
-- spin_lock_irq(&stopper->lock);
-+ raw_spin_lock_irq(&stopper->lock);
- if (!list_empty(&stopper->works)) {
- work = list_first_entry(&stopper->works,
- struct cpu_stop_work, list);
- list_del_init(&work->list);
- }
-- spin_unlock_irq(&stopper->lock);
-+ raw_spin_unlock_irq(&stopper->lock);
-
- if (work) {
- cpu_stop_fn_t fn = work->fn;
-@@ -475,6 +475,8 @@ static void cpu_stopper_thread(unsigned int cpu)
- struct cpu_stop_done *done = work->done;
- int ret;
-
-+ /* XXX */
+-static int create_val_field(struct hist_trigger_data *hist_data,
+- unsigned int val_idx,
+- struct trace_event_file *file,
+- char *field_str)
++static struct hist_field *parse_expr(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file,
++ char *str, unsigned long flags,
++ char *var_name, unsigned int level)
+ {
+- struct ftrace_event_field *field = NULL;
+- unsigned long flags = 0;
+- char *field_name;
++ struct hist_field *operand1 = NULL, *operand2 = NULL, *expr = NULL;
++ unsigned long operand_flags;
++ int field_op, ret = -EINVAL;
++ char *sep, *operand1_str;
++
++ if (level > 3) {
++ hist_err("Too many subexpressions (3 max): ", str);
++ return ERR_PTR(-EINVAL);
++ }
++
++ field_op = contains_operator(str);
++
++ if (field_op == FIELD_OP_NONE)
++ return parse_atom(hist_data, file, str, &flags, var_name);
++
++ if (field_op == FIELD_OP_UNARY_MINUS)
++ return parse_unary(hist_data, file, str, flags, var_name, ++level);
++
++ switch (field_op) {
++ case FIELD_OP_MINUS:
++ sep = "-";
++ break;
++ case FIELD_OP_PLUS:
++ sep = "+";
++ break;
++ default:
++ goto free;
++ }
++
++ operand1_str = strsep(&str, sep);
++ if (!operand1_str || !str)
++ goto free;
++
++ operand_flags = 0;
++ operand1 = parse_atom(hist_data, file, operand1_str,
++ &operand_flags, NULL);
++ if (IS_ERR(operand1)) {
++ ret = PTR_ERR(operand1);
++ operand1 = NULL;
++ goto free;
++ }
++
++	/* rest of string could be another expression e.g. b+c in a+b+c */
++ operand_flags = 0;
++ operand2 = parse_expr(hist_data, file, str, operand_flags, NULL, ++level);
++ if (IS_ERR(operand2)) {
++ ret = PTR_ERR(operand2);
++ operand2 = NULL;
++ goto free;
++ }
++
++ ret = check_expr_operands(operand1, operand2);
++ if (ret)
++ goto free;
++
++ flags |= HIST_FIELD_FL_EXPR;
++
++ flags |= operand1->flags &
++ (HIST_FIELD_FL_TIMESTAMP | HIST_FIELD_FL_TIMESTAMP_USECS);
+
- /* cpu stop callbacks must not sleep, make in_atomic() == T */
- preempt_count_inc();
- ret = fn(arg);
-@@ -541,7 +543,7 @@ static int __init cpu_stop_init(void)
- for_each_possible_cpu(cpu) {
- struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
-
-- spin_lock_init(&stopper->lock);
-+ raw_spin_lock_init(&stopper->lock);
- INIT_LIST_HEAD(&stopper->works);
- }
-
-diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
-index bb5ec425dfe0..8338b14ed3a3 100644
---- a/kernel/time/hrtimer.c
-+++ b/kernel/time/hrtimer.c
-@@ -53,6 +53,7 @@
- #include <asm/uaccess.h>
-
- #include <trace/events/timer.h>
-+#include <trace/events/hist.h>
-
- #include "tick-internal.h"
-
-@@ -695,6 +696,29 @@ static void hrtimer_switch_to_hres(void)
- retrigger_next_event(NULL);
- }
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
++ expr = create_hist_field(hist_data, NULL, flags, var_name);
++ if (!expr) {
++ ret = -ENOMEM;
++ goto free;
++ }
+
-+static struct swork_event clock_set_delay_work;
++ operand1->read_once = true;
++ operand2->read_once = true;
++
++ expr->operands[0] = operand1;
++ expr->operands[1] = operand2;
++ expr->operator = field_op;
++ expr->name = expr_str(expr, 0);
++ expr->type = kstrdup(operand1->type, GFP_KERNEL);
++ if (!expr->type) {
++ ret = -ENOMEM;
++ goto free;
++ }
+
-+static void run_clock_set_delay(struct swork_event *event)
++ switch (field_op) {
++ case FIELD_OP_MINUS:
++ expr->fn = hist_field_minus;
++ break;
++ case FIELD_OP_PLUS:
++ expr->fn = hist_field_plus;
++ break;
++ default:
++ ret = -EINVAL;
++ goto free;
++ }
++
++ return expr;
++ free:
++ destroy_hist_field(operand1, 0);
++ destroy_hist_field(operand2, 0);
++ destroy_hist_field(expr, 0);
++
++ return ERR_PTR(ret);
++}
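For illustration of the expression syntax handled by parse_expr() above (operands joined
by '+' or '-', unary minus only in the -(...) form, and at most three nested
subexpressions), a typical use is subtracting a saved timestamp variable from
common_timestamp; the event and field names below are only examples:

  # echo 'hist:keys=pid:ts0=common_timestamp.usecs' >> /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
  # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0' >> /sys/kernel/debug/tracing/events/sched/sched_switch/trigger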
++
++static char *find_trigger_filter(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file)
+{
-+ clock_was_set();
++ struct event_trigger_data *test;
++
++ list_for_each_entry_rcu(test, &file->triggers, list) {
++ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
++ if (test->private_data == hist_data)
++ return test->filter_str;
++ }
++ }
++
++ return NULL;
+}
+
-+void clock_was_set_delayed(void)
++static struct event_command trigger_hist_cmd;
++static int event_hist_trigger_func(struct event_command *cmd_ops,
++ struct trace_event_file *file,
++ char *glob, char *cmd, char *param);
++
++static bool compatible_keys(struct hist_trigger_data *target_hist_data,
++ struct hist_trigger_data *hist_data,
++ unsigned int n_keys)
+{
-+ swork_queue(&clock_set_delay_work);
++ struct hist_field *target_hist_field, *hist_field;
++ unsigned int n, i, j;
++
++ if (hist_data->n_fields - hist_data->n_vals != n_keys)
++ return false;
++
++ i = hist_data->n_vals;
++ j = target_hist_data->n_vals;
++
++ for (n = 0; n < n_keys; n++) {
++ hist_field = hist_data->fields[i + n];
++ target_hist_field = target_hist_data->fields[j + n];
++
++ if (strcmp(hist_field->type, target_hist_field->type) != 0)
++ return false;
++ if (hist_field->size != target_hist_field->size)
++ return false;
++ if (hist_field->is_signed != target_hist_field->is_signed)
++ return false;
++ }
++
++ return true;
+}
+
-+static __init int create_clock_set_delay_thread(void)
++static struct hist_trigger_data *
++find_compatible_hist(struct hist_trigger_data *target_hist_data,
++ struct trace_event_file *file)
+{
-+ WARN_ON(swork_get());
-+ INIT_SWORK(&clock_set_delay_work, run_clock_set_delay);
-+ return 0;
++ struct hist_trigger_data *hist_data;
++ struct event_trigger_data *test;
++ unsigned int n_keys;
++
++ n_keys = target_hist_data->n_fields - target_hist_data->n_vals;
++
++ list_for_each_entry_rcu(test, &file->triggers, list) {
++ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
++ hist_data = test->private_data;
++
++ if (compatible_keys(target_hist_data, hist_data, n_keys))
++ return hist_data;
++ }
++ }
++
++ return NULL;
+}
-+early_initcall(create_clock_set_delay_thread);
-+#else /* PREEMPT_RT_FULL */
+
- static void clock_was_set_work(struct work_struct *work)
- {
- clock_was_set();
-@@ -710,6 +734,7 @@ void clock_was_set_delayed(void)
- {
- schedule_work(&hrtimer_work);
- }
-+#endif
-
- #else
-
-@@ -719,11 +744,8 @@ static inline int hrtimer_is_hres_enabled(void) { return 0; }
- static inline void hrtimer_switch_to_hres(void) { }
- static inline void
- hrtimer_force_reprogram(struct hrtimer_cpu_base *base, int skip_equal) { }
--static inline int hrtimer_reprogram(struct hrtimer *timer,
-- struct hrtimer_clock_base *base)
--{
-- return 0;
--}
-+static inline void hrtimer_reprogram(struct hrtimer *timer,
-+ struct hrtimer_clock_base *base) { }
- static inline void hrtimer_init_hres(struct hrtimer_cpu_base *base) { }
- static inline void retrigger_next_event(void *arg) { }
-
-@@ -855,6 +877,32 @@ u64 hrtimer_forward(struct hrtimer *timer, ktime_t now, ktime_t interval)
- }
- EXPORT_SYMBOL_GPL(hrtimer_forward);
-
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+# define wake_up_timer_waiters(b) wake_up(&(b)->wait)
++static struct trace_event_file *event_file(struct trace_array *tr,
++ char *system, char *event_name)
++{
++ struct trace_event_file *file;
++
++ file = find_event_file(tr, system, event_name);
++ if (!file)
++ return ERR_PTR(-EINVAL);
++
++ return file;
++}
++
++static struct hist_field *
++find_synthetic_field_var(struct hist_trigger_data *target_hist_data,
++ char *system, char *event_name, char *field_name)
++{
++ struct hist_field *event_var;
++ char *synthetic_name;
++
++ synthetic_name = kzalloc(MAX_FILTER_STR_VAL, GFP_KERNEL);
++ if (!synthetic_name)
++ return ERR_PTR(-ENOMEM);
++
++ strcpy(synthetic_name, "synthetic_");
++ strcat(synthetic_name, field_name);
++
++ event_var = find_event_var(target_hist_data, system, event_name, synthetic_name);
++
++ kfree(synthetic_name);
++
++ return event_var;
++}
+
+/**
-+ * hrtimer_wait_for_timer - Wait for a running timer
++ * create_field_var_hist - Automatically create a histogram and var for a field
++ * @target_hist_data: The target hist trigger
++ * @subsys_name: Optional subsystem name
++ * @event_name: Optional event name
++ * @field_name: The name of the field (and the resulting variable)
+ *
-+ * @timer: timer to wait for
++ * Hist trigger actions fetch data from variables, not directly from
++ * events. However, for convenience, users are allowed to directly
++ * specify an event field in an action, which will be automatically
++ * converted into a variable on their behalf.
++ *
++ * If a user specifies a field on an event other than the one the
++ * histogram is currently being defined on (the target event histogram), the
++ * only way that can be accomplished is if a new hist trigger is
++ * created and the field variable defined on that.
+ *
-+ * The function waits in case the timers callback function is
-+ * currently executed on the waitqueue of the timer base. The
-+ * waitqueue is woken up after the timer callback function has
-+ * finished execution.
++ * This function creates a new histogram compatible with the target
++ * event (meaning a histogram with the same keys as the target
++ * histogram), and creates a variable for the specified field, but
++ * with 'synthetic_' prepended to the variable name in order to avoid
++ * collision with normal field variables.
++ *
++ * Return: The variable created for the field.
+ */
-+void hrtimer_wait_for_timer(const struct hrtimer *timer)
-+{
-+ struct hrtimer_clock_base *base = timer->base;
++static struct hist_field *
++create_field_var_hist(struct hist_trigger_data *target_hist_data,
++ char *subsys_name, char *event_name, char *field_name)
++{
++ struct trace_array *tr = target_hist_data->event_file->tr;
++ struct hist_field *event_var = ERR_PTR(-EINVAL);
++ struct hist_trigger_data *hist_data;
++ unsigned int i, n, first = true;
++ struct field_var_hist *var_hist;
++ struct trace_event_file *file;
++ struct hist_field *key_field;
++ char *saved_filter;
++ char *cmd;
++ int ret;
+
-+ if (base && base->cpu_base && !timer->irqsafe)
-+ wait_event(base->cpu_base->wait,
-+ !(hrtimer_callback_running(timer)));
-+}
++ if (target_hist_data->n_field_var_hists >= SYNTH_FIELDS_MAX) {
++ hist_err_event("onmatch: Too many field variables defined: ",
++ subsys_name, event_name, field_name);
++ return ERR_PTR(-EINVAL);
++ }
+
-+#else
-+# define wake_up_timer_waiters(b) do { } while (0)
-+#endif
++ file = event_file(tr, subsys_name, event_name);
+
- /*
- * enqueue_hrtimer - internal function to (re)start a timer
- *
-@@ -896,6 +944,11 @@ static void __remove_hrtimer(struct hrtimer *timer,
- if (!(state & HRTIMER_STATE_ENQUEUED))
- return;
-
-+ if (unlikely(!list_empty(&timer->cb_entry))) {
-+ list_del_init(&timer->cb_entry);
-+ return;
++ if (IS_ERR(file)) {
++ hist_err_event("onmatch: Event file not found: ",
++ subsys_name, event_name, field_name);
++ ret = PTR_ERR(file);
++ return ERR_PTR(ret);
+ }
+
- if (!timerqueue_del(&base->active, &timer->node))
- cpu_base->active_bases &= ~(1 << base->index);
-
-@@ -991,7 +1044,16 @@ void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
- new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
-
- timer_stats_hrtimer_set_start_info(timer);
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ {
-+ ktime_t now = new_base->get_time();
-
-+ if (ktime_to_ns(tim) < ktime_to_ns(now))
-+ timer->praecox = now;
-+ else
-+ timer->praecox = ktime_set(0, 0);
++ /*
++ * Look for a histogram compatible with target. We'll use the
++ * found histogram specification to create a new matching
++ * histogram with our variable on it. target_hist_data is not
++ * yet a registered histogram so we can't use that.
++ */
++ hist_data = find_compatible_hist(target_hist_data, file);
++ if (!hist_data) {
++ hist_err_event("onmatch: Matching event histogram not found: ",
++ subsys_name, event_name, field_name);
++ return ERR_PTR(-EINVAL);
+ }
-+#endif
- leftmost = enqueue_hrtimer(timer, new_base);
- if (!leftmost)
- goto unlock;
-@@ -1063,7 +1125,7 @@ int hrtimer_cancel(struct hrtimer *timer)
-
- if (ret >= 0)
- return ret;
-- cpu_relax();
-+ hrtimer_wait_for_timer(timer);
- }
- }
- EXPORT_SYMBOL_GPL(hrtimer_cancel);
-@@ -1127,6 +1189,7 @@ static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
-
- base = hrtimer_clockid_to_base(clock_id);
- timer->base = &cpu_base->clock_base[base];
-+ INIT_LIST_HEAD(&timer->cb_entry);
- timerqueue_init(&timer->node);
-
- #ifdef CONFIG_TIMER_STATS
-@@ -1167,6 +1230,7 @@ bool hrtimer_active(const struct hrtimer *timer)
- seq = raw_read_seqcount_begin(&cpu_base->seq);
-
- if (timer->state != HRTIMER_STATE_INACTIVE ||
-+ cpu_base->running_soft == timer ||
- cpu_base->running == timer)
- return true;
-
-@@ -1265,10 +1329,112 @@ static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base,
- cpu_base->running = NULL;
- }
-
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+static void hrtimer_rt_reprogram(int restart, struct hrtimer *timer,
-+ struct hrtimer_clock_base *base)
-+{
-+ int leftmost;
+
-+ if (restart != HRTIMER_NORESTART &&
-+ !(timer->state & HRTIMER_STATE_ENQUEUED)) {
++ /* See if a synthetic field variable has already been created */
++ event_var = find_synthetic_field_var(target_hist_data, subsys_name,
++ event_name, field_name);
++ if (!IS_ERR_OR_NULL(event_var))
++ return event_var;
+
-+ leftmost = enqueue_hrtimer(timer, base);
-+ if (!leftmost)
-+ return;
-+#ifdef CONFIG_HIGH_RES_TIMERS
-+ if (!hrtimer_is_hres_active(timer)) {
-+ /*
-+ * Kick to reschedule the next tick to handle the new timer
-+ * on dynticks target.
-+ */
-+ if (base->cpu_base->nohz_active)
-+ wake_up_nohz_cpu(base->cpu_base->cpu);
-+ } else {
++ var_hist = kzalloc(sizeof(*var_hist), GFP_KERNEL);
++ if (!var_hist)
++ return ERR_PTR(-ENOMEM);
+
-+ hrtimer_reprogram(timer, base);
-+ }
-+#endif
++ cmd = kzalloc(MAX_FILTER_STR_VAL, GFP_KERNEL);
++ if (!cmd) {
++ kfree(var_hist);
++ return ERR_PTR(-ENOMEM);
++ }
++
++ /* Use the same keys as the compatible histogram */
++ strcat(cmd, "keys=");
++
++ for_each_hist_key_field(i, hist_data) {
++ key_field = hist_data->fields[i];
++ if (!first)
++ strcat(cmd, ",");
++ strcat(cmd, key_field->field->name);
++ first = false;
++ }
++
++ /* Create the synthetic field variable specification */
++ strcat(cmd, ":synthetic_");
++ strcat(cmd, field_name);
++ strcat(cmd, "=");
++ strcat(cmd, field_name);
++
++ /* Use the same filter as the compatible histogram */
++ saved_filter = find_trigger_filter(hist_data, file);
++ if (saved_filter) {
++ strcat(cmd, " if ");
++ strcat(cmd, saved_filter);
++ }
++
++ var_hist->cmd = kstrdup(cmd, GFP_KERNEL);
++ if (!var_hist->cmd) {
++ kfree(cmd);
++ kfree(var_hist);
++ return ERR_PTR(-ENOMEM);
++ }
++
++ /* Save the compatible histogram information */
++ var_hist->hist_data = hist_data;
++
++ /* Create the new histogram with our variable */
++ ret = event_hist_trigger_func(&trigger_hist_cmd, file,
++ "", "hist", cmd);
++ if (ret) {
++ kfree(cmd);
++ kfree(var_hist->cmd);
++ kfree(var_hist);
++ hist_err_event("onmatch: Couldn't create histogram for field: ",
++ subsys_name, event_name, field_name);
++ return ERR_PTR(ret);
++ }
++
++ kfree(cmd);
++
++ /* If we can't find the variable, something went wrong */
++ event_var = find_synthetic_field_var(target_hist_data, subsys_name,
++ event_name, field_name);
++ if (IS_ERR_OR_NULL(event_var)) {
++ kfree(var_hist->cmd);
++ kfree(var_hist);
++ hist_err_event("onmatch: Couldn't find synthetic variable: ",
++ subsys_name, event_name, field_name);
++ return ERR_PTR(-EINVAL);
+ }
++
++ n = target_hist_data->n_field_var_hists;
++ target_hist_data->field_var_hists[n] = var_hist;
++ target_hist_data->n_field_var_hists++;
++
++ return event_var;
+}
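As a rough sketch of what create_field_var_hist() assembles (the event, key and field
names here are hypothetical): if an action refers to field 'prio' of sched.sched_waking
and the compatible sched_waking histogram was created with 'keys=pid if prio > 0', the
command string handed to event_hist_trigger_func() would look like:

  keys=pid:synthetic_prio=prio if prio > 0

i.e. the same keys and filter as the compatible histogram, plus a 'synthetic_'-prefixed
variable for the requested field.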
+
-+/*
-+ * The changes in mainline which removed the callback modes from
-+ * hrtimer are not yet working with -rt. The non wakeup_process()
-+ * based callbacks which involve sleeping locks need to be treated
-+ * seperately.
-+ */
-+static void hrtimer_rt_run_pending(void)
++static struct hist_field *
++find_target_event_var(struct hist_trigger_data *hist_data,
++ char *subsys_name, char *event_name, char *var_name)
+{
-+ enum hrtimer_restart (*fn)(struct hrtimer *);
-+ struct hrtimer_cpu_base *cpu_base;
-+ struct hrtimer_clock_base *base;
-+ struct hrtimer *timer;
-+ int index, restart;
++ struct trace_event_file *file = hist_data->event_file;
++ struct hist_field *hist_field = NULL;
+
-+ local_irq_disable();
-+ cpu_base = &per_cpu(hrtimer_bases, smp_processor_id());
++ if (subsys_name) {
++ struct trace_event_call *call;
+
-+ raw_spin_lock(&cpu_base->lock);
++ if (!event_name)
++ return NULL;
+
-+ for (index = 0; index < HRTIMER_MAX_CLOCK_BASES; index++) {
-+ base = &cpu_base->clock_base[index];
++ call = file->event_call;
+
-+ while (!list_empty(&base->expired)) {
-+ timer = list_first_entry(&base->expired,
-+ struct hrtimer, cb_entry);
++ if (strcmp(subsys_name, call->class->system) != 0)
++ return NULL;
+
-+ /*
-+ * Same as the above __run_hrtimer function
-+ * just we run with interrupts enabled.
-+ */
-+ debug_deactivate(timer);
-+ cpu_base->running_soft = timer;
-+ raw_write_seqcount_barrier(&cpu_base->seq);
++ if (strcmp(event_name, trace_event_name(call)) != 0)
++ return NULL;
++ }
++
++ hist_field = find_var_field(hist_data, var_name);
++
++ return hist_field;
++}
++
++static inline void __update_field_vars(struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *rec,
++ struct field_var **field_vars,
++ unsigned int n_field_vars,
++ unsigned int field_var_str_start)
++{
++ struct hist_elt_data *elt_data = elt->private_data;
++ unsigned int i, j, var_idx;
++ u64 var_val;
+
-+ __remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 0);
-+ timer_stats_account_hrtimer(timer);
-+ fn = timer->function;
++ for (i = 0, j = field_var_str_start; i < n_field_vars; i++) {
++ struct field_var *field_var = field_vars[i];
++ struct hist_field *var = field_var->var;
++ struct hist_field *val = field_var->val;
+
-+ raw_spin_unlock_irq(&cpu_base->lock);
-+ restart = fn(timer);
-+ raw_spin_lock_irq(&cpu_base->lock);
++ var_val = val->fn(val, elt, rbe, rec);
++ var_idx = var->var.idx;
+
-+ hrtimer_rt_reprogram(restart, timer, base);
-+ raw_write_seqcount_barrier(&cpu_base->seq);
++ if (val->flags & HIST_FIELD_FL_STRING) {
++ char *str = elt_data->field_var_str[j++];
++ char *val_str = (char *)(uintptr_t)var_val;
+
-+ WARN_ON_ONCE(cpu_base->running_soft != timer);
-+ cpu_base->running_soft = NULL;
++ strscpy(str, val_str, STR_VAR_LEN_MAX);
++ var_val = (u64)(uintptr_t)str;
+ }
++ tracing_map_set_var(elt, var_idx, var_val);
+ }
-+
-+ raw_spin_unlock_irq(&cpu_base->lock);
-+
-+ wake_up_timer_waiters(cpu_base);
+}
+
-+static int hrtimer_rt_defer(struct hrtimer *timer)
++static void update_field_vars(struct hist_trigger_data *hist_data,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *rec)
+{
-+ if (timer->irqsafe)
-+ return 0;
++ __update_field_vars(elt, rbe, rec, hist_data->field_vars,
++ hist_data->n_field_vars, 0);
++}
+
-+ __remove_hrtimer(timer, timer->base, timer->state, 0);
-+ list_add_tail(&timer->cb_entry, &timer->base->expired);
-+ return 1;
++static void update_max_vars(struct hist_trigger_data *hist_data,
++ struct tracing_map_elt *elt,
++ struct ring_buffer_event *rbe,
++ void *rec)
++{
++ __update_field_vars(elt, rbe, rec, hist_data->max_vars,
++ hist_data->n_max_vars, hist_data->n_field_var_str);
+}
+
-+#else
++static struct hist_field *create_var(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file,
++ char *name, int size, const char *type)
++{
++ struct hist_field *var;
++ int idx;
+
-+static inline int hrtimer_rt_defer(struct hrtimer *timer) { return 0; }
++ if (find_var(hist_data, file, name) && !hist_data->remove) {
++ var = ERR_PTR(-EINVAL);
++ goto out;
++ }
+
-+#endif
++ var = kzalloc(sizeof(struct hist_field), GFP_KERNEL);
++ if (!var) {
++ var = ERR_PTR(-ENOMEM);
++ goto out;
++ }
+
-+static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer);
++ idx = tracing_map_add_var(hist_data->map);
++ if (idx < 0) {
++ kfree(var);
++ var = ERR_PTR(-EINVAL);
++ goto out;
++ }
+
- static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
- {
- struct hrtimer_clock_base *base = cpu_base->clock_base;
- unsigned int active = cpu_base->active_bases;
-+ int raise = 0;
-
- for (; active; base++, active >>= 1) {
- struct timerqueue_node *node;
-@@ -1284,6 +1450,15 @@ static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
-
- timer = container_of(node, struct hrtimer, node);
-
-+ trace_hrtimer_interrupt(raw_smp_processor_id(),
-+ ktime_to_ns(ktime_sub(ktime_to_ns(timer->praecox) ?
-+ timer->praecox : hrtimer_get_expires(timer),
-+ basenow)),
-+ current,
-+ timer->function == hrtimer_wakeup ?
-+ container_of(timer, struct hrtimer_sleeper,
-+ timer)->task : NULL);
++ var->flags = HIST_FIELD_FL_VAR;
++ var->var.idx = idx;
++ var->var.hist_data = var->hist_data = hist_data;
++ var->size = size;
++ var->var.name = kstrdup(name, GFP_KERNEL);
++ var->type = kstrdup(type, GFP_KERNEL);
++ if (!var->var.name || !var->type) {
++ kfree(var->var.name);
++ kfree(var->type);
++ kfree(var);
++ var = ERR_PTR(-ENOMEM);
++ }
++ out:
++ return var;
++}
+
- /*
- * The immediate goal for using the softexpires is
- * minimizing wakeups, not running timers at the
-@@ -1299,9 +1474,14 @@ static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
- if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer))
- break;
-
-- __run_hrtimer(cpu_base, base, timer, &basenow);
-+ if (!hrtimer_rt_defer(timer))
-+ __run_hrtimer(cpu_base, base, timer, &basenow);
-+ else
-+ raise = 1;
- }
- }
-+ if (raise)
-+ raise_softirq_irqoff(HRTIMER_SOFTIRQ);
- }
-
- #ifdef CONFIG_HIGH_RES_TIMERS
-@@ -1464,16 +1644,18 @@ static enum hrtimer_restart hrtimer_wakeup(struct hrtimer *timer)
- void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, struct task_struct *task)
- {
- sl->timer.function = hrtimer_wakeup;
-+ sl->timer.irqsafe = 1;
- sl->task = task;
- }
- EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);
-
--static int __sched do_nanosleep(struct hrtimer_sleeper *t, enum hrtimer_mode mode)
-+static int __sched do_nanosleep(struct hrtimer_sleeper *t, enum hrtimer_mode mode,
-+ unsigned long state)
- {
- hrtimer_init_sleeper(t, current);
-
- do {
-- set_current_state(TASK_INTERRUPTIBLE);
-+ set_current_state(state);
- hrtimer_start_expires(&t->timer, mode);
-
- if (likely(t->task))
-@@ -1515,7 +1697,8 @@ long __sched hrtimer_nanosleep_restart(struct restart_block *restart)
- HRTIMER_MODE_ABS);
- hrtimer_set_expires_tv64(&t.timer, restart->nanosleep.expires);
-
-- if (do_nanosleep(&t, HRTIMER_MODE_ABS))
-+ /* cpu_chill() does not care about restart state. */
-+ if (do_nanosleep(&t, HRTIMER_MODE_ABS, TASK_INTERRUPTIBLE))
- goto out;
-
- rmtp = restart->nanosleep.rmtp;
-@@ -1532,8 +1715,10 @@ long __sched hrtimer_nanosleep_restart(struct restart_block *restart)
- return ret;
- }
-
--long hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
-- const enum hrtimer_mode mode, const clockid_t clockid)
-+static long
-+__hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
-+ const enum hrtimer_mode mode, const clockid_t clockid,
-+ unsigned long state)
- {
- struct restart_block *restart;
- struct hrtimer_sleeper t;
-@@ -1546,7 +1731,7 @@ long hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
-
- hrtimer_init_on_stack(&t.timer, clockid, mode);
- hrtimer_set_expires_range_ns(&t.timer, timespec_to_ktime(*rqtp), slack);
-- if (do_nanosleep(&t, mode))
-+ if (do_nanosleep(&t, mode, state))
- goto out;
-
- /* Absolute timers do not update the rmtp value and restart: */
-@@ -1573,6 +1758,12 @@ long hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
- return ret;
- }
-
-+long hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
-+ const enum hrtimer_mode mode, const clockid_t clockid)
++static struct field_var *create_field_var(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file,
++ char *field_name)
+{
-+ return __hrtimer_nanosleep(rqtp, rmtp, mode, clockid, TASK_INTERRUPTIBLE);
++ struct hist_field *val = NULL, *var = NULL;
++ unsigned long flags = HIST_FIELD_FL_VAR;
++ struct field_var *field_var;
+ int ret = 0;
+
+- if (WARN_ON(val_idx >= TRACING_MAP_VALS_MAX))
++ if (hist_data->n_field_vars >= SYNTH_FIELDS_MAX) {
++ hist_err("Too many field variables defined: ", field_name);
++ ret = -EINVAL;
++ goto err;
++ }
++
++ val = parse_atom(hist_data, file, field_name, &flags, NULL);
++ if (IS_ERR(val)) {
++ hist_err("Couldn't parse field variable: ", field_name);
++ ret = PTR_ERR(val);
++ goto err;
++ }
++
++ var = create_var(hist_data, file, field_name, val->size, val->type);
++ if (IS_ERR(var)) {
++ hist_err("Couldn't create or find variable: ", field_name);
++ kfree(val);
++ ret = PTR_ERR(var);
++ goto err;
++ }
++
++ field_var = kzalloc(sizeof(struct field_var), GFP_KERNEL);
++ if (!field_var) {
++ kfree(val);
++ kfree(var);
++ ret = -ENOMEM;
++ goto err;
++ }
++
++ field_var->var = var;
++ field_var->val = val;
++ out:
++ return field_var;
++ err:
++ field_var = ERR_PTR(ret);
++ goto out;
+}
+
- SYSCALL_DEFINE2(nanosleep, struct timespec __user *, rqtp,
- struct timespec __user *, rmtp)
- {
-@@ -1587,6 +1778,26 @@ SYSCALL_DEFINE2(nanosleep, struct timespec __user *, rqtp,
- return hrtimer_nanosleep(&tu, rmtp, HRTIMER_MODE_REL, CLOCK_MONOTONIC);
- }
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+/*
-+ * Sleep for 1 ms in hope whoever holds what we want will let it go.
++/**
++ * create_target_field_var - Automatically create a variable for a field
++ * @target_hist_data: The target hist trigger
++ * @subsys_name: Optional subsystem name
++ * @event_name: Optional event name
++ * @var_name: The name of the field (and the resulting variable)
++ *
++ * Hist trigger actions fetch data from variables, not directly from
++ * events. However, for convenience, users are allowed to directly
++ * specify an event field in an action, which will be automatically
++ * converted into a variable on their behalf.
++ *
++ * This function creates a field variable with the name var_name on
++ * the hist trigger currently being defined on the target event. If
++ * subsys_name and event_name are specified, this function simply
++ * verifies that they do in fact match the target event subsystem and
++ * event name.
++ *
++ * Return: The variable created for the field.
+ */
-+void cpu_chill(void)
++static struct field_var *
++create_target_field_var(struct hist_trigger_data *target_hist_data,
++ char *subsys_name, char *event_name, char *var_name)
+{
-+ struct timespec tu = {
-+ .tv_nsec = NSEC_PER_MSEC,
-+ };
-+ unsigned int freeze_flag = current->flags & PF_NOFREEZE;
++ struct trace_event_file *file = target_hist_data->event_file;
+
-+ current->flags |= PF_NOFREEZE;
-+ __hrtimer_nanosleep(&tu, NULL, HRTIMER_MODE_REL, CLOCK_MONOTONIC,
-+ TASK_UNINTERRUPTIBLE);
-+ if (!freeze_flag)
-+ current->flags &= ~PF_NOFREEZE;
-+}
-+EXPORT_SYMBOL(cpu_chill);
-+#endif
++ if (subsys_name) {
++ struct trace_event_call *call;
+
- /*
- * Functions related to boot-time initialization:
- */
-@@ -1598,10 +1809,14 @@ int hrtimers_prepare_cpu(unsigned int cpu)
- for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
- cpu_base->clock_base[i].cpu_base = cpu_base;
- timerqueue_init_head(&cpu_base->clock_base[i].active);
-+ INIT_LIST_HEAD(&cpu_base->clock_base[i].expired);
- }
-
- cpu_base->cpu = cpu;
- hrtimer_init_hres(cpu_base);
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+ init_waitqueue_head(&cpu_base->wait);
-+#endif
- return 0;
- }
-
-@@ -1671,9 +1886,26 @@ int hrtimers_dead_cpu(unsigned int scpu)
-
- #endif /* CONFIG_HOTPLUG_CPU */
-
-+#ifdef CONFIG_PREEMPT_RT_BASE
++ if (!event_name)
++ return NULL;
+
-+static void run_hrtimer_softirq(struct softirq_action *h)
-+{
-+ hrtimer_rt_run_pending();
++ call = file->event_call;
++
++ if (strcmp(subsys_name, call->class->system) != 0)
++ return NULL;
++
++ if (strcmp(event_name, trace_event_name(call)) != 0)
++ return NULL;
++ }
++
++ return create_field_var(target_hist_data, file, var_name);
+}
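Concretely (event and field names are illustrative): in

  # echo 'hist:keys=pid:ts0=common_timestamp.usecs:onmax($ts0).save(prio,comm)' >> /sys/kernel/debug/tracing/events/sched/sched_waking/trigger

the bare 'prio' and 'comm' fields belong to the target event itself, so
create_target_field_var() quietly turns each of them into a field variable on the
histogram currently being defined.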
+
-+static void hrtimers_open_softirq(void)
++static void onmax_print(struct seq_file *m,
++ struct hist_trigger_data *hist_data,
++ struct tracing_map_elt *elt,
++ struct action_data *data)
+{
-+ open_softirq(HRTIMER_SOFTIRQ, run_hrtimer_softirq);
-+}
++ unsigned int i, save_var_idx, max_idx = data->onmax.max_var->var.idx;
+
-+#else
-+static void hrtimers_open_softirq(void) { }
-+#endif
++ seq_printf(m, "\n\tmax: %10llu", tracing_map_read_var(elt, max_idx));
+
- void __init hrtimers_init(void)
- {
- hrtimers_prepare_cpu(smp_processor_id());
-+ hrtimers_open_softirq();
- }
-
- /**
-diff --git a/kernel/time/itimer.c b/kernel/time/itimer.c
-index 1d5c7204ddc9..184de6751180 100644
---- a/kernel/time/itimer.c
-+++ b/kernel/time/itimer.c
-@@ -213,6 +213,7 @@ int do_setitimer(int which, struct itimerval *value, struct itimerval *ovalue)
- /* We are sharing ->siglock with it_real_fn() */
- if (hrtimer_try_to_cancel(timer) < 0) {
- spin_unlock_irq(&tsk->sighand->siglock);
-+ hrtimer_wait_for_timer(&tsk->signal->real_timer);
- goto again;
- }
- expires = timeval_to_ktime(value->it_value);
-diff --git a/kernel/time/jiffies.c b/kernel/time/jiffies.c
-index 555e21f7b966..a5d6435fabbb 100644
---- a/kernel/time/jiffies.c
-+++ b/kernel/time/jiffies.c
-@@ -74,7 +74,8 @@ static struct clocksource clocksource_jiffies = {
- .max_cycles = 10,
- };
-
--__cacheline_aligned_in_smp DEFINE_SEQLOCK(jiffies_lock);
-+__cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(jiffies_lock);
-+__cacheline_aligned_in_smp seqcount_t jiffies_seq;
-
- #if (BITS_PER_LONG < 64)
- u64 get_jiffies_64(void)
-@@ -83,9 +84,9 @@ u64 get_jiffies_64(void)
- u64 ret;
-
- do {
-- seq = read_seqbegin(&jiffies_lock);
-+ seq = read_seqcount_begin(&jiffies_seq);
- ret = jiffies_64;
-- } while (read_seqretry(&jiffies_lock, seq));
-+ } while (read_seqcount_retry(&jiffies_seq, seq));
- return ret;
- }
- EXPORT_SYMBOL(get_jiffies_64);
-diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
-index 6df8927c58a5..05b7391bf9bd 100644
---- a/kernel/time/ntp.c
-+++ b/kernel/time/ntp.c
-@@ -17,6 +17,7 @@
- #include <linux/module.h>
- #include <linux/rtc.h>
- #include <linux/math64.h>
-+#include <linux/swork.h>
-
- #include "ntp_internal.h"
- #include "timekeeping_internal.h"
-@@ -568,10 +569,35 @@ static void sync_cmos_clock(struct work_struct *work)
- &sync_cmos_work, timespec64_to_jiffies(&next));
- }
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
++ for (i = 0; i < hist_data->n_max_vars; i++) {
++ struct hist_field *save_val = hist_data->max_vars[i]->val;
++ struct hist_field *save_var = hist_data->max_vars[i]->var;
++ u64 val;
+
-+static void run_clock_set_delay(struct swork_event *event)
-+{
-+ queue_delayed_work(system_power_efficient_wq, &sync_cmos_work, 0);
-+}
++ save_var_idx = save_var->var.idx;
+
-+static struct swork_event ntp_cmos_swork;
++ val = tracing_map_read_var(elt, save_var_idx);
+
-+void ntp_notify_cmos_timer(void)
-+{
-+ swork_queue(&ntp_cmos_swork);
++ if (save_val->flags & HIST_FIELD_FL_STRING) {
++ seq_printf(m, " %s: %-32s", save_var->var.name,
++ (char *)(uintptr_t)(val));
++ } else
++ seq_printf(m, " %s: %10llu", save_var->var.name, val);
++ }
+}
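When the histogram is read back, onmax_print() above adds a per-entry line after the
normal key/hitcount output; with a hypothetical save(next_comm,prev_pid) action it would
look roughly like:

  # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
    { next_pid:       2069 } hitcount:         17
      max:        128  next_comm: cyclictest  prev_pid:          0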
+
-+static __init int create_cmos_delay_thread(void)
++static void onmax_save(struct hist_trigger_data *hist_data,
++ struct tracing_map_elt *elt, void *rec,
++ struct ring_buffer_event *rbe,
++ struct action_data *data, u64 *var_ref_vals)
+{
-+ WARN_ON(swork_get());
-+ INIT_SWORK(&ntp_cmos_swork, run_clock_set_delay);
-+ return 0;
-+}
-+early_initcall(create_cmos_delay_thread);
++ unsigned int max_idx = data->onmax.max_var->var.idx;
++ unsigned int max_var_ref_idx = data->onmax.max_var_ref_idx;
+
-+#else
++ u64 var_val, max_val;
+
- void ntp_notify_cmos_timer(void)
- {
- queue_delayed_work(system_power_efficient_wq, &sync_cmos_work, 0);
- }
-+#endif /* CONFIG_PREEMPT_RT_FULL */
-
- #else
- void ntp_notify_cmos_timer(void) { }
-diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
-index 39008d78927a..633f4eaca9e7 100644
---- a/kernel/time/posix-cpu-timers.c
-+++ b/kernel/time/posix-cpu-timers.c
-@@ -3,6 +3,7 @@
- */
-
- #include <linux/sched.h>
-+#include <linux/sched/rt.h>
- #include <linux/posix-timers.h>
- #include <linux/errno.h>
- #include <linux/math64.h>
-@@ -620,7 +621,7 @@ static int posix_cpu_timer_set(struct k_itimer *timer, int timer_flags,
- /*
- * Disarm any old timer after extracting its expiry time.
- */
-- WARN_ON_ONCE(!irqs_disabled());
-+ WARN_ON_ONCE_NONRT(!irqs_disabled());
-
- ret = 0;
- old_incr = timer->it.cpu.incr;
-@@ -1064,7 +1065,7 @@ void posix_cpu_timer_schedule(struct k_itimer *timer)
- /*
- * Now re-arm for the new expiry time.
- */
-- WARN_ON_ONCE(!irqs_disabled());
-+ WARN_ON_ONCE_NONRT(!irqs_disabled());
- arm_timer(timer);
- unlock_task_sighand(p, &flags);
-
-@@ -1153,13 +1154,13 @@ static inline int fastpath_timer_check(struct task_struct *tsk)
- * already updated our counts. We need to check if any timers fire now.
- * Interrupts are disabled.
- */
--void run_posix_cpu_timers(struct task_struct *tsk)
-+static void __run_posix_cpu_timers(struct task_struct *tsk)
- {
- LIST_HEAD(firing);
- struct k_itimer *timer, *next;
- unsigned long flags;
-
-- WARN_ON_ONCE(!irqs_disabled());
-+ WARN_ON_ONCE_NONRT(!irqs_disabled());
-
- /*
- * The fast path checks that there are no expired thread or thread
-@@ -1213,6 +1214,190 @@ void run_posix_cpu_timers(struct task_struct *tsk)
- }
- }
-
-+#ifdef CONFIG_PREEMPT_RT_BASE
-+#include <linux/kthread.h>
-+#include <linux/cpu.h>
-+DEFINE_PER_CPU(struct task_struct *, posix_timer_task);
-+DEFINE_PER_CPU(struct task_struct *, posix_timer_tasklist);
++ var_val = var_ref_vals[max_var_ref_idx];
++ max_val = tracing_map_read_var(elt, max_idx);
++
++ if (var_val <= max_val)
++ return;
++
++ tracing_map_set_var(elt, max_idx, var_val);
++
++ update_max_vars(hist_data, elt, rbe, rec);
++}
+
-+static int posix_cpu_timers_thread(void *data)
++static void onmax_destroy(struct action_data *data)
+{
-+ int cpu = (long)data;
++ unsigned int i;
+
-+ BUG_ON(per_cpu(posix_timer_task,cpu) != current);
++ destroy_hist_field(data->onmax.max_var, 0);
++ destroy_hist_field(data->onmax.var, 0);
+
-+ while (!kthread_should_stop()) {
-+ struct task_struct *tsk = NULL;
-+ struct task_struct *next = NULL;
++ kfree(data->onmax.var_str);
++ kfree(data->onmax.fn_name);
+
-+ if (cpu_is_offline(cpu))
-+ goto wait_to_die;
++ for (i = 0; i < data->n_params; i++)
++ kfree(data->params[i]);
+
-+ /* grab task list */
-+ raw_local_irq_disable();
-+ tsk = per_cpu(posix_timer_tasklist, cpu);
-+ per_cpu(posix_timer_tasklist, cpu) = NULL;
-+ raw_local_irq_enable();
++ kfree(data);
++}
+
-+ /* its possible the list is empty, just return */
-+ if (!tsk) {
-+ set_current_state(TASK_INTERRUPTIBLE);
-+ schedule();
-+ __set_current_state(TASK_RUNNING);
-+ continue;
-+ }
++static int onmax_create(struct hist_trigger_data *hist_data,
++ struct action_data *data)
++{
++ struct trace_event_file *file = hist_data->event_file;
++ struct hist_field *var_field, *ref_field, *max_var;
++ unsigned int var_ref_idx = hist_data->n_var_refs;
++ struct field_var *field_var;
++ char *onmax_var_str, *param;
++ unsigned long flags;
++ unsigned int i;
++ int ret = 0;
+
-+ /* Process task list */
-+ while (1) {
-+ /* save next */
-+ next = tsk->posix_timer_list;
++ onmax_var_str = data->onmax.var_str;
++ if (onmax_var_str[0] != '$') {
++ hist_err("onmax: For onmax(x), x must be a variable: ", onmax_var_str);
+ return -EINVAL;
++ }
++ onmax_var_str++;
+
+- field_name = strsep(&field_str, ".");
+- if (field_str) {
+- if (strcmp(field_str, "hex") == 0)
+- flags |= HIST_FIELD_FL_HEX;
+- else {
++ var_field = find_target_event_var(hist_data, NULL, NULL, onmax_var_str);
++ if (!var_field) {
++ hist_err("onmax: Couldn't find onmax variable: ", onmax_var_str);
++ return -EINVAL;
++ }
+
-+ /* run the task timers, clear its ptr and
-+ * unreference it
-+ */
-+ __run_posix_cpu_timers(tsk);
-+ tsk->posix_timer_list = NULL;
-+ put_task_struct(tsk);
++ flags = HIST_FIELD_FL_VAR_REF;
++ ref_field = create_hist_field(hist_data, NULL, flags, NULL);
++ if (!ref_field)
++ return -ENOMEM;
+
-+ /* check if this is the last on the list */
-+ if (next == tsk)
-+ break;
-+ tsk = next;
-+ }
++ if (init_var_ref(ref_field, var_field, NULL, NULL)) {
++ destroy_hist_field(ref_field, 0);
++ ret = -ENOMEM;
++ goto out;
+ }
-+ return 0;
++ hist_data->var_refs[hist_data->n_var_refs] = ref_field;
++ ref_field->var_ref_idx = hist_data->n_var_refs++;
++ data->onmax.var = ref_field;
++
++ data->fn = onmax_save;
++ data->onmax.max_var_ref_idx = var_ref_idx;
++ max_var = create_var(hist_data, file, "max", sizeof(u64), "u64");
++ if (IS_ERR(max_var)) {
++ hist_err("onmax: Couldn't create onmax variable: ", "max");
++ ret = PTR_ERR(max_var);
++ goto out;
++ }
++ data->onmax.max_var = max_var;
+
-+wait_to_die:
-+ /* Wait for kthread_stop */
-+ set_current_state(TASK_INTERRUPTIBLE);
-+ while (!kthread_should_stop()) {
-+ schedule();
-+ set_current_state(TASK_INTERRUPTIBLE);
++ for (i = 0; i < data->n_params; i++) {
++ param = kstrdup(data->params[i], GFP_KERNEL);
++ if (!param) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ field_var = create_target_field_var(hist_data, NULL, NULL, param);
++ if (IS_ERR(field_var)) {
++ hist_err("onmax: Couldn't create field variable: ", param);
++ ret = PTR_ERR(field_var);
++ kfree(param);
++ goto out;
++ }
++
++ hist_data->max_vars[hist_data->n_max_vars++] = field_var;
++ if (field_var->val->flags & HIST_FIELD_FL_STRING)
++ hist_data->n_max_var_str++;
++
++ kfree(param);
+ }
-+ __set_current_state(TASK_RUNNING);
-+ return 0;
++ out:
++ return ret;
+}
+
-+static inline int __fastpath_timer_check(struct task_struct *tsk)
++static int parse_action_params(char *params, struct action_data *data)
+{
-+ /* tsk == current, ensure it is safe to use ->signal/sighand */
-+ if (unlikely(tsk->exit_state))
-+ return 0;
++ char *param, *saved_param;
++ int ret = 0;
+
-+ if (!task_cputime_zero(&tsk->cputime_expires))
-+ return 1;
++ while (params) {
++ if (data->n_params >= SYNTH_FIELDS_MAX)
++ goto out;
+
-+ if (!task_cputime_zero(&tsk->signal->cputime_expires))
-+ return 1;
++		param = strsep(&params, ",");
++ if (!param) {
++ ret = -EINVAL;
++ goto out;
++ }
+
-+ return 0;
-+}
++ param = strstrip(param);
++ if (strlen(param) < 2) {
++ hist_err("Invalid action param: ", param);
+ ret = -EINVAL;
+ goto out;
+ }
+
-+void run_posix_cpu_timers(struct task_struct *tsk)
++ saved_param = kstrdup(param, GFP_KERNEL);
++ if (!saved_param) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ data->params[data->n_params++] = saved_param;
+ }
++ out:
++ return ret;
++}
+
+- field = trace_find_event_field(file->event_call, field_name);
+- if (!field || !field->size) {
++static struct action_data *onmax_parse(char *str)
+{
-+ unsigned long cpu = smp_processor_id();
-+ struct task_struct *tasklist;
++ char *onmax_fn_name, *onmax_var_str;
++ struct action_data *data;
++ int ret = -EINVAL;
+
-+ BUG_ON(!irqs_disabled());
-+ if(!per_cpu(posix_timer_task, cpu))
-+ return;
-+ /* get per-cpu references */
-+ tasklist = per_cpu(posix_timer_tasklist, cpu);
++ data = kzalloc(sizeof(*data), GFP_KERNEL);
++ if (!data)
++ return ERR_PTR(-ENOMEM);
+
-+ /* check to see if we're already queued */
-+ if (!tsk->posix_timer_list && __fastpath_timer_check(tsk)) {
-+ get_task_struct(tsk);
-+ if (tasklist) {
-+ tsk->posix_timer_list = tasklist;
-+ } else {
-+ /*
-+ * The list is terminated by a self-pointing
-+ * task_struct
-+ */
-+ tsk->posix_timer_list = tsk;
++ onmax_var_str = strsep(&str, ")");
++ if (!onmax_var_str || !str) {
+ ret = -EINVAL;
+- goto out;
++ goto free;
+ }
+
+- hist_data->fields[val_idx] = create_hist_field(field, flags);
+- if (!hist_data->fields[val_idx]) {
++ data->onmax.var_str = kstrdup(onmax_var_str, GFP_KERNEL);
++ if (!data->onmax.var_str) {
++ ret = -ENOMEM;
++ goto free;
++ }
++
++ strsep(&str, ".");
++ if (!str)
++ goto free;
++
++ onmax_fn_name = strsep(&str, "(");
++ if (!onmax_fn_name || !str)
++ goto free;
++
++ if (strncmp(onmax_fn_name, "save", strlen("save")) == 0) {
++ char *params = strsep(&str, ")");
++
++ if (!params) {
++ ret = -EINVAL;
++ goto free;
+ }
-+ per_cpu(posix_timer_tasklist, cpu) = tsk;
+
-+ wake_up_process(per_cpu(posix_timer_task, cpu));
++ ret = parse_action_params(params, data);
++ if (ret)
++ goto free;
++ } else
++ goto free;
++
++ data->onmax.fn_name = kstrdup(onmax_fn_name, GFP_KERNEL);
++ if (!data->onmax.fn_name) {
++ ret = -ENOMEM;
++ goto free;
+ }
++ out:
++ return data;
++ free:
++ onmax_destroy(data);
++ data = ERR_PTR(ret);
++ goto out;
+}
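onmax_parse() above accepts exactly the form onmax($var).save(field1,field2,...), where
$var must be a variable reference and save() lists fields of the target event to capture
when a new maximum is seen. A complete trigger using it might look like (variable and
field names are illustrative):

  # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:onmax($wakeup_lat).save(next_comm,prev_pid)' >> /sys/kernel/debug/tracing/events/sched/sched_switch/trigger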
+
-+/*
-+ * posix_cpu_thread_call - callback that gets triggered when a CPU is added.
-+ * Here we can start up the necessary migration thread for the new CPU.
-+ */
-+static int posix_cpu_thread_call(struct notifier_block *nfb,
-+ unsigned long action, void *hcpu)
-+{
-+ int cpu = (long)hcpu;
-+ struct task_struct *p;
-+ struct sched_param param;
-+
-+ switch (action) {
-+ case CPU_UP_PREPARE:
-+ p = kthread_create(posix_cpu_timers_thread, hcpu,
-+ "posixcputmr/%d",cpu);
-+ if (IS_ERR(p))
-+ return NOTIFY_BAD;
-+ p->flags |= PF_NOFREEZE;
-+ kthread_bind(p, cpu);
-+ /* Must be high prio to avoid getting starved */
-+ param.sched_priority = MAX_RT_PRIO-1;
-+		sched_setscheduler(p, SCHED_FIFO, &param);
-+ per_cpu(posix_timer_task,cpu) = p;
-+ break;
-+ case CPU_ONLINE:
-+ /* Strictly unneccessary, as first user will wake it. */
-+ wake_up_process(per_cpu(posix_timer_task,cpu));
-+ break;
-+#ifdef CONFIG_HOTPLUG_CPU
-+ case CPU_UP_CANCELED:
-+ /* Unbind it from offline cpu so it can run. Fall thru. */
-+ kthread_bind(per_cpu(posix_timer_task, cpu),
-+ cpumask_any(cpu_online_mask));
-+ kthread_stop(per_cpu(posix_timer_task,cpu));
-+ per_cpu(posix_timer_task,cpu) = NULL;
-+ break;
-+ case CPU_DEAD:
-+ kthread_stop(per_cpu(posix_timer_task,cpu));
-+ per_cpu(posix_timer_task,cpu) = NULL;
-+ break;
-+#endif
-+ }
-+ return NOTIFY_OK;
++static void onmatch_destroy(struct action_data *data)
++{
++ unsigned int i;
++
++ mutex_lock(&synth_event_mutex);
++
++ kfree(data->onmatch.match_event);
++ kfree(data->onmatch.match_event_system);
++ kfree(data->onmatch.synth_event_name);
++
++ for (i = 0; i < data->n_params; i++)
++ kfree(data->params[i]);
++
++ if (data->onmatch.synth_event)
++ data->onmatch.synth_event->ref--;
++
++ kfree(data);
++
++ mutex_unlock(&synth_event_mutex);
++}
++
++static void destroy_field_var(struct field_var *field_var)
++{
++ if (!field_var)
++ return;
++
++ destroy_hist_field(field_var->var, 0);
++ destroy_hist_field(field_var->val, 0);
++
++ kfree(field_var);
+}
+
-+/* Register at highest priority so that task migration (migrate_all_tasks)
-+ * happens before everything else.
-+ */
-+static struct notifier_block posix_cpu_thread_notifier = {
-+ .notifier_call = posix_cpu_thread_call,
-+ .priority = 10
-+};
++static void destroy_field_vars(struct hist_trigger_data *hist_data)
++{
++ unsigned int i;
+
-+static int __init posix_cpu_thread_init(void)
++ for (i = 0; i < hist_data->n_field_vars; i++)
++ destroy_field_var(hist_data->field_vars[i]);
++}
++
++static void save_field_var(struct hist_trigger_data *hist_data,
++ struct field_var *field_var)
+{
-+ void *hcpu = (void *)(long)smp_processor_id();
-+ /* Start one for boot CPU. */
-+ unsigned long cpu;
++ hist_data->field_vars[hist_data->n_field_vars++] = field_var;
+
-+ /* init the per-cpu posix_timer_tasklets */
-+ for_each_possible_cpu(cpu)
-+ per_cpu(posix_timer_tasklist, cpu) = NULL;
++ if (field_var->val->flags & HIST_FIELD_FL_STRING)
++ hist_data->n_field_var_str++;
++}
+
-+ posix_cpu_thread_call(&posix_cpu_thread_notifier, CPU_UP_PREPARE, hcpu);
-+ posix_cpu_thread_call(&posix_cpu_thread_notifier, CPU_ONLINE, hcpu);
-+ register_cpu_notifier(&posix_cpu_thread_notifier);
-+ return 0;
++
++static void destroy_synth_var_refs(struct hist_trigger_data *hist_data)
++{
++ unsigned int i;
++
++ for (i = 0; i < hist_data->n_synth_var_refs; i++)
++ destroy_hist_field(hist_data->synth_var_refs[i], 0);
+}
-+early_initcall(posix_cpu_thread_init);
-+#else /* CONFIG_PREEMPT_RT_BASE */
-+void run_posix_cpu_timers(struct task_struct *tsk)
++
++static void save_synth_var_ref(struct hist_trigger_data *hist_data,
++ struct hist_field *var_ref)
+{
-+ __run_posix_cpu_timers(tsk);
++ hist_data->synth_var_refs[hist_data->n_synth_var_refs++] = var_ref;
++
++ hist_data->var_refs[hist_data->n_var_refs] = var_ref;
++ var_ref->var_ref_idx = hist_data->n_var_refs++;
+}
-+#endif /* CONFIG_PREEMPT_RT_BASE */
+
- /*
- * Set one of the process-wide special case CPU timers or RLIMIT_CPU.
- * The tsk->sighand->siglock must be held by the caller.
-diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
-index f2826c35e918..464a98155a0e 100644
---- a/kernel/time/posix-timers.c
-+++ b/kernel/time/posix-timers.c
-@@ -506,6 +506,7 @@ static enum hrtimer_restart posix_timer_fn(struct hrtimer *timer)
- static struct pid *good_sigevent(sigevent_t * event)
- {
- struct task_struct *rtn = current->group_leader;
-+ int sig = event->sigev_signo;
-
- if ((event->sigev_notify & SIGEV_THREAD_ID ) &&
- (!(rtn = find_task_by_vpid(event->sigev_notify_thread_id)) ||
-@@ -514,7 +515,8 @@ static struct pid *good_sigevent(sigevent_t * event)
- return NULL;
-
- if (((event->sigev_notify & ~SIGEV_THREAD_ID) != SIGEV_NONE) &&
-- ((event->sigev_signo <= 0) || (event->sigev_signo > SIGRTMAX)))
-+ (sig <= 0 || sig > SIGRTMAX || sig_kernel_only(sig) ||
-+ sig_kernel_coredump(sig)))
- return NULL;
-
- return task_pid(rtn);
-@@ -826,6 +828,20 @@ SYSCALL_DEFINE1(timer_getoverrun, timer_t, timer_id)
- return overrun;
- }
-
-+/*
-+ * Protected by RCU!
-+ */
-+static void timer_wait_for_callback(struct k_clock *kc, struct k_itimer *timr)
++static int check_synth_field(struct synth_event *event,
++ struct hist_field *hist_field,
++ unsigned int field_pos)
+{
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ if (kc->timer_set == common_timer_set)
-+ hrtimer_wait_for_timer(&timr->it.real.timer);
-+ else
-+ /* FIXME: Whacky hack for posix-cpu-timers */
-+ schedule_timeout(1);
-+#endif
++ struct synth_field *field;
++
++ if (field_pos >= event->n_fields)
++ return -EINVAL;
++
++ field = event->fields[field_pos];
++
++ if (strcmp(field->type, hist_field->type) != 0)
++ return -EINVAL;
++
++ return 0;
+}
+
- /* Set a POSIX.1b interval timer. */
- /* timr->it_lock is taken. */
- static int
-@@ -903,6 +919,7 @@ SYSCALL_DEFINE4(timer_settime, timer_t, timer_id, int, flags,
- if (!timr)
- return -EINVAL;
-
-+ rcu_read_lock();
- kc = clockid_to_kclock(timr->it_clock);
- if (WARN_ON_ONCE(!kc || !kc->timer_set))
- error = -EINVAL;
-@@ -911,9 +928,12 @@ SYSCALL_DEFINE4(timer_settime, timer_t, timer_id, int, flags,
-
- unlock_timer(timr, flag);
- if (error == TIMER_RETRY) {
-+ timer_wait_for_callback(kc, timr);
- rtn = NULL; // We already got the old time...
-+ rcu_read_unlock();
- goto retry;
- }
-+ rcu_read_unlock();
-
- if (old_setting && !error &&
- copy_to_user(old_setting, &old_spec, sizeof (old_spec)))
-@@ -951,10 +971,15 @@ SYSCALL_DEFINE1(timer_delete, timer_t, timer_id)
- if (!timer)
- return -EINVAL;
-
-+ rcu_read_lock();
- if (timer_delete_hook(timer) == TIMER_RETRY) {
- unlock_timer(timer, flags);
-+ timer_wait_for_callback(clockid_to_kclock(timer->it_clock),
-+ timer);
-+ rcu_read_unlock();
- goto retry_delete;
- }
-+ rcu_read_unlock();
-
- spin_lock(¤t->sighand->siglock);
- list_del(&timer->list);
-@@ -980,8 +1005,18 @@ static void itimer_delete(struct k_itimer *timer)
- retry_delete:
- spin_lock_irqsave(&timer->it_lock, flags);
-
-- if (timer_delete_hook(timer) == TIMER_RETRY) {
-+ /* On RT we can race with a deletion */
-+ if (!timer->it_signal) {
- unlock_timer(timer, flags);
-+ return;
++static struct hist_field *
++onmatch_find_var(struct hist_trigger_data *hist_data, struct action_data *data,
++ char *system, char *event, char *var)
++{
++ struct hist_field *hist_field;
++
++ var++; /* skip '$' */
++
++ hist_field = find_target_event_var(hist_data, system, event, var);
++ if (!hist_field) {
++ if (!system) {
++ system = data->onmatch.match_event_system;
++ event = data->onmatch.match_event;
++ }
++
++ hist_field = find_event_var(hist_data, system, event, var);
+ }
+
-+ if (timer_delete_hook(timer) == TIMER_RETRY) {
-+ rcu_read_lock();
-+ unlock_timer(timer, flags);
-+ timer_wait_for_callback(clockid_to_kclock(timer->it_clock),
-+ timer);
-+ rcu_read_unlock();
- goto retry_delete;
- }
- list_del(&timer->list);
-diff --git a/kernel/time/tick-broadcast-hrtimer.c b/kernel/time/tick-broadcast-hrtimer.c
-index 690b797f522e..fe8ba1619879 100644
---- a/kernel/time/tick-broadcast-hrtimer.c
-+++ b/kernel/time/tick-broadcast-hrtimer.c
-@@ -107,5 +107,6 @@ void tick_setup_hrtimer_broadcast(void)
- {
- hrtimer_init(&bctimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
- bctimer.function = bc_handler;
-+ bctimer.irqsafe = true;
- clockevents_register_device(&ce_broadcast_hrtimer);
- }
-diff --git a/kernel/time/tick-common.c b/kernel/time/tick-common.c
-index 4fcd99e12aa0..5a47f2e98faf 100644
---- a/kernel/time/tick-common.c
-+++ b/kernel/time/tick-common.c
-@@ -79,13 +79,15 @@ int tick_is_oneshot_available(void)
- static void tick_periodic(int cpu)
- {
- if (tick_do_timer_cpu == cpu) {
-- write_seqlock(&jiffies_lock);
-+ raw_spin_lock(&jiffies_lock);
-+ write_seqcount_begin(&jiffies_seq);
-
- /* Keep track of the next tick event */
- tick_next_period = ktime_add(tick_next_period, tick_period);
-
- do_timer(1);
-- write_sequnlock(&jiffies_lock);
-+ write_seqcount_end(&jiffies_seq);
-+ raw_spin_unlock(&jiffies_lock);
- update_wall_time();
- }
-
-@@ -157,9 +159,9 @@ void tick_setup_periodic(struct clock_event_device *dev, int broadcast)
- ktime_t next;
-
- do {
-- seq = read_seqbegin(&jiffies_lock);
-+ seq = read_seqcount_begin(&jiffies_seq);
- next = tick_next_period;
-- } while (read_seqretry(&jiffies_lock, seq));
-+ } while (read_seqcount_retry(&jiffies_seq, seq));
-
- clockevents_switch_state(dev, CLOCK_EVT_STATE_ONESHOT);
-
-diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
-index 3bcb61b52f6c..66d85482a96e 100644
---- a/kernel/time/tick-sched.c
-+++ b/kernel/time/tick-sched.c
-@@ -62,7 +62,8 @@ static void tick_do_update_jiffies64(ktime_t now)
- return;
-
- /* Reevaluate with jiffies_lock held */
-- write_seqlock(&jiffies_lock);
-+ raw_spin_lock(&jiffies_lock);
-+ write_seqcount_begin(&jiffies_seq);
-
- delta = ktime_sub(now, last_jiffies_update);
- if (delta.tv64 >= tick_period.tv64) {
-@@ -85,10 +86,12 @@ static void tick_do_update_jiffies64(ktime_t now)
- /* Keep the tick_next_period variable up to date */
- tick_next_period = ktime_add(last_jiffies_update, tick_period);
- } else {
-- write_sequnlock(&jiffies_lock);
-+ write_seqcount_end(&jiffies_seq);
-+ raw_spin_unlock(&jiffies_lock);
- return;
- }
-- write_sequnlock(&jiffies_lock);
-+ write_seqcount_end(&jiffies_seq);
-+ raw_spin_unlock(&jiffies_lock);
- update_wall_time();
- }
-
-@@ -99,12 +102,14 @@ static ktime_t tick_init_jiffy_update(void)
- {
- ktime_t period;
-
-- write_seqlock(&jiffies_lock);
-+ raw_spin_lock(&jiffies_lock);
-+ write_seqcount_begin(&jiffies_seq);
- /* Did we start the jiffies update yet ? */
- if (last_jiffies_update.tv64 == 0)
- last_jiffies_update = tick_next_period;
- period = last_jiffies_update;
-- write_sequnlock(&jiffies_lock);
-+ write_seqcount_end(&jiffies_seq);
-+ raw_spin_unlock(&jiffies_lock);
- return period;
- }
-
-@@ -215,6 +220,7 @@ static void nohz_full_kick_func(struct irq_work *work)
-
- static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) = {
- .func = nohz_full_kick_func,
-+ .flags = IRQ_WORK_HARD_IRQ,
- };
-
- /*
-@@ -673,10 +679,10 @@ static ktime_t tick_nohz_stop_sched_tick(struct tick_sched *ts,
-
- /* Read jiffies and the time when jiffies were updated last */
- do {
-- seq = read_seqbegin(&jiffies_lock);
-+ seq = read_seqcount_begin(&jiffies_seq);
- basemono = last_jiffies_update.tv64;
- basejiff = jiffies;
-- } while (read_seqretry(&jiffies_lock, seq));
-+ } while (read_seqcount_retry(&jiffies_seq, seq));
- ts->last_jiffies = basejiff;
-
- if (rcu_needs_cpu(basemono, &next_rcu) ||
-@@ -877,14 +883,7 @@ static bool can_stop_idle_tick(int cpu, struct tick_sched *ts)
- return false;
-
- if (unlikely(local_softirq_pending() && cpu_online(cpu))) {
-- static int ratelimit;
--
-- if (ratelimit < 10 &&
-- (local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK)) {
-- pr_warn("NOHZ: local_softirq_pending %02x\n",
-- (unsigned int) local_softirq_pending());
-- ratelimit++;
-- }
-+ softirq_check_pending_idle();
- return false;
- }
-
-@@ -1193,6 +1192,7 @@ void tick_setup_sched_timer(void)
- * Emulate tick processing via per-CPU hrtimers:
- */
- hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
-+ ts->sched_timer.irqsafe = 1;
- ts->sched_timer.function = tick_sched_timer;
-
- /* Get the next period (per-CPU) */
-diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
-index 46e312e9be38..fa75cf5d9253 100644
---- a/kernel/time/timekeeping.c
-+++ b/kernel/time/timekeeping.c
-@@ -2328,8 +2328,10 @@ EXPORT_SYMBOL(hardpps);
- */
- void xtime_update(unsigned long ticks)
- {
-- write_seqlock(&jiffies_lock);
-+ raw_spin_lock(&jiffies_lock);
-+ write_seqcount_begin(&jiffies_seq);
- do_timer(ticks);
-- write_sequnlock(&jiffies_lock);
-+ write_seqcount_end(&jiffies_seq);
-+ raw_spin_unlock(&jiffies_lock);
- update_wall_time();
- }
-diff --git a/kernel/time/timekeeping.h b/kernel/time/timekeeping.h
-index 704f595ce83f..763a3e5121ff 100644
---- a/kernel/time/timekeeping.h
-+++ b/kernel/time/timekeeping.h
-@@ -19,7 +19,8 @@ extern void timekeeping_resume(void);
- extern void do_timer(unsigned long ticks);
- extern void update_wall_time(void);
-
--extern seqlock_t jiffies_lock;
-+extern raw_spinlock_t jiffies_lock;
-+extern seqcount_t jiffies_seq;
-
- #define CS_NAME_LEN 32
-
-diff --git a/kernel/time/timer.c b/kernel/time/timer.c
-index c611c47de884..08a5ab762495 100644
---- a/kernel/time/timer.c
-+++ b/kernel/time/timer.c
-@@ -193,8 +193,11 @@ EXPORT_SYMBOL(jiffies_64);
- #endif
-
- struct timer_base {
-- spinlock_t lock;
-+ raw_spinlock_t lock;
- struct timer_list *running_timer;
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ struct swait_queue_head wait_for_running_timer;
-+#endif
- unsigned long clk;
- unsigned long next_expiry;
- unsigned int cpu;
-@@ -948,10 +951,10 @@ static struct timer_base *lock_timer_base(struct timer_list *timer,
-
- if (!(tf & TIMER_MIGRATING)) {
- base = get_timer_base(tf);
-- spin_lock_irqsave(&base->lock, *flags);
-+ raw_spin_lock_irqsave(&base->lock, *flags);
- if (timer->flags == tf)
- return base;
-- spin_unlock_irqrestore(&base->lock, *flags);
-+ raw_spin_unlock_irqrestore(&base->lock, *flags);
- }
- cpu_relax();
- }
-@@ -1023,9 +1026,9 @@ __mod_timer(struct timer_list *timer, unsigned long expires, bool pending_only)
- /* See the comment in lock_timer_base() */
- timer->flags |= TIMER_MIGRATING;
-
-- spin_unlock(&base->lock);
-+ raw_spin_unlock(&base->lock);
- base = new_base;
-- spin_lock(&base->lock);
-+ raw_spin_lock(&base->lock);
- WRITE_ONCE(timer->flags,
- (timer->flags & ~TIMER_BASEMASK) | base->cpu);
- }
-@@ -1050,7 +1053,7 @@ __mod_timer(struct timer_list *timer, unsigned long expires, bool pending_only)
++ if (!hist_field)
++ hist_err_event("onmatch: Couldn't find onmatch param: $", system, event, var);
++
++ return hist_field;
++}
++
++static struct hist_field *
++onmatch_create_field_var(struct hist_trigger_data *hist_data,
++ struct action_data *data, char *system,
++ char *event, char *var)
++{
++ struct hist_field *hist_field = NULL;
++ struct field_var *field_var;
++
++ /*
++ * First try to create a field var on the target event (the
++ * event currently being defined). This will create a variable for
++ * unqualified fields on the target event, or if qualified,
++ * target fields that have qualified names matching the target.
++ */
++ field_var = create_target_field_var(hist_data, system, event, var);
++
++ if (field_var && !IS_ERR(field_var)) {
++ save_field_var(hist_data, field_var);
++ hist_field = field_var->var;
++ } else {
++ field_var = NULL;
++ /*
++ * If no explicit system.event is specified, default to
++ * looking for fields on the onmatch(system.event.xxx)
++ * event.
++ */
++ if (!system) {
++ system = data->onmatch.match_event_system;
++ event = data->onmatch.match_event;
++ }
++
++ /*
++ * At this point, we're looking at a field on another
++ * event. Because we can't modify a hist trigger on
++ * another event to add a variable for a field, we need
++ * to create a new trigger on that event and create the
++ * variable at the same time.
++ */
++ hist_field = create_field_var_hist(hist_data, system, event, var);
++ if (IS_ERR(hist_field))
++ goto free;
++ }
++ out:
++ return hist_field;
++ free:
++ destroy_field_var(field_var);
++ hist_field = NULL;
++ goto out;
++}
++
++static int onmatch_create(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file,
++ struct action_data *data)
++{
++ char *event_name, *param, *system = NULL;
++ struct hist_field *hist_field, *var_ref;
++ unsigned int i, var_ref_idx;
++ unsigned int field_pos = 0;
++ struct synth_event *event;
++ int ret = 0;
++
++ mutex_lock(&synth_event_mutex);
++ event = find_synth_event(data->onmatch.synth_event_name);
++ if (!event) {
++ hist_err("onmatch: Couldn't find synthetic event: ", data->onmatch.synth_event_name);
++ mutex_unlock(&synth_event_mutex);
++ return -EINVAL;
++ }
++ event->ref++;
++ mutex_unlock(&synth_event_mutex);
++
++ var_ref_idx = hist_data->n_var_refs;
++
++ for (i = 0; i < data->n_params; i++) {
++ char *p;
++
++ p = param = kstrdup(data->params[i], GFP_KERNEL);
++ if (!param) {
++ ret = -ENOMEM;
++ goto err;
++ }
++
++ system = strsep(&param, ".");
++ if (!param) {
++ param = (char *)system;
++ system = event_name = NULL;
++ } else {
++ event_name = strsep(&param, ".");
++ if (!param) {
++ kfree(p);
++ ret = -EINVAL;
++ goto err;
++ }
++ }
++
++ if (param[0] == '$')
++ hist_field = onmatch_find_var(hist_data, data, system,
++ event_name, param);
++ else
++ hist_field = onmatch_create_field_var(hist_data, data,
++ system,
++ event_name,
++ param);
++
++ if (!hist_field) {
++ kfree(p);
++ ret = -EINVAL;
++ goto err;
++ }
++
++ if (check_synth_field(event, hist_field, field_pos) == 0) {
++ var_ref = create_var_ref(hist_field, system, event_name);
++ if (!var_ref) {
++ kfree(p);
++ ret = -ENOMEM;
++ goto err;
++ }
++
++ save_synth_var_ref(hist_data, var_ref);
++ field_pos++;
++ kfree(p);
++ continue;
++ }
++
++ hist_err_event("onmatch: Param type doesn't match synthetic event field type: ",
++ system, event_name, param);
++ kfree(p);
++ ret = -EINVAL;
++ goto err;
++ }
++
++ if (field_pos != event->n_fields) {
++ hist_err("onmatch: Param count doesn't match synthetic event field count: ", event->name);
++ ret = -EINVAL;
++ goto err;
++ }
++
++ data->fn = action_trace;
++ data->onmatch.synth_event = event;
++ data->onmatch.var_ref_idx = var_ref_idx;
++ out:
++ return ret;
++ err:
++ mutex_lock(&synth_event_mutex);
++ event->ref--;
++ mutex_unlock(&synth_event_mutex);
++
++ goto out;
++}
++
++static struct action_data *onmatch_parse(struct trace_array *tr, char *str)
++{
++ char *match_event, *match_event_system;
++ char *synth_event_name, *params;
++ struct action_data *data;
++ int ret = -EINVAL;
++
++ data = kzalloc(sizeof(*data), GFP_KERNEL);
++ if (!data)
++ return ERR_PTR(-ENOMEM);
++
++ match_event = strsep(&str, ")");
++ if (!match_event || !str) {
++ hist_err("onmatch: Missing closing paren: ", match_event);
++ goto free;
++ }
++
++ match_event_system = strsep(&match_event, ".");
++ if (!match_event) {
++ hist_err("onmatch: Missing subsystem for match event: ", match_event_system);
++ goto free;
++ }
++
++ if (IS_ERR(event_file(tr, match_event_system, match_event))) {
++ hist_err_event("onmatch: Invalid subsystem or event name: ",
++ match_event_system, match_event, NULL);
++ goto free;
++ }
++
++ data->onmatch.match_event = kstrdup(match_event, GFP_KERNEL);
++ if (!data->onmatch.match_event) {
++ ret = -ENOMEM;
++ goto free;
++ }
++
++ data->onmatch.match_event_system = kstrdup(match_event_system, GFP_KERNEL);
++ if (!data->onmatch.match_event_system) {
++ ret = -ENOMEM;
++ goto free;
++ }
++
++ strsep(&str, ".");
++ if (!str) {
++ hist_err("onmatch: Missing . after onmatch(): ", str);
++ goto free;
++ }
++
++ synth_event_name = strsep(&str, "(");
++ if (!synth_event_name || !str) {
++ hist_err("onmatch: Missing opening paramlist paren: ", synth_event_name);
++ goto free;
++ }
++
++ data->onmatch.synth_event_name = kstrdup(synth_event_name, GFP_KERNEL);
++ if (!data->onmatch.synth_event_name) {
+ ret = -ENOMEM;
++ goto free;
++ }
++
++ params = strsep(&str, ")");
++ if (!params || !str || (str && strlen(str))) {
++ hist_err("onmatch: Missing closing paramlist paren: ", params);
++ goto free;
++ }
++
++ ret = parse_action_params(params, data);
++ if (ret)
++ goto free;
++ out:
++ return data;
++ free:
++ onmatch_destroy(data);
++ data = ERR_PTR(ret);
++ goto out;
++}
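For reference, onmatch_parse() above consumes the text that follows ":onmatch(" in a hist trigger command, i.e. "system.event).synth_event(param1,param2,...)". A minimal sketch of such a command, assuming a synthetic event named wakeup_latency and variables $ts0/$wakeup_lat set up by a trigger on the matching event (all of these names are illustrative, not taken from this hunk):

  # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,next_pid)' \
        >> /sys/kernel/debug/tracing/events/sched/sched_switch/trigger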
++
++static int create_hitcount_val(struct hist_trigger_data *hist_data)
++{
++ hist_data->fields[HITCOUNT_IDX] =
++ create_hist_field(hist_data, NULL, HIST_FIELD_FL_HITCOUNT, NULL);
++ if (!hist_data->fields[HITCOUNT_IDX])
++ return -ENOMEM;
++
++ hist_data->n_vals++;
++ hist_data->n_fields++;
++
++ if (WARN_ON(hist_data->n_vals > TRACING_MAP_VALS_MAX))
++ return -EINVAL;
++
++ return 0;
++}
++
++static int __create_val_field(struct hist_trigger_data *hist_data,
++ unsigned int val_idx,
++ struct trace_event_file *file,
++ char *var_name, char *field_str,
++ unsigned long flags)
++{
++ struct hist_field *hist_field;
++ int ret = 0;
++
++ hist_field = parse_expr(hist_data, file, field_str, flags, var_name, 0);
++ if (IS_ERR(hist_field)) {
++ ret = PTR_ERR(hist_field);
+ goto out;
}
- out_unlock:
-- spin_unlock_irqrestore(&base->lock, flags);
-+ raw_spin_unlock_irqrestore(&base->lock, flags);
++ hist_data->fields[val_idx] = hist_field;
++
+ ++hist_data->n_vals;
++ ++hist_data->n_fields;
+- if (WARN_ON(hist_data->n_vals > TRACING_MAP_VALS_MAX))
++ if (WARN_ON(hist_data->n_vals > TRACING_MAP_VALS_MAX + TRACING_MAP_VARS_MAX))
+ ret = -EINVAL;
+ out:
return ret;
}
-@@ -1144,19 +1147,46 @@ void add_timer_on(struct timer_list *timer, int cpu)
- if (base != new_base) {
- timer->flags |= TIMER_MIGRATING;
-
-- spin_unlock(&base->lock);
-+ raw_spin_unlock(&base->lock);
- base = new_base;
-- spin_lock(&base->lock);
-+ raw_spin_lock(&base->lock);
- WRITE_ONCE(timer->flags,
- (timer->flags & ~TIMER_BASEMASK) | cpu);
- }
-
- debug_activate(timer, timer->expires);
- internal_add_timer(base, timer);
-- spin_unlock_irqrestore(&base->lock, flags);
-+ raw_spin_unlock_irqrestore(&base->lock, flags);
- }
- EXPORT_SYMBOL_GPL(add_timer_on);
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+/*
-+ * Wait for a running timer
-+ */
-+static void wait_for_running_timer(struct timer_list *timer)
++static int create_val_field(struct hist_trigger_data *hist_data,
++ unsigned int val_idx,
++ struct trace_event_file *file,
++ char *field_str)
+{
-+ struct timer_base *base;
-+ u32 tf = timer->flags;
-+
-+ if (tf & TIMER_MIGRATING)
-+ return;
++ if (WARN_ON(val_idx >= TRACING_MAP_VALS_MAX))
++ return -EINVAL;
+
-+ base = get_timer_base(tf);
-+ swait_event(base->wait_for_running_timer,
-+ base->running_timer != timer);
++ return __create_val_field(hist_data, val_idx, file, NULL, field_str, 0);
+}
+
-+# define wakeup_timer_waiters(b) swake_up_all(&(b)->wait_for_running_timer)
-+#else
-+static inline void wait_for_running_timer(struct timer_list *timer)
++static int create_var_field(struct hist_trigger_data *hist_data,
++ unsigned int val_idx,
++ struct trace_event_file *file,
++ char *var_name, char *expr_str)
+{
-+ cpu_relax();
-+}
++ unsigned long flags = 0;
+
-+# define wakeup_timer_waiters(b) do { } while (0)
-+#endif
++ if (WARN_ON(val_idx >= TRACING_MAP_VALS_MAX + TRACING_MAP_VARS_MAX))
++ return -EINVAL;
+
- /**
- * del_timer - deactive a timer.
- * @timer: the timer to be deactivated
-@@ -1180,7 +1210,7 @@ int del_timer(struct timer_list *timer)
- if (timer_pending(timer)) {
- base = lock_timer_base(timer, &flags);
- ret = detach_if_pending(timer, base, true);
-- spin_unlock_irqrestore(&base->lock, flags);
-+ raw_spin_unlock_irqrestore(&base->lock, flags);
- }
++ if (find_var(hist_data, file, var_name) && !hist_data->remove) {
++ hist_err("Variable already defined: ", var_name);
++ return -EINVAL;
++ }
++
++ flags |= HIST_FIELD_FL_VAR;
++ hist_data->n_vars++;
++ if (WARN_ON(hist_data->n_vars > TRACING_MAP_VARS_MAX))
++ return -EINVAL;
++
++ return __create_val_field(hist_data, val_idx, file, var_name, expr_str, flags);
++}
++
+ static int create_val_fields(struct hist_trigger_data *hist_data,
+ struct trace_event_file *file)
+ {
+ char *fields_str, *field_str;
+- unsigned int i, j;
++ unsigned int i, j = 1;
+ int ret;
- return ret;
-@@ -1208,13 +1238,13 @@ int try_to_del_timer_sync(struct timer_list *timer)
- timer_stats_timer_clear_start_info(timer);
- ret = detach_if_pending(timer, base, true);
+ ret = create_hitcount_val(hist_data);
+@@ -493,12 +3912,15 @@
+ field_str = strsep(&fields_str, ",");
+ if (!field_str)
+ break;
++
+ if (strcmp(field_str, "hitcount") == 0)
+ continue;
++
+ ret = create_val_field(hist_data, j++, file, field_str);
+ if (ret)
+ goto out;
}
-- spin_unlock_irqrestore(&base->lock, flags);
-+ raw_spin_unlock_irqrestore(&base->lock, flags);
++
+ if (fields_str && (strcmp(fields_str, "hitcount") != 0))
+ ret = -EINVAL;
+ out:
+@@ -511,12 +3933,13 @@
+ struct trace_event_file *file,
+ char *field_str)
+ {
+- struct ftrace_event_field *field = NULL;
++ struct hist_field *hist_field = NULL;
++
+ unsigned long flags = 0;
+ unsigned int key_size;
+ int ret = 0;
- return ret;
- }
- EXPORT_SYMBOL(try_to_del_timer_sync);
+- if (WARN_ON(key_idx >= TRACING_MAP_FIELDS_MAX))
++ if (WARN_ON(key_idx >= HIST_FIELDS_MAX))
+ return -EINVAL;
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
- /**
- * del_timer_sync - deactivate a timer and wait for the handler to finish.
- * @timer: the timer to be deactivated
-@@ -1274,7 +1304,7 @@ int del_timer_sync(struct timer_list *timer)
- int ret = try_to_del_timer_sync(timer);
- if (ret >= 0)
- return ret;
-- cpu_relax();
-+ wait_for_running_timer(timer);
- }
- }
- EXPORT_SYMBOL(del_timer_sync);
-@@ -1339,14 +1369,17 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head)
- fn = timer->function;
- data = timer->data;
+ flags |= HIST_FIELD_FL_KEY;
+@@ -524,57 +3947,40 @@
+ if (strcmp(field_str, "stacktrace") == 0) {
+ flags |= HIST_FIELD_FL_STACKTRACE;
+ key_size = sizeof(unsigned long) * HIST_STACKTRACE_DEPTH;
++ hist_field = create_hist_field(hist_data, NULL, flags, NULL);
+ } else {
+- char *field_name = strsep(&field_str, ".");
+-
+- if (field_str) {
+- if (strcmp(field_str, "hex") == 0)
+- flags |= HIST_FIELD_FL_HEX;
+- else if (strcmp(field_str, "sym") == 0)
+- flags |= HIST_FIELD_FL_SYM;
+- else if (strcmp(field_str, "sym-offset") == 0)
+- flags |= HIST_FIELD_FL_SYM_OFFSET;
+- else if ((strcmp(field_str, "execname") == 0) &&
+- (strcmp(field_name, "common_pid") == 0))
+- flags |= HIST_FIELD_FL_EXECNAME;
+- else if (strcmp(field_str, "syscall") == 0)
+- flags |= HIST_FIELD_FL_SYSCALL;
+- else if (strcmp(field_str, "log2") == 0)
+- flags |= HIST_FIELD_FL_LOG2;
+- else {
+- ret = -EINVAL;
+- goto out;
+- }
++ hist_field = parse_expr(hist_data, file, field_str, flags,
++ NULL, 0);
++ if (IS_ERR(hist_field)) {
++ ret = PTR_ERR(hist_field);
++ goto out;
+ }
-- if (timer->flags & TIMER_IRQSAFE) {
-- spin_unlock(&base->lock);
-+ if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL) &&
-+ timer->flags & TIMER_IRQSAFE) {
-+ raw_spin_unlock(&base->lock);
- call_timer_fn(timer, fn, data);
-- spin_lock(&base->lock);
-+ base->running_timer = NULL;
-+ raw_spin_lock(&base->lock);
- } else {
-- spin_unlock_irq(&base->lock);
-+ raw_spin_unlock_irq(&base->lock);
- call_timer_fn(timer, fn, data);
-- spin_lock_irq(&base->lock);
-+ base->running_timer = NULL;
-+ raw_spin_lock_irq(&base->lock);
+- field = trace_find_event_field(file->event_call, field_name);
+- if (!field || !field->size) {
++ if (hist_field->flags & HIST_FIELD_FL_VAR_REF) {
++ hist_err("Using variable references as keys not supported: ", field_str);
++ destroy_hist_field(hist_field, 0);
+ ret = -EINVAL;
+ goto out;
}
+
+- if (is_string_field(field))
+- key_size = MAX_FILTER_STR_VAL;
+- else
+- key_size = field->size;
++ key_size = hist_field->size;
}
- }
-@@ -1515,7 +1548,7 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
- if (cpu_is_offline(smp_processor_id()))
- return expires;
-- spin_lock(&base->lock);
-+ raw_spin_lock(&base->lock);
- nextevt = __next_timer_interrupt(base);
- is_max_delta = (nextevt == base->clk + NEXT_TIMER_MAX_DELTA);
- base->next_expiry = nextevt;
-@@ -1543,7 +1576,7 @@ u64 get_next_timer_interrupt(unsigned long basej, u64 basem)
- if ((expires - basem) > TICK_NSEC)
- base->is_idle = true;
+- hist_data->fields[key_idx] = create_hist_field(field, flags);
+- if (!hist_data->fields[key_idx]) {
+- ret = -ENOMEM;
+- goto out;
+- }
++ hist_data->fields[key_idx] = hist_field;
+
+ key_size = ALIGN(key_size, sizeof(u64));
+ hist_data->fields[key_idx]->size = key_size;
+ hist_data->fields[key_idx]->offset = key_offset;
++
+ hist_data->key_size += key_size;
++
+ if (hist_data->key_size > HIST_KEY_SIZE_MAX) {
+ ret = -EINVAL;
+ goto out;
}
-- spin_unlock(&base->lock);
-+ raw_spin_unlock(&base->lock);
- return cmp_next_hrtimer_event(basem, expires);
- }
-@@ -1608,13 +1641,13 @@ void update_process_times(int user_tick)
+ hist_data->n_keys++;
++ hist_data->n_fields++;
- /* Note: this timer irq context must be accounted for as well. */
- account_process_tick(p, user_tick);
-+ scheduler_tick();
- run_local_timers();
- rcu_check_callbacks(user_tick);
--#ifdef CONFIG_IRQ_WORK
-+#if defined(CONFIG_IRQ_WORK)
- if (in_irq())
- irq_work_tick();
- #endif
-- scheduler_tick();
- run_posix_cpu_timers(p);
+ if (WARN_ON(hist_data->n_keys > TRACING_MAP_KEYS_MAX))
+ return -EINVAL;
+@@ -618,21 +4024,113 @@
+ return ret;
}
-@@ -1630,7 +1663,7 @@ static inline void __run_timers(struct timer_base *base)
- if (!time_after_eq(jiffies, base->clk))
- return;
++static int create_var_fields(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file)
++{
++ unsigned int i, j = hist_data->n_vals;
++ int ret = 0;
++
++ unsigned int n_vars = hist_data->attrs->var_defs.n_vars;
++
++ for (i = 0; i < n_vars; i++) {
++ char *var_name = hist_data->attrs->var_defs.name[i];
++ char *expr = hist_data->attrs->var_defs.expr[i];
++
++ ret = create_var_field(hist_data, j++, file, var_name, expr);
++ if (ret)
++ goto out;
++ }
++ out:
++ return ret;
++}
++
++static void free_var_defs(struct hist_trigger_data *hist_data)
++{
++ unsigned int i;
++
++ for (i = 0; i < hist_data->attrs->var_defs.n_vars; i++) {
++ kfree(hist_data->attrs->var_defs.name[i]);
++ kfree(hist_data->attrs->var_defs.expr[i]);
++ }
++
++ hist_data->attrs->var_defs.n_vars = 0;
++}
++
++static int parse_var_defs(struct hist_trigger_data *hist_data)
++{
++ char *s, *str, *var_name, *field_str;
++ unsigned int i, j, n_vars = 0;
++ int ret = 0;
++
++ for (i = 0; i < hist_data->attrs->n_assignments; i++) {
++ str = hist_data->attrs->assignment_str[i];
++ for (j = 0; j < TRACING_MAP_VARS_MAX; j++) {
++ field_str = strsep(&str, ",");
++ if (!field_str)
++ break;
++
++ var_name = strsep(&field_str, "=");
++ if (!var_name || !field_str) {
++ hist_err("Malformed assignment: ", var_name);
++ ret = -EINVAL;
++ goto free;
++ }
++
++ if (n_vars == TRACING_MAP_VARS_MAX) {
++ hist_err("Too many variables defined: ", var_name);
++ ret = -EINVAL;
++ goto free;
++ }
++
++ s = kstrdup(var_name, GFP_KERNEL);
++ if (!s) {
++ ret = -ENOMEM;
++ goto free;
++ }
++ hist_data->attrs->var_defs.name[n_vars] = s;
++
++ s = kstrdup(field_str, GFP_KERNEL);
++ if (!s) {
++ kfree(hist_data->attrs->var_defs.name[n_vars]);
++ ret = -ENOMEM;
++ goto free;
++ }
++ hist_data->attrs->var_defs.expr[n_vars++] = s;
++
++ hist_data->attrs->var_defs.n_vars = n_vars;
++ }
++ }
++
++ return ret;
++ free:
++ free_var_defs(hist_data);
++
++ return ret;
++}
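For reference, parse_var_defs() above walks the name=expr assignments collected from a hist trigger command; a minimal sketch of a command that would produce one such assignment (the variable name ts0 is illustrative):

  # echo 'hist:keys=pid:ts0=common_timestamp.usecs' \
        >> /sys/kernel/debug/tracing/events/sched/sched_waking/trigger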
++
+ static int create_hist_fields(struct hist_trigger_data *hist_data,
+ struct trace_event_file *file)
+ {
+ int ret;
-- spin_lock_irq(&base->lock);
-+ raw_spin_lock_irq(&base->lock);
++ ret = parse_var_defs(hist_data);
++ if (ret)
++ goto out;
++
+ ret = create_val_fields(hist_data, file);
+ if (ret)
+ goto out;
- while (time_after_eq(jiffies, base->clk)) {
+- ret = create_key_fields(hist_data, file);
++ ret = create_var_fields(hist_data, file);
+ if (ret)
+ goto out;
-@@ -1640,8 +1673,8 @@ static inline void __run_timers(struct timer_base *base)
- while (levels--)
- expire_timers(base, heads + levels);
- }
-- base->running_timer = NULL;
-- spin_unlock_irq(&base->lock);
-+ raw_spin_unlock_irq(&base->lock);
-+ wakeup_timer_waiters(base);
+- hist_data->n_fields = hist_data->n_vals + hist_data->n_keys;
++ ret = create_key_fields(hist_data, file);
++ if (ret)
++ goto out;
+ out:
++ free_var_defs(hist_data);
++
+ return ret;
}
- /*
-@@ -1651,6 +1684,8 @@ static __latent_entropy void run_timer_softirq(struct softirq_action *h)
+@@ -653,10 +4151,9 @@
+ static int create_sort_keys(struct hist_trigger_data *hist_data)
{
- struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
+ char *fields_str = hist_data->attrs->sort_key_str;
+- struct ftrace_event_field *field = NULL;
+ struct tracing_map_sort_key *sort_key;
+ int descending, ret = 0;
+- unsigned int i, j;
++ unsigned int i, j, k;
-+ irq_work_tick_soft();
-+
- __run_timers(base);
- if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && base->nohz_active)
- __run_timers(this_cpu_ptr(&timer_bases[BASE_DEF]));
-@@ -1836,16 +1871,16 @@ int timers_dead_cpu(unsigned int cpu)
- * The caller is globally serialized and nobody else
- * takes two locks at once, deadlock is not possible.
- */
-- spin_lock_irq(&new_base->lock);
-- spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
-+ raw_spin_lock_irq(&new_base->lock);
-+ raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
+ hist_data->n_sort_keys = 1; /* we always have at least one, hitcount */
- BUG_ON(old_base->running_timer);
+@@ -670,7 +4167,9 @@
+ }
- for (i = 0; i < WHEEL_SIZE; i++)
- migrate_timer_list(new_base, old_base->vectors + i);
+ for (i = 0; i < TRACING_MAP_SORT_KEYS_MAX; i++) {
++ struct hist_field *hist_field;
+ char *field_str, *field_name;
++ const char *test_name;
-- spin_unlock(&old_base->lock);
-- spin_unlock_irq(&new_base->lock);
-+ raw_spin_unlock(&old_base->lock);
-+ raw_spin_unlock_irq(&new_base->lock);
- put_cpu_ptr(&timer_bases);
- }
- return 0;
-@@ -1861,8 +1896,11 @@ static void __init init_timer_cpu(int cpu)
- for (i = 0; i < NR_BASES; i++) {
- base = per_cpu_ptr(&timer_bases[i], cpu);
- base->cpu = cpu;
-- spin_lock_init(&base->lock);
-+ raw_spin_lock_init(&base->lock);
- base->clk = jiffies;
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+ init_swait_queue_head(&base->wait_for_running_timer);
-+#endif
- }
- }
+ sort_key = &hist_data->sort_keys[i];
-diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
-index 2a96b063d659..812e37237eb8 100644
---- a/kernel/trace/Kconfig
-+++ b/kernel/trace/Kconfig
-@@ -182,6 +182,24 @@ config IRQSOFF_TRACER
- enabled. This option and the preempt-off timing option can be
- used together or separately.)
+@@ -702,10 +4201,19 @@
+ continue;
+ }
-+config INTERRUPT_OFF_HIST
-+ bool "Interrupts-off Latency Histogram"
-+ depends on IRQSOFF_TRACER
-+ help
-+ This option generates continuously updated histograms (one per cpu)
-+ of the duration of time periods with interrupts disabled. The
-+ histograms are disabled by default. To enable them, write a non-zero
-+ number to
+- for (j = 1; j < hist_data->n_fields; j++) {
+- field = hist_data->fields[j]->field;
+- if (field && (strcmp(field_name, field->name) == 0)) {
+- sort_key->field_idx = j;
++ for (j = 1, k = 1; j < hist_data->n_fields; j++) {
++ unsigned int idx;
+
-+ /sys/kernel/debug/tracing/latency_hist/enable/preemptirqsoff
++ hist_field = hist_data->fields[j];
++ if (hist_field->flags & HIST_FIELD_FL_VAR)
++ continue;
+
-+ If PREEMPT_OFF_HIST is also selected, additional histograms (one
-+ per cpu) are generated that accumulate the duration of time periods
-+ when both interrupts and preemption are disabled. The histogram data
-+ will be located in the debug file system at
++ idx = k++;
+
-+ /sys/kernel/debug/tracing/latency_hist/irqsoff
++ test_name = hist_field_name(hist_field, 0);
+
- config PREEMPT_TRACER
- bool "Preemption-off Latency Tracer"
- default n
-@@ -206,6 +224,24 @@ config PREEMPT_TRACER
- enabled. This option and the irqs-off timing option can be
- used together or separately.)
++ if (strcmp(field_name, test_name) == 0) {
++ sort_key->field_idx = idx;
+ descending = is_descending(field_str);
+ if (descending < 0) {
+ ret = descending;
+@@ -720,16 +4228,230 @@
+ break;
+ }
+ }
++
+ hist_data->n_sort_keys = i;
+ out:
+ return ret;
+ }
-+config PREEMPT_OFF_HIST
-+ bool "Preemption-off Latency Histogram"
-+ depends on PREEMPT_TRACER
-+ help
-+ This option generates continuously updated histograms (one per cpu)
-+ of the duration of time periods with preemption disabled. The
-+ histograms are disabled by default. To enable them, write a non-zero
-+ number to
-+
-+ /sys/kernel/debug/tracing/latency_hist/enable/preemptirqsoff
-+
-+ If INTERRUPT_OFF_HIST is also selected, additional histograms (one
-+ per cpu) are generated that accumulate the duration of time periods
-+ when both interrupts and preemption are disabled. The histogram data
-+ will be located in the debug file system at
-+
-+ /sys/kernel/debug/tracing/latency_hist/preemptoff
-+
- config SCHED_TRACER
- bool "Scheduling Latency Tracer"
- select GENERIC_TRACER
-@@ -251,6 +287,74 @@ config HWLAT_TRACER
- file. Every time a latency is greater than tracing_thresh, it will
- be recorded into the ring buffer.
-
-+config WAKEUP_LATENCY_HIST
-+ bool "Scheduling Latency Histogram"
-+ depends on SCHED_TRACER
-+ help
-+ This option generates continuously updated histograms (one per cpu)
-+ of the scheduling latency of the highest priority task.
-+ The histograms are disabled by default. To enable them, write a
-+ non-zero number to
++static void destroy_actions(struct hist_trigger_data *hist_data)
++{
++ unsigned int i;
+
-+ /sys/kernel/debug/tracing/latency_hist/enable/wakeup
++ for (i = 0; i < hist_data->n_actions; i++) {
++ struct action_data *data = hist_data->actions[i];
+
-+ Two different algorithms are used, one to determine the latency of
-+ processes that exclusively use the highest priority of the system and
-+ another one to determine the latency of processes that share the
-+ highest system priority with other processes. The former is used to
-+ improve hardware and system software, the latter to optimize the
-+ priority design of a given system. The histogram data will be
-+ located in the debug file system at
++ if (data->fn == action_trace)
++ onmatch_destroy(data);
++ else if (data->fn == onmax_save)
++ onmax_destroy(data);
++ else
++ kfree(data);
++ }
++}
+
-+ /sys/kernel/debug/tracing/latency_hist/wakeup
++static int parse_actions(struct hist_trigger_data *hist_data)
++{
++ struct trace_array *tr = hist_data->event_file->tr;
++ struct action_data *data;
++ unsigned int i;
++ int ret = 0;
++ char *str;
+
-+ and
++ for (i = 0; i < hist_data->attrs->n_actions; i++) {
++ str = hist_data->attrs->action_str[i];
+
-+ /sys/kernel/debug/tracing/latency_hist/wakeup/sharedprio
++ if (strncmp(str, "onmatch(", strlen("onmatch(")) == 0) {
++ char *action_str = str + strlen("onmatch(");
+
-+ If both Scheduling Latency Histogram and Missed Timer Offsets
-+ Histogram are selected, additional histogram data will be collected
-+ that contain, in addition to the wakeup latency, the timer latency, in
-+ case the wakeup was triggered by an expired timer. These histograms
-+ are available in the
++ data = onmatch_parse(tr, action_str);
++ if (IS_ERR(data)) {
++ ret = PTR_ERR(data);
++ break;
++ }
++ data->fn = action_trace;
++ } else if (strncmp(str, "onmax(", strlen("onmax(")) == 0) {
++ char *action_str = str + strlen("onmax(");
+
-+ /sys/kernel/debug/tracing/latency_hist/timerandwakeup
++ data = onmax_parse(action_str);
++ if (IS_ERR(data)) {
++ ret = PTR_ERR(data);
++ break;
++ }
++ data->fn = onmax_save;
++ } else {
++ ret = -EINVAL;
++ break;
++ }
+
-+ directory. They reflect the apparent interrupt and scheduling latency
-+ and are best suited to determining the worst-case latency of a given
-+ system. To enable these histograms, write a non-zero number to
++ hist_data->actions[hist_data->n_actions++] = data;
++ }
+
-+ /sys/kernel/debug/tracing/latency_hist/enable/timerandwakeup
++ return ret;
++}
+
-+config MISSED_TIMER_OFFSETS_HIST
-+ depends on HIGH_RES_TIMERS
-+ select GENERIC_TRACER
-+ bool "Missed Timer Offsets Histogram"
-+ help
-+ Generate a histogram of missed timer offsets in microseconds. The
-+ histograms are disabled by default. To enable them, write a non-zero
-+ number to
-+
-+ /sys/kernel/debug/tracing/latency_hist/enable/missed_timer_offsets
-+
-+ The histogram data will be located in the debug file system at
-+
-+ /sys/kernel/debug/tracing/latency_hist/missed_timer_offsets
-+
-+ If both Scheduling Latency Histogram and Missed Timer Offsets
-+ Histogram are selected, additional histogram data will be collected
-+ that contain, in addition to the wakeup latency, the timer latency, in
-+ case the wakeup was triggered by an expired timer. These histograms
-+ are available in the
-+
-+ /sys/kernel/debug/tracing/latency_hist/timerandwakeup
-+
-+ directory. They reflect the apparent interrupt and scheduling latency
-+ and are best suited to determining the worst-case latency of a given
-+ system. To enable these histograms, write a non-zero number to
-+
-+ /sys/kernel/debug/tracing/latency_hist/enable/timerandwakeup
-+
- config ENABLE_DEFAULT_TRACERS
- bool "Trace process context switches and events"
- depends on !GENERIC_TRACER
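For reference, the Kconfig help texts in the hunk above describe a debugfs interface; a minimal usage sketch, assuming the quoted enable path and per-CPU histogram files named like CPU0 (the per-CPU file name is an assumption, it is not shown in this hunk):

  # echo 1 >/sys/kernel/debug/tracing/latency_hist/enable/wakeup
  # cat /sys/kernel/debug/tracing/latency_hist/wakeup/CPU0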
-diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
-index e57980845549..83af000b783c 100644
---- a/kernel/trace/Makefile
-+++ b/kernel/trace/Makefile
-@@ -38,6 +38,10 @@ obj-$(CONFIG_IRQSOFF_TRACER) += trace_irqsoff.o
- obj-$(CONFIG_PREEMPT_TRACER) += trace_irqsoff.o
- obj-$(CONFIG_SCHED_TRACER) += trace_sched_wakeup.o
- obj-$(CONFIG_HWLAT_TRACER) += trace_hwlat.o
-+obj-$(CONFIG_INTERRUPT_OFF_HIST) += latency_hist.o
-+obj-$(CONFIG_PREEMPT_OFF_HIST) += latency_hist.o
-+obj-$(CONFIG_WAKEUP_LATENCY_HIST) += latency_hist.o
-+obj-$(CONFIG_MISSED_TIMER_OFFSETS_HIST) += latency_hist.o
- obj-$(CONFIG_NOP_TRACER) += trace_nop.o
- obj-$(CONFIG_STACK_TRACER) += trace_stack.o
- obj-$(CONFIG_MMIOTRACE) += trace_mmiotrace.o
-diff --git a/kernel/trace/latency_hist.c b/kernel/trace/latency_hist.c
-new file mode 100644
-index 000000000000..7f6ee70dea41
---- /dev/null
-+++ b/kernel/trace/latency_hist.c
-@@ -0,0 +1,1178 @@
-+/*
-+ * kernel/trace/latency_hist.c
-+ *
-+ * Add support for histograms of preemption-off latency and
-+ * interrupt-off latency and wakeup latency; it depends on
-+ * Real-Time Preemption Support.
-+ *
-+ * Copyright (C) 2005 MontaVista Software, Inc.
-+ * Yi Yang <yyang@ch.mvista.com>
-+ *
-+ * Converted to work with the new latency tracer.
-+ * Copyright (C) 2008 Red Hat, Inc.
-+ * Steven Rostedt <srostedt@redhat.com>
-+ *
-+ */
-+#include <linux/module.h>
-+#include <linux/debugfs.h>
-+#include <linux/seq_file.h>
-+#include <linux/percpu.h>
-+#include <linux/kallsyms.h>
-+#include <linux/uaccess.h>
-+#include <linux/sched.h>
-+#include <linux/sched/rt.h>
-+#include <linux/slab.h>
-+#include <linux/atomic.h>
-+#include <asm/div64.h>
++static int create_actions(struct hist_trigger_data *hist_data,
++ struct trace_event_file *file)
++{
++ struct action_data *data;
++ unsigned int i;
++ int ret = 0;
+
-+#include "trace.h"
-+#include <trace/events/sched.h>
++ for (i = 0; i < hist_data->attrs->n_actions; i++) {
++ data = hist_data->actions[i];
+
-+#define NSECS_PER_USECS 1000L
++ if (data->fn == action_trace) {
++ ret = onmatch_create(hist_data, file, data);
++ if (ret)
++ return ret;
++ } else if (data->fn == onmax_save) {
++ ret = onmax_create(hist_data, data);
++ if (ret)
++ return ret;
++ }
++ }
+
-+#define CREATE_TRACE_POINTS
-+#include <trace/events/hist.h>
++ return ret;
++}
+
-+enum {
-+ IRQSOFF_LATENCY = 0,
-+ PREEMPTOFF_LATENCY,
-+ PREEMPTIRQSOFF_LATENCY,
-+ WAKEUP_LATENCY,
-+ WAKEUP_LATENCY_SHAREDPRIO,
-+ MISSED_TIMER_OFFSETS,
-+ TIMERANDWAKEUP_LATENCY,
-+ MAX_LATENCY_TYPE,
-+};
++static void print_actions(struct seq_file *m,
++ struct hist_trigger_data *hist_data,
++ struct tracing_map_elt *elt)
++{
++ unsigned int i;
+
-+#define MAX_ENTRY_NUM 10240
-+
-+struct hist_data {
-+ atomic_t hist_mode; /* 0 log, 1 don't log */
-+ long offset; /* set it to MAX_ENTRY_NUM/2 for a bipolar scale */
-+ long min_lat;
-+ long max_lat;
-+ unsigned long long below_hist_bound_samples;
-+ unsigned long long above_hist_bound_samples;
-+ long long accumulate_lat;
-+ unsigned long long total_samples;
-+ unsigned long long hist_array[MAX_ENTRY_NUM];
-+};
++ for (i = 0; i < hist_data->n_actions; i++) {
++ struct action_data *data = hist_data->actions[i];
+
-+struct enable_data {
-+ int latency_type;
-+ int enabled;
-+};
++ if (data->fn == onmax_save)
++ onmax_print(m, hist_data, elt, data);
++ }
++}
+
-+static char *latency_hist_dir_root = "latency_hist";
++static void print_onmax_spec(struct seq_file *m,
++ struct hist_trigger_data *hist_data,
++ struct action_data *data)
++{
++ unsigned int i;
+
-+#ifdef CONFIG_INTERRUPT_OFF_HIST
-+static DEFINE_PER_CPU(struct hist_data, irqsoff_hist);
-+static char *irqsoff_hist_dir = "irqsoff";
-+static DEFINE_PER_CPU(cycles_t, hist_irqsoff_start);
-+static DEFINE_PER_CPU(int, hist_irqsoff_counting);
-+#endif
++ seq_puts(m, ":onmax(");
++ seq_printf(m, "%s", data->onmax.var_str);
++ seq_printf(m, ").%s(", data->onmax.fn_name);
+
-+#ifdef CONFIG_PREEMPT_OFF_HIST
-+static DEFINE_PER_CPU(struct hist_data, preemptoff_hist);
-+static char *preemptoff_hist_dir = "preemptoff";
-+static DEFINE_PER_CPU(cycles_t, hist_preemptoff_start);
-+static DEFINE_PER_CPU(int, hist_preemptoff_counting);
-+#endif
++ for (i = 0; i < hist_data->n_max_vars; i++) {
++ seq_printf(m, "%s", hist_data->max_vars[i]->var->var.name);
++ if (i < hist_data->n_max_vars - 1)
++ seq_puts(m, ",");
++ }
++ seq_puts(m, ")");
++}
+
-+#if defined(CONFIG_PREEMPT_OFF_HIST) && defined(CONFIG_INTERRUPT_OFF_HIST)
-+static DEFINE_PER_CPU(struct hist_data, preemptirqsoff_hist);
-+static char *preemptirqsoff_hist_dir = "preemptirqsoff";
-+static DEFINE_PER_CPU(cycles_t, hist_preemptirqsoff_start);
-+static DEFINE_PER_CPU(int, hist_preemptirqsoff_counting);
-+#endif
++static void print_onmatch_spec(struct seq_file *m,
++ struct hist_trigger_data *hist_data,
++ struct action_data *data)
++{
++ unsigned int i;
+
-+#if defined(CONFIG_PREEMPT_OFF_HIST) || defined(CONFIG_INTERRUPT_OFF_HIST)
-+static notrace void probe_preemptirqsoff_hist(void *v, int reason, int start);
-+static struct enable_data preemptirqsoff_enabled_data = {
-+ .latency_type = PREEMPTIRQSOFF_LATENCY,
-+ .enabled = 0,
-+};
-+#endif
++ seq_printf(m, ":onmatch(%s.%s).", data->onmatch.match_event_system,
++ data->onmatch.match_event);
+
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+struct maxlatproc_data {
-+ char comm[FIELD_SIZEOF(struct task_struct, comm)];
-+ char current_comm[FIELD_SIZEOF(struct task_struct, comm)];
-+ int pid;
-+ int current_pid;
-+ int prio;
-+ int current_prio;
-+ long latency;
-+ long timeroffset;
-+ cycle_t timestamp;
-+};
-+#endif
++ seq_printf(m, "%s(", data->onmatch.synth_event->name);
+
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+static DEFINE_PER_CPU(struct hist_data, wakeup_latency_hist);
-+static DEFINE_PER_CPU(struct hist_data, wakeup_latency_hist_sharedprio);
-+static char *wakeup_latency_hist_dir = "wakeup";
-+static char *wakeup_latency_hist_dir_sharedprio = "sharedprio";
-+static notrace void probe_wakeup_latency_hist_start(void *v,
-+ struct task_struct *p);
-+static notrace void probe_wakeup_latency_hist_stop(void *v,
-+ bool preempt, struct task_struct *prev, struct task_struct *next);
-+static notrace void probe_sched_migrate_task(void *,
-+ struct task_struct *task, int cpu);
-+static struct enable_data wakeup_latency_enabled_data = {
-+ .latency_type = WAKEUP_LATENCY,
-+ .enabled = 0,
-+};
-+static DEFINE_PER_CPU(struct maxlatproc_data, wakeup_maxlatproc);
-+static DEFINE_PER_CPU(struct maxlatproc_data, wakeup_maxlatproc_sharedprio);
-+static DEFINE_PER_CPU(struct task_struct *, wakeup_task);
-+static DEFINE_PER_CPU(int, wakeup_sharedprio);
-+static unsigned long wakeup_pid;
-+#endif
-+
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+static DEFINE_PER_CPU(struct hist_data, missed_timer_offsets);
-+static char *missed_timer_offsets_dir = "missed_timer_offsets";
-+static notrace void probe_hrtimer_interrupt(void *v, int cpu,
-+ long long offset, struct task_struct *curr, struct task_struct *task);
-+static struct enable_data missed_timer_offsets_enabled_data = {
-+ .latency_type = MISSED_TIMER_OFFSETS,
-+ .enabled = 0,
-+};
-+static DEFINE_PER_CPU(struct maxlatproc_data, missed_timer_offsets_maxlatproc);
-+static unsigned long missed_timer_offsets_pid;
-+#endif
++ for (i = 0; i < data->n_params; i++) {
++ if (i)
++ seq_puts(m, ",");
++ seq_printf(m, "%s", data->params[i]);
++ }
+
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+static DEFINE_PER_CPU(struct hist_data, timerandwakeup_latency_hist);
-+static char *timerandwakeup_latency_hist_dir = "timerandwakeup";
-+static struct enable_data timerandwakeup_enabled_data = {
-+ .latency_type = TIMERANDWAKEUP_LATENCY,
-+ .enabled = 0,
-+};
-+static DEFINE_PER_CPU(struct maxlatproc_data, timerandwakeup_maxlatproc);
-+#endif
++ seq_puts(m, ")");
++}
+
-+void notrace latency_hist(int latency_type, int cpu, long latency,
-+ long timeroffset, cycle_t stop,
-+ struct task_struct *p)
++static bool actions_match(struct hist_trigger_data *hist_data,
++ struct hist_trigger_data *hist_data_test)
+{
-+ struct hist_data *my_hist;
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+ struct maxlatproc_data *mp = NULL;
-+#endif
++ unsigned int i, j;
+
-+ if (!cpu_possible(cpu) || latency_type < 0 ||
-+ latency_type >= MAX_LATENCY_TYPE)
-+ return;
++ if (hist_data->n_actions != hist_data_test->n_actions)
++ return false;
+
-+ switch (latency_type) {
-+#ifdef CONFIG_INTERRUPT_OFF_HIST
-+ case IRQSOFF_LATENCY:
-+ my_hist = &per_cpu(irqsoff_hist, cpu);
-+ break;
-+#endif
-+#ifdef CONFIG_PREEMPT_OFF_HIST
-+ case PREEMPTOFF_LATENCY:
-+ my_hist = &per_cpu(preemptoff_hist, cpu);
-+ break;
-+#endif
-+#if defined(CONFIG_PREEMPT_OFF_HIST) && defined(CONFIG_INTERRUPT_OFF_HIST)
-+ case PREEMPTIRQSOFF_LATENCY:
-+ my_hist = &per_cpu(preemptirqsoff_hist, cpu);
-+ break;
-+#endif
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+ case WAKEUP_LATENCY:
-+ my_hist = &per_cpu(wakeup_latency_hist, cpu);
-+ mp = &per_cpu(wakeup_maxlatproc, cpu);
-+ break;
-+ case WAKEUP_LATENCY_SHAREDPRIO:
-+ my_hist = &per_cpu(wakeup_latency_hist_sharedprio, cpu);
-+ mp = &per_cpu(wakeup_maxlatproc_sharedprio, cpu);
-+ break;
-+#endif
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ case MISSED_TIMER_OFFSETS:
-+ my_hist = &per_cpu(missed_timer_offsets, cpu);
-+ mp = &per_cpu(missed_timer_offsets_maxlatproc, cpu);
-+ break;
-+#endif
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+ case TIMERANDWAKEUP_LATENCY:
-+ my_hist = &per_cpu(timerandwakeup_latency_hist, cpu);
-+ mp = &per_cpu(timerandwakeup_maxlatproc, cpu);
-+ break;
-+#endif
++ for (i = 0; i < hist_data->n_actions; i++) {
++ struct action_data *data = hist_data->actions[i];
++ struct action_data *data_test = hist_data_test->actions[i];
+
-+ default:
-+ return;
-+ }
++ if (data->fn != data_test->fn)
++ return false;
+
-+ latency += my_hist->offset;
++ if (data->n_params != data_test->n_params)
++ return false;
+
-+ if (atomic_read(&my_hist->hist_mode) == 0)
-+ return;
++ for (j = 0; j < data->n_params; j++) {
++ if (strcmp(data->params[j], data_test->params[j]) != 0)
++ return false;
++ }
+
-+ if (latency < 0 || latency >= MAX_ENTRY_NUM) {
-+ if (latency < 0)
-+ my_hist->below_hist_bound_samples++;
-+ else
-+ my_hist->above_hist_bound_samples++;
-+ } else
-+ my_hist->hist_array[latency]++;
-+
-+ if (unlikely(latency > my_hist->max_lat ||
-+ my_hist->min_lat == LONG_MAX)) {
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+ if (latency_type == WAKEUP_LATENCY ||
-+ latency_type == WAKEUP_LATENCY_SHAREDPRIO ||
-+ latency_type == MISSED_TIMER_OFFSETS ||
-+ latency_type == TIMERANDWAKEUP_LATENCY) {
-+ strncpy(mp->comm, p->comm, sizeof(mp->comm));
-+ strncpy(mp->current_comm, current->comm,
-+ sizeof(mp->current_comm));
-+ mp->pid = task_pid_nr(p);
-+ mp->current_pid = task_pid_nr(current);
-+ mp->prio = p->prio;
-+ mp->current_prio = current->prio;
-+ mp->latency = latency;
-+ mp->timeroffset = timeroffset;
-+ mp->timestamp = stop;
++ if (data->fn == action_trace) {
++ if (strcmp(data->onmatch.synth_event_name,
++ data_test->onmatch.synth_event_name) != 0)
++ return false;
++ if (strcmp(data->onmatch.match_event_system,
++ data_test->onmatch.match_event_system) != 0)
++ return false;
++ if (strcmp(data->onmatch.match_event,
++ data_test->onmatch.match_event) != 0)
++ return false;
++ } else if (data->fn == onmax_save) {
++ if (strcmp(data->onmax.var_str,
++ data_test->onmax.var_str) != 0)
++ return false;
++ if (strcmp(data->onmax.fn_name,
++ data_test->onmax.fn_name) != 0)
++ return false;
+ }
-+#endif
-+ my_hist->max_lat = latency;
+ }
-+ if (unlikely(latency < my_hist->min_lat))
-+ my_hist->min_lat = latency;
-+ my_hist->total_samples++;
-+ my_hist->accumulate_lat += latency;
-+}
+
-+static void *l_start(struct seq_file *m, loff_t *pos)
-+{
-+ loff_t *index_ptr = NULL;
-+ loff_t index = *pos;
-+ struct hist_data *my_hist = m->private;
++ return true;
++}
+
-+ if (index == 0) {
-+ char minstr[32], avgstr[32], maxstr[32];
+
-+ atomic_dec(&my_hist->hist_mode);
++static void print_actions_spec(struct seq_file *m,
++ struct hist_trigger_data *hist_data)
++{
++ unsigned int i;
+
-+ if (likely(my_hist->total_samples)) {
-+ long avg = (long) div64_s64(my_hist->accumulate_lat,
-+ my_hist->total_samples);
-+ snprintf(minstr, sizeof(minstr), "%ld",
-+ my_hist->min_lat - my_hist->offset);
-+ snprintf(avgstr, sizeof(avgstr), "%ld",
-+ avg - my_hist->offset);
-+ snprintf(maxstr, sizeof(maxstr), "%ld",
-+ my_hist->max_lat - my_hist->offset);
-+ } else {
-+ strcpy(minstr, "<undef>");
-+ strcpy(avgstr, minstr);
-+ strcpy(maxstr, minstr);
-+ }
++ for (i = 0; i < hist_data->n_actions; i++) {
++ struct action_data *data = hist_data->actions[i];
+
-+ seq_printf(m, "#Minimum latency: %s microseconds\n"
-+ "#Average latency: %s microseconds\n"
-+ "#Maximum latency: %s microseconds\n"
-+ "#Total samples: %llu\n"
-+ "#There are %llu samples lower than %ld"
-+ " microseconds.\n"
-+ "#There are %llu samples greater or equal"
-+ " than %ld microseconds.\n"
-+ "#usecs\t%16s\n",
-+ minstr, avgstr, maxstr,
-+ my_hist->total_samples,
-+ my_hist->below_hist_bound_samples,
-+ -my_hist->offset,
-+ my_hist->above_hist_bound_samples,
-+ MAX_ENTRY_NUM - my_hist->offset,
-+ "samples");
++ if (data->fn == action_trace)
++ print_onmatch_spec(m, hist_data, data);
++ else if (data->fn == onmax_save)
++ print_onmax_spec(m, hist_data, data);
+ }
-+ if (index < MAX_ENTRY_NUM) {
-+ index_ptr = kmalloc(sizeof(loff_t), GFP_KERNEL);
-+ if (index_ptr)
-+ *index_ptr = index;
-+ }
-+
-+ return index_ptr;
+}
+
-+static void *l_next(struct seq_file *m, void *p, loff_t *pos)
++static void destroy_field_var_hists(struct hist_trigger_data *hist_data)
+{
-+ loff_t *index_ptr = p;
-+ struct hist_data *my_hist = m->private;
++ unsigned int i;
+
-+ if (++*pos >= MAX_ENTRY_NUM) {
-+ atomic_inc(&my_hist->hist_mode);
-+ return NULL;
++ for (i = 0; i < hist_data->n_field_var_hists; i++) {
++ kfree(hist_data->field_var_hists[i]->cmd);
++ kfree(hist_data->field_var_hists[i]);
+ }
-+ *index_ptr = *pos;
-+ return index_ptr;
+}
+
-+static void l_stop(struct seq_file *m, void *p)
-+{
-+ kfree(p);
-+}
+ static void destroy_hist_data(struct hist_trigger_data *hist_data)
+ {
++ if (!hist_data)
++ return;
+
-+static int l_show(struct seq_file *m, void *p)
-+{
-+ int index = *(loff_t *) p;
-+ struct hist_data *my_hist = m->private;
+ destroy_hist_trigger_attrs(hist_data->attrs);
+ destroy_hist_fields(hist_data);
+ tracing_map_destroy(hist_data->map);
+
-+ seq_printf(m, "%6ld\t%16llu\n", index - my_hist->offset,
-+ my_hist->hist_array[index]);
++ destroy_actions(hist_data);
++ destroy_field_vars(hist_data);
++ destroy_field_var_hists(hist_data);
++ destroy_synth_var_refs(hist_data);
++
+ kfree(hist_data);
+ }
+
+@@ -738,7 +4460,7 @@
+ struct tracing_map *map = hist_data->map;
+ struct ftrace_event_field *field;
+ struct hist_field *hist_field;
+- int i, idx;
++ int i, idx = 0;
+
+ for_each_hist_field(i, hist_data) {
+ hist_field = hist_data->fields[i];
+@@ -749,6 +4471,9 @@
+
+ if (hist_field->flags & HIST_FIELD_FL_STACKTRACE)
+ cmp_fn = tracing_map_cmp_none;
++ else if (!field)
++ cmp_fn = tracing_map_cmp_num(hist_field->size,
++ hist_field->is_signed);
+ else if (is_string_field(field))
+ cmp_fn = tracing_map_cmp_string;
+ else
+@@ -757,36 +4482,29 @@
+ idx = tracing_map_add_key_field(map,
+ hist_field->offset,
+ cmp_fn);
+-
+- } else
++ } else if (!(hist_field->flags & HIST_FIELD_FL_VAR))
+ idx = tracing_map_add_sum_field(map);
+
+ if (idx < 0)
+ return idx;
+- }
+-
+- return 0;
+-}
+-
+-static bool need_tracing_map_ops(struct hist_trigger_data *hist_data)
+-{
+- struct hist_field *key_field;
+- unsigned int i;
+-
+- for_each_hist_key_field(i, hist_data) {
+- key_field = hist_data->fields[i];
+
+- if (key_field->flags & HIST_FIELD_FL_EXECNAME)
+- return true;
++ if (hist_field->flags & HIST_FIELD_FL_VAR) {
++ idx = tracing_map_add_var(map);
++ if (idx < 0)
++ return idx;
++ hist_field->var.idx = idx;
++ hist_field->var.hist_data = hist_data;
++ }
+ }
+
+- return false;
+ return 0;
-+}
+ }
+
+ static struct hist_trigger_data *
+ create_hist_data(unsigned int map_bits,
+ struct hist_trigger_attrs *attrs,
+- struct trace_event_file *file)
++ struct trace_event_file *file,
++ bool remove)
+ {
+ const struct tracing_map_ops *map_ops = NULL;
+ struct hist_trigger_data *hist_data;
+@@ -797,6 +4515,12 @@
+ return ERR_PTR(-ENOMEM);
+
+ hist_data->attrs = attrs;
++ hist_data->remove = remove;
++ hist_data->event_file = file;
+
-+static const struct seq_operations latency_hist_seq_op = {
-+ .start = l_start,
-+ .next = l_next,
-+ .stop = l_stop,
-+ .show = l_show
-+};
++ ret = parse_actions(hist_data);
++ if (ret)
++ goto free;
+
+ ret = create_hist_fields(hist_data, file);
+ if (ret)
+@@ -806,8 +4530,7 @@
+ if (ret)
+ goto free;
+
+- if (need_tracing_map_ops(hist_data))
+- map_ops = &hist_trigger_elt_comm_ops;
++ map_ops = &hist_trigger_elt_data_ops;
+
+ hist_data->map = tracing_map_create(map_bits, hist_data->key_size,
+ map_ops, hist_data);
+@@ -820,12 +4543,6 @@
+ ret = create_tracing_map_fields(hist_data);
+ if (ret)
+ goto free;
+-
+- ret = tracing_map_init(hist_data->map);
+- if (ret)
+- goto free;
+-
+- hist_data->event_file = file;
+ out:
+ return hist_data;
+ free:
+@@ -839,18 +4556,39 @@
+ }
+
+ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
+- struct tracing_map_elt *elt,
+- void *rec)
++ struct tracing_map_elt *elt, void *rec,
++ struct ring_buffer_event *rbe,
++ u64 *var_ref_vals)
+ {
++ struct hist_elt_data *elt_data;
+ struct hist_field *hist_field;
+- unsigned int i;
++ unsigned int i, var_idx;
+ u64 hist_val;
+
++ elt_data = elt->private_data;
++ elt_data->var_ref_vals = var_ref_vals;
++
+ for_each_hist_val_field(i, hist_data) {
+ hist_field = hist_data->fields[i];
+- hist_val = hist_field->fn(hist_field, rec);
++ hist_val = hist_field->fn(hist_field, elt, rbe, rec);
++ if (hist_field->flags & HIST_FIELD_FL_VAR) {
++ var_idx = hist_field->var.idx;
++ tracing_map_set_var(elt, var_idx, hist_val);
++ continue;
++ }
+ tracing_map_update_sum(elt, i, hist_val);
+ }
++
++ for_each_hist_key_field(i, hist_data) {
++ hist_field = hist_data->fields[i];
++ if (hist_field->flags & HIST_FIELD_FL_VAR) {
++ hist_val = hist_field->fn(hist_field, elt, rbe, rec);
++ var_idx = hist_field->var.idx;
++ tracing_map_set_var(elt, var_idx, hist_val);
++ }
++ }
+
-+static int latency_hist_open(struct inode *inode, struct file *file)
++ update_field_vars(hist_data, elt, rbe, rec);
+ }
+
+ static inline void add_to_key(char *compound_key, void *key,
+@@ -877,15 +4615,31 @@
+ memcpy(compound_key + key_field->offset, key, size);
+ }
+
+-static void event_hist_trigger(struct event_trigger_data *data, void *rec)
++static void
++hist_trigger_actions(struct hist_trigger_data *hist_data,
++ struct tracing_map_elt *elt, void *rec,
++ struct ring_buffer_event *rbe, u64 *var_ref_vals)
+{
-+ int ret;
++ struct action_data *data;
++ unsigned int i;
+
-+ ret = seq_open(file, &latency_hist_seq_op);
-+ if (!ret) {
-+ struct seq_file *seq = file->private_data;
-+ seq->private = inode->i_private;
++ for (i = 0; i < hist_data->n_actions; i++) {
++ data = hist_data->actions[i];
++ data->fn(hist_data, elt, rec, rbe, data, var_ref_vals);
+ }
-+ return ret;
+}
+
-+static const struct file_operations latency_hist_fops = {
-+ .open = latency_hist_open,
-+ .read = seq_read,
-+ .llseek = seq_lseek,
-+ .release = seq_release,
-+};
++static void event_hist_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *rbe)
+ {
+ struct hist_trigger_data *hist_data = data->private_data;
+ bool use_compound_key = (hist_data->n_keys > 1);
+ unsigned long entries[HIST_STACKTRACE_DEPTH];
++ u64 var_ref_vals[TRACING_MAP_VARS_MAX];
+ char compound_key[HIST_KEY_SIZE_MAX];
++ struct tracing_map_elt *elt = NULL;
+ struct stack_trace stacktrace;
+ struct hist_field *key_field;
+- struct tracing_map_elt *elt;
+ u64 field_contents;
+ void *key = NULL;
+ unsigned int i;
+@@ -906,7 +4660,7 @@
+
+ key = entries;
+ } else {
+- field_contents = key_field->fn(key_field, rec);
++ field_contents = key_field->fn(key_field, elt, rbe, rec);
+ if (key_field->flags & HIST_FIELD_FL_STRING) {
+ key = (void *)(unsigned long)field_contents;
+ use_compound_key = true;
+@@ -921,9 +4675,18 @@
+ if (use_compound_key)
+ key = compound_key;
+
++ if (hist_data->n_var_refs &&
++ !resolve_var_refs(hist_data, key, var_ref_vals, false))
++ return;
+
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+static void clear_maxlatprocdata(struct maxlatproc_data *mp)
-+{
-+ mp->comm[0] = mp->current_comm[0] = '\0';
-+ mp->prio = mp->current_prio = mp->pid = mp->current_pid =
-+ mp->latency = mp->timeroffset = -1;
-+ mp->timestamp = 0;
-+}
-+#endif
+ elt = tracing_map_insert(hist_data->map, key);
+- if (elt)
+- hist_trigger_elt_update(hist_data, elt, rec);
++ if (!elt)
++ return;
+
-+static void hist_reset(struct hist_data *hist)
-+{
-+ atomic_dec(&hist->hist_mode);
++ hist_trigger_elt_update(hist_data, elt, rec, rbe, var_ref_vals);
++
++ if (resolve_var_refs(hist_data, key, var_ref_vals, true))
++ hist_trigger_actions(hist_data, elt, rec, rbe, var_ref_vals);
+ }
+
+ static void hist_trigger_stacktrace_print(struct seq_file *m,
+@@ -952,6 +4715,7 @@
+ struct hist_field *key_field;
+ char str[KSYM_SYMBOL_LEN];
+ bool multiline = false;
++ const char *field_name;
+ unsigned int i;
+ u64 uval;
+
+@@ -963,26 +4727,33 @@
+ if (i > hist_data->n_vals)
+ seq_puts(m, ", ");
+
++ field_name = hist_field_name(key_field, 0);
++
+ if (key_field->flags & HIST_FIELD_FL_HEX) {
+ uval = *(u64 *)(key + key_field->offset);
+- seq_printf(m, "%s: %llx",
+- key_field->field->name, uval);
++ seq_printf(m, "%s: %llx", field_name, uval);
+ } else if (key_field->flags & HIST_FIELD_FL_SYM) {
+ uval = *(u64 *)(key + key_field->offset);
+ sprint_symbol_no_offset(str, uval);
+- seq_printf(m, "%s: [%llx] %-45s",
+- key_field->field->name, uval, str);
++ seq_printf(m, "%s: [%llx] %-45s", field_name,
++ uval, str);
+ } else if (key_field->flags & HIST_FIELD_FL_SYM_OFFSET) {
+ uval = *(u64 *)(key + key_field->offset);
+ sprint_symbol(str, uval);
+- seq_printf(m, "%s: [%llx] %-55s",
+- key_field->field->name, uval, str);
++ seq_printf(m, "%s: [%llx] %-55s", field_name,
++ uval, str);
+ } else if (key_field->flags & HIST_FIELD_FL_EXECNAME) {
+- char *comm = elt->private_data;
++ struct hist_elt_data *elt_data = elt->private_data;
++ char *comm;
++
++ if (WARN_ON_ONCE(!elt_data))
++ return;
+
-+ memset(hist->hist_array, 0, sizeof(hist->hist_array));
-+ hist->below_hist_bound_samples = 0ULL;
-+ hist->above_hist_bound_samples = 0ULL;
-+ hist->min_lat = LONG_MAX;
-+ hist->max_lat = LONG_MIN;
-+ hist->total_samples = 0ULL;
-+ hist->accumulate_lat = 0LL;
++ comm = elt_data->comm;
+
+ uval = *(u64 *)(key + key_field->offset);
+- seq_printf(m, "%s: %-16s[%10llu]",
+- key_field->field->name, comm, uval);
++ seq_printf(m, "%s: %-16s[%10llu]", field_name,
++ comm, uval);
+ } else if (key_field->flags & HIST_FIELD_FL_SYSCALL) {
+ const char *syscall_name;
+
+@@ -991,8 +4762,8 @@
+ if (!syscall_name)
+ syscall_name = "unknown_syscall";
+
+- seq_printf(m, "%s: %-30s[%3llu]",
+- key_field->field->name, syscall_name, uval);
++ seq_printf(m, "%s: %-30s[%3llu]", field_name,
++ syscall_name, uval);
+ } else if (key_field->flags & HIST_FIELD_FL_STACKTRACE) {
+ seq_puts(m, "stacktrace:\n");
+ hist_trigger_stacktrace_print(m,
+@@ -1000,15 +4771,14 @@
+ HIST_STACKTRACE_DEPTH);
+ multiline = true;
+ } else if (key_field->flags & HIST_FIELD_FL_LOG2) {
+- seq_printf(m, "%s: ~ 2^%-2llu", key_field->field->name,
++ seq_printf(m, "%s: ~ 2^%-2llu", field_name,
+ *(u64 *)(key + key_field->offset));
+ } else if (key_field->flags & HIST_FIELD_FL_STRING) {
+- seq_printf(m, "%s: %-50s", key_field->field->name,
++ seq_printf(m, "%s: %-50s", field_name,
+ (char *)(key + key_field->offset));
+ } else {
+ uval = *(u64 *)(key + key_field->offset);
+- seq_printf(m, "%s: %10llu", key_field->field->name,
+- uval);
++ seq_printf(m, "%s: %10llu", field_name, uval);
+ }
+ }
+
+@@ -1021,17 +4791,23 @@
+ tracing_map_read_sum(elt, HITCOUNT_IDX));
+
+ for (i = 1; i < hist_data->n_vals; i++) {
++ field_name = hist_field_name(hist_data->fields[i], 0);
+
-+ atomic_inc(&hist->hist_mode);
-+}
++ if (hist_data->fields[i]->flags & HIST_FIELD_FL_VAR ||
++ hist_data->fields[i]->flags & HIST_FIELD_FL_EXPR)
++ continue;
+
-+static ssize_t
-+latency_hist_reset(struct file *file, const char __user *a,
-+ size_t size, loff_t *off)
-+{
-+ int cpu;
-+ struct hist_data *hist = NULL;
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+ struct maxlatproc_data *mp = NULL;
-+#endif
-+ off_t latency_type = (off_t) file->private_data;
+ if (hist_data->fields[i]->flags & HIST_FIELD_FL_HEX) {
+- seq_printf(m, " %s: %10llx",
+- hist_data->fields[i]->field->name,
++ seq_printf(m, " %s: %10llx", field_name,
+ tracing_map_read_sum(elt, i));
+ } else {
+- seq_printf(m, " %s: %10llu",
+- hist_data->fields[i]->field->name,
++ seq_printf(m, " %s: %10llu", field_name,
+ tracing_map_read_sum(elt, i));
+ }
+ }
+
++ print_actions(m, hist_data, elt);
+
-+ for_each_online_cpu(cpu) {
+ seq_puts(m, "\n");
+ }
+
+@@ -1102,6 +4878,11 @@
+ hist_trigger_show(m, data, n++);
+ }
+
++ if (have_hist_err()) {
++ seq_printf(m, "\nERROR: %s\n", hist_err_str);
++ seq_printf(m, " Last command: %s\n", last_hist_cmd);
++ }
+
-+ switch (latency_type) {
-+#ifdef CONFIG_PREEMPT_OFF_HIST
-+ case PREEMPTOFF_LATENCY:
-+ hist = &per_cpu(preemptoff_hist, cpu);
-+ break;
-+#endif
-+#ifdef CONFIG_INTERRUPT_OFF_HIST
-+ case IRQSOFF_LATENCY:
-+ hist = &per_cpu(irqsoff_hist, cpu);
-+ break;
-+#endif
-+#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
-+ case PREEMPTIRQSOFF_LATENCY:
-+ hist = &per_cpu(preemptirqsoff_hist, cpu);
-+ break;
-+#endif
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+ case WAKEUP_LATENCY:
-+ hist = &per_cpu(wakeup_latency_hist, cpu);
-+ mp = &per_cpu(wakeup_maxlatproc, cpu);
-+ break;
-+ case WAKEUP_LATENCY_SHAREDPRIO:
-+ hist = &per_cpu(wakeup_latency_hist_sharedprio, cpu);
-+ mp = &per_cpu(wakeup_maxlatproc_sharedprio, cpu);
-+ break;
-+#endif
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ case MISSED_TIMER_OFFSETS:
-+ hist = &per_cpu(missed_timer_offsets, cpu);
-+ mp = &per_cpu(missed_timer_offsets_maxlatproc, cpu);
-+ break;
-+#endif
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+ case TIMERANDWAKEUP_LATENCY:
-+ hist = &per_cpu(timerandwakeup_latency_hist, cpu);
-+ mp = &per_cpu(timerandwakeup_maxlatproc, cpu);
-+ break;
-+#endif
+ out_unlock:
+ mutex_unlock(&event_mutex);
+
+@@ -1120,34 +4901,31 @@
+ .release = single_release,
+ };
+
+-static const char *get_hist_field_flags(struct hist_field *hist_field)
++static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
+ {
+- const char *flags_str = NULL;
++ const char *field_name = hist_field_name(hist_field, 0);
+
+- if (hist_field->flags & HIST_FIELD_FL_HEX)
+- flags_str = "hex";
+- else if (hist_field->flags & HIST_FIELD_FL_SYM)
+- flags_str = "sym";
+- else if (hist_field->flags & HIST_FIELD_FL_SYM_OFFSET)
+- flags_str = "sym-offset";
+- else if (hist_field->flags & HIST_FIELD_FL_EXECNAME)
+- flags_str = "execname";
+- else if (hist_field->flags & HIST_FIELD_FL_SYSCALL)
+- flags_str = "syscall";
+- else if (hist_field->flags & HIST_FIELD_FL_LOG2)
+- flags_str = "log2";
++ if (hist_field->var.name)
++ seq_printf(m, "%s=", hist_field->var.name);
+
+- return flags_str;
+-}
++ if (hist_field->flags & HIST_FIELD_FL_CPU)
++ seq_puts(m, "cpu");
++ else if (field_name) {
++ if (hist_field->flags & HIST_FIELD_FL_VAR_REF ||
++ hist_field->flags & HIST_FIELD_FL_ALIAS)
++ seq_putc(m, '$');
++ seq_printf(m, "%s", field_name);
++ } else if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP)
++ seq_puts(m, "common_timestamp");
+
+-static void hist_field_print(struct seq_file *m, struct hist_field *hist_field)
+-{
+- seq_printf(m, "%s", hist_field->field->name);
+ if (hist_field->flags) {
+- const char *flags_str = get_hist_field_flags(hist_field);
++ if (!(hist_field->flags & HIST_FIELD_FL_VAR_REF) &&
++ !(hist_field->flags & HIST_FIELD_FL_EXPR)) {
++ const char *flags = get_hist_field_flags(hist_field);
+
+- if (flags_str)
+- seq_printf(m, ".%s", flags_str);
++ if (flags)
++ seq_printf(m, ".%s", flags);
++ }
+ }
+ }
+
+@@ -1156,7 +4934,8 @@
+ struct event_trigger_data *data)
+ {
+ struct hist_trigger_data *hist_data = data->private_data;
+- struct hist_field *key_field;
++ struct hist_field *field;
++ bool have_var = false;
+ unsigned int i;
+
+ seq_puts(m, "hist:");
+@@ -1167,25 +4946,47 @@
+ seq_puts(m, "keys=");
+
+ for_each_hist_key_field(i, hist_data) {
+- key_field = hist_data->fields[i];
++ field = hist_data->fields[i];
+
+ if (i > hist_data->n_vals)
+ seq_puts(m, ",");
+
+- if (key_field->flags & HIST_FIELD_FL_STACKTRACE)
++ if (field->flags & HIST_FIELD_FL_STACKTRACE)
+ seq_puts(m, "stacktrace");
+ else
+- hist_field_print(m, key_field);
++ hist_field_print(m, field);
+ }
+
+ seq_puts(m, ":vals=");
+
+ for_each_hist_val_field(i, hist_data) {
++ field = hist_data->fields[i];
++ if (field->flags & HIST_FIELD_FL_VAR) {
++ have_var = true;
++ continue;
+ }
+
-+ hist_reset(hist);
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+ if (latency_type == WAKEUP_LATENCY ||
-+ latency_type == WAKEUP_LATENCY_SHAREDPRIO ||
-+ latency_type == MISSED_TIMER_OFFSETS ||
-+ latency_type == TIMERANDWAKEUP_LATENCY)
-+ clear_maxlatprocdata(mp);
-+#endif
+ if (i == HITCOUNT_IDX)
+ seq_puts(m, "hitcount");
+ else {
+ seq_puts(m, ",");
+- hist_field_print(m, hist_data->fields[i]);
++ hist_field_print(m, field);
++ }
+ }
+
-+ return size;
-+}
++ if (have_var) {
++ unsigned int n = 0;
++
++ seq_puts(m, ":");
++
++ for_each_hist_val_field(i, hist_data) {
++ field = hist_data->fields[i];
++
++ if (field->flags & HIST_FIELD_FL_VAR) {
++ if (n++)
++ seq_puts(m, ",");
++ hist_field_print(m, field);
++ }
+ }
+ }
+
+@@ -1193,28 +4994,36 @@
+
+ for (i = 0; i < hist_data->n_sort_keys; i++) {
+ struct tracing_map_sort_key *sort_key;
++ unsigned int idx, first_key_idx;
++
++ /* skip VAR vals */
++ first_key_idx = hist_data->n_vals - hist_data->n_vars;
+
+ sort_key = &hist_data->sort_keys[i];
++ idx = sort_key->field_idx;
++
++ if (WARN_ON(idx >= HIST_FIELDS_MAX))
++ return -EINVAL;
+
+ if (i > 0)
+ seq_puts(m, ",");
+
+- if (sort_key->field_idx == HITCOUNT_IDX)
++ if (idx == HITCOUNT_IDX)
+ seq_puts(m, "hitcount");
+ else {
+- unsigned int idx = sort_key->field_idx;
+-
+- if (WARN_ON(idx >= TRACING_MAP_FIELDS_MAX))
+- return -EINVAL;
+-
++ if (idx >= first_key_idx)
++ idx += hist_data->n_vars;
+ hist_field_print(m, hist_data->fields[idx]);
+ }
+
+ if (sort_key->descending)
+ seq_puts(m, ".descending");
+ }
+-
+ seq_printf(m, ":size=%u", (1 << hist_data->map->map_bits));
++ if (hist_data->enable_timestamps)
++ seq_printf(m, ":clock=%s", hist_data->attrs->clock);
+
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+static ssize_t
-+show_pid(struct file *file, char __user *ubuf, size_t cnt, loff_t *ppos)
++ print_actions_spec(m, hist_data);
+
+ if (data->filter_str)
+ seq_printf(m, " if %s", data->filter_str);
+@@ -1242,6 +5051,21 @@
+ return 0;
+ }
+
++static void unregister_field_var_hists(struct hist_trigger_data *hist_data)
+{
-+ char buf[64];
-+ int r;
-+ unsigned long *this_pid = file->private_data;
++ struct trace_event_file *file;
++ unsigned int i;
++ char *cmd;
++ int ret;
+
-+ r = snprintf(buf, sizeof(buf), "%lu\n", *this_pid);
-+ return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
++ for (i = 0; i < hist_data->n_field_var_hists; i++) {
++ file = hist_data->field_var_hists[i]->hist_data->event_file;
++ cmd = hist_data->field_var_hists[i]->cmd;
++ ret = event_hist_trigger_func(&trigger_hist_cmd, file,
++ "!hist", "hist", cmd);
++ }
+}
+
-+static ssize_t do_pid(struct file *file, const char __user *ubuf,
-+ size_t cnt, loff_t *ppos)
-+{
-+ char buf[64];
-+ unsigned long pid;
-+ unsigned long *this_pid = file->private_data;
-+
-+ if (cnt >= sizeof(buf))
-+ return -EINVAL;
+ static void event_hist_trigger_free(struct event_trigger_ops *ops,
+ struct event_trigger_data *data)
+ {
+@@ -1254,7 +5078,13 @@
+ if (!data->ref) {
+ if (data->name)
+ del_named_trigger(data);
+
-+ if (copy_from_user(&buf, ubuf, cnt))
-+ return -EFAULT;
+ trigger_data_free(data);
+
-+ buf[cnt] = '\0';
++ remove_hist_vars(hist_data);
+
-+ if (kstrtoul(buf, 10, &pid))
-+ return -EINVAL;
++ unregister_field_var_hists(hist_data);
+
-+ *this_pid = pid;
+ destroy_hist_data(hist_data);
+ }
+ }
+@@ -1381,6 +5211,15 @@
+ return false;
+ if (key_field->offset != key_field_test->offset)
+ return false;
++ if (key_field->size != key_field_test->size)
++ return false;
++ if (key_field->is_signed != key_field_test->is_signed)
++ return false;
++ if (!!key_field->var.name != !!key_field_test->var.name)
++ return false;
++ if (key_field->var.name &&
++ strcmp(key_field->var.name, key_field_test->var.name) != 0)
++ return false;
+ }
+
+ for (i = 0; i < hist_data->n_sort_keys; i++) {
+@@ -1396,6 +5235,9 @@
+ (strcmp(data->filter_str, data_test->filter_str) != 0))
+ return false;
+
++ if (!actions_match(hist_data, hist_data_test))
++ return false;
+
-+ return cnt;
-+}
-+#endif
+ return true;
+ }
+
+@@ -1412,6 +5254,7 @@
+ if (named_data) {
+ if (!hist_trigger_match(data, named_data, named_data,
+ true)) {
++ hist_err("Named hist trigger doesn't match existing named trigger (includes variables): ", hist_data->attrs->name);
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -1431,13 +5274,16 @@
+ test->paused = false;
+ else if (hist_data->attrs->clear)
+ hist_clear(test);
+- else
++ else {
++ hist_err("Hist trigger already exists", NULL);
+ ret = -EEXIST;
++ }
+ goto out;
+ }
+ }
+ new:
+ if (hist_data->attrs->cont || hist_data->attrs->clear) {
++ hist_err("Can't clear or continue a nonexistent hist trigger", NULL);
+ ret = -ENOENT;
+ goto out;
+ }
+@@ -1446,7 +5292,6 @@
+ data->paused = true;
+
+ if (named_data) {
+- destroy_hist_data(data->private_data);
+ data->private_data = named_data->private_data;
+ set_named_trigger_data(data, named_data);
+ data->ops = &event_hist_trigger_named_ops;
+@@ -1458,8 +5303,32 @@
+ goto out;
+ }
+
+- list_add_rcu(&data->list, &file->triggers);
++ if (hist_data->enable_timestamps) {
++ char *clock = hist_data->attrs->clock;
+
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+static ssize_t
-+show_maxlatproc(struct file *file, char __user *ubuf, size_t cnt, loff_t *ppos)
-+{
-+ int r;
-+ struct maxlatproc_data *mp = file->private_data;
-+ int strmaxlen = (TASK_COMM_LEN * 2) + (8 * 8);
-+ unsigned long long t;
-+ unsigned long usecs, secs;
-+ char *buf;
++ ret = tracing_set_clock(file->tr, hist_data->attrs->clock);
++ if (ret) {
++ hist_err("Couldn't set trace_clock: ", clock);
++ goto out;
++ }
+
-+ if (mp->pid == -1 || mp->current_pid == -1) {
-+ buf = "(none)\n";
-+ return simple_read_from_buffer(ubuf, cnt, ppos, buf,
-+ strlen(buf));
++ tracing_set_time_stamp_abs(file->tr, true);
+ }
+
-+ buf = kmalloc(strmaxlen, GFP_KERNEL);
-+ if (buf == NULL)
-+ return -ENOMEM;
++ if (named_data)
++ destroy_hist_data(hist_data);
+
-+ t = ns2usecs(mp->timestamp);
-+ usecs = do_div(t, USEC_PER_SEC);
-+ secs = (unsigned long) t;
-+ r = snprintf(buf, strmaxlen,
-+ "%d %d %ld (%ld) %s <- %d %d %s %lu.%06lu\n", mp->pid,
-+ MAX_RT_PRIO-1 - mp->prio, mp->latency, mp->timeroffset, mp->comm,
-+ mp->current_pid, MAX_RT_PRIO-1 - mp->current_prio, mp->current_comm,
-+ secs, usecs);
-+ r = simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
-+ kfree(buf);
-+ return r;
+ ret++;
++ out:
++ return ret;
+}
-+#endif
+
-+static ssize_t
-+show_enable(struct file *file, char __user *ubuf, size_t cnt, loff_t *ppos)
++static int hist_trigger_enable(struct event_trigger_data *data,
++ struct trace_event_file *file)
+{
-+ char buf[64];
-+ struct enable_data *ed = file->private_data;
-+ int r;
++ int ret = 0;
+
-+ r = snprintf(buf, sizeof(buf), "%d\n", ed->enabled);
-+ return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
-+}
++ list_add_tail_rcu(&data->list, &file->triggers);
+
+ update_cond_flag(file);
+
+@@ -1468,10 +5337,55 @@
+ update_cond_flag(file);
+ ret--;
+ }
+- out:
+
-+static ssize_t
-+do_enable(struct file *file, const char __user *ubuf, size_t cnt, loff_t *ppos)
+ return ret;
+ }
+
++static bool have_hist_trigger_match(struct event_trigger_data *data,
++ struct trace_event_file *file)
+{
-+ char buf[64];
-+ long enable;
-+ struct enable_data *ed = file->private_data;
-+
-+ if (cnt >= sizeof(buf))
-+ return -EINVAL;
-+
-+ if (copy_from_user(&buf, ubuf, cnt))
-+ return -EFAULT;
-+
-+ buf[cnt] = 0;
++ struct hist_trigger_data *hist_data = data->private_data;
++ struct event_trigger_data *test, *named_data = NULL;
++ bool match = false;
+
-+ if (kstrtoul(buf, 10, &enable))
-+ return -EINVAL;
++ if (hist_data->attrs->name)
++ named_data = find_named_trigger(hist_data->attrs->name);
+
-+ if ((enable && ed->enabled) || (!enable && !ed->enabled))
-+ return cnt;
-+
-+ if (enable) {
-+ int ret;
-+
-+ switch (ed->latency_type) {
-+#if defined(CONFIG_INTERRUPT_OFF_HIST) || defined(CONFIG_PREEMPT_OFF_HIST)
-+ case PREEMPTIRQSOFF_LATENCY:
-+ ret = register_trace_preemptirqsoff_hist(
-+ probe_preemptirqsoff_hist, NULL);
-+ if (ret) {
-+ pr_info("wakeup trace: Couldn't assign "
-+ "probe_preemptirqsoff_hist "
-+ "to trace_preemptirqsoff_hist\n");
-+ return ret;
-+ }
-+ break;
-+#endif
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+ case WAKEUP_LATENCY:
-+ ret = register_trace_sched_wakeup(
-+ probe_wakeup_latency_hist_start, NULL);
-+ if (ret) {
-+ pr_info("wakeup trace: Couldn't assign "
-+ "probe_wakeup_latency_hist_start "
-+ "to trace_sched_wakeup\n");
-+ return ret;
-+ }
-+ ret = register_trace_sched_wakeup_new(
-+ probe_wakeup_latency_hist_start, NULL);
-+ if (ret) {
-+ pr_info("wakeup trace: Couldn't assign "
-+ "probe_wakeup_latency_hist_start "
-+ "to trace_sched_wakeup_new\n");
-+ unregister_trace_sched_wakeup(
-+ probe_wakeup_latency_hist_start, NULL);
-+ return ret;
-+ }
-+ ret = register_trace_sched_switch(
-+ probe_wakeup_latency_hist_stop, NULL);
-+ if (ret) {
-+ pr_info("wakeup trace: Couldn't assign "
-+ "probe_wakeup_latency_hist_stop "
-+ "to trace_sched_switch\n");
-+ unregister_trace_sched_wakeup(
-+ probe_wakeup_latency_hist_start, NULL);
-+ unregister_trace_sched_wakeup_new(
-+ probe_wakeup_latency_hist_start, NULL);
-+ return ret;
-+ }
-+ ret = register_trace_sched_migrate_task(
-+ probe_sched_migrate_task, NULL);
-+ if (ret) {
-+ pr_info("wakeup trace: Couldn't assign "
-+ "probe_sched_migrate_task "
-+ "to trace_sched_migrate_task\n");
-+ unregister_trace_sched_wakeup(
-+ probe_wakeup_latency_hist_start, NULL);
-+ unregister_trace_sched_wakeup_new(
-+ probe_wakeup_latency_hist_start, NULL);
-+ unregister_trace_sched_switch(
-+ probe_wakeup_latency_hist_stop, NULL);
-+ return ret;
-+ }
-+ break;
-+#endif
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ case MISSED_TIMER_OFFSETS:
-+ ret = register_trace_hrtimer_interrupt(
-+ probe_hrtimer_interrupt, NULL);
-+ if (ret) {
-+ pr_info("wakeup trace: Couldn't assign "
-+ "probe_hrtimer_interrupt "
-+ "to trace_hrtimer_interrupt\n");
-+ return ret;
-+ }
-+ break;
-+#endif
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+ case TIMERANDWAKEUP_LATENCY:
-+ if (!wakeup_latency_enabled_data.enabled ||
-+ !missed_timer_offsets_enabled_data.enabled)
-+ return -EINVAL;
-+ break;
-+#endif
-+ default:
-+ break;
-+ }
-+ } else {
-+ switch (ed->latency_type) {
-+#if defined(CONFIG_INTERRUPT_OFF_HIST) || defined(CONFIG_PREEMPT_OFF_HIST)
-+ case PREEMPTIRQSOFF_LATENCY:
-+ {
-+ int cpu;
-+
-+ unregister_trace_preemptirqsoff_hist(
-+ probe_preemptirqsoff_hist, NULL);
-+ for_each_online_cpu(cpu) {
-+#ifdef CONFIG_INTERRUPT_OFF_HIST
-+ per_cpu(hist_irqsoff_counting,
-+ cpu) = 0;
-+#endif
-+#ifdef CONFIG_PREEMPT_OFF_HIST
-+ per_cpu(hist_preemptoff_counting,
-+ cpu) = 0;
-+#endif
-+#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
-+ per_cpu(hist_preemptirqsoff_counting,
-+ cpu) = 0;
-+#endif
-+ }
-+ }
-+ break;
-+#endif
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+ case WAKEUP_LATENCY:
-+ {
-+ int cpu;
-+
-+ unregister_trace_sched_wakeup(
-+ probe_wakeup_latency_hist_start, NULL);
-+ unregister_trace_sched_wakeup_new(
-+ probe_wakeup_latency_hist_start, NULL);
-+ unregister_trace_sched_switch(
-+ probe_wakeup_latency_hist_stop, NULL);
-+ unregister_trace_sched_migrate_task(
-+ probe_sched_migrate_task, NULL);
-+
-+ for_each_online_cpu(cpu) {
-+ per_cpu(wakeup_task, cpu) = NULL;
-+ per_cpu(wakeup_sharedprio, cpu) = 0;
-+ }
++ list_for_each_entry_rcu(test, &file->triggers, list) {
++ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
++ if (hist_trigger_match(data, test, named_data, false)) {
++ match = true;
++ break;
+ }
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ timerandwakeup_enabled_data.enabled = 0;
-+#endif
-+ break;
-+#endif
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ case MISSED_TIMER_OFFSETS:
-+ unregister_trace_hrtimer_interrupt(
-+ probe_hrtimer_interrupt, NULL);
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+ timerandwakeup_enabled_data.enabled = 0;
-+#endif
-+ break;
-+#endif
-+ default:
-+ break;
+ }
+ }
-+ ed->enabled = enable;
-+ return cnt;
-+}
-+
-+static const struct file_operations latency_hist_reset_fops = {
-+ .open = tracing_open_generic,
-+ .write = latency_hist_reset,
-+};
-+
-+static const struct file_operations enable_fops = {
-+ .open = tracing_open_generic,
-+ .read = show_enable,
-+ .write = do_enable,
-+};
-+
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+static const struct file_operations pid_fops = {
-+ .open = tracing_open_generic,
-+ .read = show_pid,
-+ .write = do_pid,
-+};
+
-+static const struct file_operations maxlatproc_fops = {
-+ .open = tracing_open_generic,
-+ .read = show_maxlatproc,
-+};
-+#endif
++ return match;
++}
+
-+#if defined(CONFIG_INTERRUPT_OFF_HIST) || defined(CONFIG_PREEMPT_OFF_HIST)
-+static notrace void probe_preemptirqsoff_hist(void *v, int reason,
-+ int starthist)
++static bool hist_trigger_check_refs(struct event_trigger_data *data,
++ struct trace_event_file *file)
+{
-+ int cpu = raw_smp_processor_id();
-+ int time_set = 0;
-+
-+ if (starthist) {
-+ cycle_t uninitialized_var(start);
++ struct hist_trigger_data *hist_data = data->private_data;
++ struct event_trigger_data *test, *named_data = NULL;
+
-+ if (!preempt_count() && !irqs_disabled())
-+ return;
++ if (hist_data->attrs->name)
++ named_data = find_named_trigger(hist_data->attrs->name);
+
-+#ifdef CONFIG_INTERRUPT_OFF_HIST
-+ if ((reason == IRQS_OFF || reason == TRACE_START) &&
-+ !per_cpu(hist_irqsoff_counting, cpu)) {
-+ per_cpu(hist_irqsoff_counting, cpu) = 1;
-+ start = ftrace_now(cpu);
-+ time_set++;
-+ per_cpu(hist_irqsoff_start, cpu) = start;
++ list_for_each_entry_rcu(test, &file->triggers, list) {
++ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
++ if (!hist_trigger_match(data, test, named_data, false))
++ continue;
++ hist_data = test->private_data;
++ if (check_var_refs(hist_data))
++ return true;
++ break;
+ }
-+#endif
++ }
+
-+#ifdef CONFIG_PREEMPT_OFF_HIST
-+ if ((reason == PREEMPT_OFF || reason == TRACE_START) &&
-+ !per_cpu(hist_preemptoff_counting, cpu)) {
-+ per_cpu(hist_preemptoff_counting, cpu) = 1;
-+ if (!(time_set++))
-+ start = ftrace_now(cpu);
-+ per_cpu(hist_preemptoff_start, cpu) = start;
-+ }
-+#endif
++ return false;
++}
+
-+#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
-+ if (per_cpu(hist_irqsoff_counting, cpu) &&
-+ per_cpu(hist_preemptoff_counting, cpu) &&
-+ !per_cpu(hist_preemptirqsoff_counting, cpu)) {
-+ per_cpu(hist_preemptirqsoff_counting, cpu) = 1;
-+ if (!time_set)
-+ start = ftrace_now(cpu);
-+ per_cpu(hist_preemptirqsoff_start, cpu) = start;
-+ }
-+#endif
-+ } else {
-+ cycle_t uninitialized_var(stop);
+ static void hist_unregister_trigger(char *glob, struct event_trigger_ops *ops,
+ struct event_trigger_data *data,
+ struct trace_event_file *file)
+@@ -1497,17 +5411,55 @@
+
+ if (unregistered && test->ops->free)
+ test->ops->free(test->ops, test);
+
-+#ifdef CONFIG_INTERRUPT_OFF_HIST
-+ if ((reason == IRQS_ON || reason == TRACE_STOP) &&
-+ per_cpu(hist_irqsoff_counting, cpu)) {
-+ cycle_t start = per_cpu(hist_irqsoff_start, cpu);
++ if (hist_data->enable_timestamps) {
++ if (!hist_data->remove || unregistered)
++ tracing_set_time_stamp_abs(file->tr, false);
++ }
++}
+
-+ stop = ftrace_now(cpu);
-+ time_set++;
-+ if (start) {
-+ long latency = ((long) (stop - start)) /
-+ NSECS_PER_USECS;
++static bool hist_file_check_refs(struct trace_event_file *file)
++{
++ struct hist_trigger_data *hist_data;
++ struct event_trigger_data *test;
+
-+ latency_hist(IRQSOFF_LATENCY, cpu, latency, 0,
-+ stop, NULL);
-+ }
-+ per_cpu(hist_irqsoff_counting, cpu) = 0;
++ list_for_each_entry_rcu(test, &file->triggers, list) {
++ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
++ hist_data = test->private_data;
++ if (check_var_refs(hist_data))
++ return true;
+ }
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_OFF_HIST
-+ if ((reason == PREEMPT_ON || reason == TRACE_STOP) &&
-+ per_cpu(hist_preemptoff_counting, cpu)) {
-+ cycle_t start = per_cpu(hist_preemptoff_start, cpu);
-+
-+ if (!(time_set++))
-+ stop = ftrace_now(cpu);
-+ if (start) {
-+ long latency = ((long) (stop - start)) /
-+ NSECS_PER_USECS;
++ }
+
-+ latency_hist(PREEMPTOFF_LATENCY, cpu, latency,
-+ 0, stop, NULL);
-+ }
-+ per_cpu(hist_preemptoff_counting, cpu) = 0;
-+ }
-+#endif
++ return false;
+ }
+
+ static void hist_unreg_all(struct trace_event_file *file)
+ {
+ struct event_trigger_data *test, *n;
++ struct hist_trigger_data *hist_data;
++ struct synth_event *se;
++ const char *se_name;
+
-+#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
-+ if ((!per_cpu(hist_irqsoff_counting, cpu) ||
-+ !per_cpu(hist_preemptoff_counting, cpu)) &&
-+ per_cpu(hist_preemptirqsoff_counting, cpu)) {
-+ cycle_t start = per_cpu(hist_preemptirqsoff_start, cpu);
++ if (hist_file_check_refs(file))
++ return;
+
+ list_for_each_entry_safe(test, n, &file->triggers, list) {
+ if (test->cmd_ops->trigger_type == ETT_EVENT_HIST) {
++ hist_data = test->private_data;
+ list_del_rcu(&test->list);
+ trace_event_trigger_enable_disable(file, 0);
++
++ mutex_lock(&synth_event_mutex);
++ se_name = trace_event_name(file->event_call);
++ se = find_synth_event(se_name);
++ if (se)
++ se->ref--;
++ mutex_unlock(&synth_event_mutex);
++
+ update_cond_flag(file);
++ if (hist_data->enable_timestamps)
++ tracing_set_time_stamp_abs(file->tr, false);
+ if (test->ops->free)
+ test->ops->free(test->ops, test);
+ }
+@@ -1523,16 +5475,54 @@
+ struct hist_trigger_attrs *attrs;
+ struct event_trigger_ops *trigger_ops;
+ struct hist_trigger_data *hist_data;
+- char *trigger;
++ struct synth_event *se;
++ const char *se_name;
++ bool remove = false;
++ char *trigger, *p;
+ int ret = 0;
+
++ if (glob && strlen(glob)) {
++ last_cmd_set(param);
++ hist_err_clear();
++ }
+
-+ if (!time_set)
-+ stop = ftrace_now(cpu);
-+ if (start) {
-+ long latency = ((long) (stop - start)) /
-+ NSECS_PER_USECS;
+ if (!param)
+ return -EINVAL;
+
+- /* separate the trigger from the filter (k:v [if filter]) */
+- trigger = strsep(&param, " \t");
+- if (!trigger)
+- return -EINVAL;
++ if (glob[0] == '!')
++ remove = true;
+
-+ latency_hist(PREEMPTIRQSOFF_LATENCY, cpu,
-+ latency, 0, stop, NULL);
-+ }
-+ per_cpu(hist_preemptirqsoff_counting, cpu) = 0;
++ /*
++ * separate the trigger from the filter (k:v [if filter])
++ * allowing for whitespace in the trigger
++ */
++ p = trigger = param;
++ do {
++ p = strstr(p, "if");
++ if (!p)
++ break;
++ if (p == param)
++ return -EINVAL;
++ if (*(p - 1) != ' ' && *(p - 1) != '\t') {
++ p++;
++ continue;
+ }
-+#endif
++ if (p >= param + strlen(param) - strlen("if") - 1)
++ return -EINVAL;
++ if (*(p + strlen("if")) != ' ' && *(p + strlen("if")) != '\t') {
++ p++;
++ continue;
++ }
++ break;
++ } while (p);
++
++ if (!p)
++ param = NULL;
++ else {
++ *(p - 1) = '\0';
++ param = strstrip(p);
++ trigger = strstrip(trigger);
+ }
-+}
-+#endif
-+
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+static DEFINE_RAW_SPINLOCK(wakeup_lock);
-+static notrace void probe_sched_migrate_task(void *v, struct task_struct *task,
-+ int cpu)
-+{
-+ int old_cpu = task_cpu(task);
-+
-+ if (cpu != old_cpu) {
-+ unsigned long flags;
-+ struct task_struct *cpu_wakeup_task;
-+
-+ raw_spin_lock_irqsave(&wakeup_lock, flags);
+
+ attrs = parse_hist_trigger_attrs(trigger);
+ if (IS_ERR(attrs))
+@@ -1541,7 +5531,7 @@
+ if (attrs->map_bits)
+ hist_trigger_bits = attrs->map_bits;
+
+- hist_data = create_hist_data(hist_trigger_bits, attrs, file);
++ hist_data = create_hist_data(hist_trigger_bits, attrs, file, remove);
+ if (IS_ERR(hist_data)) {
+ destroy_hist_trigger_attrs(attrs);
+ return PTR_ERR(hist_data);
+@@ -1549,10 +5539,11 @@
+
+ trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
+
+- ret = -ENOMEM;
+ trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
+- if (!trigger_data)
++ if (!trigger_data) {
++ ret = -ENOMEM;
+ goto out_free;
++ }
+
+ trigger_data->count = -1;
+ trigger_data->ops = trigger_ops;
+@@ -1570,8 +5561,24 @@
+ goto out_free;
+ }
+
+- if (glob[0] == '!') {
++ if (remove) {
++ if (!have_hist_trigger_match(trigger_data, file))
++ goto out_free;
+
-+ cpu_wakeup_task = per_cpu(wakeup_task, old_cpu);
-+ if (task == cpu_wakeup_task) {
-+ put_task_struct(cpu_wakeup_task);
-+ per_cpu(wakeup_task, old_cpu) = NULL;
-+ cpu_wakeup_task = per_cpu(wakeup_task, cpu) = task;
-+ get_task_struct(cpu_wakeup_task);
++ if (hist_trigger_check_refs(trigger_data, file)) {
++ ret = -EBUSY;
++ goto out_free;
+ }
+
-+ raw_spin_unlock_irqrestore(&wakeup_lock, flags);
-+ }
-+}
-+
-+static notrace void probe_wakeup_latency_hist_start(void *v,
-+ struct task_struct *p)
-+{
-+ unsigned long flags;
-+ struct task_struct *curr = current;
-+ int cpu = task_cpu(p);
-+ struct task_struct *cpu_wakeup_task;
+ cmd_ops->unreg(glob+1, trigger_ops, trigger_data, file);
+
-+ raw_spin_lock_irqsave(&wakeup_lock, flags);
++ mutex_lock(&synth_event_mutex);
++ se_name = trace_event_name(file->event_call);
++ se = find_synth_event(se_name);
++ if (se)
++ se->ref--;
++ mutex_unlock(&synth_event_mutex);
+
-+ cpu_wakeup_task = per_cpu(wakeup_task, cpu);
+ ret = 0;
+ goto out_free;
+ }
+@@ -1588,14 +5595,47 @@
+ goto out_free;
+ } else if (ret < 0)
+ goto out_free;
+
-+ if (wakeup_pid) {
-+ if ((cpu_wakeup_task && p->prio == cpu_wakeup_task->prio) ||
-+ p->prio == curr->prio)
-+ per_cpu(wakeup_sharedprio, cpu) = 1;
-+ if (likely(wakeup_pid != task_pid_nr(p)))
-+ goto out;
-+ } else {
-+ if (likely(!rt_task(p)) ||
-+ (cpu_wakeup_task && p->prio > cpu_wakeup_task->prio) ||
-+ p->prio > curr->prio)
-+ goto out;
-+ if ((cpu_wakeup_task && p->prio == cpu_wakeup_task->prio) ||
-+ p->prio == curr->prio)
-+ per_cpu(wakeup_sharedprio, cpu) = 1;
-+ }
++ if (get_named_trigger_data(trigger_data))
++ goto enable;
+
-+ if (cpu_wakeup_task)
-+ put_task_struct(cpu_wakeup_task);
-+ cpu_wakeup_task = per_cpu(wakeup_task, cpu) = p;
-+ get_task_struct(cpu_wakeup_task);
-+ cpu_wakeup_task->preempt_timestamp_hist =
-+ ftrace_now(raw_smp_processor_id());
-+out:
-+ raw_spin_unlock_irqrestore(&wakeup_lock, flags);
-+}
++ if (has_hist_vars(hist_data))
++ save_hist_vars(hist_data);
+
-+static notrace void probe_wakeup_latency_hist_stop(void *v,
-+ bool preempt, struct task_struct *prev, struct task_struct *next)
-+{
-+ unsigned long flags;
-+ int cpu = task_cpu(next);
-+ long latency;
-+ cycle_t stop;
-+ struct task_struct *cpu_wakeup_task;
++ ret = create_actions(hist_data, file);
++ if (ret)
++ goto out_unreg;
+
-+ raw_spin_lock_irqsave(&wakeup_lock, flags);
++ ret = tracing_map_init(hist_data->map);
++ if (ret)
++ goto out_unreg;
++enable:
++ ret = hist_trigger_enable(trigger_data, file);
++ if (ret)
++ goto out_unreg;
+
-+ cpu_wakeup_task = per_cpu(wakeup_task, cpu);
++ mutex_lock(&synth_event_mutex);
++ se_name = trace_event_name(file->event_call);
++ se = find_synth_event(se_name);
++ if (se)
++ se->ref++;
++ mutex_unlock(&synth_event_mutex);
+
-+ if (cpu_wakeup_task == NULL)
-+ goto out;
+ /* Just return zero, not the number of registered triggers */
+ ret = 0;
+ out:
++ if (ret == 0)
++ hist_err_clear();
+
-+ /* Already running? */
-+ if (unlikely(current == cpu_wakeup_task))
-+ goto out_reset;
+ return ret;
++ out_unreg:
++ cmd_ops->unreg(glob+1, trigger_ops, trigger_data, file);
+ out_free:
+ if (cmd_ops->set_filter)
+ cmd_ops->set_filter(NULL, trigger_data, NULL);
+
++ remove_hist_vars(hist_data);
+
-+ if (next != cpu_wakeup_task) {
-+ if (next->prio < cpu_wakeup_task->prio)
-+ goto out_reset;
+ kfree(trigger_data);
+
+ destroy_hist_data(hist_data);
+@@ -1625,7 +5665,8 @@
+ }
+
+ static void
+-hist_enable_trigger(struct event_trigger_data *data, void *rec)
++hist_enable_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ struct enable_trigger_data *enable_data = data->private_data;
+ struct event_trigger_data *test;
+@@ -1641,7 +5682,8 @@
+ }
+
+ static void
+-hist_enable_count_trigger(struct event_trigger_data *data, void *rec)
++hist_enable_count_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ if (!data->count)
+ return;
+@@ -1649,7 +5691,7 @@
+ if (data->count != -1)
+ (data->count)--;
+
+- hist_enable_trigger(data, rec);
++ hist_enable_trigger(data, rec, event);
+ }
+
+ static struct event_trigger_ops hist_enable_trigger_ops = {
+@@ -1754,3 +5796,31 @@
+
+ return ret;
+ }
+
-+ if (next->prio == cpu_wakeup_task->prio)
-+ per_cpu(wakeup_sharedprio, cpu) = 1;
++static __init int trace_events_hist_init(void)
++{
++ struct dentry *entry = NULL;
++ struct dentry *d_tracer;
++ int err = 0;
+
-+ goto out;
++ d_tracer = tracing_init_dentry();
++ if (IS_ERR(d_tracer)) {
++ err = PTR_ERR(d_tracer);
++ goto err;
+ }
+
-+ if (current->prio == cpu_wakeup_task->prio)
-+ per_cpu(wakeup_sharedprio, cpu) = 1;
-+
-+ /*
-+ * The task we are waiting for is about to be switched to.
-+ * Calculate latency and store it in histogram.
-+ */
-+ stop = ftrace_now(raw_smp_processor_id());
-+
-+ latency = ((long) (stop - next->preempt_timestamp_hist)) /
-+ NSECS_PER_USECS;
-+
-+ if (per_cpu(wakeup_sharedprio, cpu)) {
-+ latency_hist(WAKEUP_LATENCY_SHAREDPRIO, cpu, latency, 0, stop,
-+ next);
-+ per_cpu(wakeup_sharedprio, cpu) = 0;
-+ } else {
-+ latency_hist(WAKEUP_LATENCY, cpu, latency, 0, stop, next);
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ if (timerandwakeup_enabled_data.enabled) {
-+ latency_hist(TIMERANDWAKEUP_LATENCY, cpu,
-+ next->timer_offset + latency, next->timer_offset,
-+ stop, next);
-+ }
-+#endif
++ entry = tracefs_create_file("synthetic_events", 0644, d_tracer,
++ NULL, &synth_events_fops);
++ if (!entry) {
++ err = -ENODEV;
++ goto err;
+ }
+
-+out_reset:
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ next->timer_offset = 0;
-+#endif
-+ put_task_struct(cpu_wakeup_task);
-+ per_cpu(wakeup_task, cpu) = NULL;
-+out:
-+ raw_spin_unlock_irqrestore(&wakeup_lock, flags);
++ return err;
++ err:
++ pr_warn("Could not create tracefs 'synthetic_events' entry\n");
++
++ return err;
+}
-+#endif
+
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+static notrace void probe_hrtimer_interrupt(void *v, int cpu,
-+ long long latency_ns, struct task_struct *curr,
-+ struct task_struct *task)
++fs_initcall(trace_events_hist_init);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace_events_trigger.c linux-4.14/kernel/trace/trace_events_trigger.c
+--- linux-4.14.orig/kernel/trace/trace_events_trigger.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace_events_trigger.c 2018-09-05 11:05:07.000000000 +0200
+@@ -63,7 +63,8 @@
+ * any trigger that should be deferred, ETT_NONE if nothing to defer.
+ */
+ enum event_trigger_type
+-event_triggers_call(struct trace_event_file *file, void *rec)
++event_triggers_call(struct trace_event_file *file, void *rec,
++ struct ring_buffer_event *event)
+ {
+ struct event_trigger_data *data;
+ enum event_trigger_type tt = ETT_NONE;
+@@ -76,7 +77,7 @@
+ if (data->paused)
+ continue;
+ if (!rec) {
+- data->ops->func(data, rec);
++ data->ops->func(data, rec, event);
+ continue;
+ }
+ filter = rcu_dereference_sched(data->filter);
+@@ -86,7 +87,7 @@
+ tt |= data->cmd_ops->trigger_type;
+ continue;
+ }
+- data->ops->func(data, rec);
++ data->ops->func(data, rec, event);
+ }
+ return tt;
+ }
+@@ -108,7 +109,7 @@
+ void
+ event_triggers_post_call(struct trace_event_file *file,
+ enum event_trigger_type tt,
+- void *rec)
++ void *rec, struct ring_buffer_event *event)
+ {
+ struct event_trigger_data *data;
+
+@@ -116,7 +117,7 @@
+ if (data->paused)
+ continue;
+ if (data->cmd_ops->trigger_type & tt)
+- data->ops->func(data, rec);
++ data->ops->func(data, rec, event);
+ }
+ }
+ EXPORT_SYMBOL_GPL(event_triggers_post_call);
+@@ -914,8 +915,15 @@
+ data->named_data = named_data;
+ }
+
++struct event_trigger_data *
++get_named_trigger_data(struct event_trigger_data *data)
+{
-+ if (latency_ns <= 0 && task != NULL && rt_task(task) &&
-+ (task->prio < curr->prio ||
-+ (task->prio == curr->prio &&
-+ !cpumask_test_cpu(cpu, &task->cpus_allowed)))) {
-+ long latency;
-+ cycle_t now;
-+
-+ if (missed_timer_offsets_pid) {
-+ if (likely(missed_timer_offsets_pid !=
-+ task_pid_nr(task)))
-+ return;
-+ }
-+
-+ now = ftrace_now(cpu);
-+ latency = (long) div_s64(-latency_ns, NSECS_PER_USECS);
-+ latency_hist(MISSED_TIMER_OFFSETS, cpu, latency, latency, now,
-+ task);
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+ task->timer_offset = latency;
-+#endif
-+ }
-+}
-+#endif
-+
-+static __init int latency_hist_init(void)
-+{
-+ struct dentry *latency_hist_root = NULL;
-+ struct dentry *dentry;
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+ struct dentry *dentry_sharedprio;
-+#endif
-+ struct dentry *entry;
-+ struct dentry *enable_root;
-+ int i = 0;
-+ struct hist_data *my_hist;
-+ char name[64];
-+ char *cpufmt = "CPU%d";
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) || \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+ char *cpufmt_maxlatproc = "max_latency-CPU%d";
-+ struct maxlatproc_data *mp = NULL;
-+#endif
-+
-+ dentry = tracing_init_dentry();
-+ latency_hist_root = debugfs_create_dir(latency_hist_dir_root, dentry);
-+ enable_root = debugfs_create_dir("enable", latency_hist_root);
-+
-+#ifdef CONFIG_INTERRUPT_OFF_HIST
-+ dentry = debugfs_create_dir(irqsoff_hist_dir, latency_hist_root);
-+ for_each_possible_cpu(i) {
-+ sprintf(name, cpufmt, i);
-+ entry = debugfs_create_file(name, 0444, dentry,
-+ &per_cpu(irqsoff_hist, i), &latency_hist_fops);
-+ my_hist = &per_cpu(irqsoff_hist, i);
-+ atomic_set(&my_hist->hist_mode, 1);
-+ my_hist->min_lat = LONG_MAX;
-+ }
-+ entry = debugfs_create_file("reset", 0644, dentry,
-+ (void *)IRQSOFF_LATENCY, &latency_hist_reset_fops);
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_OFF_HIST
-+ dentry = debugfs_create_dir(preemptoff_hist_dir,
-+ latency_hist_root);
-+ for_each_possible_cpu(i) {
-+ sprintf(name, cpufmt, i);
-+ entry = debugfs_create_file(name, 0444, dentry,
-+ &per_cpu(preemptoff_hist, i), &latency_hist_fops);
-+ my_hist = &per_cpu(preemptoff_hist, i);
-+ atomic_set(&my_hist->hist_mode, 1);
-+ my_hist->min_lat = LONG_MAX;
-+ }
-+ entry = debugfs_create_file("reset", 0644, dentry,
-+ (void *)PREEMPTOFF_LATENCY, &latency_hist_reset_fops);
-+#endif
-+
-+#if defined(CONFIG_INTERRUPT_OFF_HIST) && defined(CONFIG_PREEMPT_OFF_HIST)
-+ dentry = debugfs_create_dir(preemptirqsoff_hist_dir,
-+ latency_hist_root);
-+ for_each_possible_cpu(i) {
-+ sprintf(name, cpufmt, i);
-+ entry = debugfs_create_file(name, 0444, dentry,
-+ &per_cpu(preemptirqsoff_hist, i), &latency_hist_fops);
-+ my_hist = &per_cpu(preemptirqsoff_hist, i);
-+ atomic_set(&my_hist->hist_mode, 1);
-+ my_hist->min_lat = LONG_MAX;
-+ }
-+ entry = debugfs_create_file("reset", 0644, dentry,
-+ (void *)PREEMPTIRQSOFF_LATENCY, &latency_hist_reset_fops);
-+#endif
-+
-+#if defined(CONFIG_INTERRUPT_OFF_HIST) || defined(CONFIG_PREEMPT_OFF_HIST)
-+ entry = debugfs_create_file("preemptirqsoff", 0644,
-+ enable_root, (void *)&preemptirqsoff_enabled_data,
-+ &enable_fops);
-+#endif
-+
-+#ifdef CONFIG_WAKEUP_LATENCY_HIST
-+ dentry = debugfs_create_dir(wakeup_latency_hist_dir,
-+ latency_hist_root);
-+ dentry_sharedprio = debugfs_create_dir(
-+ wakeup_latency_hist_dir_sharedprio, dentry);
-+ for_each_possible_cpu(i) {
-+ sprintf(name, cpufmt, i);
-+
-+ entry = debugfs_create_file(name, 0444, dentry,
-+ &per_cpu(wakeup_latency_hist, i),
-+ &latency_hist_fops);
-+ my_hist = &per_cpu(wakeup_latency_hist, i);
-+ atomic_set(&my_hist->hist_mode, 1);
-+ my_hist->min_lat = LONG_MAX;
-+
-+ entry = debugfs_create_file(name, 0444, dentry_sharedprio,
-+ &per_cpu(wakeup_latency_hist_sharedprio, i),
-+ &latency_hist_fops);
-+ my_hist = &per_cpu(wakeup_latency_hist_sharedprio, i);
-+ atomic_set(&my_hist->hist_mode, 1);
-+ my_hist->min_lat = LONG_MAX;
-+
-+ sprintf(name, cpufmt_maxlatproc, i);
-+
-+ mp = &per_cpu(wakeup_maxlatproc, i);
-+ entry = debugfs_create_file(name, 0444, dentry, mp,
-+ &maxlatproc_fops);
-+ clear_maxlatprocdata(mp);
-+
-+ mp = &per_cpu(wakeup_maxlatproc_sharedprio, i);
-+ entry = debugfs_create_file(name, 0444, dentry_sharedprio, mp,
-+ &maxlatproc_fops);
-+ clear_maxlatprocdata(mp);
-+ }
-+ entry = debugfs_create_file("pid", 0644, dentry,
-+ (void *)&wakeup_pid, &pid_fops);
-+ entry = debugfs_create_file("reset", 0644, dentry,
-+ (void *)WAKEUP_LATENCY, &latency_hist_reset_fops);
-+ entry = debugfs_create_file("reset", 0644, dentry_sharedprio,
-+ (void *)WAKEUP_LATENCY_SHAREDPRIO, &latency_hist_reset_fops);
-+ entry = debugfs_create_file("wakeup", 0644,
-+ enable_root, (void *)&wakeup_latency_enabled_data,
-+ &enable_fops);
-+#endif
-+
-+#ifdef CONFIG_MISSED_TIMER_OFFSETS_HIST
-+ dentry = debugfs_create_dir(missed_timer_offsets_dir,
-+ latency_hist_root);
-+ for_each_possible_cpu(i) {
-+ sprintf(name, cpufmt, i);
-+ entry = debugfs_create_file(name, 0444, dentry,
-+ &per_cpu(missed_timer_offsets, i), &latency_hist_fops);
-+ my_hist = &per_cpu(missed_timer_offsets, i);
-+ atomic_set(&my_hist->hist_mode, 1);
-+ my_hist->min_lat = LONG_MAX;
-+
-+ sprintf(name, cpufmt_maxlatproc, i);
-+ mp = &per_cpu(missed_timer_offsets_maxlatproc, i);
-+ entry = debugfs_create_file(name, 0444, dentry, mp,
-+ &maxlatproc_fops);
-+ clear_maxlatprocdata(mp);
-+ }
-+ entry = debugfs_create_file("pid", 0644, dentry,
-+ (void *)&missed_timer_offsets_pid, &pid_fops);
-+ entry = debugfs_create_file("reset", 0644, dentry,
-+ (void *)MISSED_TIMER_OFFSETS, &latency_hist_reset_fops);
-+ entry = debugfs_create_file("missed_timer_offsets", 0644,
-+ enable_root, (void *)&missed_timer_offsets_enabled_data,
-+ &enable_fops);
-+#endif
-+
-+#if defined(CONFIG_WAKEUP_LATENCY_HIST) && \
-+ defined(CONFIG_MISSED_TIMER_OFFSETS_HIST)
-+ dentry = debugfs_create_dir(timerandwakeup_latency_hist_dir,
-+ latency_hist_root);
-+ for_each_possible_cpu(i) {
-+ sprintf(name, cpufmt, i);
-+ entry = debugfs_create_file(name, 0444, dentry,
-+ &per_cpu(timerandwakeup_latency_hist, i),
-+ &latency_hist_fops);
-+ my_hist = &per_cpu(timerandwakeup_latency_hist, i);
-+ atomic_set(&my_hist->hist_mode, 1);
-+ my_hist->min_lat = LONG_MAX;
-+
-+ sprintf(name, cpufmt_maxlatproc, i);
-+ mp = &per_cpu(timerandwakeup_maxlatproc, i);
-+ entry = debugfs_create_file(name, 0444, dentry, mp,
-+ &maxlatproc_fops);
-+ clear_maxlatprocdata(mp);
-+ }
-+ entry = debugfs_create_file("reset", 0644, dentry,
-+ (void *)TIMERANDWAKEUP_LATENCY, &latency_hist_reset_fops);
-+ entry = debugfs_create_file("timerandwakeup", 0644,
-+ enable_root, (void *)&timerandwakeup_enabled_data,
-+ &enable_fops);
-+#endif
-+ return 0;
++ return data->named_data;
+}
+
-+device_initcall(latency_hist_init);
-diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
-index 8696ce6bf2f6..277f048a4695 100644
---- a/kernel/trace/trace.c
-+++ b/kernel/trace/trace.c
-@@ -1897,6 +1897,7 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags,
- struct task_struct *tsk = current;
+ static void
+-traceon_trigger(struct event_trigger_data *data, void *rec)
++traceon_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ if (tracing_is_on())
+ return;
+@@ -924,7 +932,8 @@
+ }
+
+ static void
+-traceon_count_trigger(struct event_trigger_data *data, void *rec)
++traceon_count_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ if (tracing_is_on())
+ return;
+@@ -939,7 +948,8 @@
+ }
+
+ static void
+-traceoff_trigger(struct event_trigger_data *data, void *rec)
++traceoff_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ if (!tracing_is_on())
+ return;
+@@ -948,7 +958,8 @@
+ }
+
+ static void
+-traceoff_count_trigger(struct event_trigger_data *data, void *rec)
++traceoff_count_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ if (!tracing_is_on())
+ return;
+@@ -1045,7 +1056,8 @@
- entry->preempt_count = pc & 0xff;
-+ entry->preempt_lazy_count = preempt_lazy_count();
- entry->pid = (tsk) ? tsk->pid : 0;
- entry->flags =
- #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT
-@@ -1907,8 +1908,11 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags,
- ((pc & NMI_MASK ) ? TRACE_FLAG_NMI : 0) |
- ((pc & HARDIRQ_MASK) ? TRACE_FLAG_HARDIRQ : 0) |
- ((pc & SOFTIRQ_MASK) ? TRACE_FLAG_SOFTIRQ : 0) |
-- (tif_need_resched() ? TRACE_FLAG_NEED_RESCHED : 0) |
-+ (tif_need_resched_now() ? TRACE_FLAG_NEED_RESCHED : 0) |
-+ (need_resched_lazy() ? TRACE_FLAG_NEED_RESCHED_LAZY : 0) |
- (test_preempt_need_resched() ? TRACE_FLAG_PREEMPT_RESCHED : 0);
-+
-+ entry->migrate_disable = (tsk) ? __migrate_disabled(tsk) & 0xFF : 0;
+ #ifdef CONFIG_TRACER_SNAPSHOT
+ static void
+-snapshot_trigger(struct event_trigger_data *data, void *rec)
++snapshot_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ struct trace_event_file *file = data->private_data;
+
+@@ -1056,7 +1068,8 @@
}
- EXPORT_SYMBOL_GPL(tracing_generic_entry_update);
-@@ -2892,14 +2896,17 @@ get_total_entries(struct trace_buffer *buf,
+ static void
+-snapshot_count_trigger(struct event_trigger_data *data, void *rec)
++snapshot_count_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ if (!data->count)
+ return;
+@@ -1064,7 +1077,7 @@
+ if (data->count != -1)
+ (data->count)--;
- static void print_lat_help_header(struct seq_file *m)
+- snapshot_trigger(data, rec);
++ snapshot_trigger(data, rec, event);
+ }
+
+ static int
+@@ -1143,13 +1156,15 @@
+ #define STACK_SKIP 3
+
+ static void
+-stacktrace_trigger(struct event_trigger_data *data, void *rec)
++stacktrace_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
{
-- seq_puts(m, "# _------=> CPU# \n"
-- "# / _-----=> irqs-off \n"
-- "# | / _----=> need-resched \n"
-- "# || / _---=> hardirq/softirq \n"
-- "# ||| / _--=> preempt-depth \n"
-- "# |||| / delay \n"
-- "# cmd pid ||||| time | caller \n"
-- "# \\ / ||||| \\ | / \n");
-+ seq_puts(m, "# _--------=> CPU# \n"
-+ "# / _-------=> irqs-off \n"
-+ "# | / _------=> need-resched \n"
-+ "# || / _-----=> need-resched_lazy \n"
-+ "# ||| / _----=> hardirq/softirq \n"
-+ "# |||| / _---=> preempt-depth \n"
-+ "# ||||| / _--=> preempt-lazy-depth\n"
-+ "# |||||| / _-=> migrate-disable \n"
-+ "# ||||||| / delay \n"
-+ "# cmd pid |||||||| time | caller \n"
-+ "# \\ / |||||||| \\ | / \n");
+ trace_dump_stack(STACK_SKIP);
}
- static void print_event_info(struct trace_buffer *buf, struct seq_file *m)
-@@ -2925,11 +2932,14 @@ static void print_func_help_header_irq(struct trace_buffer *buf, struct seq_file
- print_event_info(buf, m);
- seq_puts(m, "# _-----=> irqs-off\n"
- "# / _----=> need-resched\n"
-- "# | / _---=> hardirq/softirq\n"
-- "# || / _--=> preempt-depth\n"
-- "# ||| / delay\n"
-- "# TASK-PID CPU# |||| TIMESTAMP FUNCTION\n"
-- "# | | | |||| | |\n");
-+ "# |/ _-----=> need-resched_lazy\n"
-+ "# || / _---=> hardirq/softirq\n"
-+ "# ||| / _--=> preempt-depth\n"
-+ "# |||| / _-=> preempt-lazy-depth\n"
-+ "# ||||| / _-=> migrate-disable \n"
-+ "# |||||| / delay\n"
-+ "# TASK-PID CPU# ||||||| TIMESTAMP FUNCTION\n"
-+ "# | | | ||||||| | |\n");
+ static void
+-stacktrace_count_trigger(struct event_trigger_data *data, void *rec)
++stacktrace_count_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ if (!data->count)
+ return;
+@@ -1157,7 +1172,7 @@
+ if (data->count != -1)
+ (data->count)--;
+
+- stacktrace_trigger(data, rec);
++ stacktrace_trigger(data, rec, event);
}
- void
-diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
-index fd24b1f9ac43..852b2c81be25 100644
---- a/kernel/trace/trace.h
-+++ b/kernel/trace/trace.h
-@@ -124,6 +124,7 @@ struct kretprobe_trace_entry_head {
+ static int
+@@ -1219,7 +1234,8 @@
+ }
+
+ static void
+-event_enable_trigger(struct event_trigger_data *data, void *rec)
++event_enable_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ struct enable_trigger_data *enable_data = data->private_data;
+
+@@ -1230,7 +1246,8 @@
+ }
+
+ static void
+-event_enable_count_trigger(struct event_trigger_data *data, void *rec)
++event_enable_count_trigger(struct event_trigger_data *data, void *rec,
++ struct ring_buffer_event *event)
+ {
+ struct enable_trigger_data *enable_data = data->private_data;
+
+@@ -1244,7 +1261,7 @@
+ if (data->count != -1)
+ (data->count)--;
+
+- event_enable_trigger(data, rec);
++ event_enable_trigger(data, rec, event);
+ }
+
+ int event_enable_trigger_print(struct seq_file *m,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace.h linux-4.14/kernel/trace/trace.h
+--- linux-4.14.orig/kernel/trace/trace.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace.h 2018-09-05 11:05:07.000000000 +0200
+@@ -127,6 +127,7 @@
* NEED_RESCHED - reschedule is requested
* HARDIRQ - inside an interrupt handler
* SOFTIRQ - inside a softirq handler
+ * NEED_RESCHED_LAZY - lazy reschedule is requested
*/
enum trace_flag_type {
TRACE_FLAG_IRQS_OFF = 0x01,
-@@ -133,6 +134,7 @@ enum trace_flag_type {
+@@ -136,6 +137,7 @@
TRACE_FLAG_SOFTIRQ = 0x10,
TRACE_FLAG_PREEMPT_RESCHED = 0x20,
TRACE_FLAG_NMI = 0x40,
+ TRACE_FLAG_NEED_RESCHED_LAZY = 0x80,
};
#define TRACE_BUF_SIZE 1024
-diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
-index 03c0a48c3ac4..0b85d516b491 100644
---- a/kernel/trace/trace_events.c
-+++ b/kernel/trace/trace_events.c
-@@ -187,6 +187,8 @@ static int trace_define_common_fields(void)
- __common_field(unsigned char, flags);
- __common_field(unsigned char, preempt_count);
- __common_field(int, pid);
-+ __common_field(unsigned short, migrate_disable);
-+ __common_field(unsigned short, padding);
+@@ -273,6 +275,8 @@
+ /* function tracing enabled */
+ int function_enabled;
+ #endif
++ int time_stamp_abs_ref;
++ struct list_head hist_vars;
+ };
- return ret;
- }
-diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
-index 03cdff84d026..940bd10b4406 100644
---- a/kernel/trace/trace_irqsoff.c
-+++ b/kernel/trace/trace_irqsoff.c
-@@ -13,6 +13,7 @@
- #include <linux/uaccess.h>
- #include <linux/module.h>
- #include <linux/ftrace.h>
-+#include <trace/events/hist.h>
+ enum {
+@@ -286,6 +290,11 @@
+ extern int trace_array_get(struct trace_array *tr);
+ extern void trace_array_put(struct trace_array *tr);
- #include "trace.h"
++extern int tracing_set_time_stamp_abs(struct trace_array *tr, bool abs);
++extern int tracing_set_clock(struct trace_array *tr, const char *clockstr);
++
++extern bool trace_clock_in_ns(struct trace_array *tr);
++
+ /*
+ * The global tracer (top) should be the first trace array added,
+ * but we check the flag anyway.
+@@ -1293,7 +1302,7 @@
+ unsigned long eflags = file->flags;
-@@ -424,11 +425,13 @@ void start_critical_timings(void)
- {
- if (preempt_trace() || irq_trace())
- start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
-+ trace_preemptirqsoff_hist_rcuidle(TRACE_START, 1);
- }
- EXPORT_SYMBOL_GPL(start_critical_timings);
+ if (eflags & EVENT_FILE_FL_TRIGGER_COND)
+- *tt = event_triggers_call(file, entry);
++ *tt = event_triggers_call(file, entry, event);
- void stop_critical_timings(void)
- {
-+ trace_preemptirqsoff_hist_rcuidle(TRACE_STOP, 0);
- if (preempt_trace() || irq_trace())
- stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
- }
-@@ -438,6 +441,7 @@ EXPORT_SYMBOL_GPL(stop_critical_timings);
- #ifdef CONFIG_PROVE_LOCKING
- void time_hardirqs_on(unsigned long a0, unsigned long a1)
- {
-+ trace_preemptirqsoff_hist_rcuidle(IRQS_ON, 0);
- if (!preempt_trace() && irq_trace())
- stop_critical_timing(a0, a1);
- }
-@@ -446,6 +450,7 @@ void time_hardirqs_off(unsigned long a0, unsigned long a1)
- {
- if (!preempt_trace() && irq_trace())
- start_critical_timing(a0, a1);
-+ trace_preemptirqsoff_hist_rcuidle(IRQS_OFF, 1);
+ if (test_bit(EVENT_FILE_FL_SOFT_DISABLED_BIT, &file->flags) ||
+ (unlikely(file->flags & EVENT_FILE_FL_FILTERED) &&
+@@ -1330,7 +1339,7 @@
+ trace_buffer_unlock_commit(file->tr, buffer, event, irq_flags, pc);
+
+ if (tt)
+- event_triggers_post_call(file, tt, entry);
++ event_triggers_post_call(file, tt, entry, event);
}
- #else /* !CONFIG_PROVE_LOCKING */
-@@ -471,6 +476,7 @@ inline void print_irqtrace_events(struct task_struct *curr)
+ /**
+@@ -1363,7 +1372,7 @@
+ irq_flags, pc, regs);
+
+ if (tt)
+- event_triggers_post_call(file, tt, entry);
++ event_triggers_post_call(file, tt, entry, event);
+ }
+
+ #define FILTER_PRED_INVALID ((unsigned short)-1)
+@@ -1545,6 +1554,8 @@
+ extern void unpause_named_trigger(struct event_trigger_data *data);
+ extern void set_named_trigger_data(struct event_trigger_data *data,
+ struct event_trigger_data *named_data);
++extern struct event_trigger_data *
++get_named_trigger_data(struct event_trigger_data *data);
+ extern int register_event_command(struct event_command *cmd);
+ extern int unregister_event_command(struct event_command *cmd);
+ extern int register_trigger_hist_enable_disable_cmds(void);
+@@ -1588,7 +1599,8 @@
*/
- void trace_hardirqs_on(void)
- {
-+ trace_preemptirqsoff_hist(IRQS_ON, 0);
- if (!preempt_trace() && irq_trace())
- stop_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
- }
-@@ -480,11 +486,13 @@ void trace_hardirqs_off(void)
- {
- if (!preempt_trace() && irq_trace())
- start_critical_timing(CALLER_ADDR0, CALLER_ADDR1);
-+ trace_preemptirqsoff_hist(IRQS_OFF, 1);
- }
- EXPORT_SYMBOL(trace_hardirqs_off);
+ struct event_trigger_ops {
+ void (*func)(struct event_trigger_data *data,
+- void *rec);
++ void *rec,
++ struct ring_buffer_event *rbe);
+ int (*init)(struct event_trigger_ops *ops,
+ struct event_trigger_data *data);
+ void (*free)(struct event_trigger_ops *ops,
+@@ -1755,6 +1767,13 @@
+ int trace_keep_overwrite(struct tracer *tracer, u32 mask, int set);
+ int set_tracer_flag(struct trace_array *tr, unsigned int mask, int enabled);
+
++#define MAX_EVENT_NAME_LEN 64
++
++extern int trace_run_command(const char *buf, int (*createfn)(int, char**));
++extern ssize_t trace_parse_run_command(struct file *file,
++ const char __user *buffer, size_t count, loff_t *ppos,
++ int (*createfn)(int, char**));
++
+ /*
+ * Normal trace_printk() and friends allocates special buffers
+ * to do the manipulation, as well as saves the print formats
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace_hwlat.c linux-4.14/kernel/trace/trace_hwlat.c
+--- linux-4.14.orig/kernel/trace/trace_hwlat.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/trace/trace_hwlat.c 2018-09-05 11:05:07.000000000 +0200
+@@ -279,7 +279,7 @@
+ * of this thread, than stop migrating for the duration
+ * of the current test.
+ */
+- if (!cpumask_equal(current_mask, &current->cpus_allowed))
++ if (!cpumask_equal(current_mask, current->cpus_ptr))
+ goto disable;
- __visible void trace_hardirqs_on_caller(unsigned long caller_addr)
- {
-+ trace_preemptirqsoff_hist(IRQS_ON, 0);
- if (!preempt_trace() && irq_trace())
- stop_critical_timing(CALLER_ADDR0, caller_addr);
- }
-@@ -494,6 +502,7 @@ __visible void trace_hardirqs_off_caller(unsigned long caller_addr)
- {
- if (!preempt_trace() && irq_trace())
- start_critical_timing(CALLER_ADDR0, caller_addr);
-+ trace_preemptirqsoff_hist(IRQS_OFF, 1);
- }
- EXPORT_SYMBOL(trace_hardirqs_off_caller);
+ get_online_cpus();
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace_kprobe.c linux-4.14/kernel/trace/trace_kprobe.c
+--- linux-4.14.orig/kernel/trace/trace_kprobe.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace_kprobe.c 2018-09-05 11:05:07.000000000 +0200
+@@ -918,8 +918,8 @@
+ static ssize_t probes_write(struct file *file, const char __user *buffer,
+ size_t count, loff_t *ppos)
+ {
+- return traceprobe_probes_write(file, buffer, count, ppos,
+- create_trace_kprobe);
++ return trace_parse_run_command(file, buffer, count, ppos,
++ create_trace_kprobe);
+ }
+
+ static const struct file_operations kprobe_events_ops = {
+@@ -1444,9 +1444,9 @@
+
+ pr_info("Testing kprobe tracing: ");
+
+- ret = traceprobe_command("p:testprobe kprobe_trace_selftest_target "
+- "$stack $stack0 +0($stack)",
+- create_trace_kprobe);
++ ret = trace_run_command("p:testprobe kprobe_trace_selftest_target "
++ "$stack $stack0 +0($stack)",
++ create_trace_kprobe);
+ if (WARN_ON_ONCE(ret)) {
+ pr_warn("error on probing function entry.\n");
+ warn++;
+@@ -1466,8 +1466,8 @@
+ }
+ }
-@@ -503,12 +512,14 @@ EXPORT_SYMBOL(trace_hardirqs_off_caller);
- #ifdef CONFIG_PREEMPT_TRACER
- void trace_preempt_on(unsigned long a0, unsigned long a1)
- {
-+ trace_preemptirqsoff_hist(PREEMPT_ON, 0);
- if (preempt_trace() && !irq_trace())
- stop_critical_timing(a0, a1);
- }
+- ret = traceprobe_command("r:testprobe2 kprobe_trace_selftest_target "
+- "$retval", create_trace_kprobe);
++ ret = trace_run_command("r:testprobe2 kprobe_trace_selftest_target "
++ "$retval", create_trace_kprobe);
+ if (WARN_ON_ONCE(ret)) {
+ pr_warn("error on probing function return.\n");
+ warn++;
+@@ -1537,13 +1537,13 @@
+ disable_trace_kprobe(tk, file);
+ }
- void trace_preempt_off(unsigned long a0, unsigned long a1)
- {
-+ trace_preemptirqsoff_hist(PREEMPT_ON, 1);
- if (preempt_trace() && !irq_trace())
- start_critical_timing(a0, a1);
- }
-diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
-index 3fc20422c166..65a6dde71a7d 100644
---- a/kernel/trace/trace_output.c
-+++ b/kernel/trace/trace_output.c
-@@ -386,6 +386,7 @@ int trace_print_lat_fmt(struct trace_seq *s, struct trace_entry *entry)
+- ret = traceprobe_command("-:testprobe", create_trace_kprobe);
++ ret = trace_run_command("-:testprobe", create_trace_kprobe);
+ if (WARN_ON_ONCE(ret)) {
+ pr_warn("error on deleting a probe.\n");
+ warn++;
+ }
+
+- ret = traceprobe_command("-:testprobe2", create_trace_kprobe);
++ ret = trace_run_command("-:testprobe2", create_trace_kprobe);
+ if (WARN_ON_ONCE(ret)) {
+ pr_warn("error on deleting a probe.\n");
+ warn++;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace_output.c linux-4.14/kernel/trace/trace_output.c
+--- linux-4.14.orig/kernel/trace/trace_output.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace_output.c 2018-09-05 11:05:07.000000000 +0200
+@@ -447,6 +447,7 @@
{
char hardsoft_irq;
char need_resched;
char irqs_off;
int hardirq;
int softirq;
-@@ -416,6 +417,9 @@ int trace_print_lat_fmt(struct trace_seq *s, struct trace_entry *entry)
+@@ -477,6 +478,9 @@
break;
}
hardsoft_irq =
(nmi && hardirq) ? 'Z' :
nmi ? 'z' :
-@@ -424,14 +428,25 @@ int trace_print_lat_fmt(struct trace_seq *s, struct trace_entry *entry)
+@@ -485,14 +489,25 @@
softirq ? 's' :
'.' ;
-- trace_seq_printf(s, "%c%c%c",
-- irqs_off, need_resched, hardsoft_irq);
-+ trace_seq_printf(s, "%c%c%c%c",
-+ irqs_off, need_resched, need_resched_lazy,
-+ hardsoft_irq);
+- trace_seq_printf(s, "%c%c%c",
+- irqs_off, need_resched, hardsoft_irq);
++ trace_seq_printf(s, "%c%c%c%c",
++ irqs_off, need_resched, need_resched_lazy,
++ hardsoft_irq);
+
+ if (entry->preempt_count)
+ trace_seq_printf(s, "%x", entry->preempt_count);
+ else
+ trace_seq_putc(s, '.');
+
++ if (entry->preempt_lazy_count)
++ trace_seq_printf(s, "%x", entry->preempt_lazy_count);
++ else
++ trace_seq_putc(s, '.');
++
++ if (entry->migrate_disable)
++ trace_seq_printf(s, "%x", entry->migrate_disable);
++ else
++ trace_seq_putc(s, '.');
++
+ return !trace_seq_has_overflowed(s);
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace_probe.c linux-4.14/kernel/trace/trace_probe.c
+--- linux-4.14.orig/kernel/trace/trace_probe.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace_probe.c 2018-09-05 11:05:07.000000000 +0200
+@@ -621,92 +621,6 @@
+ kfree(arg->comm);
+ }
+
+-int traceprobe_command(const char *buf, int (*createfn)(int, char **))
+-{
+- char **argv;
+- int argc, ret;
+-
+- argc = 0;
+- ret = 0;
+- argv = argv_split(GFP_KERNEL, buf, &argc);
+- if (!argv)
+- return -ENOMEM;
+-
+- if (argc)
+- ret = createfn(argc, argv);
+-
+- argv_free(argv);
+-
+- return ret;
+-}
+-
+-#define WRITE_BUFSIZE 4096
+-
+-ssize_t traceprobe_probes_write(struct file *file, const char __user *buffer,
+- size_t count, loff_t *ppos,
+- int (*createfn)(int, char **))
+-{
+- char *kbuf, *buf, *tmp;
+- int ret = 0;
+- size_t done = 0;
+- size_t size;
+-
+- kbuf = kmalloc(WRITE_BUFSIZE, GFP_KERNEL);
+- if (!kbuf)
+- return -ENOMEM;
+-
+- while (done < count) {
+- size = count - done;
+-
+- if (size >= WRITE_BUFSIZE)
+- size = WRITE_BUFSIZE - 1;
+-
+- if (copy_from_user(kbuf, buffer + done, size)) {
+- ret = -EFAULT;
+- goto out;
+- }
+- kbuf[size] = '\0';
+- buf = kbuf;
+- do {
+- tmp = strchr(buf, '\n');
+- if (tmp) {
+- *tmp = '\0';
+- size = tmp - buf + 1;
+- } else {
+- size = strlen(buf);
+- if (done + size < count) {
+- if (buf != kbuf)
+- break;
+- /* This can accept WRITE_BUFSIZE - 2 ('\n' + '\0') */
+- pr_warn("Line length is too long: Should be less than %d\n",
+- WRITE_BUFSIZE - 2);
+- ret = -EINVAL;
+- goto out;
+- }
+- }
+- done += size;
+-
+- /* Remove comments */
+- tmp = strchr(buf, '#');
+-
+- if (tmp)
+- *tmp = '\0';
+-
+- ret = traceprobe_command(buf, createfn);
+- if (ret)
+- goto out;
+- buf += size;
+-
+- } while (done < count);
+- }
+- ret = done;
+-
+-out:
+- kfree(kbuf);
+-
+- return ret;
+-}
+-
+ static int __set_print_fmt(struct trace_probe *tp, char *buf, int len,
+ bool is_return)
+ {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace_probe.h linux-4.14/kernel/trace/trace_probe.h
+--- linux-4.14.orig/kernel/trace/trace_probe.h 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace_probe.h 2018-09-05 11:05:07.000000000 +0200
+@@ -42,7 +42,6 @@
+
+ #define MAX_TRACE_ARGS 128
+ #define MAX_ARGSTR_LEN 63
+-#define MAX_EVENT_NAME_LEN 64
+ #define MAX_STRING_SIZE PATH_MAX
+
+ /* Reserved field names */
+@@ -356,12 +355,6 @@
+
+ extern int traceprobe_split_symbol_offset(char *symbol, long *offset);
+
+-extern ssize_t traceprobe_probes_write(struct file *file,
+- const char __user *buffer, size_t count, loff_t *ppos,
+- int (*createfn)(int, char**));
+-
+-extern int traceprobe_command(const char *buf, int (*createfn)(int, char**));
+-
+ /* Sum up total data length for dynamic arraies (strings) */
+ static nokprobe_inline int
+ __get_data_size(struct trace_probe *tp, struct pt_regs *regs)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/trace_uprobe.c linux-4.14/kernel/trace/trace_uprobe.c
+--- linux-4.14.orig/kernel/trace/trace_uprobe.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/trace/trace_uprobe.c 2018-09-05 11:05:07.000000000 +0200
+@@ -647,7 +647,7 @@
+ static ssize_t probes_write(struct file *file, const char __user *buffer,
+ size_t count, loff_t *ppos)
+ {
+- return traceprobe_probes_write(file, buffer, count, ppos, create_trace_uprobe);
++ return trace_parse_run_command(file, buffer, count, ppos, create_trace_uprobe);
+ }
+
+ static const struct file_operations uprobe_events_ops = {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/tracing_map.c linux-4.14/kernel/trace/tracing_map.c
+--- linux-4.14.orig/kernel/trace/tracing_map.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/trace/tracing_map.c 2018-09-05 11:05:07.000000000 +0200
+@@ -66,6 +66,73 @@
+ return (u64)atomic64_read(&elt->fields[i].sum);
+ }
+
++/**
++ * tracing_map_set_var - Assign a tracing_map_elt's variable field
++ * @elt: The tracing_map_elt
++ * @i: The index of the given variable associated with the tracing_map_elt
++ * @n: The value to assign
++ *
++ * Assign n to variable i associated with the specified tracing_map_elt
++ * instance. The index i is the index returned by the call to
++ * tracing_map_add_var() when the tracing map was set up.
++ */
++void tracing_map_set_var(struct tracing_map_elt *elt, unsigned int i, u64 n)
++{
++ atomic64_set(&elt->vars[i], n);
++ elt->var_set[i] = true;
++}
++
++/**
++ * tracing_map_var_set - Return whether or not a variable has been set
++ * @elt: The tracing_map_elt
++ * @i: The index of the given variable associated with the tracing_map_elt
++ *
++ * Return true if the variable has been set, false otherwise. The
++ * index i is the index returned by the call to tracing_map_add_var()
++ * when the tracing map was set up.
++ */
++bool tracing_map_var_set(struct tracing_map_elt *elt, unsigned int i)
++{
++ return elt->var_set[i];
++}
++
++/**
++ * tracing_map_read_var - Return the value of a tracing_map_elt's variable field
++ * @elt: The tracing_map_elt
++ * @i: The index of the given variable associated with the tracing_map_elt
++ *
++ * Retrieve the value of the variable i associated with the specified
++ * tracing_map_elt instance. The index i is the index returned by the
++ * call to tracing_map_add_var() when the tracing map was set
++ * up.
++ *
++ * Return: The variable value associated with field i for elt.
++ */
++u64 tracing_map_read_var(struct tracing_map_elt *elt, unsigned int i)
++{
++ return (u64)atomic64_read(&elt->vars[i]);
++}
++
++/**
++ * tracing_map_read_var_once - Return and reset a tracing_map_elt's variable field
++ * @elt: The tracing_map_elt
++ * @i: The index of the given variable associated with the tracing_map_elt
++ *
++ * Retrieve the value of the variable i associated with the specified
++ * tracing_map_elt instance, and reset the variable to the 'not set'
++ * state. The index i is the index returned by the call to
++ * tracing_map_add_var() when the tracing map was set up. The reset
++ * essentially makes the variable a read-once variable if it's only
++ * accessed using this function.
++ *
++ * Return: The variable value associated with field i for elt.
++ */
++u64 tracing_map_read_var_once(struct tracing_map_elt *elt, unsigned int i)
++{
++ elt->var_set[i] = false;
++ return (u64)atomic64_read(&elt->vars[i]);
++}
++
+ int tracing_map_cmp_string(void *val_a, void *val_b)
+ {
+ char *a = val_a;
+@@ -171,6 +238,28 @@
+ }
+
+ /**
++ * tracing_map_add_var - Add a field describing a tracing_map var
++ * @map: The tracing_map
++ *
++ * Add a var to the map and return the index identifying it in the map
++ * and associated tracing_map_elts. This is the index used, for
++ * instance, to update a var for a particular tracing_map_elt using
++ * tracing_map_update_var() or reading it via tracing_map_read_var().
++ *
++ * Return: The index identifying the var in the map and associated
++ * tracing_map_elts, or -EINVAL on error.
++ */
++int tracing_map_add_var(struct tracing_map *map)
++{
++ int ret = -EINVAL;
++
++ if (map->n_vars < TRACING_MAP_VARS_MAX)
++ ret = map->n_vars++;
++
++ return ret;
++}
++
++/**
+ * tracing_map_add_key_field - Add a field describing a tracing_map key
+ * @map: The tracing_map
+ * @offset: The offset within the key
+@@ -280,6 +369,11 @@
+ if (elt->fields[i].cmp_fn == tracing_map_cmp_atomic64)
+ atomic64_set(&elt->fields[i].sum, 0);
+
++ for (i = 0; i < elt->map->n_vars; i++) {
++ atomic64_set(&elt->vars[i], 0);
++ elt->var_set[i] = false;
++ }
++
+ if (elt->map->ops && elt->map->ops->elt_clear)
+ elt->map->ops->elt_clear(elt);
+ }
+@@ -306,6 +400,8 @@
+ if (elt->map->ops && elt->map->ops->elt_free)
+ elt->map->ops->elt_free(elt);
+ kfree(elt->fields);
++ kfree(elt->vars);
++ kfree(elt->var_set);
+ kfree(elt->key);
+ kfree(elt);
+ }
+@@ -333,6 +429,18 @@
+ goto free;
+ }
+
++ elt->vars = kcalloc(map->n_vars, sizeof(*elt->vars), GFP_KERNEL);
++ if (!elt->vars) {
++ err = -ENOMEM;
++ goto free;
++ }
++
++ elt->var_set = kcalloc(map->n_vars, sizeof(*elt->var_set), GFP_KERNEL);
++ if (!elt->var_set) {
++ err = -ENOMEM;
++ goto free;
++ }
++
+ tracing_map_elt_init_fields(elt);
+
+ if (map->ops && map->ops->elt_alloc) {
+@@ -414,7 +522,9 @@
+ __tracing_map_insert(struct tracing_map *map, void *key, bool lookup_only)
+ {
+ u32 idx, key_hash, test_key;
++ int dup_try = 0;
+ struct tracing_map_entry *entry;
++ struct tracing_map_elt *val;
+
+ key_hash = jhash(key, map->key_size, 0);
+ if (key_hash == 0)
+@@ -426,10 +536,33 @@
+ entry = TRACING_MAP_ENTRY(map->map, idx);
+ test_key = entry->key;
+
+- if (test_key && test_key == key_hash && entry->val &&
+- keys_match(key, entry->val->key, map->key_size)) {
+- atomic64_inc(&map->hits);
+- return entry->val;
++ if (test_key && test_key == key_hash) {
++ val = READ_ONCE(entry->val);
++ if (val &&
++ keys_match(key, val->key, map->key_size)) {
++ if (!lookup_only)
++ atomic64_inc(&map->hits);
++ return val;
++ } else if (unlikely(!val)) {
++ /*
++ * The key is present. But val (the pointer to the elt
++ * struct) is still NULL, which means some other
++ * thread is in the process of inserting an
++ * element.
++ *
++ * On top of that, its key_hash is the same as the
++ * one being inserted right now. So, it's
++ * possible that the element has the same
++ * key as well.
++ */
++
++ dup_try++;
++ if (dup_try > map->map_size) {
++ atomic64_inc(&map->drops);
++ break;
++ }
++ continue;
++ }
+ }
+
+ if (!test_key) {
+@@ -451,6 +584,13 @@
+ atomic64_inc(&map->hits);
+
+ return entry->val;
++ } else {
++ /*
++ * cmpxchg() failed. Loop around once
++ * more to check what key was inserted.
++ */
++ dup_try++;
++ continue;
+ }
+ }
+
+@@ -815,67 +955,15 @@
+ return sort_entry;
+ }
+
+-static struct tracing_map_elt *copy_elt(struct tracing_map_elt *elt)
+-{
+- struct tracing_map_elt *dup_elt;
+- unsigned int i;
+-
+- dup_elt = tracing_map_elt_alloc(elt->map);
+- if (IS_ERR(dup_elt))
+- return NULL;
+-
+- if (elt->map->ops && elt->map->ops->elt_copy)
+- elt->map->ops->elt_copy(dup_elt, elt);
+-
+- dup_elt->private_data = elt->private_data;
+- memcpy(dup_elt->key, elt->key, elt->map->key_size);
+-
+- for (i = 0; i < elt->map->n_fields; i++) {
+- atomic64_set(&dup_elt->fields[i].sum,
+- atomic64_read(&elt->fields[i].sum));
+- dup_elt->fields[i].cmp_fn = elt->fields[i].cmp_fn;
+- }
+-
+- return dup_elt;
+-}
+-
+-static int merge_dup(struct tracing_map_sort_entry **sort_entries,
+- unsigned int target, unsigned int dup)
+-{
+- struct tracing_map_elt *target_elt, *elt;
+- bool first_dup = (target - dup) == 1;
+- int i;
+-
+- if (first_dup) {
+- elt = sort_entries[target]->elt;
+- target_elt = copy_elt(elt);
+- if (!target_elt)
+- return -ENOMEM;
+- sort_entries[target]->elt = target_elt;
+- sort_entries[target]->elt_copied = true;
+- } else
+- target_elt = sort_entries[target]->elt;
+-
+- elt = sort_entries[dup]->elt;
+-
+- for (i = 0; i < elt->map->n_fields; i++)
+- atomic64_add(atomic64_read(&elt->fields[i].sum),
+- &target_elt->fields[i].sum);
+-
+- sort_entries[dup]->dup = true;
+-
+- return 0;
+-}
+-
+-static int merge_dups(struct tracing_map_sort_entry **sort_entries,
++static void detect_dups(struct tracing_map_sort_entry **sort_entries,
+ int n_entries, unsigned int key_size)
+ {
+ unsigned int dups = 0, total_dups = 0;
+- int err, i, j;
++ int i;
+ void *key;
+
+ if (n_entries < 2)
+- return total_dups;
++ return;
- if (entry->preempt_count)
- trace_seq_printf(s, "%x", entry->preempt_count);
- else
- trace_seq_putc(s, '.');
+ sort(sort_entries, n_entries, sizeof(struct tracing_map_sort_entry *),
+ (int (*)(const void *, const void *))cmp_entries_dup, NULL);
+@@ -884,30 +972,14 @@
+ for (i = 1; i < n_entries; i++) {
+ if (!memcmp(sort_entries[i]->key, key, key_size)) {
+ dups++; total_dups++;
+- err = merge_dup(sort_entries, i - dups, i);
+- if (err)
+- return err;
+ continue;
+ }
+ key = sort_entries[i]->key;
+ dups = 0;
+ }
-+ if (entry->preempt_lazy_count)
-+ trace_seq_printf(s, "%x", entry->preempt_lazy_count);
-+ else
-+ trace_seq_putc(s, '.');
-+
-+ if (entry->migrate_disable)
-+ trace_seq_printf(s, "%x", entry->migrate_disable);
-+ else
-+ trace_seq_putc(s, '.');
-+
- return !trace_seq_has_overflowed(s);
+- if (!total_dups)
+- return total_dups;
+-
+- for (i = 0, j = 0; i < n_entries; i++) {
+- if (!sort_entries[i]->dup) {
+- sort_entries[j] = sort_entries[i];
+- if (j++ != i)
+- sort_entries[i] = NULL;
+- } else {
+- destroy_sort_entry(sort_entries[i]);
+- sort_entries[i] = NULL;
+- }
+- }
+-
+- return total_dups;
++ WARN_ONCE(total_dups > 0,
++ "Duplicates detected: %d\n", total_dups);
}
-diff --git a/kernel/user.c b/kernel/user.c
-index b069ccbfb0b0..1a2e88e98b5e 100644
---- a/kernel/user.c
-+++ b/kernel/user.c
-@@ -161,11 +161,11 @@ void free_uid(struct user_struct *up)
+ static bool is_key(struct tracing_map *map, unsigned int field_idx)
+@@ -1033,10 +1105,7 @@
+ return 1;
+ }
+
+- ret = merge_dups(entries, n_entries, map->key_size);
+- if (ret < 0)
+- goto free;
+- n_entries -= ret;
++ detect_dups(entries, n_entries, map->key_size);
+
+ if (is_key(map, sort_keys[0].field_idx))
+ cmp_entries_fn = cmp_entries_key;
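
The kernel-doc hunks above introduce the per-element variable API (tracing_map_add_var(), tracing_map_set_var(), tracing_map_var_set(), tracing_map_read_var_once()). A minimal usage sketch follows; it is illustrative only and not part of the patch, and it assumes a map created elsewhere with the existing tracing_map setup code and an elt obtained via tracing_map_insert().

/*
 * Illustrative sketch of the variable API added above; not part of the patch.
 * Assumes "tracing_map.h" and a map/elt managed by existing tracing_map code.
 */
static int ts_var;	/* index handed out by tracing_map_add_var() */

static int example_map_setup(struct tracing_map *map)
{
	ts_var = tracing_map_add_var(map);	/* one of TRACING_MAP_VARS_MAX slots */
	if (ts_var < 0)
		return ts_var;			/* -EINVAL once all slots are used */
	return 0;
}

static void example_record(struct tracing_map_elt *elt, u64 timestamp)
{
	tracing_map_set_var(elt, ts_var, timestamp);	/* also marks the slot as set */
}

static u64 example_consume(struct tracing_map_elt *elt)
{
	if (!tracing_map_var_set(elt, ts_var))
		return 0;				/* nothing recorded since the last read */
	return tracing_map_read_var_once(elt, ts_var);	/* read and clear the 'set' state */
}
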
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/trace/tracing_map.h linux-4.14/kernel/trace/tracing_map.h
+--- linux-4.14.orig/kernel/trace/tracing_map.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/trace/tracing_map.h 2018-09-05 11:05:07.000000000 +0200
+@@ -6,10 +6,11 @@
+ #define TRACING_MAP_BITS_MAX 17
+ #define TRACING_MAP_BITS_MIN 7
+
+-#define TRACING_MAP_KEYS_MAX 2
++#define TRACING_MAP_KEYS_MAX 3
+ #define TRACING_MAP_VALS_MAX 3
+ #define TRACING_MAP_FIELDS_MAX (TRACING_MAP_KEYS_MAX + \
+ TRACING_MAP_VALS_MAX)
++#define TRACING_MAP_VARS_MAX 16
+ #define TRACING_MAP_SORT_KEYS_MAX 2
+
+ typedef int (*tracing_map_cmp_fn_t) (void *val_a, void *val_b);
+@@ -137,6 +138,8 @@
+ struct tracing_map_elt {
+ struct tracing_map *map;
+ struct tracing_map_field *fields;
++ atomic64_t *vars;
++ bool *var_set;
+ void *key;
+ void *private_data;
+ };
+@@ -192,6 +195,7 @@
+ int key_idx[TRACING_MAP_KEYS_MAX];
+ unsigned int n_keys;
+ struct tracing_map_sort_key sort_key;
++ unsigned int n_vars;
+ atomic64_t hits;
+ atomic64_t drops;
+ };
+@@ -215,11 +219,6 @@
+ * Element allocation occurs before tracing begins, when the
+ * tracing_map_init() call is made by client code.
+ *
+- * @elt_copy: At certain points in the lifetime of an element, it may
+- * need to be copied. The copy should include a copy of the
+- * client-allocated data, which can be copied into the 'to'
+- * element from the 'from' element.
+- *
+ * @elt_free: When a tracing_map_elt is freed, this function is called
+ * and allows client-allocated per-element data to be freed.
+ *
+@@ -233,8 +232,6 @@
+ */
+ struct tracing_map_ops {
+ int (*elt_alloc)(struct tracing_map_elt *elt);
+- void (*elt_copy)(struct tracing_map_elt *to,
+- struct tracing_map_elt *from);
+ void (*elt_free)(struct tracing_map_elt *elt);
+ void (*elt_clear)(struct tracing_map_elt *elt);
+ void (*elt_init)(struct tracing_map_elt *elt);
+@@ -248,6 +245,7 @@
+ extern int tracing_map_init(struct tracing_map *map);
+
+ extern int tracing_map_add_sum_field(struct tracing_map *map);
++extern int tracing_map_add_var(struct tracing_map *map);
+ extern int tracing_map_add_key_field(struct tracing_map *map,
+ unsigned int offset,
+ tracing_map_cmp_fn_t cmp_fn);
+@@ -267,7 +265,13 @@
+
+ extern void tracing_map_update_sum(struct tracing_map_elt *elt,
+ unsigned int i, u64 n);
++extern void tracing_map_set_var(struct tracing_map_elt *elt,
++ unsigned int i, u64 n);
++extern bool tracing_map_var_set(struct tracing_map_elt *elt, unsigned int i);
+ extern u64 tracing_map_read_sum(struct tracing_map_elt *elt, unsigned int i);
++extern u64 tracing_map_read_var(struct tracing_map_elt *elt, unsigned int i);
++extern u64 tracing_map_read_var_once(struct tracing_map_elt *elt, unsigned int i);
++
+ extern void tracing_map_set_field_descr(struct tracing_map *map,
+ unsigned int i,
+ unsigned int key_offset,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/user.c linux-4.14/kernel/user.c
+--- linux-4.14.orig/kernel/user.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/user.c 2018-09-05 11:05:07.000000000 +0200
+@@ -162,11 +162,11 @@
if (!up)
return;
}
struct user_struct *alloc_uid(kuid_t uid)
-diff --git a/kernel/watchdog.c b/kernel/watchdog.c
-index 6d1020c03d41..70c6a2f79f7e 100644
---- a/kernel/watchdog.c
-+++ b/kernel/watchdog.c
-@@ -315,6 +315,8 @@ static int is_softlockup(unsigned long touch_ts)
-
- #ifdef CONFIG_HARDLOCKUP_DETECTOR
-
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/watchdog.c linux-4.14/kernel/watchdog.c
+--- linux-4.14.orig/kernel/watchdog.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/watchdog.c 2018-09-05 11:05:07.000000000 +0200
+@@ -462,7 +462,7 @@
+ * Start the timer first to prevent the NMI watchdog triggering
+ * before the timer has a chance to fire.
+ */
+- hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ hrtimer->function = watchdog_timer_fn;
+ hrtimer_start(hrtimer, ns_to_ktime(sample_period),
+ HRTIMER_MODE_REL_PINNED);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/watchdog_hld.c linux-4.14/kernel/watchdog_hld.c
+--- linux-4.14.orig/kernel/watchdog_hld.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/watchdog_hld.c 2018-09-05 11:05:07.000000000 +0200
+@@ -24,6 +24,8 @@
+ static DEFINE_PER_CPU(bool, watchdog_nmi_touch);
+ static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
+ static DEFINE_PER_CPU(struct perf_event *, dead_event);
+static DEFINE_RAW_SPINLOCK(watchdog_output_lock);
+
- static struct perf_event_attr wd_hw_attr = {
- .type = PERF_TYPE_HARDWARE,
- .config = PERF_COUNT_HW_CPU_CYCLES,
-@@ -348,6 +350,13 @@ static void watchdog_overflow_callback(struct perf_event *event,
+ static struct cpumask dead_events_mask;
+
+ static unsigned long hardlockup_allcpu_dumped;
+@@ -134,6 +136,13 @@
/* only print hardlockups once */
if (__this_cpu_read(hard_watchdog_warn) == true)
return;
pr_emerg("Watchdog detected hard LOCKUP on cpu %d", this_cpu);
print_modules();
-@@ -365,6 +374,7 @@ static void watchdog_overflow_callback(struct perf_event *event,
+@@ -151,6 +160,7 @@
!test_and_set_bit(0, &hardlockup_allcpu_dumped))
trigger_allbutself_cpu_backtrace();
if (hardlockup_panic)
nmi_panic(regs, "Hard LOCKUP");
-@@ -512,6 +522,7 @@ static void watchdog_enable(unsigned int cpu)
- /* kick off the timer for the hardlockup detector */
- hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
- hrtimer->function = watchdog_timer_fn;
-+ hrtimer->irqsafe = 1;
-
- /* Enable the perf event */
- watchdog_nmi_enable(cpu);
-diff --git a/kernel/workqueue.c b/kernel/workqueue.c
-index 479d840db286..24eba6620a45 100644
---- a/kernel/workqueue.c
-+++ b/kernel/workqueue.c
-@@ -48,6 +48,8 @@
- #include <linux/nodemask.h>
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/workqueue.c linux-4.14/kernel/workqueue.c
+--- linux-4.14.orig/kernel/workqueue.c 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/kernel/workqueue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -49,6 +49,8 @@
#include <linux/moduleparam.h>
#include <linux/uaccess.h>
+ #include <linux/nmi.h>
+#include <linux/locallock.h>
+#include <linux/delay.h>
#include "workqueue_internal.h"
-@@ -121,11 +123,16 @@ enum {
+@@ -123,11 +125,16 @@
* cpu or grabbing pool->lock is enough for read access. If
* POOL_DISASSOCIATED is set, it's identical to L.
*
*
* PW: wq_pool_mutex and wq->mutex protected for writes. Either for reads.
*
-@@ -134,7 +141,7 @@ enum {
+@@ -136,7 +143,7 @@
*
* WQ: wq->mutex protected.
*
*
* MD: wq_mayday_lock protected.
*/
-@@ -185,7 +192,7 @@ struct worker_pool {
+@@ -186,7 +193,7 @@
atomic_t nr_running ____cacheline_aligned_in_smp;
/*
* from get_work_pool().
*/
struct rcu_head rcu;
-@@ -214,7 +221,7 @@ struct pool_workqueue {
+@@ -215,7 +222,7 @@
/*
* Release of unbound pwq is punted to system_wq. See put_pwq()
* and pwq_unbound_release_workfn() for details. pool_workqueue
* determined without grabbing wq->mutex.
*/
struct work_struct unbound_release_work;
-@@ -348,6 +355,8 @@ EXPORT_SYMBOL_GPL(system_power_efficient_wq);
+@@ -352,6 +359,8 @@
struct workqueue_struct *system_freezable_power_efficient_wq __read_mostly;
EXPORT_SYMBOL_GPL(system_freezable_power_efficient_wq);
static int worker_thread(void *__worker);
static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
-@@ -355,20 +364,20 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
+@@ -359,20 +368,20 @@
#include <trace/events/workqueue.h>
#define assert_rcu_or_pool_mutex() \
#define for_each_cpu_worker_pool(pool, cpu) \
for ((pool) = &per_cpu(cpu_worker_pools, cpu)[0]; \
-@@ -380,7 +389,7 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
+@@ -384,7 +393,7 @@
* @pool: iteration cursor
* @pi: integer used for iteration
*
* locked. If the pool needs to be used beyond the locking in effect, the
* caller is responsible for guaranteeing that the pool stays online.
*
-@@ -412,7 +421,7 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
+@@ -416,7 +425,7 @@
* @pwq: iteration cursor
* @wq: the target workqueue
*
* If the pwq needs to be used beyond the locking in effect, the caller is
* responsible for guaranteeing that the pwq stays online.
*
-@@ -424,6 +433,31 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
+@@ -428,6 +437,31 @@
if (({ assert_rcu_or_wq_mutex(wq); false; })) { } \
else
#ifdef CONFIG_DEBUG_OBJECTS_WORK
static struct debug_obj_descr work_debug_descr;
-@@ -548,7 +582,7 @@ static int worker_pool_assign_id(struct worker_pool *pool)
+@@ -552,7 +586,7 @@
* @wq: the target workqueue
* @node: the node ID
*
* read locked.
* If the pwq needs to be used beyond the locking in effect, the caller is
* responsible for guaranteeing that the pwq stays online.
-@@ -692,8 +726,8 @@ static struct pool_workqueue *get_work_pwq(struct work_struct *work)
+@@ -696,8 +730,8 @@
* @work: the work item of interest
*
* Pools are created and destroyed under wq_pool_mutex, and allows read
*
* All fields of the returned pool are accessible as long as the above
* mentioned locking is in effect. If the returned pool needs to be used
-@@ -830,50 +864,45 @@ static struct worker *first_idle_worker(struct worker_pool *pool)
+@@ -834,50 +868,45 @@
*/
static void wake_up_worker(struct worker_pool *pool)
{
+ * wq_worker_running - a worker is running again
* @task: task waking up
- * @cpu: CPU @task is waking up to
- *
+- *
- * This function is called during try_to_wake_up() when a worker is
- * being awoken.
-- *
+ *
- * CONTEXT:
- * spin_lock_irq(rq->lock)
+ * This function is called when a worker returns from schedule()
struct worker_pool *pool;
/*
-@@ -882,29 +911,26 @@ struct task_struct *wq_worker_sleeping(struct task_struct *task)
+@@ -886,29 +915,26 @@
* checking NOT_RUNNING.
*/
if (worker->flags & WORKER_NOT_RUNNING)
}
/**
-@@ -1098,12 +1124,14 @@ static void put_pwq_unlocked(struct pool_workqueue *pwq)
+@@ -1102,12 +1128,14 @@
{
if (pwq) {
/*
}
}
-@@ -1207,7 +1235,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
+@@ -1211,7 +1239,7 @@
struct worker_pool *pool;
struct pool_workqueue *pwq;
/* try to steal the timer if it exists */
if (is_dwork) {
-@@ -1226,6 +1254,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
+@@ -1230,6 +1258,7 @@
if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work)))
return 0;
/*
* The queueing is in progress, or it is already queued. Try to
* steal it from ->worklist without clearing WORK_STRUCT_PENDING.
-@@ -1264,14 +1293,16 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
+@@ -1268,14 +1297,16 @@
set_work_pool_and_keep_pending(work, pool->id);
spin_unlock(&pool->lock);
return -EAGAIN;
}
-@@ -1373,7 +1404,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
+@@ -1377,7 +1408,7 @@
* queued or lose PENDING. Grabbing PENDING and queueing should
* happen with IRQ disabled.
*/
debug_work_activate(work);
-@@ -1381,6 +1412,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
+@@ -1385,6 +1416,7 @@
if (unlikely(wq->flags & __WQ_DRAINING) &&
WARN_ON_ONCE(!is_chained_work(wq)))
return;
retry:
if (req_cpu == WORK_CPU_UNBOUND)
cpu = wq_select_unbound_cpu(raw_smp_processor_id());
-@@ -1437,10 +1469,8 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
+@@ -1441,10 +1473,8 @@
/* pwq determined, queue */
trace_workqueue_queue_work(req_cpu, pwq, work);
pwq->nr_in_flight[pwq->work_color]++;
work_flags = work_color_to_flags(pwq->work_color);
-@@ -1458,7 +1488,9 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
+@@ -1462,7 +1492,9 @@
insert_work(pwq, work, worklist, work_flags);
}
/**
-@@ -1478,14 +1510,14 @@ bool queue_work_on(int cpu, struct workqueue_struct *wq,
+@@ -1482,14 +1514,14 @@
bool ret = false;
unsigned long flags;
return ret;
}
EXPORT_SYMBOL(queue_work_on);
-@@ -1552,14 +1584,14 @@ bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
+@@ -1498,8 +1530,11 @@
+ {
+ struct delayed_work *dwork = (struct delayed_work *)__data;
+
++ /* XXX */
++ /* local_lock(pendingb_lock); */
+ /* should have been called from irqsafe timer with irq already off */
+ __queue_work(dwork->cpu, dwork->wq, &dwork->work);
++ /* local_unlock(pendingb_lock); */
+ }
+ EXPORT_SYMBOL(delayed_work_timer_fn);
+
+@@ -1555,14 +1590,14 @@
unsigned long flags;
/* read the comment in __queue_work() */
return ret;
}
EXPORT_SYMBOL(queue_delayed_work_on);
-@@ -1594,7 +1626,7 @@ bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
+@@ -1597,7 +1632,7 @@
if (likely(ret >= 0)) {
__queue_delayed_work(cpu, wq, dwork, delay);
}
/* -ENOENT from try_to_grab_pending() becomes %true */
-@@ -1627,7 +1659,9 @@ static void worker_enter_idle(struct worker *worker)
+@@ -1630,7 +1665,9 @@
worker->last_active = jiffies;
/* idle_list is LIFO */
if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT);
-@@ -1660,7 +1694,9 @@ static void worker_leave_idle(struct worker *worker)
+@@ -1663,7 +1700,9 @@
return;
worker_clr_flags(worker, WORKER_IDLE);
pool->nr_idle--;
}
static struct worker *alloc_worker(int node)
-@@ -1826,7 +1862,9 @@ static void destroy_worker(struct worker *worker)
+@@ -1829,7 +1868,9 @@
pool->nr_workers--;
pool->nr_idle--;
worker->flags |= WORKER_DIE;
wake_up_process(worker->task);
}
-@@ -2785,14 +2823,14 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
+@@ -2815,14 +2856,14 @@
might_sleep();
/* see the comment in try_to_grab_pending() with the same code */
pwq = get_work_pwq(work);
if (pwq) {
-@@ -2821,10 +2859,11 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
- else
- lock_map_acquire_read(&pwq->wq->lockdep_map);
- lock_map_release(&pwq->wq->lockdep_map);
+@@ -2853,10 +2894,11 @@
+ lock_map_acquire(&pwq->wq->lockdep_map);
+ lock_map_release(&pwq->wq->lockdep_map);
+ }
-
+ rcu_read_unlock();
return true;
return false;
}
-@@ -2911,7 +2950,7 @@ static bool __cancel_work_timer(struct work_struct *work, bool is_dwork)
+@@ -2946,7 +2988,7 @@
/* tell other tasks trying to grab @work to back off */
mark_work_canceling(work);
- local_irq_restore(flags);
+ local_unlock_irqrestore(pendingb_lock, flags);
- flush_work(work);
- clear_work_data(work);
-@@ -2966,10 +3005,10 @@ EXPORT_SYMBOL_GPL(cancel_work_sync);
+ /*
+ * This allows canceling during early boot. We know that @work
+@@ -3007,10 +3049,10 @@
*/
bool flush_delayed_work(struct delayed_work *dwork)
{
return flush_work(&dwork->work);
}
EXPORT_SYMBOL(flush_delayed_work);
-@@ -2987,7 +3026,7 @@ static bool __cancel_work(struct work_struct *work, bool is_dwork)
+@@ -3028,7 +3070,7 @@
return false;
set_work_pool_and_clear_pending(work, get_work_pool_id(work));
return ret;
}
-@@ -3245,7 +3284,7 @@ static void rcu_free_pool(struct rcu_head *rcu)
+@@ -3284,7 +3326,7 @@
* put_unbound_pool - put a worker_pool
* @pool: worker_pool to put
*
* safe manner. get_unbound_pool() calls this function on its failure path
* and this function should be able to release pools which went through,
* successfully or not, init_worker_pool().
-@@ -3299,8 +3338,8 @@ static void put_unbound_pool(struct worker_pool *pool)
+@@ -3338,8 +3380,8 @@
del_timer_sync(&pool->idle_timer);
del_timer_sync(&pool->mayday_timer);
}
/**
-@@ -3407,14 +3446,14 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
+@@ -3446,14 +3488,14 @@
put_unbound_pool(pool);
mutex_unlock(&wq_pool_mutex);
}
/**
-@@ -4064,7 +4103,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
+@@ -4128,7 +4170,7 @@
* The base ref is never dropped on per-cpu pwqs. Directly
* schedule RCU free.
*/
} else {
/*
* We're the sole accessor of @wq at this point. Directly
-@@ -4157,7 +4196,8 @@ bool workqueue_congested(int cpu, struct workqueue_struct *wq)
+@@ -4238,7 +4280,8 @@
struct pool_workqueue *pwq;
bool ret;
if (cpu == WORK_CPU_UNBOUND)
cpu = smp_processor_id();
-@@ -4168,7 +4208,8 @@ bool workqueue_congested(int cpu, struct workqueue_struct *wq)
+@@ -4249,7 +4292,8 @@
pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
ret = !list_empty(&pwq->delayed_works);
return ret;
}
-@@ -4194,15 +4235,15 @@ unsigned int work_busy(struct work_struct *work)
+@@ -4275,15 +4319,15 @@
if (work_pending(work))
ret |= WORK_BUSY_PENDING;
return ret;
}
-@@ -4391,7 +4432,7 @@ void show_workqueue_state(void)
+@@ -4472,7 +4516,7 @@
unsigned long flags;
int pi;
pr_info("Showing busy workqueues and worker pools:\n");
-@@ -4444,7 +4485,7 @@ void show_workqueue_state(void)
- spin_unlock_irqrestore(&pool->lock, flags);
+@@ -4537,7 +4581,7 @@
+ touch_nmi_watchdog();
}
- rcu_read_unlock_sched();
}
/*
-@@ -4782,16 +4823,16 @@ bool freeze_workqueues_busy(void)
+@@ -4898,16 +4942,16 @@
* nr_active is monotonically decreasing. It's safe
* to peek without lock.
*/
}
out_unlock:
mutex_unlock(&wq_pool_mutex);
-@@ -4981,7 +5022,8 @@ static ssize_t wq_pool_ids_show(struct device *dev,
+@@ -5097,7 +5141,8 @@
const char *delim = "";
int node, written = 0;
for_each_node(node) {
written += scnprintf(buf + written, PAGE_SIZE - written,
"%s%d:%d", delim, node,
-@@ -4989,7 +5031,8 @@ static ssize_t wq_pool_ids_show(struct device *dev,
+@@ -5105,7 +5150,8 @@
delim = " ";
}
written += scnprintf(buf + written, PAGE_SIZE - written, "\n");
return written;
}
-diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_internal.h
-index 8635417c587b..f000c4d6917e 100644
---- a/kernel/workqueue_internal.h
-+++ b/kernel/workqueue_internal.h
-@@ -43,6 +43,7 @@ struct worker {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/kernel/workqueue_internal.h linux-4.14/kernel/workqueue_internal.h
+--- linux-4.14.orig/kernel/workqueue_internal.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/kernel/workqueue_internal.h 2018-09-05 11:05:07.000000000 +0200
+@@ -45,6 +45,7 @@
unsigned long last_active; /* L: last active timestamp */
unsigned int flags; /* X: flags */
int id; /* I: worker id */
/*
* Opaque string set with work_set_desc(). Printed out with task
-@@ -68,7 +69,7 @@ static inline struct worker *current_wq_worker(void)
+@@ -70,7 +71,7 @@
* Scheduler hooks for concurrency managed workqueue. Only to be used from
* sched/core.c and workqueue.c.
*/
+void wq_worker_sleeping(struct task_struct *task);
#endif /* _KERNEL_WORKQUEUE_INTERNAL_H */
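
The comment above restricts these hooks to sched/core.c and workqueue.c. A rough sketch of how the scheduler side is expected to call the reworked pair, wq_worker_sleeping() and wq_worker_running(), is given below; the wrapper names are illustrative assumptions and not part of this hunk.

/*
 * Illustrative scheduler-side callers of the hooks declared above;
 * the example_* wrappers are assumptions, not part of the patch.
 */
static inline void example_sched_submit_work(struct task_struct *tsk)
{
	/* About to block in __schedule(): let the pool wake another worker. */
	if (tsk->flags & PF_WQ_WORKER)
		wq_worker_sleeping(tsk);
}

static inline void example_sched_update_worker(struct task_struct *tsk)
{
	/* Returned from schedule(): account the worker as running again. */
	if (tsk->flags & PF_WQ_WORKER)
		wq_worker_running(tsk);
}
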
-diff --git a/lib/Kconfig b/lib/Kconfig
-index 260a80e313b9..b06becb3f477 100644
---- a/lib/Kconfig
-+++ b/lib/Kconfig
-@@ -400,6 +400,7 @@ config CHECK_SIGNATURE
-
- config CPUMASK_OFFSTACK
- bool "Force CPU masks off stack" if DEBUG_PER_CPU_MAPS
-+ depends on !PREEMPT_RT_FULL
- help
- Use dynamic allocation for cpumask_var_t, instead of putting
- them on the stack. This is a bit more expensive, but avoids
-diff --git a/lib/debugobjects.c b/lib/debugobjects.c
-index 056052dc8e91..d8494e126de8 100644
---- a/lib/debugobjects.c
-+++ b/lib/debugobjects.c
-@@ -308,7 +308,10 @@ __debug_object_init(void *addr, struct debug_obj_descr *descr, int onstack)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/debugobjects.c linux-4.14/lib/debugobjects.c
+--- linux-4.14.orig/lib/debugobjects.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/lib/debugobjects.c 2018-09-05 11:05:07.000000000 +0200
+@@ -336,7 +336,10 @@
struct debug_obj *obj;
unsigned long flags;
db = get_bucket((unsigned long) addr);
-diff --git a/lib/idr.c b/lib/idr.c
-index 6098336df267..9decbe914595 100644
---- a/lib/idr.c
-+++ b/lib/idr.c
-@@ -30,6 +30,7 @@
- #include <linux/idr.h>
- #include <linux/spinlock.h>
- #include <linux/percpu.h>
-+#include <linux/locallock.h>
-
- #define MAX_IDR_SHIFT (sizeof(int) * 8 - 1)
- #define MAX_IDR_BIT (1U << MAX_IDR_SHIFT)
-@@ -45,6 +46,37 @@ static DEFINE_PER_CPU(struct idr_layer *, idr_preload_head);
- static DEFINE_PER_CPU(int, idr_preload_cnt);
- static DEFINE_SPINLOCK(simple_ida_lock);
-
-+#ifdef CONFIG_PREEMPT_RT_FULL
-+static DEFINE_LOCAL_IRQ_LOCK(idr_lock);
-+
-+static inline void idr_preload_lock(void)
-+{
-+ local_lock(idr_lock);
-+}
-+
-+static inline void idr_preload_unlock(void)
-+{
-+ local_unlock(idr_lock);
-+}
-+
-+void idr_preload_end(void)
-+{
-+ idr_preload_unlock();
-+}
-+EXPORT_SYMBOL(idr_preload_end);
-+#else
-+static inline void idr_preload_lock(void)
-+{
-+ preempt_disable();
-+}
-+
-+static inline void idr_preload_unlock(void)
-+{
-+ preempt_enable();
-+}
-+#endif
-+
-+
- /* the maximum ID which can be allocated given idr->layers */
- static int idr_max(int layers)
- {
-@@ -115,14 +147,14 @@ static struct idr_layer *idr_layer_alloc(gfp_t gfp_mask, struct idr *layer_idr)
- * context. See idr_preload() for details.
- */
- if (!in_interrupt()) {
-- preempt_disable();
-+ idr_preload_lock();
- new = __this_cpu_read(idr_preload_head);
- if (new) {
- __this_cpu_write(idr_preload_head, new->ary[0]);
- __this_cpu_dec(idr_preload_cnt);
- new->ary[0] = NULL;
- }
-- preempt_enable();
-+ idr_preload_unlock();
- if (new)
- return new;
- }
-@@ -366,7 +398,6 @@ static void idr_fill_slot(struct idr *idr, void *ptr, int id,
- idr_mark_full(pa, id);
- }
-
--
- /**
- * idr_preload - preload for idr_alloc()
- * @gfp_mask: allocation mask to use for preloading
-@@ -401,7 +432,7 @@ void idr_preload(gfp_t gfp_mask)
- WARN_ON_ONCE(in_interrupt());
- might_sleep_if(gfpflags_allow_blocking(gfp_mask));
-
-- preempt_disable();
-+ idr_preload_lock();
-
- /*
- * idr_alloc() is likely to succeed w/o full idr_layer buffer and
-@@ -413,9 +444,9 @@ void idr_preload(gfp_t gfp_mask)
- while (__this_cpu_read(idr_preload_cnt) < MAX_IDR_FREE) {
- struct idr_layer *new;
-
-- preempt_enable();
-+ idr_preload_unlock();
- new = kmem_cache_zalloc(idr_layer_cache, gfp_mask);
-- preempt_disable();
-+ idr_preload_lock();
- if (!new)
- break;
-
-diff --git a/lib/irq_poll.c b/lib/irq_poll.c
-index 1d6565e81030..b23a79761df7 100644
---- a/lib/irq_poll.c
-+++ b/lib/irq_poll.c
-@@ -36,6 +36,7 @@ void irq_poll_sched(struct irq_poll *iop)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/irq_poll.c linux-4.14/lib/irq_poll.c
+--- linux-4.14.orig/lib/irq_poll.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/lib/irq_poll.c 2018-09-05 11:05:07.000000000 +0200
+@@ -37,6 +37,7 @@
list_add_tail(&iop->list, this_cpu_ptr(&blk_cpu_iopoll));
__raise_softirq_irqoff(IRQ_POLL_SOFTIRQ);
local_irq_restore(flags);
}
EXPORT_SYMBOL(irq_poll_sched);
-@@ -71,6 +72,7 @@ void irq_poll_complete(struct irq_poll *iop)
+@@ -72,6 +73,7 @@
local_irq_save(flags);
__irq_poll_complete(iop);
local_irq_restore(flags);
}
EXPORT_SYMBOL(irq_poll_complete);
-@@ -95,6 +97,7 @@ static void __latent_entropy irq_poll_softirq(struct softirq_action *h)
+@@ -96,6 +98,7 @@
}
local_irq_enable();
/* Even though interrupts have been re-enabled, this
* access is safe because interrupts can only add new
-@@ -132,6 +135,7 @@ static void __latent_entropy irq_poll_softirq(struct softirq_action *h)
+@@ -133,6 +136,7 @@
__raise_softirq_irqoff(IRQ_POLL_SOFTIRQ);
local_irq_enable();
}
/**
-@@ -195,6 +199,7 @@ static int irq_poll_cpu_dead(unsigned int cpu)
+@@ -196,6 +200,7 @@
this_cpu_ptr(&blk_cpu_iopoll));
__raise_softirq_irqoff(IRQ_POLL_SOFTIRQ);
local_irq_enable();
return 0;
}
-diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
-index f3a217ea0388..4611b156ef79 100644
---- a/lib/locking-selftest.c
-+++ b/lib/locking-selftest.c
-@@ -590,6 +590,8 @@ GENERATE_TESTCASE(init_held_rsem)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/Kconfig linux-4.14/lib/Kconfig
+--- linux-4.14.orig/lib/Kconfig 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/lib/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -428,6 +428,7 @@
+
+ config CPUMASK_OFFSTACK
+ bool "Force CPU masks off stack" if DEBUG_PER_CPU_MAPS
++ depends on !PREEMPT_RT_FULL
+ help
+ Use dynamic allocation for cpumask_var_t, instead of putting
+ them on the stack. This is a bit more expensive, but avoids
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/Kconfig.debug linux-4.14/lib/Kconfig.debug
+--- linux-4.14.orig/lib/Kconfig.debug 2018-09-05 11:03:22.000000000 +0200
++++ linux-4.14/lib/Kconfig.debug 2018-09-05 11:05:07.000000000 +0200
+@@ -1197,7 +1197,7 @@
+
+ config DEBUG_LOCKING_API_SELFTESTS
+ bool "Locking API boot-time self-tests"
+- depends on DEBUG_KERNEL
++ depends on DEBUG_KERNEL && !PREEMPT_RT_FULL
+ help
+ Say Y here if you want the kernel to run a short self-test during
+ bootup. The self-test checks whether common types of locking bugs
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/locking-selftest.c linux-4.14/lib/locking-selftest.c
+--- linux-4.14.orig/lib/locking-selftest.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/lib/locking-selftest.c 2018-09-05 11:05:07.000000000 +0200
+@@ -742,6 +742,8 @@
#include "locking-selftest-spin-hardirq.h"
GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_hard_spin)
#include "locking-selftest-rlock-hardirq.h"
GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_hard_rlock)
-@@ -605,9 +607,12 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_soft_rlock)
+@@ -757,9 +759,12 @@
#include "locking-selftest-wlock-softirq.h"
GENERATE_PERMUTATIONS_2_EVENTS(irqsafe1_soft_wlock)
/*
* Enabling hardirqs with a softirq-safe lock held:
*/
-@@ -640,6 +645,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A_rlock)
+@@ -792,6 +797,8 @@
#undef E1
#undef E2
/*
* Enabling irqs with an irq-safe lock held:
*/
-@@ -663,6 +670,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2A_rlock)
+@@ -815,6 +822,8 @@
#include "locking-selftest-spin-hardirq.h"
GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_hard_spin)
#include "locking-selftest-rlock-hardirq.h"
GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_hard_rlock)
-@@ -678,6 +687,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_rlock)
+@@ -830,6 +839,8 @@
#include "locking-selftest-wlock-softirq.h"
GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_wlock)
#undef E1
#undef E2
-@@ -709,6 +720,8 @@ GENERATE_PERMUTATIONS_2_EVENTS(irqsafe2B_soft_wlock)
+@@ -861,6 +872,8 @@
#include "locking-selftest-spin-hardirq.h"
GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_hard_spin)
#include "locking-selftest-rlock-hardirq.h"
GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_hard_rlock)
-@@ -724,6 +737,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_rlock)
+@@ -876,6 +889,8 @@
#include "locking-selftest-wlock-softirq.h"
GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_wlock)
#undef E1
#undef E2
#undef E3
-@@ -757,6 +772,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe3_soft_wlock)
+@@ -909,6 +924,8 @@
#include "locking-selftest-spin-hardirq.h"
GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_hard_spin)
#include "locking-selftest-rlock-hardirq.h"
GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_hard_rlock)
-@@ -772,10 +789,14 @@ GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_soft_rlock)
+@@ -924,10 +941,14 @@
#include "locking-selftest-wlock-softirq.h"
GENERATE_PERMUTATIONS_3_EVENTS(irqsafe4_soft_wlock)
/*
* read-lock / write-lock irq inversion.
*
-@@ -838,6 +859,10 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_inversion_soft_wlock)
+@@ -990,6 +1011,10 @@
#undef E2
#undef E3
/*
* read-lock / write-lock recursion that is actually safe.
*/
-@@ -876,6 +901,8 @@ GENERATE_PERMUTATIONS_3_EVENTS(irq_read_recursion_soft)
+@@ -1028,6 +1053,8 @@
#undef E2
#undef E3
/*
* read-lock / write-lock recursion that is unsafe.
*/
-@@ -1858,6 +1885,7 @@ void locking_selftest(void)
+@@ -2057,6 +2084,7 @@
printk(" --------------------------------------------------------------------------\n");
/*
* irq-context testcases:
*/
-@@ -1870,6 +1898,28 @@ void locking_selftest(void)
+@@ -2069,6 +2097,28 @@
DO_TESTCASE_6x2("irq read-recursion", irq_read_recursion);
// DO_TESTCASE_6x2B("irq read-recursion #2", irq_read_recursion2);
ww_tests();
-diff --git a/lib/percpu_ida.c b/lib/percpu_ida.c
-index 6d40944960de..822a2c027e72 100644
---- a/lib/percpu_ida.c
-+++ b/lib/percpu_ida.c
-@@ -26,6 +26,9 @@
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/percpu_ida.c linux-4.14/lib/percpu_ida.c
+--- linux-4.14.orig/lib/percpu_ida.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/lib/percpu_ida.c 2018-09-05 11:05:07.000000000 +0200
+@@ -27,6 +27,9 @@
#include <linux/string.h>
#include <linux/spinlock.h>
#include <linux/percpu_ida.h>
struct percpu_ida_cpu {
/*
-@@ -148,13 +151,13 @@ int percpu_ida_alloc(struct percpu_ida *pool, int state)
+@@ -149,13 +152,13 @@
unsigned long flags;
int tag;
return tag;
}
-@@ -173,6 +176,7 @@ int percpu_ida_alloc(struct percpu_ida *pool, int state)
+@@ -174,6 +177,7 @@
if (!tags->nr_free)
alloc_global_tags(pool, tags);
if (!tags->nr_free)
steal_tags(pool, tags);
-@@ -184,7 +188,7 @@ int percpu_ida_alloc(struct percpu_ida *pool, int state)
+@@ -185,7 +189,7 @@
}
spin_unlock(&pool->lock);
if (tag >= 0 || state == TASK_RUNNING)
break;
-@@ -196,7 +200,7 @@ int percpu_ida_alloc(struct percpu_ida *pool, int state)
+@@ -197,7 +201,7 @@
schedule();
tags = this_cpu_ptr(pool->tag_cpu);
}
if (state != TASK_RUNNING)
-@@ -221,7 +225,7 @@ void percpu_ida_free(struct percpu_ida *pool, unsigned tag)
+@@ -222,7 +226,7 @@
BUG_ON(tag >= pool->nr_tags);
tags = this_cpu_ptr(pool->tag_cpu);
spin_lock(&tags->lock);
-@@ -253,7 +257,7 @@ void percpu_ida_free(struct percpu_ida *pool, unsigned tag)
+@@ -254,7 +258,7 @@
spin_unlock(&pool->lock);
}
}
EXPORT_SYMBOL_GPL(percpu_ida_free);
-@@ -345,7 +349,7 @@ int percpu_ida_for_each_free(struct percpu_ida *pool, percpu_ida_cb fn,
+@@ -346,7 +350,7 @@
struct percpu_ida_cpu *remote;
unsigned cpu, i, err = 0;
for_each_possible_cpu(cpu) {
remote = per_cpu_ptr(pool->tag_cpu, cpu);
spin_lock(&remote->lock);
-@@ -367,7 +371,7 @@ int percpu_ida_for_each_free(struct percpu_ida *pool, percpu_ida_cb fn,
+@@ -368,7 +372,7 @@
}
spin_unlock(&pool->lock);
out:
return err;
}
EXPORT_SYMBOL_GPL(percpu_ida_for_each_free);
-diff --git a/lib/radix-tree.c b/lib/radix-tree.c
-index 8e6d552c40dd..741da5a77fd5 100644
---- a/lib/radix-tree.c
-+++ b/lib/radix-tree.c
-@@ -36,7 +36,7 @@
- #include <linux/bitops.h>
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/radix-tree.c linux-4.14/lib/radix-tree.c
+--- linux-4.14.orig/lib/radix-tree.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/lib/radix-tree.c 2018-09-05 11:05:07.000000000 +0200
+@@ -37,7 +37,7 @@
#include <linux/rcupdate.h>
- #include <linux/preempt.h> /* in_interrupt() */
+ #include <linux/slab.h>
+ #include <linux/string.h>
-
+#include <linux/locallock.h>
/* Number of nodes in fully populated tree of given height */
static unsigned long height_to_maxnodes[RADIX_TREE_MAX_PATH + 1] __read_mostly;
-@@ -68,6 +68,7 @@ struct radix_tree_preload {
+@@ -86,6 +86,7 @@
struct radix_tree_node *nodes;
};
static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
+static DEFINE_LOCAL_IRQ_LOCK(radix_tree_preloads_lock);
- static inline void *node_to_entry(void *ptr)
+ static inline struct radix_tree_node *entry_to_node(void *ptr)
{
-@@ -290,13 +291,14 @@ radix_tree_node_alloc(struct radix_tree_root *root)
+@@ -404,12 +405,13 @@
* succeed in getting a node here (and never reach
* kmem_cache_alloc)
*/
+ rtp = &get_locked_var(radix_tree_preloads_lock, radix_tree_preloads);
if (rtp->nr) {
ret = rtp->nodes;
- rtp->nodes = ret->private_data;
- ret->private_data = NULL;
+ rtp->nodes = ret->parent;
rtp->nr--;
}
+ put_locked_var(radix_tree_preloads_lock, radix_tree_preloads);
/*
* Update the allocation stack trace as this is more useful
* for debugging.
-@@ -357,14 +359,14 @@ static int __radix_tree_preload(gfp_t gfp_mask, int nr)
+@@ -475,14 +477,14 @@
*/
gfp_mask &= ~__GFP_ACCOUNT;
+ local_lock(radix_tree_preloads_lock);
rtp = this_cpu_ptr(&radix_tree_preloads);
if (rtp->nr < nr) {
- node->private_data = rtp->nodes;
-@@ -406,7 +408,7 @@ int radix_tree_maybe_preload(gfp_t gfp_mask)
+ node->parent = rtp->nodes;
+@@ -524,7 +526,7 @@
if (gfpflags_allow_blocking(gfp_mask))
return __radix_tree_preload(gfp_mask, RADIX_TREE_PRELOAD_SIZE);
/* Preloading doesn't help anything with this gfp mask, skip it */
return 0;
}
EXPORT_SYMBOL(radix_tree_maybe_preload);
-@@ -422,7 +424,7 @@ int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order)
+@@ -562,7 +564,7 @@
/* Preloading doesn't help anything with this gfp mask, skip it */
if (!gfpflags_allow_blocking(gfp_mask)) {
return 0;
}
-@@ -456,6 +458,12 @@ int radix_tree_maybe_preload_order(gfp_t gfp_mask, int order)
+@@ -596,6 +598,12 @@
return __radix_tree_preload(gfp_mask, nr_nodes);
}
+}
+EXPORT_SYMBOL(radix_tree_preload_end);
+
- /*
- * The maximum index which can be stored in a radix tree
- */
-diff --git a/lib/scatterlist.c b/lib/scatterlist.c
-index 004fc70fc56a..ccc46992a517 100644
---- a/lib/scatterlist.c
-+++ b/lib/scatterlist.c
-@@ -620,7 +620,7 @@ void sg_miter_stop(struct sg_mapping_iter *miter)
+ static unsigned radix_tree_load_root(const struct radix_tree_root *root,
+ struct radix_tree_node **nodep, unsigned long *maxindex)
+ {
+@@ -2105,10 +2113,16 @@
+ void idr_preload(gfp_t gfp_mask)
+ {
+ if (__radix_tree_preload(gfp_mask, IDR_PRELOAD_SIZE))
+- preempt_disable();
++ local_lock(radix_tree_preloads_lock);
+ }
+ EXPORT_SYMBOL(idr_preload);
+
++void idr_preload_end(void)
++{
++ local_unlock(radix_tree_preloads_lock);
++}
++EXPORT_SYMBOL(idr_preload_end);
++
+ /**
+ * ida_pre_get - reserve resources for ida allocation
+ * @ida: ida handle
+@@ -2125,7 +2139,7 @@
+ * to return to the ida_pre_get() step.
+ */
+ if (!__radix_tree_preload(gfp, IDA_PRELOAD_SIZE))
+- preempt_enable();
++ local_unlock(radix_tree_preloads_lock);
+
+ if (!this_cpu_read(ida_bitmap)) {
+ struct ida_bitmap *bitmap = kmalloc(sizeof(*bitmap), gfp);
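
The hunks above keep idr_preload()/idr_preload_end() usable on PREEMPT_RT by replacing the preempt_disable()/preempt_enable() pair with radix_tree_preloads_lock. Callers keep the usual preload pattern; a minimal sketch with illustrative names (not part of the patch) is:

/* Illustrative caller of the preload API touched above; not part of the patch. */
static DEFINE_IDR(example_idr);
static DEFINE_SPINLOCK(example_idr_lock);

static int example_alloc_id(void *ptr)
{
	int id;

	idr_preload(GFP_KERNEL);	/* takes radix_tree_preloads_lock on RT */
	spin_lock(&example_idr_lock);
	id = idr_alloc(&example_idr, ptr, 0, 0, GFP_NOWAIT);	/* uses preloaded nodes */
	spin_unlock(&example_idr_lock);
	idr_preload_end();		/* drops the local lock instead of preempt_enable() */

	return id;
}
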
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/scatterlist.c linux-4.14/lib/scatterlist.c
+--- linux-4.14.orig/lib/scatterlist.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/lib/scatterlist.c 2018-09-05 11:05:07.000000000 +0200
+@@ -620,7 +620,7 @@
flush_kernel_dcache_page(miter->page);
if (miter->__flags & SG_MITER_ATOMIC) {
kunmap_atomic(miter->addr);
} else
kunmap(miter->page);
-@@ -664,7 +664,7 @@ size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
- if (!sg_miter_skip(&miter, skip))
- return false;
-
-- local_irq_save(flags);
-+ local_irq_save_nort(flags);
-
- while (sg_miter_next(&miter) && offset < buflen) {
- unsigned int len;
-@@ -681,7 +681,7 @@ size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
-
- sg_miter_stop(&miter);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/smp_processor_id.c linux-4.14/lib/smp_processor_id.c
+--- linux-4.14.orig/lib/smp_processor_id.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/lib/smp_processor_id.c 2018-09-05 11:05:07.000000000 +0200
+@@ -23,7 +23,7 @@
+ * Kernel threads bound to a single CPU can safely use
+ * smp_processor_id():
+ */
+- if (cpumask_equal(&current->cpus_allowed, cpumask_of(this_cpu)))
++ if (cpumask_equal(current->cpus_ptr, cpumask_of(this_cpu)))
+ goto out;
-- local_irq_restore(flags);
-+ local_irq_restore_nort(flags);
- return offset;
- }
- EXPORT_SYMBOL(sg_copy_buffer);
-diff --git a/lib/smp_processor_id.c b/lib/smp_processor_id.c
-index 1afec32de6f2..11fa431046a8 100644
---- a/lib/smp_processor_id.c
-+++ b/lib/smp_processor_id.c
-@@ -39,8 +39,9 @@ notrace static unsigned int check_preemption_disabled(const char *what1,
- if (!printk_ratelimit())
- goto out_enable;
-
-- printk(KERN_ERR "BUG: using %s%s() in preemptible [%08x] code: %s/%d\n",
-- what1, what2, preempt_count() - 1, current->comm, current->pid);
-+ printk(KERN_ERR "BUG: using %s%s() in preemptible [%08x %08x] code: %s/%d\n",
-+ what1, what2, preempt_count() - 1, __migrate_disabled(current),
-+ current->comm, current->pid);
-
- print_symbol("caller is %s\n", (long)__builtin_return_address(0));
- dump_stack();
-diff --git a/localversion-rt b/localversion-rt
-new file mode 100644
-index 000000000000..ad3da1bcab7e
---- /dev/null
-+++ b/localversion-rt
+ /*
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/lib/timerqueue.c linux-4.14/lib/timerqueue.c
+--- linux-4.14.orig/lib/timerqueue.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/lib/timerqueue.c 2018-09-05 11:05:07.000000000 +0200
+@@ -33,8 +33,9 @@
+ * @head: head of timerqueue
+ * @node: timer node to be added
+ *
+- * Adds the timer node to the timerqueue, sorted by the
+- * node's expires value.
++ * Adds the timer node to the timerqueue, sorted by the node's expires
++ * value. Returns true if the newly added timer is the first expiring timer in
++ * the queue.
+ */
+ bool timerqueue_add(struct timerqueue_head *head, struct timerqueue_node *node)
+ {
+@@ -70,7 +71,8 @@
+ * @head: head of timerqueue
+ * @node: timer node to be removed
+ *
+- * Removes the timer node from the timerqueue.
++ * Removes the timer node from the timerqueue. Returns true if the queue is
++ * not empty after the remove.
+ */
+ bool timerqueue_del(struct timerqueue_head *head, struct timerqueue_node *node)
+ {
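
The return values documented above let callers decide when the underlying hardware timer must be reprogrammed. A short sketch of such a caller, illustrative only and not part of the patch:

/* Sketch of a caller acting on the documented return values; not part of the patch. */
static struct timerqueue_head example_head;	/* initialised with timerqueue_init_head() */

static void example_enqueue(struct timerqueue_node *node, ktime_t expires)
{
	node->expires = expires;
	if (timerqueue_add(&example_head, node)) {
		/* New first-expiring timer: re-arm the hardware timer here. */
	}
}

static void example_dequeue(struct timerqueue_node *node)
{
	if (!timerqueue_del(&example_head, node)) {
		/* Queue ran empty: the hardware timer can be stopped. */
	}
}
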
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/localversion-rt linux-4.14/localversion-rt
+--- linux-4.14.orig/localversion-rt 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/localversion-rt 2018-09-05 11:05:07.000000000 +0200
@@ -0,0 +1 @@
-+-rt4
-diff --git a/mm/Kconfig b/mm/Kconfig
-index 86e3e0e74d20..77e5862a1ed2 100644
---- a/mm/Kconfig
-+++ b/mm/Kconfig
-@@ -410,7 +410,7 @@ config NOMMU_INITIAL_TRIM_EXCESS
-
- config TRANSPARENT_HUGEPAGE
- bool "Transparent Hugepage Support"
-- depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE
-+ depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE && !PREEMPT_RT_FULL
- select COMPACTION
- select RADIX_TREE_MULTIORDER
- help
-diff --git a/mm/backing-dev.c b/mm/backing-dev.c
-index 8fde443f36d7..d7a863b0ec20 100644
---- a/mm/backing-dev.c
-+++ b/mm/backing-dev.c
-@@ -457,9 +457,9 @@ void wb_congested_put(struct bdi_writeback_congested *congested)
++-rt40
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/backing-dev.c linux-4.14/mm/backing-dev.c
+--- linux-4.14.orig/mm/backing-dev.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/mm/backing-dev.c 2018-09-05 11:05:07.000000000 +0200
+@@ -470,9 +470,9 @@
{
unsigned long flags;
return;
}
-diff --git a/mm/compaction.c b/mm/compaction.c
-index 70e6bec46dc2..6678ed58b7c6 100644
---- a/mm/compaction.c
-+++ b/mm/compaction.c
-@@ -1593,10 +1593,12 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/compaction.c linux-4.14/mm/compaction.c
+--- linux-4.14.orig/mm/compaction.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/mm/compaction.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1634,10 +1634,12 @@
block_start_pfn(cc->migrate_pfn, cc->order);
if (cc->last_migrated_pfn < current_block_start) {
/* No more flushing until we migrate again */
cc->last_migrated_pfn = 0;
}
-diff --git a/mm/filemap.c b/mm/filemap.c
-index 779801092ef1..554e1b4d0fc5 100644
---- a/mm/filemap.c
-+++ b/mm/filemap.c
-@@ -159,9 +159,12 @@ static int page_cache_tree_insert(struct address_space *mapping,
- * node->private_list is protected by
- * mapping->tree_lock.
- */
-- if (!list_empty(&node->private_list))
-- list_lru_del(&workingset_shadow_nodes,
-+ if (!list_empty(&node->private_list)) {
-+ local_lock(workingset_shadow_lock);
-+ list_lru_del(&__workingset_shadow_nodes,
- &node->private_list);
-+ local_unlock(workingset_shadow_lock);
-+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/filemap.c linux-4.14/mm/filemap.c
+--- linux-4.14.orig/mm/filemap.c 2018-09-05 11:03:28.000000000 +0200
++++ linux-4.14/mm/filemap.c 2018-09-05 11:05:07.000000000 +0200
+@@ -110,6 +110,7 @@
+ * ->i_mmap_rwsem
+ * ->tasklist_lock (memory_failure, collect_procs_ao)
+ */
++DECLARE_LOCAL_IRQ_LOCK(shadow_nodes_lock);
+
+ static int page_cache_tree_insert(struct address_space *mapping,
+ struct page *page, void **shadowp)
+@@ -133,8 +134,10 @@
+ if (shadowp)
+ *shadowp = p;
}
++ local_lock(shadow_nodes_lock);
+ __radix_tree_replace(&mapping->page_tree, node, slot, page,
+- workingset_update_node, mapping);
++ __workingset_update_node, mapping);
++ local_unlock(shadow_nodes_lock);
+ mapping->nrpages++;
return 0;
}
-@@ -217,8 +220,10 @@ static void page_cache_tree_delete(struct address_space *mapping,
- if (!dax_mapping(mapping) && !workingset_node_pages(node) &&
- list_empty(&node->private_list)) {
- node->private_data = mapping;
-- list_lru_add(&workingset_shadow_nodes,
-- &node->private_list);
-+ local_lock(workingset_shadow_lock);
-+ list_lru_add(&__workingset_shadow_nodes,
-+ &node->private_list);
-+ local_unlock(workingset_shadow_lock);
- }
- }
+@@ -151,6 +154,7 @@
+ VM_BUG_ON_PAGE(PageTail(page), page);
+ VM_BUG_ON_PAGE(nr != 1 && shadow, page);
+
++ local_lock(shadow_nodes_lock);
+ for (i = 0; i < nr; i++) {
+ struct radix_tree_node *node;
+ void **slot;
+@@ -162,8 +166,9 @@
-diff --git a/mm/highmem.c b/mm/highmem.c
-index 50b4ca6787f0..77518a3b35a1 100644
---- a/mm/highmem.c
-+++ b/mm/highmem.c
-@@ -29,10 +29,11 @@
+ radix_tree_clear_tags(&mapping->page_tree, node, slot);
+ __radix_tree_replace(&mapping->page_tree, node, slot, shadow,
+- workingset_update_node, mapping);
++ __workingset_update_node, mapping);
+ }
++ local_unlock(shadow_nodes_lock);
+
+ if (shadow) {
+ mapping->nrexceptional += nr;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/highmem.c linux-4.14/mm/highmem.c
+--- linux-4.14.orig/mm/highmem.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/mm/highmem.c 2018-09-05 11:05:07.000000000 +0200
+@@ -30,10 +30,11 @@
#include <linux/kgdb.h>
#include <asm/tlbflush.h>
/*
* Virtual_count is not a pure "count".
-@@ -107,8 +108,9 @@ static inline wait_queue_head_t *get_pkmap_wait_queue_head(unsigned int color)
+@@ -108,8 +109,9 @@
unsigned long totalhigh_pages __read_mostly;
EXPORT_SYMBOL(totalhigh_pages);
unsigned int nr_free_highpages (void)
{
-diff --git a/mm/memcontrol.c b/mm/memcontrol.c
-index d536a9daa511..70ac8827ee8c 100644
---- a/mm/memcontrol.c
-+++ b/mm/memcontrol.c
-@@ -67,6 +67,7 @@
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/Kconfig linux-4.14/mm/Kconfig
+--- linux-4.14.orig/mm/Kconfig 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/mm/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -385,7 +385,7 @@
+
+ config TRANSPARENT_HUGEPAGE
+ bool "Transparent Hugepage Support"
+- depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE
++ depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE && !PREEMPT_RT_FULL
+ select COMPACTION
+ select RADIX_TREE_MULTIORDER
+ help
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/memcontrol.c linux-4.14/mm/memcontrol.c
+--- linux-4.14.orig/mm/memcontrol.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/mm/memcontrol.c 2018-09-05 11:05:07.000000000 +0200
+@@ -69,6 +69,7 @@
#include <net/sock.h>
#include <net/ip.h>
#include "slab.h"
+#include <linux/locallock.h>
- #include <asm/uaccess.h>
+ #include <linux/uaccess.h>
-@@ -92,6 +93,8 @@ int do_swap_account __read_mostly;
+@@ -94,6 +95,8 @@
#define do_swap_account 0
#endif
/* Whether legacy memory+swap accounting is active */
static bool do_memsw_account(void)
{
-@@ -1692,6 +1695,7 @@ struct memcg_stock_pcp {
- #define FLUSHING_CACHED_CHARGE 0
- };
- static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
-+static DEFINE_LOCAL_IRQ_LOCK(memcg_stock_ll);
- static DEFINE_MUTEX(percpu_charge_mutex);
-
- /**
-@@ -1714,7 +1718,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
- if (nr_pages > CHARGE_BATCH)
- return ret;
-
-- local_irq_save(flags);
-+ local_lock_irqsave(memcg_stock_ll, flags);
-
- stock = this_cpu_ptr(&memcg_stock);
- if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
-@@ -1722,7 +1726,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
- ret = true;
- }
-
-- local_irq_restore(flags);
-+ local_unlock_irqrestore(memcg_stock_ll, flags);
-
- return ret;
- }
-@@ -1749,13 +1753,13 @@ static void drain_local_stock(struct work_struct *dummy)
- struct memcg_stock_pcp *stock;
- unsigned long flags;
-
-- local_irq_save(flags);
-+ local_lock_irqsave(memcg_stock_ll, flags);
-
- stock = this_cpu_ptr(&memcg_stock);
- drain_stock(stock);
- clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
-
-- local_irq_restore(flags);
-+ local_unlock_irqrestore(memcg_stock_ll, flags);
- }
-
- /*
-@@ -1767,7 +1771,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
- struct memcg_stock_pcp *stock;
- unsigned long flags;
-
-- local_irq_save(flags);
-+ local_lock_irqsave(memcg_stock_ll, flags);
-
- stock = this_cpu_ptr(&memcg_stock);
- if (stock->cached != memcg) { /* reset if necessary */
-@@ -1776,7 +1780,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
- }
- stock->nr_pages += nr_pages;
-
-- local_irq_restore(flags);
-+ local_unlock_irqrestore(memcg_stock_ll, flags);
- }
-
- /*
-@@ -1792,7 +1796,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
- return;
- /* Notify other cpus that system-wide "drain" is running */
- get_online_cpus();
+@@ -1831,7 +1834,7 @@
+ * as well as workers from this path always operate on the local
+ * per-cpu data. CPU up doesn't touch memcg_stock at all.
+ */
- curcpu = get_cpu();
+ curcpu = get_cpu_light();
for_each_online_cpu(cpu) {
struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
struct mem_cgroup *memcg;
-@@ -1809,7 +1813,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
- schedule_work_on(cpu, &stock->work);
+@@ -1851,7 +1854,7 @@
}
+ css_put(&memcg->css);
}
- put_cpu();
+ put_cpu_light();
- put_online_cpus();
mutex_unlock(&percpu_charge_mutex);
}
-@@ -4548,12 +4552,12 @@ static int mem_cgroup_move_account(struct page *page,
+
+@@ -4624,12 +4627,12 @@
ret = 0;
out_unlock:
unlock_page(page);
out:
-@@ -5428,10 +5432,10 @@ void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
+@@ -5572,10 +5575,10 @@
commit_charge(page, memcg, lrucare);
if (do_memsw_account() && PageSwapCache(page)) {
swp_entry_t entry = { .val = page_private(page) };
-@@ -5487,14 +5491,14 @@ static void uncharge_batch(struct mem_cgroup *memcg, unsigned long pgpgout,
- memcg_oom_recover(memcg);
+@@ -5644,7 +5647,7 @@
+ memcg_oom_recover(ug->memcg);
}
- local_irq_save(flags);
+ local_lock_irqsave(event_lock, flags);
- __this_cpu_sub(memcg->stat->count[MEM_CGROUP_STAT_RSS], nr_anon);
- __this_cpu_sub(memcg->stat->count[MEM_CGROUP_STAT_CACHE], nr_file);
- __this_cpu_sub(memcg->stat->count[MEM_CGROUP_STAT_RSS_HUGE], nr_huge);
- __this_cpu_add(memcg->stat->events[MEM_CGROUP_EVENTS_PGPGOUT], pgpgout);
- __this_cpu_add(memcg->stat->nr_page_events, nr_pages);
- memcg_check_events(memcg, dummy_page);
+ __this_cpu_sub(ug->memcg->stat->count[MEMCG_RSS], ug->nr_anon);
+ __this_cpu_sub(ug->memcg->stat->count[MEMCG_CACHE], ug->nr_file);
+ __this_cpu_sub(ug->memcg->stat->count[MEMCG_RSS_HUGE], ug->nr_huge);
+@@ -5652,7 +5655,7 @@
+ __this_cpu_add(ug->memcg->stat->events[PGPGOUT], ug->pgpgout);
+ __this_cpu_add(ug->memcg->stat->nr_page_events, nr_pages);
+ memcg_check_events(ug->memcg, ug->dummy_page);
- local_irq_restore(flags);
+ local_unlock_irqrestore(event_lock, flags);
- if (!mem_cgroup_is_root(memcg))
- css_put_many(&memcg->css, nr_pages);
-@@ -5649,10 +5653,10 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
+ if (!mem_cgroup_is_root(ug->memcg))
+ css_put_many(&ug->memcg->css, nr_pages);
+@@ -5815,10 +5818,10 @@
commit_charge(newpage, memcg, false);
}
DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
-@@ -5832,6 +5836,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
- {
+@@ -6010,6 +6013,7 @@
struct mem_cgroup *memcg, *swap_memcg;
+ unsigned int nr_entries;
unsigned short oldid;
+ unsigned long flags;
VM_BUG_ON_PAGE(PageLRU(page), page);
VM_BUG_ON_PAGE(page_count(page), page);
-@@ -5872,12 +5877,16 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
+@@ -6055,13 +6059,17 @@
* important here to have the interrupts disabled because it is the
 * only synchronisation we have for updating the per-CPU variables.
*/
+#ifndef CONFIG_PREEMPT_RT_BASE
VM_BUG_ON(!irqs_disabled());
+#endif
- mem_cgroup_charge_statistics(memcg, page, false, -1);
+ mem_cgroup_charge_statistics(memcg, page, PageTransHuge(page),
+ -nr_entries);
memcg_check_events(memcg, page);
if (!mem_cgroup_is_root(memcg))
- css_put(&memcg->css);
+ css_put_many(&memcg->css, nr_entries);
+ local_unlock_irqrestore(event_lock, flags);
}
- /*
-diff --git a/mm/mmu_context.c b/mm/mmu_context.c
-index 6f4d27c5bb32..5cd25c745a8f 100644
---- a/mm/mmu_context.c
-+++ b/mm/mmu_context.c
-@@ -23,6 +23,7 @@ void use_mm(struct mm_struct *mm)
+ /**
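Editor's note: the memcontrol.c hunks above all apply one substitution. Per-CPU state that the earlier series protected with bare local_irq_save()/local_irq_restore() now sits behind a named local lock (memcg_stock_ll for the charge stock, event_lock for the uncharge/statistics path), and the get_cpu()/put_cpu() pair around drain_all_stock() becomes get_cpu_light()/put_cpu_light(). Below is a minimal sketch of that pattern as it lands in consume_stock(); it is kernel-style C, not a standalone buildable unit, and only restates calls visible in the hunks (the nr_pages decrement is the usual consume step, filled in here for readability).

#include <linux/locallock.h>

static DEFINE_LOCAL_IRQ_LOCK(memcg_stock_ll);

/* Sketch only: mirrors the consume_stock() hunk above. On !RT the local
 * lock collapses to IRQ disabling; on RT it is a per-CPU sleeping lock,
 * so the section stays per-CPU exclusive without hard IRQ-off time.
 */
static bool consume_stock_sketch(struct mem_cgroup *memcg,
				 unsigned int nr_pages)
{
	struct memcg_stock_pcp *stock;
	unsigned long flags;
	bool ret = false;

	if (nr_pages > CHARGE_BATCH)
		return ret;

	/* was: local_irq_save(flags); */
	local_lock_irqsave(memcg_stock_ll, flags);

	stock = this_cpu_ptr(&memcg_stock);
	if (memcg == stock->cached && stock->nr_pages >= nr_pages) {
		stock->nr_pages -= nr_pages;	/* consume from the cached stock */
		ret = true;
	}

	/* was: local_irq_restore(flags); */
	local_unlock_irqrestore(memcg_stock_ll, flags);
	return ret;
}

The same mechanical change covers drain_local_stock() and refill_stock(); drain_all_stock() only needs to stay on one CPU while it walks the per-CPU stocks, which get_cpu_light()/put_cpu_light() provides without disabling preemption on RT.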
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/mmu_context.c linux-4.14/mm/mmu_context.c
+--- linux-4.14.orig/mm/mmu_context.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/mm/mmu_context.c 2018-09-05 11:05:07.000000000 +0200
+@@ -25,6 +25,7 @@
struct task_struct *tsk = current;
task_lock(tsk);
+ preempt_disable_rt();
active_mm = tsk->active_mm;
if (active_mm != mm) {
- atomic_inc(&mm->mm_count);
-@@ -30,6 +31,7 @@ void use_mm(struct mm_struct *mm)
+ mmgrab(mm);
+@@ -32,6 +33,7 @@
}
tsk->mm = mm;
switch_mm(active_mm, mm, tsk);
task_unlock(tsk);
#ifdef finish_arch_post_lock_switch
finish_arch_post_lock_switch();
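Editor's note: the mmu_context.c hunk is small — use_mm() gains a preempt_disable_rt() before it inspects and switches tsk->active_mm. The sketch below shows where it sits; the matching enable is not visible in the truncated hunk, so its placement before task_unlock() is an assumption here, and the sketch is kernel-style C rather than a buildable unit.

/* Sketch only: use_mm() with the RT annotation from the hunk above.
 * preempt_enable_rt() placement is assumed (the hunk only shows the
 * disable side).
 */
void use_mm_sketch(struct mm_struct *mm)
{
	struct mm_struct *active_mm;
	struct task_struct *tsk = current;

	task_lock(tsk);
	preempt_disable_rt();		/* keep the active_mm switch atomic on RT */
	active_mm = tsk->active_mm;
	if (active_mm != mm) {
		mmgrab(mm);
		tsk->active_mm = mm;
	}
	tsk->mm = mm;
	switch_mm(active_mm, mm, tsk);
	preempt_enable_rt();		/* assumed pairing */
	task_unlock(tsk);
#ifdef finish_arch_post_lock_switch
	finish_arch_post_lock_switch();
#endif
}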
-diff --git a/mm/page_alloc.c b/mm/page_alloc.c
-index 34ada718ef47..21f0dc3fe2aa 100644
---- a/mm/page_alloc.c
-+++ b/mm/page_alloc.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/page_alloc.c linux-4.14/mm/page_alloc.c
+--- linux-4.14.orig/mm/page_alloc.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/mm/page_alloc.c 2018-09-05 11:05:07.000000000 +0200
@@ -61,6 +61,7 @@
- #include <linux/page_ext.h>
#include <linux/hugetlb.h>
#include <linux/sched/rt.h>
+ #include <linux/sched/mm.h>
+#include <linux/locallock.h>
#include <linux/page_owner.h>
#include <linux/kthread.h>
#include <linux/memcontrol.h>
-@@ -281,6 +282,18 @@ EXPORT_SYMBOL(nr_node_ids);
+@@ -286,6 +287,18 @@
EXPORT_SYMBOL(nr_online_nodes);
#endif
int page_group_by_mobility_disabled __read_mostly;
#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
-@@ -1072,7 +1085,7 @@ static bool bulkfree_pcp_prepare(struct page *page)
+@@ -1094,7 +1107,7 @@
#endif /* CONFIG_DEBUG_VM */
/*
* Assumes all pages on list are in same zone, and of same order.
* count is the number of pages to free.
*
-@@ -1083,19 +1096,58 @@ static bool bulkfree_pcp_prepare(struct page *page)
+@@ -1105,15 +1118,53 @@
* pinned" detection logic.
*/
static void free_pcppages_bulk(struct zone *zone, int count,
{
- int migratetype = 0;
- int batch_free = 0;
- unsigned long nr_scanned;
bool isolated_pageblocks;
+ unsigned long flags;
-+
-+ spin_lock_irqsave(&zone->lock, flags);
- spin_lock(&zone->lock);
++ spin_lock_irqsave(&zone->lock, flags);
isolated_pageblocks = has_isolate_pageblock(zone);
- nr_scanned = node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED);
- if (nr_scanned)
- __mod_node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED, -nr_scanned);
+ while (!list_empty(list)) {
+ struct page *page;
-+ int mt; /* migratetype of the to-be-freed page */
++ int mt; /* migratetype of the to-be-freed page */
+
+ page = list_first_entry(list, struct page, lru);
+ /* must delete as __free_one_page list manipulates */
while (count) {
struct page *page;
struct list_head *list;
-@@ -1111,7 +1163,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
+@@ -1129,7 +1180,7 @@
batch_free++;
if (++migratetype == MIGRATE_PCPTYPES)
migratetype = 0;
} while (list_empty(list));
/* This is the only non-empty list. Free them all. */
-@@ -1119,27 +1171,12 @@ static void free_pcppages_bulk(struct zone *zone, int count,
+@@ -1137,27 +1188,12 @@
batch_free = count;
do {
}
static void free_one_page(struct zone *zone,
-@@ -1148,7 +1185,9 @@ static void free_one_page(struct zone *zone,
+@@ -1165,13 +1201,15 @@
+ unsigned int order,
int migratetype)
{
- unsigned long nr_scanned;
- spin_lock(&zone->lock);
+ unsigned long flags;
+
+ spin_lock_irqsave(&zone->lock, flags);
- nr_scanned = node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED);
- if (nr_scanned)
- __mod_node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED, -nr_scanned);
-@@ -1158,7 +1197,7 @@ static void free_one_page(struct zone *zone,
+ if (unlikely(has_isolate_pageblock(zone) ||
+ is_migrate_isolate(migratetype))) {
migratetype = get_pfnblock_migratetype(page, pfn);
}
__free_one_page(page, pfn, zone, order, migratetype);
}
static void __meminit __init_single_page(struct page *page, unsigned long pfn,
-@@ -1244,10 +1283,10 @@ static void __free_pages_ok(struct page *page, unsigned int order)
+@@ -1257,10 +1295,10 @@
return;
migratetype = get_pfnblock_migratetype(page, pfn);
}
static void __init __free_pages_boot_core(struct page *page, unsigned int order)
-@@ -2246,16 +2285,18 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
+@@ -2378,16 +2416,18 @@
void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
{
unsigned long flags;
}
#endif
-@@ -2271,16 +2312,21 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
+@@ -2403,16 +2443,21 @@
unsigned long flags;
struct per_cpu_pageset *pset;
struct per_cpu_pages *pcp;
}
/*
-@@ -2366,8 +2412,17 @@ void drain_all_pages(struct zone *zone)
+@@ -2447,6 +2492,7 @@
+ drain_pages(cpu);
+ }
+
++#ifndef CONFIG_PREEMPT_RT_BASE
+ static void drain_local_pages_wq(struct work_struct *work)
+ {
+ /*
+@@ -2460,6 +2506,7 @@
+ drain_local_pages(NULL);
+ preempt_enable();
+ }
++#endif
+
+ /*
+ * Spill all the per-cpu pages from all CPUs back into the buddy allocator.
+@@ -2526,7 +2573,14 @@
else
cpumask_clear_cpu(cpu, &cpus_with_pcps);
}
-+#ifndef CONFIG_PREEMPT_RT_BASE
- on_each_cpu_mask(&cpus_with_pcps, (smp_call_func_t) drain_local_pages,
- zone, 1);
-+#else
+-
++#ifdef CONFIG_PREEMPT_RT_BASE
+ for_each_cpu(cpu, &cpus_with_pcps) {
+ if (zone)
+ drain_pages_zone(cpu, zone);
+ else
+ drain_pages(cpu);
+ }
++#else
+ for_each_cpu(cpu, &cpus_with_pcps) {
+ struct work_struct *work = per_cpu_ptr(&pcpu_drain, cpu);
+ INIT_WORK(work, drain_local_pages_wq);
+@@ -2534,6 +2588,7 @@
+ }
+ for_each_cpu(cpu, &cpus_with_pcps)
+ flush_work(per_cpu_ptr(&pcpu_drain, cpu));
+#endif
- }
- #ifdef CONFIG_HIBERNATION
-@@ -2427,7 +2482,7 @@ void free_hot_cold_page(struct page *page, bool cold)
+ mutex_unlock(&pcpu_drain_mutex);
+ }
+@@ -2610,7 +2665,7 @@
migratetype = get_pfnblock_migratetype(page, pfn);
set_pcppage_migratetype(page, migratetype);
__count_vm_event(PGFREE);
/*
-@@ -2453,12 +2508,17 @@ void free_hot_cold_page(struct page *page, bool cold)
+@@ -2636,12 +2691,17 @@
pcp->count++;
if (pcp->count >= pcp->high) {
unsigned long batch = READ_ONCE(pcp->batch);
}
/*
-@@ -2600,7 +2660,7 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
- struct per_cpu_pages *pcp;
- struct list_head *list;
-
-- local_irq_save(flags);
-+ local_lock_irqsave(pa_lock, flags);
- do {
- pcp = &this_cpu_ptr(zone->pageset)->pcp;
- list = &pcp->lists[migratetype];
-@@ -2627,7 +2687,7 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
- * allocate greater than order-1 page units with __GFP_NOFAIL.
- */
- WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
-- spin_lock_irqsave(&zone->lock, flags);
-+ local_spin_lock_irqsave(pa_lock, &zone->lock, flags);
+@@ -2789,7 +2849,7 @@
+ struct page *page;
+ unsigned long flags;
- do {
- page = NULL;
-@@ -2639,22 +2699,24 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
- if (!page)
- page = __rmqueue(zone, order, migratetype);
- } while (page && check_new_pages(page, order));
-- spin_unlock(&zone->lock);
-- if (!page)
-+ if (!page) {
-+ spin_unlock(&zone->lock);
- goto failed;
-+ }
- __mod_zone_freepage_state(zone, -(1 << order),
- get_pcppage_migratetype(page));
-+ spin_unlock(&zone->lock);
+- local_irq_save(flags);
++ local_lock_irqsave(pa_lock, flags);
+ pcp = &this_cpu_ptr(zone->pageset)->pcp;
+ list = &pcp->lists[migratetype];
+ page = __rmqueue_pcplist(zone, migratetype, cold, pcp, list);
+@@ -2797,7 +2857,7 @@
+ __count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
+ zone_statistics(preferred_zone, zone);
}
+- local_irq_restore(flags);
++ local_unlock_irqrestore(pa_lock, flags);
+ return page;
+ }
+
+@@ -2824,7 +2884,7 @@
+ * allocate greater than order-1 page units with __GFP_NOFAIL.
+ */
+ WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
+- spin_lock_irqsave(&zone->lock, flags);
++ local_spin_lock_irqsave(pa_lock, &zone->lock, flags);
+
+ do {
+ page = NULL;
+@@ -2844,14 +2904,14 @@
__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
- zone_statistics(preferred_zone, zone, gfp_flags);
+ zone_statistics(preferred_zone, zone);
- local_irq_restore(flags);
+ local_unlock_irqrestore(pa_lock, flags);
- VM_BUG_ON_PAGE(bad_range(zone, page), page);
+ out:
+ VM_BUG_ON_PAGE(page && bad_range(zone, page), page);
return page;
failed:
return NULL;
}
-@@ -6505,7 +6567,9 @@ static int page_alloc_cpu_notify(struct notifier_block *self,
- int cpu = (unsigned long)hcpu;
+@@ -6778,8 +6838,9 @@
- if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
-+ local_lock_irq_on(swapvec_lock, cpu);
- lru_add_drain_cpu(cpu);
-+ local_unlock_irq_on(swapvec_lock, cpu);
- drain_pages(cpu);
-
- /*
-@@ -6531,6 +6595,7 @@ static int page_alloc_cpu_notify(struct notifier_block *self,
- void __init page_alloc_init(void)
+ static int page_alloc_cpu_dead(unsigned int cpu)
{
- hotcpu_notifier(page_alloc_cpu_notify, 0);
-+ local_irq_lock_init(pa_lock);
- }
+-
++ local_lock_irq_on(swapvec_lock, cpu);
+ lru_add_drain_cpu(cpu);
++ local_unlock_irq_on(swapvec_lock, cpu);
+ drain_pages(cpu);
- /*
-@@ -7359,7 +7424,7 @@ void zone_pcp_reset(struct zone *zone)
+ /*
+@@ -7683,7 +7744,7 @@
struct per_cpu_pageset *pset;
/* avoid races with drain_pages() */
if (zone->pageset != &boot_pageset) {
for_each_online_cpu(cpu) {
pset = per_cpu_ptr(zone->pageset, cpu);
-@@ -7368,7 +7433,7 @@ void zone_pcp_reset(struct zone *zone)
+@@ -7692,7 +7753,7 @@
free_percpu(zone->pageset);
zone->pageset = &boot_pageset;
}
}
#ifdef CONFIG_MEMORY_HOTREMOVE
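Editor's note: three related changes run through the page_alloc.c hunks. free_pcppages_bulk() and free_one_page() take zone->lock with spin_lock_irqsave() themselves instead of relying on the caller's IRQ-off section; the per-CPU pageset paths move from local_irq_save() to the pa_lock local lock (its definition is elided in this excerpt, DEFINE_LOCAL_IRQ_LOCK is assumed below); and on PREEMPT_RT_BASE drain_all_pages() drains the target CPUs directly rather than queueing pcpu_drain work. A minimal sketch of the allocation-side locking, condensed from the per-CPU list path that calls __rmqueue_pcplist() above (kernel-style C, not standalone; error handling trimmed):

static DEFINE_LOCAL_IRQ_LOCK(pa_lock);

/* Sketch only: the fast path pulls a page from this CPU's pageset while
 * holding pa_lock, which is plain IRQ disabling on !RT and a per-CPU
 * sleeping lock on RT.
 */
static struct page *rmqueue_pcplist_sketch(struct zone *preferred_zone,
					   struct zone *zone, unsigned int order,
					   int migratetype, bool cold)
{
	struct per_cpu_pages *pcp;
	struct list_head *list;
	struct page *page;
	unsigned long flags;

	/* was: local_irq_save(flags); */
	local_lock_irqsave(pa_lock, flags);
	pcp = &this_cpu_ptr(zone->pageset)->pcp;
	list = &pcp->lists[migratetype];
	page = __rmqueue_pcplist(zone, migratetype, cold, pcp, list);
	if (page) {
		__count_zid_vm_events(PGALLOC, page_zonenum(page), 1 << order);
		zone_statistics(preferred_zone, zone);
	}
	/* was: local_irq_restore(flags); */
	local_unlock_irqrestore(pa_lock, flags);
	return page;
}

Because pa_lock no longer hard-disables interrupts on RT, the slow paths that take zone->lock have to disable them locally, which is why free_pcppages_bulk() and free_one_page() switch to spin_lock_irqsave(&zone->lock, flags) in the hunks above.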
-diff --git a/mm/slab.h b/mm/slab.h
-index bc05fdc3edce..610cf61634f0 100644
---- a/mm/slab.h
-+++ b/mm/slab.h
-@@ -426,7 +426,11 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/slab.h linux-4.14/mm/slab.h
+--- linux-4.14.orig/mm/slab.h 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/mm/slab.h 2018-09-05 11:05:07.000000000 +0200
+@@ -451,7 +451,11 @@
* The slab lists for all objects.
*/
struct kmem_cache_node {
#ifdef CONFIG_SLAB
struct list_head slabs_partial; /* partial list first, better asm code */
-diff --git a/mm/slub.c b/mm/slub.c
-index 2b3e740609e9..1732f9c5d31f 100644
---- a/mm/slub.c
-+++ b/mm/slub.c
-@@ -1141,7 +1141,7 @@ static noinline int free_debug_processing(
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/slub.c linux-4.14/mm/slub.c
+--- linux-4.14.orig/mm/slub.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/mm/slub.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1179,7 +1179,7 @@
unsigned long uninitialized_var(flags);
int ret = 0;
slab_lock(page);
if (s->flags & SLAB_CONSISTENCY_CHECKS) {
-@@ -1176,7 +1176,7 @@ static noinline int free_debug_processing(
+@@ -1214,7 +1214,7 @@
bulk_cnt, cnt);
slab_unlock(page);
if (!ret)
slab_fix(s, "Object at 0x%p not freed", object);
return ret;
-@@ -1304,6 +1304,12 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
+@@ -1342,6 +1342,12 @@
#endif /* CONFIG_SLUB_DEBUG */
/*
* Hooks for other subsystems that check memory allocations. In a typical
* production configuration these hooks all should produce no code at all.
-@@ -1523,10 +1529,17 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
+@@ -1561,10 +1567,17 @@
void *start, *p;
int idx, order;
bool shuffle;
if (gfpflags_allow_blocking(flags))
+ enableirqs = true;
+#ifdef CONFIG_PREEMPT_RT_FULL
-+ if (system_state == SYSTEM_RUNNING)
++ if (system_state > SYSTEM_BOOTING)
+ enableirqs = true;
+#endif
+ if (enableirqs)
local_irq_enable();
flags |= s->allocflags;
-@@ -1601,7 +1614,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
+@@ -1623,7 +1636,7 @@
page->frozen = 1;
out:
local_irq_disable();
if (!page)
return NULL;
-@@ -1660,6 +1673,16 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
+@@ -1681,6 +1694,16 @@
__free_pages(page, order);
}
#define need_reserve_slab_rcu \
(sizeof(((struct page *)NULL)->lru) < sizeof(struct rcu_head))
-@@ -1691,6 +1714,12 @@ static void free_slab(struct kmem_cache *s, struct page *page)
+@@ -1712,6 +1735,12 @@
}
call_rcu(head, rcu_free_slab);
} else
__free_slab(s, page);
}
-@@ -1798,7 +1827,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
+@@ -1819,7 +1848,7 @@
if (!n || !n->nr_partial)
return NULL;
list_for_each_entry_safe(page, page2, &n->partial, lru) {
void *t;
-@@ -1823,7 +1852,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
+@@ -1844,7 +1873,7 @@
break;
}
return object;
}
-@@ -2069,7 +2098,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
+@@ -2090,7 +2119,7 @@
* that acquire_slab() will see a slab page that
* is frozen
*/
}
} else {
m = M_FULL;
-@@ -2080,7 +2109,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
+@@ -2101,7 +2130,7 @@
* slabs from diagnostic functions will not see
* any frozen slabs.
*/
}
}
-@@ -2115,7 +2144,7 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
+@@ -2136,7 +2165,7 @@
goto redo;
if (lock)
if (m == M_FREE) {
stat(s, DEACTIVATE_EMPTY);
-@@ -2147,10 +2176,10 @@ static void unfreeze_partials(struct kmem_cache *s,
+@@ -2171,10 +2200,10 @@
n2 = get_node(s, page_to_nid(page));
if (n != n2) {
if (n)
}
do {
-@@ -2179,7 +2208,7 @@ static void unfreeze_partials(struct kmem_cache *s,
+@@ -2203,7 +2232,7 @@
}
if (n)
while (discard_page) {
page = discard_page;
-@@ -2218,14 +2247,21 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
+@@ -2242,14 +2271,21 @@
pobjects = oldpage->pobjects;
pages = oldpage->pages;
if (drain && pobjects > s->cpu_partial) {
oldpage = NULL;
pobjects = 0;
pages = 0;
-@@ -2297,7 +2333,22 @@ static bool has_cpu_slab(int cpu, void *info)
+@@ -2319,7 +2355,22 @@
static void flush_all(struct kmem_cache *s)
{
}
/*
-@@ -2352,10 +2403,10 @@ static unsigned long count_partial(struct kmem_cache_node *n,
+@@ -2374,10 +2425,10 @@
unsigned long x = 0;
struct page *page;
return x;
}
#endif /* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */
-@@ -2493,8 +2544,10 @@ static inline void *get_freelist(struct kmem_cache *s, struct page *page)
+@@ -2515,8 +2566,10 @@
* already disabled (which is the case for bulk allocation).
*/
static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
void *freelist;
struct page *page;
-@@ -2554,6 +2607,13 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
+@@ -2572,6 +2625,13 @@
VM_BUG_ON(!c->page->frozen);
c->freelist = get_freepointer(s, freelist);
c->tid = next_tid(c->tid);
return freelist;
new_slab:
-@@ -2585,7 +2645,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
- deactivate_slab(s, page, get_freepointer(s, freelist));
- c->page = NULL;
- c->freelist = NULL;
+@@ -2587,7 +2647,7 @@
+
+ if (unlikely(!freelist)) {
+ slab_out_of_memory(s, gfpflags, node);
+- return NULL;
++ goto out;
+ }
+
+ page = c->page;
+@@ -2600,7 +2660,7 @@
+ goto new_slab; /* Slab failed checks. Next slab needed */
+
+ deactivate_slab(s, page, get_freepointer(s, freelist), c);
- return freelist;
+ goto out;
}
/*
-@@ -2597,6 +2657,7 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
+@@ -2612,6 +2672,7 @@
{
void *p;
unsigned long flags;
local_irq_save(flags);
#ifdef CONFIG_PREEMPT
-@@ -2608,8 +2669,9 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
+@@ -2623,8 +2684,9 @@
c = this_cpu_ptr(s->cpu_slab);
#endif
return p;
}
-@@ -2795,7 +2857,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
+@@ -2810,7 +2872,7 @@
do {
if (unlikely(n)) {
n = NULL;
}
prior = page->freelist;
-@@ -2827,7 +2889,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
+@@ -2842,7 +2904,7 @@
* Otherwise the list_lock will synchronize with
* other processors updating the list of slabs.
*/
}
}
-@@ -2869,7 +2931,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
+@@ -2884,7 +2946,7 @@
add_partial(n, page, DEACTIVATE_TO_TAIL);
stat(s, FREE_ADD_PARTIAL);
}
return;
slab_empty:
-@@ -2884,7 +2946,7 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
+@@ -2899,7 +2961,7 @@
remove_full(s, n, page);
}
stat(s, FREE_SLAB);
discard_slab(s, page);
}
-@@ -3089,6 +3151,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
+@@ -3104,6 +3166,7 @@
void **p)
{
struct kmem_cache_cpu *c;
int i;
/* memcg and kmem_cache debug support */
-@@ -3112,7 +3175,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
+@@ -3127,7 +3190,7 @@
* of re-populating per CPU c->freelist
*/
p[i] = ___slab_alloc(s, flags, NUMA_NO_NODE,
if (unlikely(!p[i]))
goto error;
-@@ -3124,6 +3187,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
+@@ -3139,6 +3202,7 @@
}
c->tid = next_tid(c->tid);
local_irq_enable();
/* Clear memory outside IRQ disabled fastpath loop */
if (unlikely(flags & __GFP_ZERO)) {
-@@ -3271,7 +3335,7 @@ static void
+@@ -3153,6 +3217,7 @@
+ return i;
+ error:
+ local_irq_enable();
++ free_delayed(&to_free);
+ slab_post_alloc_hook(s, flags, i, p);
+ __kmem_cache_free_bulk(s, i, p);
+ return 0;
+@@ -3286,7 +3351,7 @@
init_kmem_cache_node(struct kmem_cache_node *n)
{
n->nr_partial = 0;
INIT_LIST_HEAD(&n->partial);
#ifdef CONFIG_SLUB_DEBUG
atomic_long_set(&n->nr_slabs, 0);
-@@ -3615,6 +3679,10 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
+@@ -3640,6 +3705,10 @@
const char *text)
{
#ifdef CONFIG_SLUB_DEBUG
void *addr = page_address(page);
void *p;
unsigned long *map = kzalloc(BITS_TO_LONGS(page->objects) *
-@@ -3635,6 +3703,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
+@@ -3660,6 +3729,7 @@
slab_unlock(page);
kfree(map);
#endif
}
/*
-@@ -3648,7 +3717,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
+@@ -3673,7 +3743,7 @@
struct page *page, *h;
BUG_ON(irqs_disabled());
list_for_each_entry_safe(page, h, &n->partial, lru) {
if (!page->inuse) {
remove_partial(n, page);
-@@ -3658,7 +3727,7 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
+@@ -3683,7 +3753,7 @@
"Objects remaining in %s on __kmem_cache_shutdown()");
}
}
list_for_each_entry_safe(page, h, &discard, lru)
discard_slab(s, page);
-@@ -3916,7 +3985,7 @@ int __kmem_cache_shrink(struct kmem_cache *s, bool deactivate)
+@@ -3927,7 +3997,7 @@
for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
INIT_LIST_HEAD(promote + i);
/*
* Build lists of slabs to discard or promote.
-@@ -3947,7 +4016,7 @@ int __kmem_cache_shrink(struct kmem_cache *s, bool deactivate)
+@@ -3958,7 +4028,7 @@
for (i = SHRINK_PROMOTE_MAX - 1; i >= 0; i--)
list_splice(promote + i, &n->partial);
/* Release empty slabs */
list_for_each_entry_safe(page, t, &discard, lru)
-@@ -4123,6 +4192,12 @@ void __init kmem_cache_init(void)
+@@ -4171,6 +4241,12 @@
{
static __initdata struct kmem_cache boot_kmem_cache,
boot_kmem_cache_node;
if (debug_guardpage_minorder())
slub_max_order = 0;
-@@ -4331,7 +4406,7 @@ static int validate_slab_node(struct kmem_cache *s,
+@@ -4379,7 +4455,7 @@
struct page *page;
unsigned long flags;
list_for_each_entry(page, &n->partial, lru) {
validate_slab_slab(s, page, map);
-@@ -4353,7 +4428,7 @@ static int validate_slab_node(struct kmem_cache *s,
+@@ -4401,7 +4477,7 @@
s->name, count, atomic_long_read(&n->nr_slabs));
out:
return count;
}
-@@ -4541,12 +4616,12 @@ static int list_locations(struct kmem_cache *s, char *buf,
+@@ -4589,12 +4665,12 @@
if (!atomic_long_read(&n->nr_slabs))
continue;
}
for (i = 0; i < t.count; i++) {
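Editor's note: the slub.c hunks combine several RT-specific moves — the per-node list_lock sections are reworked, slab-page frees that would otherwise happen with interrupts off are queued and flushed later via free_delayed(), and allocate_slab() may re-enable interrupts on RT once the system has left early boot so the page allocator can sleep. Only the last one is fully visible in this excerpt; here it is as a small helper-style sketch (kernel-style C, not standalone; the helper name is invented for illustration).

/* Sketch only: the gate allocate_slab() uses before local_irq_enable(),
 * per the hunk above.
 */
static bool allocate_slab_may_enable_irqs(gfp_t flags)
{
	bool enableirqs = false;

	if (gfpflags_allow_blocking(flags))
		enableirqs = true;
#ifdef CONFIG_PREEMPT_RT_FULL
	/* on RT, also allow it once early boot is over */
	if (system_state > SYSTEM_BOOTING)
		enableirqs = true;
#endif
	return enableirqs;
}

Note that the interdiff changes the RT condition from system_state == SYSTEM_RUNNING to system_state > SYSTEM_BOOTING, which is true for SYSTEM_SCHEDULING and every later state, not just SYSTEM_RUNNING.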
-diff --git a/mm/swap.c b/mm/swap.c
-index 4dcf852e1e6d..69c3a5b24060 100644
---- a/mm/swap.c
-+++ b/mm/swap.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/swap.c linux-4.14/mm/swap.c
+--- linux-4.14.orig/mm/swap.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/mm/swap.c 2018-09-05 11:05:07.000000000 +0200
@@ -32,6 +32,7 @@
#include <linux/memcontrol.h>
#include <linux/gfp.h>
#include <linux/hugetlb.h>
#include <linux/page_idle.h>
-@@ -50,6 +51,8 @@ static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
+@@ -50,6 +51,8 @@
#ifdef CONFIG_SMP
static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);
#endif
/*
* This path almost never happens for VM activity - pages are normally
-@@ -240,11 +243,11 @@ void rotate_reclaimable_page(struct page *page)
+@@ -252,11 +255,11 @@
unsigned long flags;
get_page(page);
}
}
-@@ -294,12 +297,13 @@ void activate_page(struct page *page)
+@@ -306,12 +309,13 @@
{
page = compound_head(page);
if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
}
}
-@@ -326,7 +330,7 @@ void activate_page(struct page *page)
+@@ -338,7 +342,7 @@
static void __lru_cache_activate_page(struct page *page)
{
int i;
/*
-@@ -348,7 +352,7 @@ static void __lru_cache_activate_page(struct page *page)
+@@ -360,7 +364,7 @@
}
}
}
/*
-@@ -390,12 +394,12 @@ EXPORT_SYMBOL(mark_page_accessed);
+@@ -402,12 +406,12 @@
static void __lru_cache_add(struct page *page)
{
}
/**
-@@ -593,9 +597,15 @@ void lru_add_drain_cpu(int cpu)
+@@ -613,9 +617,15 @@
unsigned long flags;
/* No harm done if a racing interrupt already did this */
}
pvec = &per_cpu(lru_deactivate_file_pvecs, cpu);
-@@ -627,11 +637,12 @@ void deactivate_file_page(struct page *page)
+@@ -647,11 +657,12 @@
return;
if (likely(get_page_unless_zero(page))) {
}
}
-@@ -646,27 +657,31 @@ void deactivate_file_page(struct page *page)
- void deactivate_page(struct page *page)
+@@ -666,21 +677,32 @@
{
- if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
-- struct pagevec *pvec = &get_cpu_var(lru_deactivate_pvecs);
+ if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
+ !PageSwapCache(page) && !PageUnevictable(page)) {
+- struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
+ struct pagevec *pvec = &get_locked_var(swapvec_lock,
-+ lru_deactivate_pvecs);
++ lru_lazyfree_pvecs);
get_page(page);
if (!pagevec_add(pvec, page) || PageCompound(page))
- pagevec_lru_move_fn(pvec, lru_deactivate_fn, NULL);
-- put_cpu_var(lru_deactivate_pvecs);
-+ put_locked_var(swapvec_lock, lru_deactivate_pvecs);
+ pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
+- put_cpu_var(lru_lazyfree_pvecs);
++ put_locked_var(swapvec_lock, lru_lazyfree_pvecs);
}
}
+ local_unlock_cpu(swapvec_lock);
}
--static void lru_add_drain_per_cpu(struct work_struct *dummy)
+#ifdef CONFIG_PREEMPT_RT_BASE
+static inline void remote_lru_add_drain(int cpu, struct cpumask *has_work)
- {
-- lru_add_drain();
++{
+ local_lock_on(swapvec_lock, cpu);
+ lru_add_drain_cpu(cpu);
+ local_unlock_on(swapvec_lock, cpu);
- }
-
--static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
++}
++
+#else
++
+ static void lru_add_drain_per_cpu(struct work_struct *dummy)
+ {
+ lru_add_drain();
+@@ -688,6 +710,16 @@
- /*
- * lru_add_drain_wq is used to do lru_add_drain_all() from a WQ_MEM_RECLAIM
-@@ -686,6 +701,22 @@ static int __init lru_init(void)
- }
- early_initcall(lru_init);
+ static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
-+static void lru_add_drain_per_cpu(struct work_struct *dummy)
-+{
-+ lru_add_drain();
-+}
-+
-+static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
+static inline void remote_lru_add_drain(int cpu, struct cpumask *has_work)
+{
+ struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
+
+ INIT_WORK(work, lru_add_drain_per_cpu);
-+ queue_work_on(cpu, lru_add_drain_wq, work);
++ queue_work_on(cpu, mm_percpu_wq, work);
+ cpumask_set_cpu(cpu, has_work);
+}
+#endif
+
- void lru_add_drain_all(void)
+ void lru_add_drain_all_cpuslocked(void)
{
static DEFINE_MUTEX(lock);
-@@ -697,21 +728,18 @@ void lru_add_drain_all(void)
+@@ -705,21 +737,19 @@
cpumask_clear(&has_work);
for_each_online_cpu(cpu) {
- struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
--
+
if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
pagevec_count(&per_cpu(lru_deactivate_file_pvecs, cpu)) ||
- pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
+ pagevec_count(&per_cpu(lru_lazyfree_pvecs, cpu)) ||
- need_activate_page_drain(cpu)) {
- INIT_WORK(work, lru_add_drain_per_cpu);
-- queue_work_on(cpu, lru_add_drain_wq, work);
+- queue_work_on(cpu, mm_percpu_wq, work);
- cpumask_set_cpu(cpu, &has_work);
- }
+ need_activate_page_drain(cpu))
flush_work(&per_cpu(lru_add_drain_work, cpu));
+#endif
- put_online_cpus();
mutex_unlock(&lock);
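Editor's note: the swap.c hunks put the per-CPU pagevecs behind the swapvec_lock local lock (the lru_lazyfree_pvecs path is the one visible above; get_cpu_var()/put_cpu_var() become get_locked_var()/put_locked_var()), and they split the drain-all machinery: on PREEMPT_RT_BASE a remote CPU's pagevecs are drained by taking that CPU's swapvec_lock directly, while the non-RT build keeps queueing lru_add_drain_per_cpu work on mm_percpu_wq. A condensed sketch of the pagevec side (kernel-style C, not standalone; the 4.14 function name is not visible in the excerpt, so a neutral name is used):

/* Sketch only: the lazyfree pagevec path from the hunk above. The local
 * lock keeps the section per-CPU exclusive while staying preemptible
 * on RT.
 */
static void lazyfree_pagevec_sketch(struct page *page)
{
	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
	    !PageSwapCache(page) && !PageUnevictable(page)) {
		struct pagevec *pvec = &get_locked_var(swapvec_lock,
						       lru_lazyfree_pvecs);

		get_page(page);
		if (!pagevec_add(pvec, page) || PageCompound(page))
			pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
		put_locked_var(swapvec_lock, lru_lazyfree_pvecs);
	}
}

Draining remote CPUs by cross-CPU locking instead of flush_work() avoids waiting on per-CPU kworkers on RT while keeping lru_add_drain_all() semantics unchanged elsewhere.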
-diff --git a/mm/truncate.c b/mm/truncate.c
-index 8d8c62d89e6d..5bf1bd25d077 100644
---- a/mm/truncate.c
-+++ b/mm/truncate.c
-@@ -62,9 +62,12 @@ static void clear_exceptional_entry(struct address_space *mapping,
- * protected by mapping->tree_lock.
- */
- if (!workingset_node_shadows(node) &&
-- !list_empty(&node->private_list))
-- list_lru_del(&workingset_shadow_nodes,
-+ !list_empty(&node->private_list)) {
-+ local_lock(workingset_shadow_lock);
-+ list_lru_del(&__workingset_shadow_nodes,
- &node->private_list);
-+ local_unlock(workingset_shadow_lock);
-+ }
- __radix_tree_delete_node(&mapping->page_tree, node);
+ }
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/truncate.c linux-4.14/mm/truncate.c
+--- linux-4.14.orig/mm/truncate.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/mm/truncate.c 2018-09-05 11:05:07.000000000 +0200
+@@ -41,8 +41,10 @@
+ goto unlock;
+ if (*slot != entry)
+ goto unlock;
++ local_lock(shadow_nodes_lock);
+ __radix_tree_replace(&mapping->page_tree, node, slot, NULL,
+- workingset_update_node, mapping);
++ __workingset_update_node, mapping);
++ local_unlock(shadow_nodes_lock);
+ mapping->nrexceptional--;
unlock:
spin_unlock_irq(&mapping->tree_lock);
-diff --git a/mm/vmalloc.c b/mm/vmalloc.c
-index f2481cb4e6b2..db4de08fa97c 100644
---- a/mm/vmalloc.c
-+++ b/mm/vmalloc.c
-@@ -845,7 +845,7 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/vmalloc.c linux-4.14/mm/vmalloc.c
+--- linux-4.14.orig/mm/vmalloc.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/mm/vmalloc.c 2018-09-05 11:05:07.000000000 +0200
+@@ -865,7 +865,7 @@
struct vmap_block *vb;
struct vmap_area *va;
unsigned long vb_idx;
void *vaddr;
node = numa_node_id();
-@@ -888,11 +888,12 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
+@@ -908,11 +908,12 @@
BUG_ON(err);
radix_tree_preload_end();
return vaddr;
}
-@@ -961,6 +962,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
+@@ -981,6 +982,7 @@
struct vmap_block *vb;
void *vaddr = NULL;
unsigned int order;
BUG_ON(offset_in_page(size));
BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
-@@ -975,7 +977,8 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
+@@ -995,7 +997,8 @@
order = get_order(size);
rcu_read_lock();
list_for_each_entry_rcu(vb, &vbq->free, free_list) {
unsigned long pages_off;
-@@ -998,7 +1001,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
+@@ -1018,7 +1021,7 @@
break;
}
rcu_read_unlock();
/* Allocate new block if nothing was found */
-diff --git a/mm/vmstat.c b/mm/vmstat.c
-index 604f26a4f696..312006d2db50 100644
---- a/mm/vmstat.c
-+++ b/mm/vmstat.c
-@@ -245,6 +245,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/vmstat.c linux-4.14/mm/vmstat.c
+--- linux-4.14.orig/mm/vmstat.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/mm/vmstat.c 2018-09-05 11:05:07.000000000 +0200
+@@ -249,6 +249,7 @@
long x;
long t;
x = delta + __this_cpu_read(*p);
t = __this_cpu_read(pcp->stat_threshold);
-@@ -254,6 +255,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+@@ -258,6 +259,7 @@
x = 0;
}
__this_cpu_write(*p, x);
}
EXPORT_SYMBOL(__mod_zone_page_state);
-@@ -265,6 +267,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+@@ -269,6 +271,7 @@
long x;
long t;
x = delta + __this_cpu_read(*p);
t = __this_cpu_read(pcp->stat_threshold);
-@@ -274,6 +277,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+@@ -278,6 +281,7 @@
x = 0;
}
__this_cpu_write(*p, x);
}
EXPORT_SYMBOL(__mod_node_page_state);
-@@ -306,6 +310,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
+@@ -310,6 +314,7 @@
s8 __percpu *p = pcp->vm_stat_diff + item;
s8 v, t;
v = __this_cpu_inc_return(*p);
t = __this_cpu_read(pcp->stat_threshold);
if (unlikely(v > t)) {
-@@ -314,6 +319,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
+@@ -318,6 +323,7 @@
zone_page_state_add(v + overstep, zone, item);
__this_cpu_write(*p, -overstep);
}
}
void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
-@@ -322,6 +328,7 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+@@ -326,6 +332,7 @@
s8 __percpu *p = pcp->vm_node_stat_diff + item;
s8 v, t;
v = __this_cpu_inc_return(*p);
t = __this_cpu_read(pcp->stat_threshold);
if (unlikely(v > t)) {
-@@ -330,6 +337,7 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+@@ -334,6 +341,7 @@
node_page_state_add(v + overstep, pgdat, item);
__this_cpu_write(*p, -overstep);
}
}
void __inc_zone_page_state(struct page *page, enum zone_stat_item item)
-@@ -350,6 +358,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
+@@ -354,6 +362,7 @@
s8 __percpu *p = pcp->vm_stat_diff + item;
s8 v, t;
v = __this_cpu_dec_return(*p);
t = __this_cpu_read(pcp->stat_threshold);
if (unlikely(v < - t)) {
-@@ -358,6 +367,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
+@@ -362,6 +371,7 @@
zone_page_state_add(v - overstep, zone, item);
__this_cpu_write(*p, overstep);
}
}
void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
-@@ -366,6 +376,7 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+@@ -370,6 +380,7 @@
s8 __percpu *p = pcp->vm_node_stat_diff + item;
s8 v, t;
v = __this_cpu_dec_return(*p);
t = __this_cpu_read(pcp->stat_threshold);
if (unlikely(v < - t)) {
-@@ -374,6 +385,7 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+@@ -378,6 +389,7 @@
node_page_state_add(v - overstep, pgdat, item);
__this_cpu_write(*p, overstep);
}
}
void __dec_zone_page_state(struct page *page, enum zone_stat_item item)
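Editor's note: the vmstat.c hunks only show the insertion points; the added lines themselves are elided in this excerpt. In the RT series this spot conventionally gains a preempt_disable_rt()/preempt_enable_rt() pair so the read-modify-write of the per-CPU counter and its threshold check cannot be preempted on RT. The sketch below assumes exactly that and is otherwise the stock counter-update logic visible in the context lines (kernel-style C, not standalone).

/* Sketch only: __mod_zone_page_state() with the assumed RT bracketing. */
void __mod_zone_page_state_sketch(struct zone *zone,
				  enum zone_stat_item item, long delta)
{
	struct per_cpu_pageset __percpu *pcp = zone->pageset;
	s8 __percpu *p = pcp->vm_stat_diff + item;
	long x, t;

	preempt_disable_rt();		/* assumed: keep the this_cpu ops coherent on RT */
	x = delta + __this_cpu_read(*p);
	t = __this_cpu_read(pcp->stat_threshold);
	if (unlikely(x > t || x < -t)) {
		zone_page_state_add(x, zone, item);
		x = 0;
	}
	__this_cpu_write(*p, x);
	preempt_enable_rt();		/* assumed pairing */
}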
-diff --git a/mm/workingset.c b/mm/workingset.c
-index fb1f9183d89a..7e6ef1a48cd3 100644
---- a/mm/workingset.c
-+++ b/mm/workingset.c
-@@ -334,7 +334,8 @@ void workingset_activation(struct page *page)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/workingset.c linux-4.14/mm/workingset.c
+--- linux-4.14.orig/mm/workingset.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/mm/workingset.c 2018-09-05 11:05:07.000000000 +0200
+@@ -338,9 +338,10 @@
* point where they would still be useful.
*/
--struct list_lru workingset_shadow_nodes;
-+struct list_lru __workingset_shadow_nodes;
-+DEFINE_LOCAL_IRQ_LOCK(workingset_shadow_lock);
+-static struct list_lru shadow_nodes;
++static struct list_lru __shadow_nodes;
++DEFINE_LOCAL_IRQ_LOCK(shadow_nodes_lock);
+
+-void workingset_update_node(struct radix_tree_node *node, void *private)
++void __workingset_update_node(struct radix_tree_node *node, void *private)
+ {
+ struct address_space *mapping = private;
+
+@@ -358,10 +359,10 @@
+ */
+ if (node->count && node->count == node->exceptional) {
+ if (list_empty(&node->private_list))
+- list_lru_add(&shadow_nodes, &node->private_list);
++ list_lru_add(&__shadow_nodes, &node->private_list);
+ } else {
+ if (!list_empty(&node->private_list))
+- list_lru_del(&shadow_nodes, &node->private_list);
++ list_lru_del(&__shadow_nodes, &node->private_list);
+ }
+ }
- static unsigned long count_shadow_nodes(struct shrinker *shrinker,
- struct shrink_control *sc)
-@@ -344,9 +345,9 @@ static unsigned long count_shadow_nodes(struct shrinker *shrinker,
- unsigned long pages;
+@@ -373,9 +374,9 @@
+ unsigned long cache;
/* list_lru lock nests inside IRQ-safe mapping->tree_lock */
- local_irq_disable();
-- shadow_nodes = list_lru_shrink_count(&workingset_shadow_nodes, sc);
+- nodes = list_lru_shrink_count(&shadow_nodes, sc);
- local_irq_enable();
-+ local_lock_irq(workingset_shadow_lock);
-+ shadow_nodes = list_lru_shrink_count(&__workingset_shadow_nodes, sc);
-+ local_unlock_irq(workingset_shadow_lock);
++ local_lock_irq(shadow_nodes_lock);
++ nodes = list_lru_shrink_count(&__shadow_nodes, sc);
++ local_unlock_irq(shadow_nodes_lock);
- if (sc->memcg) {
- pages = mem_cgroup_node_nr_lru_pages(sc->memcg, sc->nid,
-@@ -438,9 +439,9 @@ static enum lru_status shadow_lru_isolate(struct list_head *item,
+ /*
+ * Approximate a reasonable limit for the radix tree nodes
+@@ -475,15 +476,15 @@
+ goto out_invalid;
+ inc_lruvec_page_state(virt_to_page(node), WORKINGSET_NODERECLAIM);
+ __radix_tree_delete_node(&mapping->page_tree, node,
+- workingset_update_node, mapping);
++ __workingset_update_node, mapping);
+
+ out_invalid:
spin_unlock(&mapping->tree_lock);
ret = LRU_REMOVED_RETRY;
out:
- local_irq_enable();
-+ local_unlock_irq(workingset_shadow_lock);
++ local_unlock_irq(shadow_nodes_lock);
cond_resched();
- local_irq_disable();
-+ local_lock_irq(workingset_shadow_lock);
++ local_lock_irq(shadow_nodes_lock);
spin_lock(lru_lock);
return ret;
}
-@@ -451,10 +452,10 @@ static unsigned long scan_shadow_nodes(struct shrinker *shrinker,
+@@ -494,9 +495,9 @@
unsigned long ret;
/* list_lru lock nests inside IRQ-safe mapping->tree_lock */
- local_irq_disable();
-- ret = list_lru_shrink_walk(&workingset_shadow_nodes, sc,
-+ local_lock_irq(workingset_shadow_lock);
-+ ret = list_lru_shrink_walk(&__workingset_shadow_nodes, sc,
- shadow_lru_isolate, NULL);
+- ret = list_lru_shrink_walk(&shadow_nodes, sc, shadow_lru_isolate, NULL);
- local_irq_enable();
-+ local_unlock_irq(workingset_shadow_lock);
++ local_lock_irq(shadow_nodes_lock);
++ ret = list_lru_shrink_walk(&__shadow_nodes, sc, shadow_lru_isolate, NULL);
++ local_unlock_irq(shadow_nodes_lock);
return ret;
}
-@@ -492,7 +493,7 @@ static int __init workingset_init(void)
+@@ -534,7 +535,7 @@
pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
timestamp_bits, max_order, bucket_order);
-- ret = list_lru_init_key(&workingset_shadow_nodes, &shadow_nodes_key);
-+ ret = list_lru_init_key(&__workingset_shadow_nodes, &shadow_nodes_key);
+- ret = __list_lru_init(&shadow_nodes, true, &shadow_nodes_key);
++ ret = __list_lru_init(&__shadow_nodes, true, &shadow_nodes_key);
if (ret)
goto err;
ret = register_shrinker(&workingset_shadow_shrinker);
-@@ -500,7 +501,7 @@ static int __init workingset_init(void)
+@@ -542,7 +543,7 @@
goto err_list_lru;
return 0;
err_list_lru:
-- list_lru_destroy(&workingset_shadow_nodes);
-+ list_lru_destroy(&__workingset_shadow_nodes);
+- list_lru_destroy(&shadow_nodes);
++ list_lru_destroy(&__shadow_nodes);
err:
return ret;
}
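Editor's note: truncate.c and workingset.c change together — the shadow-node list_lru becomes __shadow_nodes, workingset_update_node() becomes __workingset_update_node(), and every caller that used to rely on local_irq_disable() now brackets the list_lru access with the shadow_nodes_lock local IRQ lock. A condensed sketch of the shrinker count path (kernel-style C, not standalone):

DEFINE_LOCAL_IRQ_LOCK(shadow_nodes_lock);

/* Sketch only: mirrors count_shadow_nodes() above. The local lock keeps
 * the list_lru-inside-tree_lock ordering but is preemptible on RT.
 */
static unsigned long count_shadow_nodes_sketch(struct shrinker *shrinker,
					       struct shrink_control *sc)
{
	unsigned long nodes;

	/* was: local_irq_disable(); */
	local_lock_irq(shadow_nodes_lock);
	nodes = list_lru_shrink_count(&__shadow_nodes, sc);
	/* was: local_irq_enable(); */
	local_unlock_irq(shadow_nodes_lock);

	return nodes;
}

The same bracketing appears in scan_shadow_nodes() and around the __radix_tree_replace() call in truncate.c above, so the list_lru is never touched outside the local lock.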
-diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
-index b0bc023d25c5..5af6426fbcbe 100644
---- a/mm/zsmalloc.c
-+++ b/mm/zsmalloc.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/mm/zsmalloc.c linux-4.14/mm/zsmalloc.c
+--- linux-4.14.orig/mm/zsmalloc.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/mm/zsmalloc.c 2018-09-05 11:05:07.000000000 +0200
@@ -53,6 +53,7 @@
#include <linux/mount.h>
#include <linux/migrate.h>
/*
* Object location (<PFN>, <obj_idx>) is encoded as
* as single (unsigned long) handle value.
-@@ -327,7 +341,7 @@ static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) {}
+@@ -320,7 +334,7 @@
static int create_cache(struct zs_pool *pool)
{
0, 0, NULL);
if (!pool->handle_cachep)
return 1;
-@@ -351,10 +365,27 @@ static void destroy_cache(struct zs_pool *pool)
+@@ -344,9 +358,26 @@
static unsigned long cache_alloc_handle(struct zs_pool *pool, gfp_t gfp)
{
+ }
+#endif
+ return (unsigned long)p;
- }
-
++}
++
+#ifdef CONFIG_PREEMPT_RT_FULL
+static struct zsmalloc_handle *zs_get_pure_handle(unsigned long handle)
+{
+ return (void *)(handle &~((1 << OBJ_TAG_BITS) - 1));
-+}
+ }
+#endif
-+
+
static void cache_free_handle(struct zs_pool *pool, unsigned long handle)
{
- kmem_cache_free(pool->handle_cachep, (void *)handle);
-@@ -373,12 +404,18 @@ static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage)
+@@ -366,12 +397,18 @@
static void record_obj(unsigned long handle, unsigned long obj)
{
}
/* zpool driver */
-@@ -467,6 +504,7 @@ MODULE_ALIAS("zpool-zsmalloc");
+@@ -460,6 +497,7 @@
/* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
static bool is_zspage_isolated(struct zspage *zspage)
{
-@@ -902,7 +940,13 @@ static unsigned long location_to_obj(struct page *page, unsigned int obj_idx)
+@@ -898,7 +936,13 @@
static unsigned long handle_to_obj(unsigned long handle)
{
}
static unsigned long obj_to_head(struct page *page, void *obj)
-@@ -916,22 +960,46 @@ static unsigned long obj_to_head(struct page *page, void *obj)
+@@ -912,22 +956,46 @@
static inline int testpin_tag(unsigned long handle)
{
}
static void reset_page(struct page *page)
-@@ -1423,7 +1491,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
+@@ -1365,7 +1433,7 @@
class = pool->size_class[class_idx];
off = (class->size * obj_idx) & ~PAGE_MASK;
area->vm_mm = mm;
if (off + class->size <= PAGE_SIZE) {
/* this object is contained entirely within a page */
-@@ -1477,7 +1545,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
+@@ -1419,7 +1487,7 @@
__zs_unmap_object(area, pages, off, class->size);
}
migrate_read_unlock(zspage);
unpin_tag(handle);
-diff --git a/net/core/dev.c b/net/core/dev.c
-index e1d731fdc72c..6ab4b7863755 100644
---- a/net/core/dev.c
-+++ b/net/core/dev.c
-@@ -190,6 +190,7 @@ static unsigned int napi_gen_id = NR_CPUS;
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/9p/trans_xen.c linux-4.14/net/9p/trans_xen.c
+--- linux-4.14.orig/net/9p/trans_xen.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/9p/trans_xen.c 2018-09-05 11:05:07.000000000 +0200
+@@ -38,7 +38,6 @@
+
+ #include <linux/module.h>
+ #include <linux/spinlock.h>
+-#include <linux/rwlock.h>
+ #include <net/9p/9p.h>
+ #include <net/9p/client.h>
+ #include <net/9p/transport.h>
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/bluetooth/hci_sock.c linux-4.14/net/bluetooth/hci_sock.c
+--- linux-4.14.orig/net/bluetooth/hci_sock.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/net/bluetooth/hci_sock.c 2018-09-05 11:05:07.000000000 +0200
+@@ -251,15 +251,13 @@
+ }
+
+ /* Send frame to sockets with specific channel */
+-void hci_send_to_channel(unsigned short channel, struct sk_buff *skb,
+- int flag, struct sock *skip_sk)
++static void __hci_send_to_channel(unsigned short channel, struct sk_buff *skb,
++ int flag, struct sock *skip_sk)
+ {
+ struct sock *sk;
+
+ BT_DBG("channel %u len %d", channel, skb->len);
+
+- read_lock(&hci_sk_list.lock);
+-
+ sk_for_each(sk, &hci_sk_list.head) {
+ struct sk_buff *nskb;
+
+@@ -285,6 +283,13 @@
+ kfree_skb(nskb);
+ }
+
++}
++
++void hci_send_to_channel(unsigned short channel, struct sk_buff *skb,
++ int flag, struct sock *skip_sk)
++{
++ read_lock(&hci_sk_list.lock);
++ __hci_send_to_channel(channel, skb, flag, skip_sk);
+ read_unlock(&hci_sk_list.lock);
+ }
+
+@@ -388,8 +393,8 @@
+ hdr->index = index;
+ hdr->len = cpu_to_le16(skb->len - HCI_MON_HDR_SIZE);
+
+- hci_send_to_channel(HCI_CHANNEL_MONITOR, skb,
+- HCI_SOCK_TRUSTED, NULL);
++ __hci_send_to_channel(HCI_CHANNEL_MONITOR, skb,
++ HCI_SOCK_TRUSTED, NULL);
+ kfree_skb(skb);
+ }
+
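Editor's note: the hci_sock.c change is a lock-factoring pattern. The body that walks hci_sk_list moves into __hci_send_to_channel(), which expects the caller to already hold hci_sk_list.lock, and hci_send_to_channel() becomes a thin locking wrapper. The monitor path above, which already runs under that read lock, calls the double-underscore helper directly, so the read lock is never taken twice on the same path — taking it recursively is what the RT tree is avoiding here, since its rwlocks are sleeping locks. The wrapper, as it reads after the hunk (kernel-style C, not standalone):

/* Sketch only: the wrapper after the split. Callers that do not hold
 * hci_sk_list.lock keep using this; lock-holding callers use
 * __hci_send_to_channel() directly.
 */
void hci_send_to_channel_sketch(unsigned short channel, struct sk_buff *skb,
				int flag, struct sock *skip_sk)
{
	read_lock(&hci_sk_list.lock);
	__hci_send_to_channel(channel, skb, flag, skip_sk);
	read_unlock(&hci_sk_list.lock);
}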
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/can/bcm.c linux-4.14/net/can/bcm.c
+--- linux-4.14.orig/net/can/bcm.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/net/can/bcm.c 2018-09-05 11:05:07.000000000 +0200
+@@ -102,7 +102,6 @@
+ unsigned long frames_abs, frames_filtered;
+ struct bcm_timeval ival1, ival2;
+ struct hrtimer timer, thrtimer;
+- struct tasklet_struct tsklet, thrtsklet;
+ ktime_t rx_stamp, kt_ival1, kt_ival2, kt_lastmsg;
+ int rx_ifindex;
+ int cfsiz;
+@@ -364,25 +363,34 @@
+ }
+ }
+
+-static void bcm_tx_start_timer(struct bcm_op *op)
++static bool bcm_tx_set_expiry(struct bcm_op *op, struct hrtimer *hrt)
+ {
++ ktime_t ival;
++
+ if (op->kt_ival1 && op->count)
+- hrtimer_start(&op->timer,
+- ktime_add(ktime_get(), op->kt_ival1),
+- HRTIMER_MODE_ABS);
++ ival = op->kt_ival1;
+ else if (op->kt_ival2)
+- hrtimer_start(&op->timer,
+- ktime_add(ktime_get(), op->kt_ival2),
+- HRTIMER_MODE_ABS);
++ ival = op->kt_ival2;
++ else
++ return false;
++
++ hrtimer_set_expires(hrt, ktime_add(ktime_get(), ival));
++ return true;
+ }
+
+-static void bcm_tx_timeout_tsklet(unsigned long data)
++static void bcm_tx_start_timer(struct bcm_op *op)
+ {
+- struct bcm_op *op = (struct bcm_op *)data;
++ if (bcm_tx_set_expiry(op, &op->timer))
++ hrtimer_start_expires(&op->timer, HRTIMER_MODE_ABS_SOFT);
++}
++
++/* bcm_tx_timeout_handler - performs cyclic CAN frame transmissions */
++static enum hrtimer_restart bcm_tx_timeout_handler(struct hrtimer *hrtimer)
++{
++ struct bcm_op *op = container_of(hrtimer, struct bcm_op, timer);
+ struct bcm_msg_head msg_head;
+
+ if (op->kt_ival1 && (op->count > 0)) {
+-
+ op->count--;
+ if (!op->count && (op->flags & TX_COUNTEVT)) {
+
+@@ -399,22 +407,12 @@
+ }
+ bcm_can_tx(op);
+
+- } else if (op->kt_ival2)
++ } else if (op->kt_ival2) {
+ bcm_can_tx(op);
++ }
+
+- bcm_tx_start_timer(op);
+-}
+-
+-/*
+- * bcm_tx_timeout_handler - performs cyclic CAN frame transmissions
+- */
+-static enum hrtimer_restart bcm_tx_timeout_handler(struct hrtimer *hrtimer)
+-{
+- struct bcm_op *op = container_of(hrtimer, struct bcm_op, timer);
+-
+- tasklet_schedule(&op->tsklet);
+-
+- return HRTIMER_NORESTART;
++ return bcm_tx_set_expiry(op, &op->timer) ?
++ HRTIMER_RESTART : HRTIMER_NORESTART;
+ }
+
+ /*
+@@ -480,7 +478,7 @@
+ /* do not send the saved data - only start throttle timer */
+ hrtimer_start(&op->thrtimer,
+ ktime_add(op->kt_lastmsg, op->kt_ival2),
+- HRTIMER_MODE_ABS);
++ HRTIMER_MODE_ABS_SOFT);
+ return;
+ }
+
+@@ -539,14 +537,21 @@
+ return;
+
+ if (op->kt_ival1)
+- hrtimer_start(&op->timer, op->kt_ival1, HRTIMER_MODE_REL);
++ hrtimer_start(&op->timer, op->kt_ival1, HRTIMER_MODE_REL_SOFT);
+ }
+
+-static void bcm_rx_timeout_tsklet(unsigned long data)
++/* bcm_rx_timeout_handler - when the (cyclic) CAN frame reception timed out */
++static enum hrtimer_restart bcm_rx_timeout_handler(struct hrtimer *hrtimer)
+ {
+- struct bcm_op *op = (struct bcm_op *)data;
++ struct bcm_op *op = container_of(hrtimer, struct bcm_op, timer);
+ struct bcm_msg_head msg_head;
+
++ /* if user wants to be informed, when cyclic CAN-Messages come back */
++ if ((op->flags & RX_ANNOUNCE_RESUME) && op->last_frames) {
++ /* clear received CAN frames to indicate 'nothing received' */
++ memset(op->last_frames, 0, op->nframes * op->cfsiz);
++ }
++
+ /* create notification to user */
+ msg_head.opcode = RX_TIMEOUT;
+ msg_head.flags = op->flags;
+@@ -557,25 +562,6 @@
+ msg_head.nframes = 0;
+
+ bcm_send_to_user(op, &msg_head, NULL, 0);
+-}
+-
+-/*
+- * bcm_rx_timeout_handler - when the (cyclic) CAN frame reception timed out
+- */
+-static enum hrtimer_restart bcm_rx_timeout_handler(struct hrtimer *hrtimer)
+-{
+- struct bcm_op *op = container_of(hrtimer, struct bcm_op, timer);
+-
+- /* schedule before NET_RX_SOFTIRQ */
+- tasklet_hi_schedule(&op->tsklet);
+-
+- /* no restart of the timer is done here! */
+-
+- /* if user wants to be informed, when cyclic CAN-Messages come back */
+- if ((op->flags & RX_ANNOUNCE_RESUME) && op->last_frames) {
+- /* clear received CAN frames to indicate 'nothing received' */
+- memset(op->last_frames, 0, op->nframes * op->cfsiz);
+- }
+
+ return HRTIMER_NORESTART;
+ }
+@@ -583,14 +569,12 @@
+ /*
+ * bcm_rx_do_flush - helper for bcm_rx_thr_flush
+ */
+-static inline int bcm_rx_do_flush(struct bcm_op *op, int update,
+- unsigned int index)
++static inline int bcm_rx_do_flush(struct bcm_op *op, unsigned int index)
+ {
+ struct canfd_frame *lcf = op->last_frames + op->cfsiz * index;
+
+ if ((op->last_frames) && (lcf->flags & RX_THR)) {
+- if (update)
+- bcm_rx_changed(op, lcf);
++ bcm_rx_changed(op, lcf);
+ return 1;
+ }
+ return 0;
+@@ -598,11 +582,8 @@
+
+ /*
+ * bcm_rx_thr_flush - Check for throttled data and send it to the userspace
+- *
+- * update == 0 : just check if throttled data is available (any irq context)
+- * update == 1 : check and send throttled data to userspace (soft_irq context)
+ */
+-static int bcm_rx_thr_flush(struct bcm_op *op, int update)
++static int bcm_rx_thr_flush(struct bcm_op *op)
+ {
+ int updated = 0;
+
+@@ -611,24 +592,16 @@
+
+ /* for MUX filter we start at index 1 */
+ for (i = 1; i < op->nframes; i++)
+- updated += bcm_rx_do_flush(op, update, i);
++ updated += bcm_rx_do_flush(op, i);
+
+ } else {
+ /* for RX_FILTER_ID and simple filter */
+- updated += bcm_rx_do_flush(op, update, 0);
++ updated += bcm_rx_do_flush(op, 0);
+ }
+
+ return updated;
+ }
+
+-static void bcm_rx_thr_tsklet(unsigned long data)
+-{
+- struct bcm_op *op = (struct bcm_op *)data;
+-
+- /* push the changed data to the userspace */
+- bcm_rx_thr_flush(op, 1);
+-}
+-
+ /*
+ * bcm_rx_thr_handler - the time for blocked content updates is over now:
+ * Check for throttled data and send it to the userspace
+@@ -637,9 +610,7 @@
+ {
+ struct bcm_op *op = container_of(hrtimer, struct bcm_op, thrtimer);
+
+- tasklet_schedule(&op->thrtsklet);
+-
+- if (bcm_rx_thr_flush(op, 0)) {
++ if (bcm_rx_thr_flush(op)) {
+ hrtimer_forward(hrtimer, ktime_get(), op->kt_ival2);
+ return HRTIMER_RESTART;
+ } else {
+@@ -735,23 +706,8 @@
+
+ static void bcm_remove_op(struct bcm_op *op)
+ {
+- if (op->tsklet.func) {
+- while (test_bit(TASKLET_STATE_SCHED, &op->tsklet.state) ||
+- test_bit(TASKLET_STATE_RUN, &op->tsklet.state) ||
+- hrtimer_active(&op->timer)) {
+- hrtimer_cancel(&op->timer);
+- tasklet_kill(&op->tsklet);
+- }
+- }
+-
+- if (op->thrtsklet.func) {
+- while (test_bit(TASKLET_STATE_SCHED, &op->thrtsklet.state) ||
+- test_bit(TASKLET_STATE_RUN, &op->thrtsklet.state) ||
+- hrtimer_active(&op->thrtimer)) {
+- hrtimer_cancel(&op->thrtimer);
+- tasklet_kill(&op->thrtsklet);
+- }
+- }
++ hrtimer_cancel(&op->timer);
++ hrtimer_cancel(&op->thrtimer);
+
+ if ((op->frames) && (op->frames != &op->sframe))
+ kfree(op->frames);
+@@ -979,15 +935,13 @@
+ op->ifindex = ifindex;
+
+ /* initialize uninitialized (kzalloc) structure */
+- hrtimer_init(&op->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&op->timer, CLOCK_MONOTONIC,
++ HRTIMER_MODE_REL_SOFT);
+ op->timer.function = bcm_tx_timeout_handler;
+
+- /* initialize tasklet for tx countevent notification */
+- tasklet_init(&op->tsklet, bcm_tx_timeout_tsklet,
+- (unsigned long) op);
+-
+ /* currently unused in tx_ops */
+- hrtimer_init(&op->thrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&op->thrtimer, CLOCK_MONOTONIC,
++ HRTIMER_MODE_REL_SOFT);
+
+ /* add this bcm_op to the list of the tx_ops */
+ list_add(&op->list, &bo->tx_ops);
+@@ -1150,20 +1104,14 @@
+ op->rx_ifindex = ifindex;
+
+ /* initialize uninitialized (kzalloc) structure */
+- hrtimer_init(&op->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&op->timer, CLOCK_MONOTONIC,
++ HRTIMER_MODE_REL_SOFT);
+ op->timer.function = bcm_rx_timeout_handler;
+
+- /* initialize tasklet for rx timeout notification */
+- tasklet_init(&op->tsklet, bcm_rx_timeout_tsklet,
+- (unsigned long) op);
+-
+- hrtimer_init(&op->thrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&op->thrtimer, CLOCK_MONOTONIC,
++ HRTIMER_MODE_REL_SOFT);
+ op->thrtimer.function = bcm_rx_thr_handler;
+
+- /* initialize tasklet for rx throttle handling */
+- tasklet_init(&op->thrtsklet, bcm_rx_thr_tsklet,
+- (unsigned long) op);
+-
+ /* add this bcm_op to the list of the rx_ops */
+ list_add(&op->list, &bo->rx_ops);
+
+@@ -1209,12 +1157,12 @@
+ */
+ op->kt_lastmsg = 0;
+ hrtimer_cancel(&op->thrtimer);
+- bcm_rx_thr_flush(op, 1);
++ bcm_rx_thr_flush(op);
+ }
+
+ if ((op->flags & STARTTIMER) && op->kt_ival1)
+ hrtimer_start(&op->timer, op->kt_ival1,
+- HRTIMER_MODE_REL);
++ HRTIMER_MODE_REL_SOFT);
+ }
+
+ /* now we can register for can_ids, if we added a new bcm_op */
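Editor's note: the bcm.c hunks remove the tx/rx/throttle tasklets entirely. The hrtimers are initialised in HRTIMER_MODE_REL_SOFT (and started with HRTIMER_MODE_ABS_SOFT where an absolute expiry is used), so their callbacks already run in softirq context and can do the message work the tasklets used to do, and periodic rearming happens by moving the expiry inside the handler and returning HRTIMER_RESTART. A condensed sketch of the tx side (kernel-style C, not standalone; the message handling is elided to a comment):

/* Sketch only: soft-hrtimer replacing the tx tasklet, following the
 * hunks above.
 */
static enum hrtimer_restart bcm_tx_timeout_sketch(struct hrtimer *hrtimer)
{
	struct bcm_op *op = container_of(hrtimer, struct bcm_op, timer);

	/* ... count handling and bcm_can_tx(op) run right here now,
	 * instead of being deferred to a tasklet ...
	 */

	return bcm_tx_set_expiry(op, &op->timer) ?
	       HRTIMER_RESTART : HRTIMER_NORESTART;
}

static void bcm_tx_op_setup_sketch(struct bcm_op *op)
{
	hrtimer_init(&op->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
	op->timer.function = bcm_tx_timeout_sketch;
}

bcm_remove_op() shrinks accordingly: with no tasklets left, cancelling the two hrtimers is sufficient.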
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/core/dev.c linux-4.14/net/core/dev.c
+--- linux-4.14.orig/net/core/dev.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/core/dev.c 2018-09-05 11:05:07.000000000 +0200
+@@ -195,6 +195,7 @@
static DEFINE_READ_MOSTLY_HASHTABLE(napi_hash, 8);
static seqcount_t devnet_rename_seq;
static inline void dev_base_seq_inc(struct net *net)
{
-@@ -211,14 +212,14 @@ static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)
+@@ -217,14 +218,14 @@
static inline void rps_lock(struct softnet_data *sd)
{
#ifdef CONFIG_RPS
#endif
}
-@@ -888,7 +889,8 @@ int netdev_get_name(struct net *net, char *name, int ifindex)
+@@ -920,7 +921,8 @@
strcpy(name, dev->name);
rcu_read_unlock();
if (read_seqcount_retry(&devnet_rename_seq, seq)) {
goto retry;
}
-@@ -1157,20 +1159,17 @@ int dev_change_name(struct net_device *dev, const char *newname)
+@@ -1189,20 +1191,17 @@
if (dev->flags & IFF_UP)
return -EBUSY;
if (oldname[0] && !strchr(oldname, '%'))
netdev_info(dev, "renamed from %s\n", oldname);
-@@ -1183,11 +1182,12 @@ int dev_change_name(struct net_device *dev, const char *newname)
+@@ -1215,11 +1214,12 @@
if (ret) {
memcpy(dev->name, oldname, IFNAMSIZ);
dev->name_assign_type = old_assign_type;
netdev_adjacent_rename_links(dev, oldname);
-@@ -1208,7 +1208,8 @@ int dev_change_name(struct net_device *dev, const char *newname)
+@@ -1240,7 +1240,8 @@
/* err >= 0 after dev_alloc_name() or stores the first errno */
if (err >= 0) {
err = ret;
memcpy(dev->name, oldname, IFNAMSIZ);
memcpy(oldname, newname, IFNAMSIZ);
dev->name_assign_type = old_assign_type;
-@@ -1221,6 +1222,11 @@ int dev_change_name(struct net_device *dev, const char *newname)
+@@ -1253,6 +1254,11 @@
}
return err;
}
/**
-@@ -2263,6 +2269,7 @@ static void __netif_reschedule(struct Qdisc *q)
+@@ -2438,6 +2444,7 @@
sd->output_queue_tailp = &q->next_sched;
raise_softirq_irqoff(NET_TX_SOFTIRQ);
local_irq_restore(flags);
}
void __netif_schedule(struct Qdisc *q)
-@@ -2344,6 +2351,7 @@ void __dev_kfree_skb_irq(struct sk_buff *skb, enum skb_free_reason reason)
+@@ -2500,6 +2507,7 @@
__this_cpu_write(softnet_data.completion_queue, skb);
raise_softirq_irqoff(NET_TX_SOFTIRQ);
local_irq_restore(flags);
}
EXPORT_SYMBOL(__dev_kfree_skb_irq);
-@@ -3078,7 +3086,11 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
+@@ -3175,7 +3183,11 @@
* This permits qdisc->running owner to get the lock more
* often and dequeue packets faster.
*/
if (unlikely(contended))
spin_lock(&q->busylock);
-@@ -3141,8 +3153,10 @@ static void skb_update_prio(struct sk_buff *skb)
+@@ -3246,8 +3258,10 @@
#define skb_update_prio(skb)
#endif
/**
* dev_loopback_xmit - loop back @skb
-@@ -3376,8 +3390,7 @@ static int __dev_queue_xmit(struct sk_buff *skb, void *accel_priv)
+@@ -3487,9 +3501,12 @@
+ if (dev->flags & IFF_UP) {
int cpu = smp_processor_id(); /* ok because BHs are off */
++#ifdef CONFIG_PREEMPT_RT_FULL
++ if (txq->xmit_lock_owner != current) {
++#else
if (txq->xmit_lock_owner != cpu) {
- if (unlikely(__this_cpu_read(xmit_recursion) >
- XMIT_RECURSION_LIMIT))
++#endif
+ if (unlikely(xmit_rec_read() > XMIT_RECURSION_LIMIT))
goto recursion_alert;
skb = validate_xmit_skb(skb, dev);
-@@ -3387,9 +3400,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, void *accel_priv)
+@@ -3499,9 +3516,9 @@
HARD_TX_LOCK(dev, txq, cpu);
if (!netif_xmit_stopped(txq)) {
if (dev_xmit_complete(rc)) {
HARD_TX_UNLOCK(dev, txq);
goto out;
-@@ -3763,6 +3776,7 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
+@@ -3882,6 +3899,7 @@
rps_unlock(sd);
local_irq_restore(flags);
atomic_long_inc(&skb->dev->rx_dropped);
kfree_skb(skb);
-@@ -3781,7 +3795,7 @@ static int netif_rx_internal(struct sk_buff *skb)
+@@ -4034,7 +4052,7 @@
struct rps_dev_flow voidflow, *rflow = &voidflow;
int cpu;
rcu_read_lock();
cpu = get_rps_cpu(skb->dev, skb, &rflow);
-@@ -3791,13 +3805,13 @@ static int netif_rx_internal(struct sk_buff *skb)
+@@ -4044,14 +4062,14 @@
ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
rcu_read_unlock();
#endif
{
unsigned int qtail;
+
- ret = enqueue_to_backlog(skb, get_cpu(), &qtail);
- put_cpu();
+ ret = enqueue_to_backlog(skb, get_cpu_light(), &qtail);
}
return ret;
}
-@@ -3831,11 +3845,9 @@ int netif_rx_ni(struct sk_buff *skb)
+@@ -4085,11 +4103,9 @@
trace_netif_rx_ni_entry(skb);
return err;
}
-@@ -4314,7 +4326,7 @@ static void flush_backlog(struct work_struct *work)
+@@ -4607,7 +4623,7 @@
skb_queue_walk_safe(&sd->input_pkt_queue, skb, tmp) {
if (skb->dev->reg_state == NETREG_UNREGISTERING) {
__skb_unlink(skb, &sd->input_pkt_queue);
input_queue_head_incr(sd);
}
}
-@@ -4324,11 +4336,14 @@ static void flush_backlog(struct work_struct *work)
+@@ -4617,11 +4633,14 @@
skb_queue_walk_safe(&sd->process_queue, skb, tmp) {
if (skb->dev->reg_state == NETREG_UNREGISTERING) {
__skb_unlink(skb, &sd->process_queue);
}
static void flush_all_backlogs(void)
-@@ -4809,6 +4824,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
+@@ -5131,12 +5150,14 @@
sd->rps_ipi_list = NULL;
local_irq_enable();
+ preempt_check_resched_rt();
/* Send pending IPI's to kick RPS processing on remote cpus. */
- while (remsd) {
-@@ -4822,6 +4838,7 @@ static void net_rps_action_and_irq_enable(struct softnet_data *sd)
+ net_rps_send_ipi(remsd);
} else
#endif
local_irq_enable();
}
static bool sd_has_rps_ipi_waiting(struct softnet_data *sd)
-@@ -4851,7 +4868,9 @@ static int process_backlog(struct napi_struct *napi, int quota)
+@@ -5166,7 +5187,9 @@
while (again) {
struct sk_buff *skb;
rcu_read_lock();
__netif_receive_skb(skb);
rcu_read_unlock();
-@@ -4859,9 +4878,9 @@ static int process_backlog(struct napi_struct *napi, int quota)
+@@ -5174,9 +5197,9 @@
if (++work >= quota)
return work;
rps_lock(sd);
if (skb_queue_empty(&sd->input_pkt_queue)) {
/*
-@@ -4899,9 +4918,11 @@ void __napi_schedule(struct napi_struct *n)
+@@ -5214,6 +5237,7 @@
local_irq_save(flags);
____napi_schedule(this_cpu_ptr(&softnet_data), n);
local_irq_restore(flags);
}
EXPORT_SYMBOL(__napi_schedule);
+@@ -5250,6 +5274,7 @@
+ }
+ EXPORT_SYMBOL(napi_schedule_prep);
+
+#ifndef CONFIG_PREEMPT_RT_FULL
/**
* __napi_schedule_irqoff - schedule for receive
* @n: entry to schedule
-@@ -4913,6 +4934,7 @@ void __napi_schedule_irqoff(struct napi_struct *n)
+@@ -5261,6 +5286,7 @@
____napi_schedule(this_cpu_ptr(&softnet_data), n);
}
EXPORT_SYMBOL(__napi_schedule_irqoff);
+#endif
- void __napi_complete(struct napi_struct *n)
+ bool napi_complete_done(struct napi_struct *n, int work_done)
{
-@@ -5202,13 +5224,21 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
- struct softnet_data *sd = this_cpu_ptr(&softnet_data);
- unsigned long time_limit = jiffies + 2;
+@@ -5615,13 +5641,21 @@
+ unsigned long time_limit = jiffies +
+ usecs_to_jiffies(netdev_budget_usecs);
int budget = netdev_budget;
+ struct sk_buff_head tofree_q;
+ struct sk_buff *skb;
for (;;) {
struct napi_struct *n;
-@@ -5239,7 +5269,7 @@ static __latent_entropy void net_rx_action(struct softirq_action *h)
+@@ -5651,7 +5685,7 @@
list_splice_tail(&repoll, &list);
list_splice(&list, &sd->poll_list);
if (!list_empty(&sd->poll_list))
+ __raise_softirq_irqoff_ksoft(NET_RX_SOFTIRQ);
net_rps_action_and_irq_enable(sd);
- }
-@@ -8000,16 +8030,20 @@ static int dev_cpu_callback(struct notifier_block *nfb,
+ out:
+@@ -7478,7 +7512,7 @@
+ /* Initialize queue lock */
+ spin_lock_init(&queue->_xmit_lock);
+ netdev_set_xmit_lockdep_class(&queue->_xmit_lock, dev->type);
+- queue->xmit_lock_owner = -1;
++ netdev_queue_clear_owner(queue);
+ netdev_queue_numa_node_write(queue, NUMA_NO_NODE);
+ queue->dev = dev;
+ #ifdef CONFIG_BQL
+@@ -8418,6 +8452,7 @@
raise_softirq_irqoff(NET_TX_SOFTIRQ);
local_irq_enable();
+ preempt_check_resched_rt();
- /* Process offline CPU's input_pkt_queue */
- while ((skb = __skb_dequeue(&oldsd->process_queue))) {
+ #ifdef CONFIG_RPS
+ remsd = oldsd->rps_ipi_list;
+@@ -8431,10 +8466,13 @@
netif_rx_ni(skb);
input_queue_head_incr(oldsd);
}
+ kfree_skb(skb);
+ }
- return NOTIFY_OK;
+ return 0;
}
-@@ -8314,8 +8348,9 @@ static int __init net_dev_init(void)
+@@ -8738,8 +8776,9 @@
INIT_WORK(flush, flush_backlog);
INIT_LIST_HEAD(&sd->poll_list);
sd->output_queue_tailp = &sd->output_queue;
#ifdef CONFIG_RPS
-diff --git a/net/core/filter.c b/net/core/filter.c
-index b391209838ef..b86e9681a88e 100644
---- a/net/core/filter.c
-+++ b/net/core/filter.c
-@@ -1645,7 +1645,7 @@ static inline int __bpf_tx_skb(struct net_device *dev, struct sk_buff *skb)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/core/filter.c linux-4.14/net/core/filter.c
+--- linux-4.14.orig/net/core/filter.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/core/filter.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1696,7 +1696,7 @@
{
int ret;
net_crit_ratelimited("bpf: recursion limit reached on datapath, buggy bpf program?\n");
kfree_skb(skb);
return -ENETDOWN;
-@@ -1653,9 +1653,9 @@ static inline int __bpf_tx_skb(struct net_device *dev, struct sk_buff *skb)
+@@ -1704,9 +1704,9 @@
skb->dev = dev;
return ret;
}
-diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c
-index cad8e791f28e..2a9364fe62a5 100644
---- a/net/core/gen_estimator.c
-+++ b/net/core/gen_estimator.c
-@@ -84,7 +84,7 @@ struct gen_estimator
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/core/gen_estimator.c linux-4.14/net/core/gen_estimator.c
+--- linux-4.14.orig/net/core/gen_estimator.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/core/gen_estimator.c 2018-09-05 11:05:07.000000000 +0200
+@@ -46,7 +46,7 @@
+ struct net_rate_estimator {
struct gnet_stats_basic_packed *bstats;
- struct gnet_stats_rate_est64 *rate_est;
spinlock_t *stats_lock;
- seqcount_t *running;
+ net_seqlock_t *running;
- int ewma_log;
- u32 last_packets;
- unsigned long avpps;
-@@ -213,7 +213,7 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
+ struct gnet_stats_basic_cpu __percpu *cpu_bstats;
+ u8 ewma_log;
+ u8 intvl_log; /* period : (250ms << intvl_log) */
+@@ -129,7 +129,7 @@
struct gnet_stats_basic_cpu __percpu *cpu_bstats,
- struct gnet_stats_rate_est64 *rate_est,
+ struct net_rate_estimator __rcu **rate_est,
spinlock_t *stats_lock,
- seqcount_t *running,
+ net_seqlock_t *running,
struct nlattr *opt)
{
- struct gen_estimator *est;
-@@ -309,7 +309,7 @@ int gen_replace_estimator(struct gnet_stats_basic_packed *bstats,
+ struct gnet_estimator *parm = nla_data(opt);
+@@ -222,7 +222,7 @@
struct gnet_stats_basic_cpu __percpu *cpu_bstats,
- struct gnet_stats_rate_est64 *rate_est,
+ struct net_rate_estimator __rcu **rate_est,
spinlock_t *stats_lock,
- seqcount_t *running, struct nlattr *opt)
+ net_seqlock_t *running, struct nlattr *opt)
{
- gen_kill_estimator(bstats, rate_est);
- return gen_new_estimator(bstats, cpu_bstats, rate_est, stats_lock, running, opt);
-diff --git a/net/core/gen_stats.c b/net/core/gen_stats.c
-index 508e051304fb..bc3b17b78c94 100644
---- a/net/core/gen_stats.c
-+++ b/net/core/gen_stats.c
-@@ -130,7 +130,7 @@ __gnet_stats_copy_basic_cpu(struct gnet_stats_basic_packed *bstats,
+ return gen_new_estimator(bstats, cpu_bstats, rate_est,
+ stats_lock, running, opt);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/core/gen_stats.c linux-4.14/net/core/gen_stats.c
+--- linux-4.14.orig/net/core/gen_stats.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/core/gen_stats.c 2018-09-05 11:05:07.000000000 +0200
+@@ -142,7 +142,7 @@
}
void
struct gnet_stats_basic_packed *bstats,
struct gnet_stats_basic_cpu __percpu *cpu,
struct gnet_stats_basic_packed *b)
-@@ -143,10 +143,10 @@ __gnet_stats_copy_basic(const seqcount_t *running,
+@@ -155,10 +155,10 @@
}
do {
if (running)
}
EXPORT_SYMBOL(__gnet_stats_copy_basic);
-@@ -164,7 +164,7 @@ EXPORT_SYMBOL(__gnet_stats_copy_basic);
+@@ -176,7 +176,7 @@
* if the room in the socket buffer was not sufficient.
*/
int
struct gnet_dump *d,
struct gnet_stats_basic_cpu __percpu *cpu,
struct gnet_stats_basic_packed *b)
-diff --git a/net/core/skbuff.c b/net/core/skbuff.c
-index 1e3e0087245b..1077b39db717 100644
---- a/net/core/skbuff.c
-+++ b/net/core/skbuff.c
-@@ -64,6 +64,7 @@
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/core/pktgen.c linux-4.14/net/core/pktgen.c
+--- linux-4.14.orig/net/core/pktgen.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/net/core/pktgen.c 2018-09-05 11:05:07.000000000 +0200
+@@ -2252,7 +2252,8 @@
+ s64 remaining;
+ struct hrtimer_sleeper t;
+
+- hrtimer_init_on_stack(&t.timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
++ hrtimer_init_sleeper_on_stack(&t, CLOCK_MONOTONIC, HRTIMER_MODE_ABS,
++ current);
+ hrtimer_set_expires(&t.timer, spin_until);
+
+ remaining = ktime_to_ns(hrtimer_expires_remaining(&t.timer));
+@@ -2267,7 +2268,6 @@
+ } while (ktime_compare(end_time, spin_until) < 0);
+ } else {
+ /* see do_nanosleep */
+- hrtimer_init_sleeper(&t, current);
+ do {
+ set_current_state(TASK_INTERRUPTIBLE);
+ hrtimer_start_expires(&t.timer, HRTIMER_MODE_ABS);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/core/skbuff.c linux-4.14/net/core/skbuff.c
+--- linux-4.14.orig/net/core/skbuff.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/core/skbuff.c 2018-09-05 11:05:07.000000000 +0200
+@@ -63,6 +63,7 @@
#include <linux/errqueue.h>
#include <linux/prefetch.h>
#include <linux/if_vlan.h>
#include <net/protocol.h>
#include <net/dst.h>
-@@ -360,6 +361,8 @@ struct napi_alloc_cache {
+@@ -330,6 +331,8 @@
static DEFINE_PER_CPU(struct page_frag_cache, netdev_alloc_cache);
static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);
static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
{
-@@ -367,10 +370,10 @@ static void *__netdev_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
+@@ -337,10 +340,10 @@
unsigned long flags;
void *data;
- local_irq_save(flags);
+ local_lock_irqsave(netdev_alloc_lock, flags);
nc = this_cpu_ptr(&netdev_alloc_cache);
- data = __alloc_page_frag(nc, fragsz, gfp_mask);
+ data = page_frag_alloc(nc, fragsz, gfp_mask);
- local_irq_restore(flags);
+ local_unlock_irqrestore(netdev_alloc_lock, flags);
return data;
}
-@@ -389,9 +392,13 @@ EXPORT_SYMBOL(netdev_alloc_frag);
+@@ -359,9 +362,13 @@
static void *__napi_alloc_frag(unsigned int fragsz, gfp_t gfp_mask)
{
+ struct napi_alloc_cache *nc;
+ void *data;
-- return __alloc_page_frag(&nc->page, fragsz, gfp_mask);
+- return page_frag_alloc(&nc->page, fragsz, gfp_mask);
+ nc = &get_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
-+ data = __alloc_page_frag(&nc->page, fragsz, gfp_mask);
++ data = page_frag_alloc(&nc->page, fragsz, gfp_mask);
+ put_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
+ return data;
}
void *napi_alloc_frag(unsigned int fragsz)
-@@ -438,13 +445,13 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
+@@ -408,13 +415,13 @@
if (sk_memalloc_socks())
gfp_mask |= __GFP_MEMALLOC;
+ local_lock_irqsave(netdev_alloc_lock, flags);
nc = this_cpu_ptr(&netdev_alloc_cache);
- data = __alloc_page_frag(nc, len, gfp_mask);
+ data = page_frag_alloc(nc, len, gfp_mask);
pfmemalloc = nc->pfmemalloc;
- local_irq_restore(flags);
if (unlikely(!data))
return NULL;
-@@ -485,9 +492,10 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
+@@ -455,9 +462,10 @@
struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
gfp_t gfp_mask)
{
len += NET_SKB_PAD + NET_IP_ALIGN;
-@@ -505,7 +513,10 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
+@@ -475,7 +483,10 @@
if (sk_memalloc_socks())
gfp_mask |= __GFP_MEMALLOC;
+ nc = &get_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
- data = __alloc_page_frag(&nc->page, len, gfp_mask);
+ data = page_frag_alloc(&nc->page, len, gfp_mask);
+ pfmemalloc = nc->page.pfmemalloc;
+ put_locked_var(napi_alloc_cache_lock, napi_alloc_cache);
if (unlikely(!data))
return NULL;
-@@ -516,7 +527,7 @@ struct sk_buff *__napi_alloc_skb(struct napi_struct *napi, unsigned int len,
+@@ -486,7 +497,7 @@
}
/* use OR instead of assignment to avoid clearing of bits in mask */
skb->pfmemalloc = 1;
skb->head_frag = 1;
-@@ -760,23 +771,26 @@ EXPORT_SYMBOL(consume_skb);
+@@ -718,23 +729,26 @@
void __kfree_skb_flush(void)
{
/* record skb to CPU local list */
nc->skb_cache[nc->skb_count++] = skb;
-@@ -791,6 +805,7 @@ static inline void _kfree_skb_defer(struct sk_buff *skb)
+@@ -749,6 +763,7 @@
nc->skb_cache);
nc->skb_count = 0;
}
}
void __kfree_skb_defer(struct sk_buff *skb)
{
-diff --git a/net/core/sock.c b/net/core/sock.c
-index bc6543f7de36..2c32ee79620f 100644
---- a/net/core/sock.c
-+++ b/net/core/sock.c
-@@ -2488,12 +2488,11 @@ void lock_sock_nested(struct sock *sk, int subclass)
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/core/sock.c linux-4.14/net/core/sock.c
+--- linux-4.14.orig/net/core/sock.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/core/sock.c 2018-09-05 11:05:07.000000000 +0200
+@@ -2757,12 +2757,11 @@
if (sk->sk_lock.owned)
__lock_sock(sk);
sk->sk_lock.owned = 1;
}
EXPORT_SYMBOL(lock_sock_nested);
-diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
-index 48734ee6293f..e6864ff11352 100644
---- a/net/ipv4/icmp.c
-+++ b/net/ipv4/icmp.c
-@@ -69,6 +69,7 @@
- #include <linux/jiffies.h>
- #include <linux/kernel.h>
- #include <linux/fcntl.h>
-+#include <linux/sysrq.h>
- #include <linux/socket.h>
- #include <linux/in.h>
- #include <linux/inet.h>
-@@ -77,6 +78,7 @@
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/ipv4/icmp.c linux-4.14/net/ipv4/icmp.c
+--- linux-4.14.orig/net/ipv4/icmp.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/ipv4/icmp.c 2018-09-05 11:05:07.000000000 +0200
+@@ -77,6 +77,7 @@
#include <linux/string.h>
#include <linux/netfilter_ipv4.h>
#include <linux/slab.h>
#include <net/snmp.h>
#include <net/ip.h>
#include <net/route.h>
-@@ -204,6 +206,8 @@ static const struct icmp_control icmp_pointers[NR_ICMP_TYPES+1];
+@@ -204,6 +205,8 @@
*
* On SMP we have one ICMP socket per-cpu.
*/
static struct sock *icmp_sk(struct net *net)
{
return *this_cpu_ptr(net->ipv4.icmp_sk);
-@@ -215,12 +219,14 @@ static inline struct sock *icmp_xmit_lock(struct net *net)
-
- local_bh_disable();
+@@ -214,12 +217,16 @@
+ {
+ struct sock *sk;
-+ local_lock(icmp_sk_lock);
++ if (!local_trylock(icmp_sk_lock))
++ return NULL;
++
sk = icmp_sk(net);
if (unlikely(!spin_trylock(&sk->sk_lock.slock))) {
* dst_link_failure() for an outgoing ICMP packet.
*/
+ local_unlock(icmp_sk_lock);
- local_bh_enable();
return NULL;
}
-@@ -230,6 +236,7 @@ static inline struct sock *icmp_xmit_lock(struct net *net)
+ return sk;
+@@ -228,6 +235,7 @@
static inline void icmp_xmit_unlock(struct sock *sk)
{
- spin_unlock_bh(&sk->sk_lock.slock);
+ spin_unlock(&sk->sk_lock.slock);
+ local_unlock(icmp_sk_lock);
}
int sysctl_icmp_msgs_per_sec __read_mostly = 1000;
-@@ -358,6 +365,7 @@ static void icmp_push_reply(struct icmp_bxm *icmp_param,
- struct sock *sk;
- struct sk_buff *skb;
-
-+ local_lock(icmp_sk_lock);
- sk = icmp_sk(dev_net((*rt)->dst.dev));
- if (ip_append_data(sk, fl4, icmp_glue_bits, icmp_param,
- icmp_param->data_len+icmp_param->head_len,
-@@ -380,6 +388,7 @@ static void icmp_push_reply(struct icmp_bxm *icmp_param,
- skb->ip_summed = CHECKSUM_NONE;
- ip_push_pending_frames(sk, fl4);
- }
-+ local_unlock(icmp_sk_lock);
- }
-
- /*
-@@ -891,6 +900,30 @@ static bool icmp_redirect(struct sk_buff *skb)
- }
-
- /*
-+ * 32bit and 64bit have different timestamp length, so we check for
-+ * the cookie at offset 20 and verify it is repeated at offset 50
-+ */
-+#define CO_POS0 20
-+#define CO_POS1 50
-+#define CO_SIZE sizeof(int)
-+#define ICMP_SYSRQ_SIZE 57
-+
-+/*
-+ * We got a ICMP_SYSRQ_SIZE sized ping request. Check for the cookie
-+ * pattern and if it matches send the next byte as a trigger to sysrq.
-+ */
-+static void icmp_check_sysrq(struct net *net, struct sk_buff *skb)
-+{
-+ int cookie = htonl(net->ipv4.sysctl_icmp_echo_sysrq);
-+ char *p = skb->data;
-+
-+ if (!memcmp(&cookie, p + CO_POS0, CO_SIZE) &&
-+ !memcmp(&cookie, p + CO_POS1, CO_SIZE) &&
-+ p[CO_POS0 + CO_SIZE] == p[CO_POS1 + CO_SIZE])
-+ handle_sysrq(p[CO_POS0 + CO_SIZE]);
-+}
-+
-+/*
- * Handle ICMP_ECHO ("ping") requests.
- *
- * RFC 1122: 3.2.2.6 MUST have an echo server that answers ICMP echo
-@@ -917,6 +950,11 @@ static bool icmp_echo(struct sk_buff *skb)
- icmp_param.data_len = skb->len;
- icmp_param.head_len = sizeof(struct icmphdr);
- icmp_reply(&icmp_param, skb);
-+
-+ if (skb->len == ICMP_SYSRQ_SIZE &&
-+ net->ipv4.sysctl_icmp_echo_sysrq) {
-+ icmp_check_sysrq(net, skb);
-+ }
- }
- /* should there be an ICMP stat for ignored echos? */
- return true;
-diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
-index 80bc36b25de2..215b90adfb05 100644
---- a/net/ipv4/sysctl_net_ipv4.c
-+++ b/net/ipv4/sysctl_net_ipv4.c
-@@ -681,6 +681,13 @@ static struct ctl_table ipv4_net_table[] = {
- .proc_handler = proc_dointvec
- },
- {
-+ .procname = "icmp_echo_sysrq",
-+ .data = &init_net.ipv4.sysctl_icmp_echo_sysrq,
-+ .maxlen = sizeof(int),
-+ .mode = 0644,
-+ .proc_handler = proc_dointvec
-+ },
-+ {
- .procname = "icmp_ignore_bogus_error_responses",
- .data = &init_net.ipv4.sysctl_icmp_ignore_bogus_error_responses,
- .maxlen = sizeof(int),
-diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
-index 2259114c7242..829e60985a81 100644
---- a/net/ipv4/tcp_ipv4.c
-+++ b/net/ipv4/tcp_ipv4.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/ipv4/tcp_ipv4.c linux-4.14/net/ipv4/tcp_ipv4.c
+--- linux-4.14.orig/net/ipv4/tcp_ipv4.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/ipv4/tcp_ipv4.c 2018-09-05 11:05:07.000000000 +0200
@@ -62,6 +62,7 @@
#include <linux/init.h>
#include <linux/times.h>
#include <net/net_namespace.h>
#include <net/icmp.h>
-@@ -564,6 +565,7 @@ void tcp_v4_send_check(struct sock *sk, struct sk_buff *skb)
+@@ -580,6 +581,7 @@
}
EXPORT_SYMBOL(tcp_v4_send_check);
/*
* This routine will send an RST to the other tcp.
*
-@@ -691,6 +693,8 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
- offsetof(struct inet_timewait_sock, tw_bound_dev_if));
-
+@@ -710,6 +712,7 @@
arg.tos = ip_hdr(skb)->tos;
-+
-+ local_lock(tcp_sk_lock);
+ arg.uid = sock_net_uid(net, sk && sk_fullsock(sk) ? sk : NULL);
local_bh_disable();
++ local_lock(tcp_sk_lock);
ip_send_unicast_reply(*this_cpu_ptr(net->ipv4.tcp_sk),
skb, &TCP_SKB_CB(skb)->header.h4.opt,
-@@ -700,6 +704,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
+@@ -717,6 +720,7 @@
+
__TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
__TCP_INC_STATS(net, TCP_MIB_OUTRSTS);
- local_bh_enable();
+ local_unlock(tcp_sk_lock);
+ local_bh_enable();
#ifdef CONFIG_TCP_MD5SIG
- out:
-@@ -775,6 +780,7 @@ static void tcp_v4_send_ack(struct net *net,
- if (oif)
- arg.bound_dev_if = oif;
+@@ -796,12 +800,14 @@
arg.tos = tos;
-+ local_lock(tcp_sk_lock);
+ arg.uid = sock_net_uid(net, sk_fullsock(sk) ? sk : NULL);
local_bh_disable();
++ local_lock(tcp_sk_lock);
ip_send_unicast_reply(*this_cpu_ptr(net->ipv4.tcp_sk),
skb, &TCP_SKB_CB(skb)->header.h4.opt,
-@@ -783,6 +789,7 @@ static void tcp_v4_send_ack(struct net *net,
+ ip_hdr(skb)->saddr, ip_hdr(skb)->daddr,
+ &arg, arg.iov[0].iov_len);
__TCP_INC_STATS(net, TCP_MIB_OUTSEGS);
- local_bh_enable();
+ local_unlock(tcp_sk_lock);
+ local_bh_enable();
}
- static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
-diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
-index 2384b4aae064..bf7ab51d7035 100644
---- a/net/mac80211/rx.c
-+++ b/net/mac80211/rx.c
-@@ -4166,7 +4166,7 @@ void ieee80211_rx_napi(struct ieee80211_hw *hw, struct ieee80211_sta *pubsta,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/Kconfig linux-4.14/net/Kconfig
+--- linux-4.14.orig/net/Kconfig 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/net/Kconfig 2018-09-05 11:05:07.000000000 +0200
+@@ -272,7 +272,7 @@
+
+ config NET_RX_BUSY_POLL
+ bool
+- default y
++ default y if !PREEMPT_RT_FULL
+
+ config BQL
+ bool
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/mac80211/rx.c linux-4.14/net/mac80211/rx.c
+--- linux-4.14.orig/net/mac80211/rx.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/mac80211/rx.c 2018-09-05 11:05:07.000000000 +0200
+@@ -4252,7 +4252,7 @@
struct ieee80211_supported_band *sband;
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
if (WARN_ON(status->band >= NUM_NL80211_BANDS))
goto drop;
-diff --git a/net/netfilter/core.c b/net/netfilter/core.c
-index 004af030ef1a..b64f751bda45 100644
---- a/net/netfilter/core.c
-+++ b/net/netfilter/core.c
-@@ -22,12 +22,18 @@
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/netfilter/core.c linux-4.14/net/netfilter/core.c
+--- linux-4.14.orig/net/netfilter/core.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/net/netfilter/core.c 2018-09-05 11:05:07.000000000 +0200
+@@ -21,6 +21,7 @@
+ #include <linux/inetdevice.h>
#include <linux/proc_fs.h>
#include <linux/mutex.h>
- #include <linux/slab.h>
+#include <linux/locallock.h>
+ #include <linux/mm.h>
#include <linux/rcupdate.h>
#include <net/net_namespace.h>
- #include <net/sock.h>
+@@ -28,6 +29,11 @@
#include "nf_internals.h"
static DEFINE_MUTEX(afinfo_mutex);
const struct nf_afinfo __rcu *nf_afinfo[NFPROTO_NUMPROTO] __read_mostly;
-diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
-index dd2332390c45..f6a703b25b6c 100644
---- a/net/packet/af_packet.c
-+++ b/net/packet/af_packet.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/packet/af_packet.c linux-4.14/net/packet/af_packet.c
+--- linux-4.14.orig/net/packet/af_packet.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/packet/af_packet.c 2018-09-05 11:05:07.000000000 +0200
@@ -63,6 +63,7 @@
#include <linux/if_packet.h>
#include <linux/wireless.h>
#include <linux/kmod.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
-@@ -694,7 +695,7 @@ static void prb_retire_rx_blk_timer_expired(unsigned long data)
+@@ -707,7 +708,7 @@
if (BLOCK_NUM_PKTS(pbd)) {
while (atomic_read(&pkc->blk_fill_in_prog)) {
/* Waiting for skb_copy_bits to finish... */
}
}
-@@ -956,7 +957,7 @@ static void prb_retire_current_block(struct tpacket_kbdq_core *pkc,
+@@ -969,7 +970,7 @@
if (!(status & TP_STATUS_BLK_TMO)) {
while (atomic_read(&pkc->blk_fill_in_prog)) {
/* Waiting for skb_copy_bits to finish... */
}
}
prb_close_block(pkc, pbd, po, status);
-diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
-index 977f69886c00..f3e7a36b0396 100644
---- a/net/rds/ib_rdma.c
-+++ b/net/rds/ib_rdma.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/rds/ib_rdma.c linux-4.14/net/rds/ib_rdma.c
+--- linux-4.14.orig/net/rds/ib_rdma.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/net/rds/ib_rdma.c 2018-09-05 11:05:07.000000000 +0200
@@ -34,6 +34,7 @@
#include <linux/slab.h>
#include <linux/rculist.h>
#include "rds_single_path.h"
#include "ib_mr.h"
-@@ -210,7 +211,7 @@ static inline void wait_clean_list_grace(void)
+@@ -210,7 +211,7 @@
for_each_online_cpu(cpu) {
flag = &per_cpu(clean_list_grace, cpu);
while (test_bit(CLEAN_LIST_BUSY_BIT, flag))
}
}
-diff --git a/net/rxrpc/security.c b/net/rxrpc/security.c
-index 7d921e56e715..13df56a738e5 100644
---- a/net/rxrpc/security.c
-+++ b/net/rxrpc/security.c
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/rxrpc/security.c linux-4.14/net/rxrpc/security.c
+--- linux-4.14.orig/net/rxrpc/security.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/net/rxrpc/security.c 2018-09-05 11:05:07.000000000 +0200
@@ -19,9 +19,6 @@
#include <keys/rxrpc-type.h>
#include "ar-internal.h"
static const struct rxrpc_security *rxrpc_security_types[] = {
[RXRPC_SECURITY_NONE] = &rxrpc_no_security,
#ifdef CONFIG_RXKAD
-diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
-index 206dc24add3a..00ea9bde5bb3 100644
---- a/net/sched/sch_api.c
-+++ b/net/sched/sch_api.c
-@@ -981,7 +981,7 @@ static struct Qdisc *qdisc_create(struct net_device *dev,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/sched/sch_api.c linux-4.14/net/sched/sch_api.c
+--- linux-4.14.orig/net/sched/sch_api.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/net/sched/sch_api.c 2018-09-05 11:05:07.000000000 +0200
+@@ -1081,7 +1081,7 @@
rcu_assign_pointer(sch->stab, stab);
}
if (tca[TCA_RATE]) {
err = -EOPNOTSUPP;
if (sch->flags & TCQ_F_MQROOT)
-diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
-index 6cfb6e9038c2..20727e1347de 100644
---- a/net/sched/sch_generic.c
-+++ b/net/sched/sch_generic.c
-@@ -425,7 +425,11 @@ struct Qdisc noop_qdisc = {
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/sched/sch_generic.c linux-4.14/net/sched/sch_generic.c
+--- linux-4.14.orig/net/sched/sch_generic.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/sched/sch_generic.c 2018-09-05 11:05:07.000000000 +0200
+@@ -429,7 +429,11 @@
.ops = &noop_qdisc_ops,
.q.lock = __SPIN_LOCK_UNLOCKED(noop_qdisc.q.lock),
.dev_queue = &noop_netdev_queue,
.busylock = __SPIN_LOCK_UNLOCKED(noop_qdisc.busylock),
};
EXPORT_SYMBOL(noop_qdisc);
-@@ -624,9 +628,17 @@ struct Qdisc *qdisc_alloc(struct netdev_queue *dev_queue,
+@@ -628,9 +632,17 @@
lockdep_set_class(&sch->busylock,
dev->qdisc_tx_busylock ?: &qdisc_tx_busylock);
sch->ops = ops;
sch->enqueue = ops->enqueue;
-@@ -925,7 +937,7 @@ void dev_deactivate_many(struct list_head *head)
+@@ -933,7 +945,7 @@
/* Wait for outstanding qdisc_run calls. */
- list_for_each_entry(dev, head, close_list)
+ list_for_each_entry(dev, head, close_list) {
while (some_qdisc_is_busy(dev))
- yield();
+ msleep(1);
- }
-
- void dev_deactivate(struct net_device *dev)
-diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
-index 9c9db55a0c1e..e6583b018a72 100644
---- a/net/sunrpc/svc_xprt.c
-+++ b/net/sunrpc/svc_xprt.c
-@@ -396,7 +396,7 @@ void svc_xprt_do_enqueue(struct svc_xprt *xprt)
+ /* The new qdisc is assigned at this point so we can safely
+ * unwind stale skb lists and qdisc statistics
+ */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/sunrpc/svc_xprt.c linux-4.14/net/sunrpc/svc_xprt.c
+--- linux-4.14.orig/net/sunrpc/svc_xprt.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/net/sunrpc/svc_xprt.c 2018-09-05 11:05:07.000000000 +0200
+@@ -396,7 +396,7 @@
goto out;
}
pool = svc_pool_for_cpu(xprt->xpt_server, cpu);
atomic_long_inc(&pool->sp_stats.packets);
-@@ -432,7 +432,7 @@ void svc_xprt_do_enqueue(struct svc_xprt *xprt)
+@@ -432,7 +432,7 @@
atomic_long_inc(&pool->sp_stats.threads_woken);
wake_up_process(rqstp->rq_task);
goto out;
}
rcu_read_unlock();
-@@ -453,7 +453,7 @@ void svc_xprt_do_enqueue(struct svc_xprt *xprt)
+@@ -453,7 +453,7 @@
goto redo_search;
}
rqstp = NULL;
out:
trace_svc_xprt_do_enqueue(xprt, rqstp);
}
-diff --git a/scripts/mkcompile_h b/scripts/mkcompile_h
-index 6fdc97ef6023..523e0420d7f0 100755
---- a/scripts/mkcompile_h
-+++ b/scripts/mkcompile_h
-@@ -4,7 +4,8 @@ TARGET=$1
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/net/xfrm/xfrm_state.c linux-4.14/net/xfrm/xfrm_state.c
+--- linux-4.14.orig/net/xfrm/xfrm_state.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/net/xfrm/xfrm_state.c 2018-09-05 11:05:07.000000000 +0200
+@@ -427,7 +427,7 @@
+
+ static void xfrm_state_gc_destroy(struct xfrm_state *x)
+ {
+- tasklet_hrtimer_cancel(&x->mtimer);
++ hrtimer_cancel(&x->mtimer);
+ del_timer_sync(&x->rtimer);
+ kfree(x->aead);
+ kfree(x->aalg);
+@@ -472,8 +472,8 @@
+
+ static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me)
+ {
+- struct tasklet_hrtimer *thr = container_of(me, struct tasklet_hrtimer, timer);
+- struct xfrm_state *x = container_of(thr, struct xfrm_state, mtimer);
++ struct xfrm_state *x = container_of(me, struct xfrm_state, mtimer);
++ enum hrtimer_restart ret = HRTIMER_NORESTART;
+ unsigned long now = get_seconds();
+ long next = LONG_MAX;
+ int warn = 0;
+@@ -537,7 +537,8 @@
+ km_state_expired(x, 0, 0);
+ resched:
+ if (next != LONG_MAX) {
+- tasklet_hrtimer_start(&x->mtimer, ktime_set(next, 0), HRTIMER_MODE_REL);
++ hrtimer_forward_now(&x->mtimer, ktime_set(next, 0));
++ ret = HRTIMER_RESTART;
+ }
+
+ goto out;
+@@ -554,7 +555,7 @@
+
+ out:
+ spin_unlock(&x->lock);
+- return HRTIMER_NORESTART;
++ return ret;
+ }
+
+ static void xfrm_replay_timer_handler(unsigned long data);
+@@ -573,8 +574,8 @@
+ INIT_HLIST_NODE(&x->bydst);
+ INIT_HLIST_NODE(&x->bysrc);
+ INIT_HLIST_NODE(&x->byspi);
+- tasklet_hrtimer_init(&x->mtimer, xfrm_timer_handler,
+- CLOCK_BOOTTIME, HRTIMER_MODE_ABS);
++ hrtimer_init(&x->mtimer, CLOCK_BOOTTIME, HRTIMER_MODE_ABS_SOFT);
++ x->mtimer.function = xfrm_timer_handler;
+ setup_timer(&x->rtimer, xfrm_replay_timer_handler,
+ (unsigned long)x);
+ x->curlft.add_time = get_seconds();
+@@ -1031,7 +1032,9 @@
+ hlist_add_head_rcu(&x->byspi, net->xfrm.state_byspi + h);
+ }
+ x->lft.hard_add_expires_seconds = net->xfrm.sysctl_acq_expires;
+- tasklet_hrtimer_start(&x->mtimer, ktime_set(net->xfrm.sysctl_acq_expires, 0), HRTIMER_MODE_REL);
++ hrtimer_start(&x->mtimer,
++ ktime_set(net->xfrm.sysctl_acq_expires, 0),
++ HRTIMER_MODE_REL_SOFT);
+ net->xfrm.state_num++;
+ xfrm_hash_grow_check(net, x->bydst.next != NULL);
+ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+@@ -1142,7 +1145,7 @@
+ hlist_add_head_rcu(&x->byspi, net->xfrm.state_byspi + h);
+ }
+
+- tasklet_hrtimer_start(&x->mtimer, ktime_set(1, 0), HRTIMER_MODE_REL);
++ hrtimer_start(&x->mtimer, ktime_set(1, 0), HRTIMER_MODE_REL_SOFT);
+ if (x->replay_maxage)
+ mod_timer(&x->rtimer, jiffies + x->replay_maxage);
+
+@@ -1246,7 +1249,9 @@
+ x->mark.m = m->m;
+ x->lft.hard_add_expires_seconds = net->xfrm.sysctl_acq_expires;
+ xfrm_state_hold(x);
+- tasklet_hrtimer_start(&x->mtimer, ktime_set(net->xfrm.sysctl_acq_expires, 0), HRTIMER_MODE_REL);
++ hrtimer_start(&x->mtimer,
++ ktime_set(net->xfrm.sysctl_acq_expires, 0),
++ HRTIMER_MODE_REL_SOFT);
+ list_add(&x->km.all, &net->xfrm.state_all);
+ hlist_add_head_rcu(&x->bydst, net->xfrm.state_bydst + h);
+ h = xfrm_src_hash(net, daddr, saddr, family);
+@@ -1546,7 +1551,8 @@
+ memcpy(&x1->lft, &x->lft, sizeof(x1->lft));
+ x1->km.dying = 0;
+
+- tasklet_hrtimer_start(&x1->mtimer, ktime_set(1, 0), HRTIMER_MODE_REL);
++ hrtimer_start(&x1->mtimer, ktime_set(1, 0),
++ HRTIMER_MODE_REL_SOFT);
+ if (x1->curlft.use_time)
+ xfrm_state_check_expire(x1);
+
+@@ -1570,7 +1576,7 @@
+ if (x->curlft.bytes >= x->lft.hard_byte_limit ||
+ x->curlft.packets >= x->lft.hard_packet_limit) {
+ x->km.state = XFRM_STATE_EXPIRED;
+- tasklet_hrtimer_start(&x->mtimer, 0, HRTIMER_MODE_REL);
++ hrtimer_start(&x->mtimer, 0, HRTIMER_MODE_REL_SOFT);
+ return -EINVAL;
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/samples/trace_events/trace-events-sample.c linux-4.14/samples/trace_events/trace-events-sample.c
+--- linux-4.14.orig/samples/trace_events/trace-events-sample.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/samples/trace_events/trace-events-sample.c 2018-09-05 11:05:07.000000000 +0200
+@@ -33,7 +33,7 @@
+
+ /* Silly tracepoints */
+ trace_foo_bar("hello", cnt, array, random_strings[len],
+- &current->cpus_allowed);
++ current->cpus_ptr);
+
+ trace_foo_with_template_simple("HELLO", cnt);
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/scripts/mkcompile_h linux-4.14/scripts/mkcompile_h
+--- linux-4.14.orig/scripts/mkcompile_h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/scripts/mkcompile_h 2018-09-05 11:05:07.000000000 +0200
+@@ -5,7 +5,8 @@
ARCH=$2
SMP=$3
PREEMPT=$4
vecho() { [ "${quiet}" = "silent_" ] || echo "$@" ; }
-@@ -57,6 +58,7 @@ UTS_VERSION="#$VERSION"
+@@ -58,6 +59,7 @@
CONFIG_FLAGS=""
if [ -n "$SMP" ] ; then CONFIG_FLAGS="SMP"; fi
if [ -n "$PREEMPT" ] ; then CONFIG_FLAGS="$CONFIG_FLAGS PREEMPT"; fi
UTS_VERSION="$UTS_VERSION $CONFIG_FLAGS $TIMESTAMP"
# Truncate to maximum length
-diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
-index 9d33c1e85c79..3d307bda86f9 100644
---- a/sound/core/pcm_native.c
-+++ b/sound/core/pcm_native.c
-@@ -135,7 +135,7 @@ EXPORT_SYMBOL_GPL(snd_pcm_stream_unlock);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/security/apparmor/include/path.h linux-4.14/security/apparmor/include/path.h
+--- linux-4.14.orig/security/apparmor/include/path.h 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/security/apparmor/include/path.h 2018-09-05 11:05:07.000000000 +0200
+@@ -39,9 +39,10 @@
+ };
+
+ #include <linux/percpu.h>
+-#include <linux/preempt.h>
++#include <linux/locallock.h>
+
+ DECLARE_PER_CPU(struct aa_buffers, aa_buffers);
++DECLARE_LOCAL_IRQ_LOCK(aa_buffers_lock);
+
+ #define COUNT_ARGS(X...) COUNT_ARGS_HELPER(, ##X, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
+ #define COUNT_ARGS_HELPER(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, n, X...) n
+@@ -55,12 +56,24 @@
+
+ #define for_each_cpu_buffer(I) for ((I) = 0; (I) < MAX_PATH_BUFFERS; (I)++)
+
+-#ifdef CONFIG_DEBUG_PREEMPT
++#ifdef CONFIG_PREEMPT_RT_BASE
++
++static inline void AA_BUG_PREEMPT_ENABLED(const char *s)
++{
++ struct local_irq_lock *lv;
++
++ lv = this_cpu_ptr(&aa_buffers_lock);
++ WARN_ONCE(lv->owner != current,
++ "__get_buffer without aa_buffers_lock\n");
++}
++
++#elif defined(CONFIG_DEBUG_PREEMPT)
+ #define AA_BUG_PREEMPT_ENABLED(X) AA_BUG(preempt_count() <= 0, X)
+ #else
+ #define AA_BUG_PREEMPT_ENABLED(X) /* nop */
+ #endif
+
++
+ #define __get_buffer(N) ({ \
+ struct aa_buffers *__cpu_var; \
+ AA_BUG_PREEMPT_ENABLED("__get_buffer without preempt disabled"); \
+@@ -73,14 +86,14 @@
+
+ #define get_buffers(X...) \
+ do { \
+- preempt_disable(); \
++ local_lock(aa_buffers_lock); \
+ __get_buffers(X); \
+ } while (0)
+
+ #define put_buffers(X, Y...) \
+ do { \
+ __put_buffers(X, Y); \
+- preempt_enable(); \
++ local_unlock(aa_buffers_lock); \
+ } while (0)
+
+ #endif /* __AA_PATH_H */
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/security/apparmor/lsm.c linux-4.14/security/apparmor/lsm.c
+--- linux-4.14.orig/security/apparmor/lsm.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/security/apparmor/lsm.c 2018-09-05 11:05:07.000000000 +0200
+@@ -44,7 +44,7 @@
+ int apparmor_initialized;
+
+ DEFINE_PER_CPU(struct aa_buffers, aa_buffers);
+-
++DEFINE_LOCAL_IRQ_LOCK(aa_buffers_lock);
+
+ /*
+ * LSM hook functions
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/sound/core/pcm_native.c linux-4.14/sound/core/pcm_native.c
+--- linux-4.14.orig/sound/core/pcm_native.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/sound/core/pcm_native.c 2018-09-05 11:05:07.000000000 +0200
+@@ -148,7 +148,7 @@
void snd_pcm_stream_lock_irq(struct snd_pcm_substream *substream)
{
if (!substream->pcm->nonatomic)
snd_pcm_stream_lock(substream);
}
EXPORT_SYMBOL_GPL(snd_pcm_stream_lock_irq);
-@@ -150,7 +150,7 @@ void snd_pcm_stream_unlock_irq(struct snd_pcm_substream *substream)
+@@ -163,7 +163,7 @@
{
snd_pcm_stream_unlock(substream);
if (!substream->pcm->nonatomic)
}
EXPORT_SYMBOL_GPL(snd_pcm_stream_unlock_irq);
-@@ -158,7 +158,7 @@ unsigned long _snd_pcm_stream_lock_irqsave(struct snd_pcm_substream *substream)
+@@ -171,7 +171,7 @@
{
unsigned long flags = 0;
if (!substream->pcm->nonatomic)
snd_pcm_stream_lock(substream);
return flags;
}
-@@ -176,7 +176,7 @@ void snd_pcm_stream_unlock_irqrestore(struct snd_pcm_substream *substream,
+@@ -189,7 +189,7 @@
{
snd_pcm_stream_unlock(substream);
if (!substream->pcm->nonatomic)
}
EXPORT_SYMBOL_GPL(snd_pcm_stream_unlock_irqrestore);
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/sound/drivers/dummy.c linux-4.14/sound/drivers/dummy.c
+--- linux-4.14.orig/sound/drivers/dummy.c 2017-11-12 19:46:13.000000000 +0100
++++ linux-4.14/sound/drivers/dummy.c 2018-09-05 11:05:07.000000000 +0200
+@@ -376,17 +376,9 @@
+ ktime_t period_time;
+ atomic_t running;
+ struct hrtimer timer;
+- struct tasklet_struct tasklet;
+ struct snd_pcm_substream *substream;
+ };
+
+-static void dummy_hrtimer_pcm_elapsed(unsigned long priv)
+-{
+- struct dummy_hrtimer_pcm *dpcm = (struct dummy_hrtimer_pcm *)priv;
+- if (atomic_read(&dpcm->running))
+- snd_pcm_period_elapsed(dpcm->substream);
+-}
+-
+ static enum hrtimer_restart dummy_hrtimer_callback(struct hrtimer *timer)
+ {
+ struct dummy_hrtimer_pcm *dpcm;
+@@ -394,7 +386,14 @@
+ dpcm = container_of(timer, struct dummy_hrtimer_pcm, timer);
+ if (!atomic_read(&dpcm->running))
+ return HRTIMER_NORESTART;
+- tasklet_schedule(&dpcm->tasklet);
++ /*
++ * In cases of XRUN and draining, this calls .trigger to stop PCM
++ * substream.
++ */
++ snd_pcm_period_elapsed(dpcm->substream);
++ if (!atomic_read(&dpcm->running))
++ return HRTIMER_NORESTART;
++
+ hrtimer_forward_now(timer, dpcm->period_time);
+ return HRTIMER_RESTART;
+ }
+@@ -404,7 +403,7 @@
+ struct dummy_hrtimer_pcm *dpcm = substream->runtime->private_data;
+
+ dpcm->base_time = hrtimer_cb_get_time(&dpcm->timer);
+- hrtimer_start(&dpcm->timer, dpcm->period_time, HRTIMER_MODE_REL);
++ hrtimer_start(&dpcm->timer, dpcm->period_time, HRTIMER_MODE_REL_SOFT);
+ atomic_set(&dpcm->running, 1);
+ return 0;
+ }
+@@ -414,14 +413,14 @@
+ struct dummy_hrtimer_pcm *dpcm = substream->runtime->private_data;
+
+ atomic_set(&dpcm->running, 0);
+- hrtimer_cancel(&dpcm->timer);
++ if (!hrtimer_callback_running(&dpcm->timer))
++ hrtimer_cancel(&dpcm->timer);
+ return 0;
+ }
+
+ static inline void dummy_hrtimer_sync(struct dummy_hrtimer_pcm *dpcm)
+ {
+ hrtimer_cancel(&dpcm->timer);
+- tasklet_kill(&dpcm->tasklet);
+ }
+
+ static snd_pcm_uframes_t
+@@ -466,12 +465,10 @@
+ if (!dpcm)
+ return -ENOMEM;
+ substream->runtime->private_data = dpcm;
+- hrtimer_init(&dpcm->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++ hrtimer_init(&dpcm->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
+ dpcm->timer.function = dummy_hrtimer_callback;
+ dpcm->substream = substream;
+ atomic_set(&dpcm->running, 0);
+- tasklet_init(&dpcm->tasklet, dummy_hrtimer_pcm_elapsed,
+- (unsigned long)dpcm);
+ return 0;
+ }
+
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/tools/testing/selftests/ftrace/test.d/functions linux-4.14/tools/testing/selftests/ftrace/test.d/functions
+--- linux-4.14.orig/tools/testing/selftests/ftrace/test.d/functions 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/tools/testing/selftests/ftrace/test.d/functions 2018-09-05 11:05:07.000000000 +0200
+@@ -70,6 +70,13 @@
+ echo 0 > events/enable
+ }
+
++clear_synthetic_events() { # reset all current synthetic events
++ grep -v ^# synthetic_events |
++ while read line; do
++ echo "!$line" >> synthetic_events
++ done
++}
++
+ initialize_ftrace() { # Reset ftrace to initial-state
+ # As the initial state, ftrace will be set to nop tracer,
+ # no events, no triggers, no filters, no function filters,
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-extended-error-support.tc linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-extended-error-support.tc
+--- linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-extended-error-support.tc 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-extended-error-support.tc 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,39 @@
++#!/bin/sh
++# description: event trigger - test extended error support
++
++
++do_reset() {
++ reset_trigger
++ echo > set_event
++ clear_trace
++}
++
++fail() { #msg
++ do_reset
++ echo $1
++ exit_fail
++}
++
++if [ ! -f set_event ]; then
++ echo "event tracing is not supported"
++ exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++ echo "synthetic event is not supported"
++ exit_unsupported
++fi
++
++reset_tracer
++do_reset
++
++echo "Test extended error support"
++echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' > events/sched/sched_wakeup/trigger
++echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' >> events/sched/sched_wakeup/trigger &>/dev/null
++if ! grep -q "ERROR:" events/sched/sched_wakeup/hist; then
++ fail "Failed to generate extended error in histogram"
++fi
++
++do_reset
++
++exit 0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-field-variable-support.tc linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-field-variable-support.tc
+--- linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-field-variable-support.tc 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-field-variable-support.tc 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,54 @@
++#!/bin/sh
++# description: event trigger - test field variable support
++
++do_reset() {
++ reset_trigger
++ echo > set_event
++ clear_trace
++}
++
++fail() { #msg
++ do_reset
++ echo $1
++ exit_fail
++}
++
++if [ ! -f set_event ]; then
++ echo "event tracing is not supported"
++ exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++ echo "synthetic event is not supported"
++ exit_unsupported
++fi
++
++clear_synthetic_events
++reset_tracer
++do_reset
++
++echo "Test field variable support"
++
++echo 'wakeup_latency u64 lat; pid_t pid; int prio; char comm[16]' > synthetic_events
++echo 'hist:keys=comm:ts0=common_timestamp.usecs if comm=="ping"' > events/sched/sched_waking/trigger
++echo 'hist:keys=next_comm:wakeup_lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,next_pid,sched.sched_waking.prio,next_comm) if next_comm=="ping"' > events/sched/sched_switch/trigger
++echo 'hist:keys=pid,prio,comm:vals=lat:sort=pid,prio' > events/synthetic/wakeup_latency/trigger
++
++ping localhost -c 3
++if ! grep -q "ping" events/synthetic/wakeup_latency/hist; then
++ fail "Failed to create inter-event histogram"
++fi
++
++if ! grep -q "synthetic_prio=prio" events/sched/sched_waking/hist; then
++ fail "Failed to create histogram with field variable"
++fi
++
++echo '!hist:keys=next_comm:wakeup_lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,next_pid,sched.sched_waking.prio,next_comm) if next_comm=="ping"' >> events/sched/sched_switch/trigger
++
++if grep -q "synthetic_prio=prio" events/sched/sched_waking/hist; then
++ fail "Failed to remove histogram with field variable"
++fi
++
++do_reset
++
++exit 0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-inter-event-combined-hist.tc linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-inter-event-combined-hist.tc
+--- linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-inter-event-combined-hist.tc 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-inter-event-combined-hist.tc 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,58 @@
++#!/bin/sh
++# description: event trigger - test inter-event combined histogram trigger
++
++do_reset() {
++ reset_trigger
++ echo > set_event
++ clear_trace
++}
++
++fail() { #msg
++ do_reset
++ echo $1
++ exit_fail
++}
++
++if [ ! -f set_event ]; then
++ echo "event tracing is not supported"
++ exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++ echo "synthetic event is not supported"
++ exit_unsupported
++fi
++
++reset_tracer
++do_reset
++clear_synthetic_events
++
++echo "Test create synthetic event"
++
++echo 'waking_latency u64 lat pid_t pid' > synthetic_events
++if [ ! -d events/synthetic/waking_latency ]; then
++ fail "Failed to create waking_latency synthetic event"
++fi
++
++echo "Test combined histogram"
++
++echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' > events/sched/sched_waking/trigger
++echo 'hist:keys=pid:waking_lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).waking_latency($waking_lat,pid) if comm=="ping"' > events/sched/sched_wakeup/trigger
++echo 'hist:keys=pid,lat:sort=pid,lat' > events/synthetic/waking_latency/trigger
++
++echo 'wakeup_latency u64 lat pid_t pid' >> synthetic_events
++echo 'hist:keys=pid:ts1=common_timestamp.usecs if comm=="ping"' >> events/sched/sched_wakeup/trigger
++echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts1:onmatch(sched.sched_wakeup).wakeup_latency($wakeup_lat,next_pid) if next_comm=="ping"' > events/sched/sched_switch/trigger
++
++echo 'waking+wakeup_latency u64 lat; pid_t pid' >> synthetic_events
++echo 'hist:keys=pid,lat:sort=pid,lat:ww_lat=$waking_lat+$wakeup_lat:onmatch(synthetic.wakeup_latency).waking+wakeup_latency($ww_lat,pid)' >> events/synthetic/wakeup_latency/trigger
++echo 'hist:keys=pid,lat:sort=pid,lat' >> events/synthetic/waking+wakeup_latency/trigger
++
++ping localhost -c 3
++if ! grep -q "pid:" events/synthetic/waking+wakeup_latency/hist; then
++ fail "Failed to create combined histogram"
++fi
++
++do_reset
++
++exit 0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmatch-action-hist.tc linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmatch-action-hist.tc
+--- linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmatch-action-hist.tc 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmatch-action-hist.tc 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,50 @@
++#!/bin/sh
++# description: event trigger - test inter-event histogram trigger onmatch action
++
++do_reset() {
++ reset_trigger
++ echo > set_event
++ clear_trace
++}
++
++fail() { #msg
++ do_reset
++ echo $1
++ exit_fail
++}
++
++if [ ! -f set_event ]; then
++ echo "event tracing is not supported"
++ exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++ echo "synthetic event is not supported"
++ exit_unsupported
++fi
++
++clear_synthetic_events
++reset_tracer
++do_reset
++
++echo "Test create synthetic event"
++
++echo 'wakeup_latency u64 lat pid_t pid char comm[16]' > synthetic_events
++if [ ! -d events/synthetic/wakeup_latency ]; then
++ fail "Failed to create wakeup_latency synthetic event"
++fi
++
++echo "Test create histogram for synthetic event"
++echo "Test histogram variables,simple expression support and onmatch action"
++
++echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' > events/sched/sched_wakeup/trigger
++echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_wakeup).wakeup_latency($wakeup_lat,next_pid,next_comm) if next_comm=="ping"' > events/sched/sched_switch/trigger
++echo 'hist:keys=comm,pid,lat:wakeup_lat=lat:sort=lat' > events/synthetic/wakeup_latency/trigger
++ping localhost -c 5
++if ! grep -q "ping" events/synthetic/wakeup_latency/hist; then
++ fail "Failed to create onmatch action inter-event histogram"
++fi
++
++do_reset
++
++exit 0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmatch-onmax-action-hist.tc linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmatch-onmax-action-hist.tc
+--- linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmatch-onmax-action-hist.tc 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmatch-onmax-action-hist.tc 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,50 @@
++#!/bin/sh
++# description: event trigger - test inter-event histogram trigger onmatch-onmax action
++
++do_reset() {
++ reset_trigger
++ echo > set_event
++ clear_trace
++}
++
++fail() { #msg
++ do_reset
++ echo $1
++ exit_fail
++}
++
++if [ ! -f set_event ]; then
++ echo "event tracing is not supported"
++ exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++ echo "synthetic event is not supported"
++ exit_unsupported
++fi
++
++clear_synthetic_events
++reset_tracer
++do_reset
++
++echo "Test create synthetic event"
++
++echo 'wakeup_latency u64 lat pid_t pid char comm[16]' > synthetic_events
++if [ ! -d events/synthetic/wakeup_latency ]; then
++ fail "Failed to create wakeup_latency synthetic event"
++fi
++
++echo "Test create histogram for synthetic event"
++echo "Test histogram variables,simple expression support and onmatch-onmax action"
++
++echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' > events/sched/sched_wakeup/trigger
++echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_wakeup).wakeup_latency($wakeup_lat,next_pid,next_comm):onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) if next_comm=="ping"' >> events/sched/sched_switch/trigger
++echo 'hist:keys=comm,pid,lat:wakeup_lat=lat:sort=lat' > events/synthetic/wakeup_latency/trigger
++ping localhost -c 5
++if ! grep -q "ping" events/synthetic/wakeup_latency/hist || ! grep -q "max:" events/sched/sched_switch/hist; then
++ fail "Failed to create onmatch-onmax action inter-event histogram"
++fi
++
++do_reset
++
++exit 0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmax-action-hist.tc linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmax-action-hist.tc
+--- linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmax-action-hist.tc 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-onmax-action-hist.tc 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,48 @@
++#!/bin/sh
++# description: event trigger - test inter-event histogram trigger onmax action
++
++do_reset() {
++ reset_trigger
++ echo > set_event
++ clear_trace
++}
++
++fail() { #msg
++ do_reset
++ echo $1
++ exit_fail
++}
++
++if [ ! -f set_event ]; then
++ echo "event tracing is not supported"
++ exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++ echo "synthetic event is not supported"
++ exit_unsupported
++fi
++
++clear_synthetic_events
++reset_tracer
++do_reset
++
++echo "Test create synthetic event"
++
++echo 'wakeup_latency u64 lat pid_t pid char comm[16]' > synthetic_events
++if [ ! -d events/synthetic/wakeup_latency ]; then
++ fail "Failed to create wakeup_latency synthetic event"
++fi
++
++echo "Test onmax action"
++
++echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' >> events/sched/sched_waking/trigger
++echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) if next_comm=="ping"' >> events/sched/sched_switch/trigger
++ping localhost -c 3
++if ! grep -q "max:" events/sched/sched_switch/hist; then
++ fail "Failed to create onmax action inter-event histogram"
++fi
++
++do_reset
++
++exit 0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc
+--- linux-4.14.orig/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc 1970-01-01 01:00:00.000000000 +0100
++++ linux-4.14/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic-event-createremove.tc 2018-09-05 11:05:07.000000000 +0200
+@@ -0,0 +1,54 @@
++#!/bin/sh
++# description: event trigger - test synthetic event create remove
++do_reset() {
++ reset_trigger
++ echo > set_event
++ clear_trace
++}
++
++fail() { #msg
++ do_reset
++ echo $1
++ exit_fail
++}
++
++if [ ! -f set_event ]; then
++ echo "event tracing is not supported"
++ exit_unsupported
++fi
++
++if [ ! -f synthetic_events ]; then
++ echo "synthetic event is not supported"
++ exit_unsupported
++fi
++
++clear_synthetic_events
++reset_tracer
++do_reset
++
++echo "Test create synthetic event"
++
++echo 'wakeup_latency u64 lat pid_t pid char comm[16]' > synthetic_events
++if [ ! -d events/synthetic/wakeup_latency ]; then
++ fail "Failed to create wakeup_latency synthetic event"
++fi
++
++reset_trigger
++
++echo "Test create synthetic event with an error"
++echo 'wakeup_latency u64 lat pid_t pid char' > synthetic_events > /dev/null
++if [ -d events/synthetic/wakeup_latency ]; then
++ fail "Created wakeup_latency synthetic event with an invalid format"
++fi
++
++reset_trigger
++
++echo "Test remove synthetic event"
++echo '!wakeup_latency u64 lat pid_t pid char comm[16]' > synthetic_events
++if [ -d events/synthetic/wakeup_latency ]; then
++ fail "Failed to delete wakeup_latency synthetic event"
++fi
++
++do_reset
++
++exit 0
+diff -durN -x '*~' -x '*.orig' linux-4.14.orig/virt/kvm/arm/arm.c linux-4.14/virt/kvm/arm/arm.c
+--- linux-4.14.orig/virt/kvm/arm/arm.c 2018-09-05 11:03:25.000000000 +0200
++++ linux-4.14/virt/kvm/arm/arm.c 2018-09-05 11:05:07.000000000 +0200
+@@ -69,7 +69,6 @@
+
+ static void kvm_arm_set_running_vcpu(struct kvm_vcpu *vcpu)
+ {
+- BUG_ON(preemptible());
+ __this_cpu_write(kvm_arm_running_vcpu, vcpu);
+ }
+
+@@ -79,7 +78,6 @@
+ */
+ struct kvm_vcpu *kvm_arm_get_running_vcpu(void)
+ {
+- BUG_ON(preemptible());
+ return __this_cpu_read(kvm_arm_running_vcpu);
+ }
+
+@@ -653,7 +651,7 @@
+ * involves poking the GIC, which must be done in a
+ * non-preemptible context.
+ */
+- preempt_disable();
++ migrate_disable();
+
+ kvm_pmu_flush_hwstate(vcpu);
+
+@@ -690,7 +688,7 @@
+ kvm_pmu_sync_hwstate(vcpu);
+ kvm_timer_sync_hwstate(vcpu);
+ kvm_vgic_sync_hwstate(vcpu);
+- preempt_enable();
++ migrate_enable();
+ continue;
+ }
+
+@@ -745,7 +743,7 @@
+
+ kvm_vgic_sync_hwstate(vcpu);
+
+- preempt_enable();
++ migrate_enable();
+
+ ret = handle_exit(vcpu, run, ret);
+ }