LTS Refresh: Merge up to 2aa774a66fd1773483366a263575bd711d115f6c (buildnumber: 17162.210.0) from main-R101-cos-5.15 to release-R101-cos-5.15.
BUG=b/277819731
TEST=presubmit
RELEASE_NOTE=LTS Refresh from main-R101-cos-5.15 to release-R101-cos-5.15
Change-Id: I60612bdc83f14f68f678c08e01aee028a9711dca
Signed-off-by: He Gao <hegao@google.com>
diff --git a/Documentation/ABI/testing/configfs-usb-gadget-uvc b/Documentation/ABI/testing/configfs-usb-gadget-uvc
index 889ed45..2d5a591 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget-uvc
+++ b/Documentation/ABI/testing/configfs-usb-gadget-uvc
@@ -51,7 +51,7 @@
KernelVersion: 4.0
Description: Default output terminal descriptors
- All attributes read only:
+ All attributes read only except bSourceID:
============== =============================================
iTerminal index of string descriptor
diff --git a/Documentation/ABI/testing/sysfs-kernel-oops_count b/Documentation/ABI/testing/sysfs-kernel-oops_count
new file mode 100644
index 0000000..156cca9
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-kernel-oops_count
@@ -0,0 +1,6 @@
+What: /sys/kernel/oops_count
+Date: November 2022
+KernelVersion: 6.2.0
+Contact: Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
+Description:
+ Shows how many times the system has Oopsed since last boot.
diff --git a/Documentation/ABI/testing/sysfs-kernel-warn_count b/Documentation/ABI/testing/sysfs-kernel-warn_count
new file mode 100644
index 0000000..90a0298
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-kernel-warn_count
@@ -0,0 +1,6 @@
+What: /sys/kernel/warn_count
+Date: November 2022
+KernelVersion: 6.2.0
+Contact: Linux Kernel Hardening List <linux-hardening@vger.kernel.org>
+Description:
+ Shows how many times the system has Warned since last boot.
diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 41191b5..dd913ee 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -84,6 +84,8 @@
memory.swappiness set/show swappiness parameter of vmscan
(See sysctl's vm.swappiness)
memory.move_charge_at_immigrate set/show controls of moving charges
+ This knob is deprecated and shouldn't be
+ used.
memory.oom_control set/show oom controls.
memory.numa_stat show the number of memory usage per numa
node
@@ -723,8 +725,15 @@
It is recommended to set the soft limit always below the hard limit,
otherwise the hard limit will take precedence.
-8. Move charges at task migration
-=================================
+8. Move charges at task migration (DEPRECATED!)
+===============================================
+
+THIS IS DEPRECATED!
+
+It's expensive and unreliable! It's better practice to launch workload
+tasks directly from inside their target cgroup. Use dedicated workload
+cgroups to allow fine-grained policy adjustments without having to
+move physical pages between control domains.
Users can move charges associated with a task along with task migration, that
is, uncharge task's pages from the old cgroup and charge them to the new cgroup.
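As an illustrative aside (not part of the diff), the recommended pattern above -- starting the task inside its target cgroup so that charges never need to migrate -- can be sketched roughly as follows; the cgroup path and the workload binary name are assumptions for illustration, not anything defined by this patch:

	/* Sketch: join the target cgroup first, then exec the workload there. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical v1 memory cgroup; writing "0" moves the calling process. */
		int fd = open("/sys/fs/cgroup/memory/workload/cgroup.procs", O_WRONLY);

		if (fd < 0 || write(fd, "0", 1) != 1) {
			perror("cgroup.procs");
			return 1;
		}
		close(fd);

		/* Every page charged from here on is accounted to "workload". */
		execlp("my-workload", "my-workload", (char *)NULL);
		perror("execlp");
		return 1;
	}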
diff --git a/Documentation/admin-guide/hw-vuln/cross-thread-rsb.rst b/Documentation/admin-guide/hw-vuln/cross-thread-rsb.rst
new file mode 100644
index 0000000..ec6e9f5
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/cross-thread-rsb.rst
@@ -0,0 +1,92 @@
+
+.. SPDX-License-Identifier: GPL-2.0
+
+Cross-Thread Return Address Predictions
+=======================================
+
+Certain AMD and Hygon processors are subject to a cross-thread return address
+predictions vulnerability. When running in SMT mode and one sibling thread
+transitions out of C0 state, the other sibling thread could use return target
+predictions from the sibling thread that transitioned out of C0.
+
+The Spectre v2 mitigations protect the Linux kernel, as it fills the return
+address prediction entries with safe targets when context switching to the idle
+thread. However, KVM does allow a VMM to prevent exiting guest mode when
+transitioning out of C0. This could result in a guest-controlled return target
+being consumed by the sibling thread.
+
+Affected processors
+-------------------
+
+The following CPUs are vulnerable:
+
+ - AMD Family 17h processors
+ - Hygon Family 18h processors
+
+Related CVEs
+------------
+
+The following CVE entry is related to this issue:
+
+ ============== =======================================
+ CVE-2022-27672 Cross-Thread Return Address Predictions
+ ============== =======================================
+
+Problem
+-------
+
+Affected SMT-capable processors support 1T and 2T modes of execution when SMT
+is enabled. In 2T mode, both threads in a core are executing code. For the
+processor core to enter 1T mode, it is required that one of the threads
+requests to transition out of the C0 state. This can be communicated with the
+HLT instruction or with an MWAIT instruction that requests non-C0.
+When the thread re-enters the C0 state, the processor transitions back
+to 2T mode, assuming the other thread is also still in C0 state.
+
+In affected processors, the return address predictor (RAP) is partitioned
+depending on the SMT mode. For instance, in 2T mode each thread uses a private
+16-entry RAP, but in 1T mode, the active thread uses a 32-entry RAP. Upon
+transition between 1T/2T mode, the RAP contents are not modified but the RAP
+pointers (which control the next return target to use for predictions) may
+change. This behavior may result in return targets from one SMT thread being
+used by RET predictions in the sibling thread following a 1T/2T switch. In
+particular, a RET instruction executed immediately after a transition to 1T may
+use a return target from the thread that just became idle. In theory, this
+could lead to information disclosure if the return targets used do not come
+from trustworthy code.
+
+Attack scenarios
+----------------
+
+An attack can be mounted on affected processors by performing a series of CALL
+instructions with targeted return locations and then transitioning out of C0
+state.
+
+Mitigation mechanism
+--------------------
+
+Before entering idle state, the kernel context switches to the idle thread. The
+context switch fills the RAP entries (referred to as the RSB in Linux) with safe
+targets by performing a sequence of CALL instructions.
+
+Prevent a guest VM from directly putting the processor into an idle state by
+intercepting HLT and MWAIT instructions.
+
+Both mitigations are required to fully address this issue.
+
+Mitigation control on the kernel command line
+---------------------------------------------
+
+Use existing Spectre v2 mitigations that will fill the RSB on context switch.
+
+Mitigation control for KVM - module parameter
+---------------------------------------------
+
+By default, the KVM hypervisor mitigates this issue by intercepting guest
+attempts to transition out of C0. A VMM can use the KVM_CAP_X86_DISABLE_EXITS
+capability to override those interceptions, but since this is not common, the
+mitigation that covers this path is not enabled by default.
+
+The mitigation for the KVM_CAP_X86_DISABLE_EXITS capability can be turned on
+using the boolean module parameter mitigate_smt_rsb, e.g.:
+ kvm.mitigate_smt_rsb=1
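As an illustrative aside (not part of the diff), the VMM-side call that kvm.mitigate_smt_rsb=1 is meant to cover might look roughly like this; the helper name is made up, but the capability and ioctl are the standard KVM userspace API:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	/* Ask KVM to stop intercepting guest HLT/MWAIT (done before vCPUs are created). */
	static int disable_idle_exits(int vm_fd)
	{
		struct kvm_enable_cap cap = {
			.cap = KVM_CAP_X86_DISABLE_EXITS,
			.args = { KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_MWAIT },
		};

		/* With these exits disabled the guest controls C-state entry,
		 * which is exactly the path the module parameter protects. */
		return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	}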
diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index 4df436e..e061476 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -18,3 +18,4 @@
core-scheduling.rst
l1d_flush.rst
processor_mmio_stale_data.rst
+ cross-thread-rsb.rst
diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index 7e061ed..0fba375 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -479,8 +479,16 @@
On Intel Skylake-era systems the mitigation covers most, but not all,
cases. See :ref:`[3] <spec_ref3>` for more details.
- On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
- IBRS on x86), retpoline is automatically disabled at run time.
+ On CPUs with hardware mitigation for Spectre variant 2 (e.g. IBRS
+ or enhanced IBRS on x86), retpoline is automatically disabled at run time.
+
+ Systems which support enhanced IBRS (eIBRS) enable IBRS protection once at
+ boot, by setting the IBRS bit, and they're automatically protected against
+ Spectre v2 variant attacks, including cross-thread branch target injections
+ on SMT systems (STIBP). In other words, eIBRS enables STIBP too.
+
+ Legacy IBRS systems clear the IBRS bit on exit to userspace and
+ therefore explicitly enable STIBP for that case.
The retpoline mitigation is turned on by default on vulnerable
CPUs. It can be forced on or off by the administrator
@@ -504,9 +512,12 @@
For Spectre variant 2 mitigation, individual user programs
can be compiled with return trampolines for indirect branches.
This protects them from consuming poisoned entries in the branch
- target buffer left by malicious software. Alternatively, the
- programs can disable their indirect branch speculation via prctl()
- (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
+ target buffer left by malicious software.
+
+ On legacy IBRS systems, at return to userspace, implicit STIBP is disabled
+ because the kernel clears the IBRS bit. In this case, the userspace programs
+ can disable indirect branch speculation via prctl() (See
+ :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
On x86, this will turn on STIBP to guard against attacks from the
sibling thread when the user program is running, and use IBPB to
flush the branch target buffer when switching to/from the program.
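As an illustrative aside (not part of the diff), the prctl() opt-in mentioned above can be sketched as follows; the fallback defines mirror include/uapi/linux/prctl.h for toolchains whose headers predate the interface:

	#include <stdio.h>
	#include <sys/prctl.h>

	#ifndef PR_SPEC_INDIRECT_BRANCH		/* older libc headers may lack these */
	#define PR_SET_SPECULATION_CTRL	53
	#define PR_SPEC_INDIRECT_BRANCH	1
	#define PR_SPEC_DISABLE		(1UL << 2)
	#endif

	int main(void)
	{
		/* Turns on STIBP for this task and IBPB on switches to/from it. */
		if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
			  PR_SPEC_DISABLE, 0, 0)) {
			perror("PR_SET_SPECULATION_CTRL");
			return 1;
		}
		/* ... run the sensitive workload ... */
		return 0;
	}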
diff --git a/Documentation/admin-guide/kdump/gdbmacros.txt b/Documentation/admin-guide/kdump/gdbmacros.txt
index 82aecdc..030de95 100644
--- a/Documentation/admin-guide/kdump/gdbmacros.txt
+++ b/Documentation/admin-guide/kdump/gdbmacros.txt
@@ -312,10 +312,10 @@
set var $prev_flags = $info->flags
end
- set var $id = ($id + 1) & $id_mask
if ($id == $end_id)
loop_break
end
+ set var $id = ($id + 1) & $id_mask
end
end
document dmesg
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 6e070c5..d6b7be38 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2200,24 +2200,57 @@
ivrs_ioapic [HW,X86-64]
Provide an override to the IOAPIC-ID<->DEVICE-ID
- mapping provided in the IVRS ACPI table. For
- example, to map IOAPIC-ID decimal 10 to
- PCI device 00:14.0 write the parameter as:
+ mapping provided in the IVRS ACPI table.
+ By default, PCI segment is 0, and can be omitted.
+
+ For example, to map IOAPIC-ID decimal 10 to
+ PCI segment 0x1 and PCI device 00:14.0,
+ write the parameter as:
+ ivrs_ioapic=10@0001:00:14.0
+
+ Deprecated formats:
+ * To map IOAPIC-ID decimal 10 to PCI device 00:14.0
+ write the parameter as:
ivrs_ioapic[10]=00:14.0
+ * To map IOAPIC-ID decimal 10 to PCI segment 0x1 and
+ PCI device 00:14.0 write the parameter as:
+ ivrs_ioapic[10]=0001:00:14.0
ivrs_hpet [HW,X86-64]
Provide an override to the HPET-ID<->DEVICE-ID
- mapping provided in the IVRS ACPI table. For
- example, to map HPET-ID decimal 0 to
- PCI device 00:14.0 write the parameter as:
+ mapping provided in the IVRS ACPI table.
+ By default, PCI segment is 0, and can be omitted.
+
+ For example, to map HPET-ID decimal 10 to
+ PCI segment 0x1 and PCI device 00:14.0,
+ write the parameter as:
+ ivrs_hpet=10@0001:00:14.0
+
+ Deprecated formats:
+ * To map HPET-ID decimal 0 to PCI device 00:14.0
+ write the parameter as:
ivrs_hpet[0]=00:14.0
+ * To map HPET-ID decimal 10 to PCI segment 0x1 and
+ PCI device 00:14.0 write the parameter as:
+		ivrs_hpet[10]=0001:00:14.0
ivrs_acpihid [HW,X86-64]
Provide an override to the ACPI-HID:UID<->DEVICE-ID
- mapping provided in the IVRS ACPI table. For
- example, to map UART-HID:UID AMD0020:0 to
- PCI device 00:14.5 write the parameter as:
+ mapping provided in the IVRS ACPI table.
+ By default, PCI segment is 0, and can be omitted.
+
+ For example, to map UART-HID:UID AMD0020:0 to
+ PCI segment 0x1 and PCI device ID 00:14.5,
+ write the parameter as:
+ ivrs_acpihid=AMD0020:0@0001:00:14.5
+
+ Deprecated formats:
+	* To map UART-HID:UID AMD0020:0 to PCI segment 0 and
+	  PCI device ID 00:14.5, write the parameter as:
ivrs_acpihid[00:14.5]=AMD0020:0
+ * To map UART-HID:UID AMD0020:0 to PCI segment 0x1 and
+ PCI device ID 00:14.5, write the parameter as:
+ ivrs_acpihid[0001:00:14.5]=AMD0020:0
js= [HW,JOY] Analog joystick
See Documentation/input/joydev/joystick.rst.
diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index 609b891..48b91c4 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -671,6 +671,15 @@
an oops event is detected.
+oops_limit
+==========
+
+Number of kernel oopses after which the kernel should panic when
+``panic_on_oops`` is not set. Setting this to 0 disables checking
+the count. Setting this to 1 has the same effect as setting
+``panic_on_oops=1``. The default value is 10000.
+
+
osrelease, ostype & version
===========================
@@ -1485,6 +1494,16 @@
2 Unprivileged calls to ``bpf()`` are disabled
= =============================================================
+
+warn_limit
+==========
+
+Number of kernel warnings after which the kernel should panic when
+``panic_on_warn`` is not set. Setting this to 0 disables checking
+the warning count. Setting this to 1 has the same effect as setting
+``panic_on_warn=1``. The default value is 0.
+
+
watchdog
========
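As an illustrative aside (not part of the diff), the new counters and limits can be inspected from userspace roughly like this; the files exist only on kernels carrying these backports:

	#include <stdio.h>

	static long read_long(const char *path)
	{
		long val = -1;
		FILE *f = fopen(path, "r");

		if (f) {
			if (fscanf(f, "%ld", &val) != 1)
				val = -1;
			fclose(f);
		}
		return val;
	}

	int main(void)
	{
		printf("oopses so far: %ld (panic after %ld)\n",
		       read_long("/sys/kernel/oops_count"),
		       read_long("/proc/sys/kernel/oops_limit"));
		printf("warnings so far: %ld (panic after %ld, 0 = never)\n",
		       read_long("/sys/kernel/warn_count"),
		       read_long("/proc/sys/kernel/warn_limit"));
		return 0;
	}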
diff --git a/Documentation/dev-tools/gdb-kernel-debugging.rst b/Documentation/dev-tools/gdb-kernel-debugging.rst
index 8e0f1fe..895285c 100644
--- a/Documentation/dev-tools/gdb-kernel-debugging.rst
+++ b/Documentation/dev-tools/gdb-kernel-debugging.rst
@@ -39,6 +39,10 @@
this mode. In this case, you should build the kernel with
CONFIG_RANDOMIZE_BASE disabled if the architecture supports KASLR.
+- Build the gdb scripts (required on kernels v5.1 and above)::
+
+ make scripts_gdb
+
- Enable the gdb stub of QEMU/KVM, either
- at VM startup time by appending "-s" to the QEMU command line
diff --git a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb2-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb2-phy.yaml
similarity index 85%
rename from Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb2-phy.yaml
rename to Documentation/devicetree/bindings/phy/amlogic,g12a-usb2-phy.yaml
index 399ebde..ff86c87 100644
--- a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb2-phy.yaml
+++ b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb2-phy.yaml
@@ -2,7 +2,7 @@
# Copyright 2019 BayLibre, SAS
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/phy/amlogic,meson-g12a-usb2-phy.yaml#"
+$id: "http://devicetree.org/schemas/phy/amlogic,g12a-usb2-phy.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: Amlogic G12A USB2 PHY
@@ -13,8 +13,8 @@
properties:
compatible:
enum:
- - amlogic,meson-g12a-usb2-phy
- - amlogic,meson-a1-usb2-phy
+ - amlogic,g12a-usb2-phy
+ - amlogic,a1-usb2-phy
reg:
maxItems: 1
@@ -68,7 +68,7 @@
examples:
- |
phy@36000 {
- compatible = "amlogic,meson-g12a-usb2-phy";
+ compatible = "amlogic,g12a-usb2-phy";
reg = <0x36000 0x2000>;
clocks = <&xtal>;
clock-names = "xtal";
diff --git a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb3-pcie-phy.yaml
similarity index 82%
rename from Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml
rename to Documentation/devicetree/bindings/phy/amlogic,g12a-usb3-pcie-phy.yaml
index 453c083..8473864 100644
--- a/Documentation/devicetree/bindings/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml
+++ b/Documentation/devicetree/bindings/phy/amlogic,g12a-usb3-pcie-phy.yaml
@@ -2,7 +2,7 @@
# Copyright 2019 BayLibre, SAS
%YAML 1.2
---
-$id: "http://devicetree.org/schemas/phy/amlogic,meson-g12a-usb3-pcie-phy.yaml#"
+$id: "http://devicetree.org/schemas/phy/amlogic,g12a-usb3-pcie-phy.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: Amlogic G12A USB3 + PCIE Combo PHY
@@ -13,7 +13,7 @@
properties:
compatible:
enum:
- - amlogic,meson-g12a-usb3-pcie-phy
+ - amlogic,g12a-usb3-pcie-phy
reg:
maxItems: 1
@@ -49,7 +49,7 @@
examples:
- |
phy@46000 {
- compatible = "amlogic,meson-g12a-usb3-pcie-phy";
+ compatible = "amlogic,g12a-usb3-pcie-phy";
reg = <0x46000 0x2000>;
clocks = <&ref_clk>;
clock-names = "ref_clk";
diff --git a/Documentation/devicetree/bindings/serial/renesas,scif.yaml b/Documentation/devicetree/bindings/serial/renesas,scif.yaml
index 6b8731f..1a8d9bf 100644
--- a/Documentation/devicetree/bindings/serial/renesas,scif.yaml
+++ b/Documentation/devicetree/bindings/serial/renesas,scif.yaml
@@ -79,7 +79,7 @@
- description: Error interrupt
- description: Receive buffer full interrupt
- description: Transmit buffer empty interrupt
- - description: Transmit End interrupt
+ - description: Break interrupt
- items:
- description: Error interrupt
- description: Receive buffer full interrupt
@@ -94,7 +94,7 @@
- const: eri
- const: rxi
- const: txi
- - const: tei
+ - const: bri
- items:
- const: eri
- const: rxi
diff --git a/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml b/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
index 2e35aea..89e3819 100644
--- a/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
+++ b/Documentation/devicetree/bindings/sound/amlogic,gx-sound-card.yaml
@@ -61,7 +61,7 @@
description: phandle of the CPU DAI
patternProperties:
- "^codec-[0-9]+$":
+ "^codec(-[0-9]+)?$":
type: object
description: |-
Codecs:
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index a99c1c3..a155279 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -1210,7 +1210,7 @@
return
-ECHILD and it will be called again in ref-walk mode.
-``_weak_revalidate``
+``d_weak_revalidate``
called when the VFS needs to revalidate a "jumped" dentry. This
is called when a path-walk ends at dentry that was not acquired
by doing a lookup in the parent directory. This includes "/",
diff --git a/Documentation/hwmon/ftsteutates.rst b/Documentation/hwmon/ftsteutates.rst
index 58a2483..198fa8e 100644
--- a/Documentation/hwmon/ftsteutates.rst
+++ b/Documentation/hwmon/ftsteutates.rst
@@ -22,6 +22,10 @@
8 fans. It also contains an integrated watchdog which is currently
implemented in this driver.
+The 4 voltages require a board-specific multiplier, since the BMC can
+only measure voltages up to 3.3V and thus relies on voltage dividers.
+Consult your motherboard manual for details.
+
To clear a temperature or fan alarm, execute the following command with the
correct path to the alarm file::
diff --git a/Documentation/trace/ftrace.rst b/Documentation/trace/ftrace.rst
index 4e5b26f..d036946 100644
--- a/Documentation/trace/ftrace.rst
+++ b/Documentation/trace/ftrace.rst
@@ -2929,7 +2929,7 @@
bash-1994 [000] .... 4342.324898: ima_get_action <-process_measurement
bash-1994 [000] .... 4342.324898: ima_match_policy <-ima_get_action
bash-1994 [000] .... 4342.324899: do_truncate <-do_last
- bash-1994 [000] .... 4342.324899: should_remove_suid <-do_truncate
+ bash-1994 [000] .... 4342.324899: setattr_should_drop_suidgid <-do_truncate
bash-1994 [000] .... 4342.324899: notify_change <-do_truncate
bash-1994 [000] .... 4342.324900: current_fs_time <-notify_change
bash-1994 [000] .... 4342.324900: current_kernel_time <-current_fs_time
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index b550f43..ec38299 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -4117,6 +4117,18 @@
:Parameters: struct kvm_s390_cmma_log (in, out)
:Returns: 0 on success, a negative value on error
+Errors:
+
+ ====== =============================================================
+ ENOMEM not enough memory can be allocated to complete the task
+ ENXIO if CMMA is not enabled
+ EINVAL if KVM_S390_CMMA_PEEK is not set but migration mode was not enabled
+ EINVAL if KVM_S390_CMMA_PEEK is not set but dirty tracking has been
+ disabled (and thus migration mode was automatically disabled)
+ EFAULT if the userspace address is invalid or if no page table is
+ present for the addresses (e.g. when using hugepages).
+ ====== =============================================================
+
This ioctl is used to get the values of the CMMA bits on the s390
architecture. It is meant to be used in two scenarios:
@@ -4197,12 +4209,6 @@
values points to the userspace buffer where the result will be stored.
-This ioctl can fail with -ENOMEM if not enough memory can be allocated to
-complete the task, with -ENXIO if CMMA is not enabled, with -EINVAL if
-KVM_S390_CMMA_PEEK is not set but migration mode was not enabled, with
--EFAULT if the userspace address is invalid or if no page table is
-present for the addresses (e.g. when using hugepages).
-
4.108 KVM_S390_SET_CMMA_BITS
----------------------------
diff --git a/Documentation/virt/kvm/devices/vm.rst b/Documentation/virt/kvm/devices/vm.rst
index 60acc39..147efec6 100644
--- a/Documentation/virt/kvm/devices/vm.rst
+++ b/Documentation/virt/kvm/devices/vm.rst
@@ -302,6 +302,10 @@
Setting this attribute when migration mode is already active will have
no effects.
+Dirty tracking must be enabled on all memslots, else -EINVAL is returned. When
+dirty tracking is disabled on any memslot, migration mode is automatically
+stopped.
+
:Parameters: none
:Returns: -ENOMEM if there is not enough free memory to start migration mode;
-EINVAL if the state of the VM is invalid (e.g. no memory defined);
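As an illustrative aside (not part of the diff), starting migration mode from a userspace VMM uses the KVM_S390_VM_MIGRATION attribute this section describes; the helper name is made up, and the sketch builds only against the s390 uapi headers:

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	static int start_migration_mode(int vm_fd)
	{
		struct kvm_device_attr attr = {
			.group = KVM_S390_VM_MIGRATION,
			.attr  = KVM_S390_VM_MIGRATION_START,
		};

		/* Per the text above, fails with -EINVAL unless dirty tracking
		 * is enabled on all memslots. */
		return ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
	}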
diff --git a/MAINTAINERS b/MAINTAINERS
index 4f50a45..d0884a5 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3114,7 +3114,7 @@
AUDIT SUBSYSTEM
M: Paul Moore <paul@paul-moore.com>
M: Eric Paris <eparis@redhat.com>
-L: linux-audit@redhat.com (moderated for non-subscribers)
+L: audit@vger.kernel.org
S: Supported
W: https://github.com/linux-audit
T: git git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/audit.git
@@ -3407,6 +3407,7 @@
F: net/sched/cls_bpf.c
F: samples/bpf/
F: scripts/bpf_doc.py
+F: scripts/pahole-version.sh
F: tools/bpf/
F: tools/lib/bpf/
F: tools/testing/selftests/bpf/
diff --git a/Makefile b/Makefile
index 9ebe9dd..b063245 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 15
-SUBLEVEL = 89
+SUBLEVEL = 107
EXTRAVERSION =
NAME = Trick or Treat
@@ -93,10 +93,17 @@
# If the user is running make -s (silent mode), suppress echoing of
# commands
+# make-4.0 (and later) keep single letter options in the 1st word of MAKEFLAGS.
-ifneq ($(findstring s,$(filter-out --%,$(MAKEFLAGS))),)
- quiet=silent_
- KBUILD_VERBOSE = 0
+ifeq ($(filter 3.%,$(MAKE_VERSION)),)
+silence:=$(findstring s,$(firstword -$(MAKEFLAGS)))
+else
+silence:=$(findstring s,$(filter-out --%,$(MAKEFLAGS)))
+endif
+
+ifeq ($(silence),s)
+quiet=silent_
+KBUILD_VERBOSE = 0
endif
export quiet Q KBUILD_VERBOSE
@@ -430,6 +437,7 @@
HOSTCC = gcc
HOSTCXX = g++
endif
+HOSTPKG_CONFIG = pkg-config
export KBUILD_USERCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
-O2 -fomit-frame-pointer -std=gnu89
@@ -525,7 +533,7 @@
KBUILD_LDFLAGS :=
CLANG_FLAGS :=
-export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC
+export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC HOSTPKG_CONFIG
export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AWK INSTALLKERNEL
export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
@@ -880,6 +888,7 @@
dwarf-version-$(CONFIG_DEBUG_INFO_DWARF4) := 4
dwarf-version-$(CONFIG_DEBUG_INFO_DWARF5) := 5
DEBUG_CFLAGS += -gdwarf-$(dwarf-version-y)
+KBUILD_AFLAGS += -gdwarf-$(dwarf-version-y)
endif
ifdef CONFIG_DEBUG_INFO_REDUCED
@@ -1841,6 +1850,8 @@
@echo >&2 '***'
@exit 1
+KBUILD_MODULES :=
+
endif # CONFIG_MODULES
# Single targets
@@ -1866,18 +1877,12 @@
$(single-no-ko): descend
@:
-ifeq ($(KBUILD_EXTMOD),)
-# For the single build of in-tree modules, use a temporary file to avoid
-# the situation of modules_install installing an invalid modules.order.
-MODORDER := .modules.tmp
-endif
-
+# Remove MODORDER when done because it is not the real one.
PHONY += single_modpost
single_modpost: $(single-no-ko) modules_prepare
$(Q){ $(foreach m, $(single-ko), echo $(extmod_prefix)$m;) } > $(MODORDER)
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.modpost
-
-KBUILD_MODULES := 1
+ $(Q)rm -f $(MODORDER)
export KBUILD_SINGLE_TARGETS := $(addprefix $(extmod_prefix), $(single-no-ko))
@@ -1885,10 +1890,8 @@
build-dirs := $(foreach d, $(build-dirs), \
$(if $(filter $(d)/%, $(KBUILD_SINGLE_TARGETS)), $(d)))
-endif
+KBUILD_MODULES := 1
-ifndef CONFIG_MODULES
-KBUILD_MODULES :=
endif
# Handle descending into subdirectories listed in $(build-dirs)
diff --git a/arch/alpha/boot/tools/objstrip.c b/arch/alpha/boot/tools/objstrip.c
index 08b430d2..7cf92d1 100644
--- a/arch/alpha/boot/tools/objstrip.c
+++ b/arch/alpha/boot/tools/objstrip.c
@@ -148,7 +148,7 @@ main (int argc, char *argv[])
#ifdef __ELF__
elf = (struct elfhdr *) buf;
- if (elf->e_ident[0] == 0x7f && str_has_prefix((char *)elf->e_ident + 1, "ELF")) {
+ if (memcmp(&elf->e_ident[EI_MAG0], ELFMAG, SELFMAG) == 0) {
if (elf->e_type != ET_EXEC) {
fprintf(stderr, "%s: %s is not an ELF executable\n",
prog_name, inname);
diff --git a/arch/alpha/kernel/irq.c b/arch/alpha/kernel/irq.c
index f6d2946..15f2eff 100644
--- a/arch/alpha/kernel/irq.c
+++ b/arch/alpha/kernel/irq.c
@@ -60,7 +60,7 @@ int irq_select_affinity(unsigned int irq)
cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0);
last_cpu = cpu;
- cpumask_copy(irq_data_get_affinity_mask(data), cpumask_of(cpu));
+ irq_data_update_affinity(data, cpumask_of(cpu));
chip->irq_set_affinity(data, cpumask_of(cpu), false);
return 0;
}
diff --git a/arch/alpha/kernel/module.c b/arch/alpha/kernel/module.c
index 5b60c24..cbefa5a 100644
--- a/arch/alpha/kernel/module.c
+++ b/arch/alpha/kernel/module.c
@@ -146,10 +146,8 @@ apply_relocate_add(Elf64_Shdr *sechdrs, const char *strtab,
base = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr;
symtab = (Elf64_Sym *)sechdrs[symindex].sh_addr;
- /* The small sections were sorted to the end of the segment.
- The following should definitely cover them. */
- gp = (u64)me->core_layout.base + me->core_layout.size - 0x8000;
got = sechdrs[me->arch.gotsecindex].sh_addr;
+ gp = got + 0x8000;
for (i = 0; i < n; i++) {
unsigned long r_sym = ELF64_R_SYM (rela[i].r_info);
diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
index e8051064..afaf4f6 100644
--- a/arch/alpha/kernel/traps.c
+++ b/arch/alpha/kernel/traps.c
@@ -192,7 +192,7 @@ die_if_kernel(char * str, struct pt_regs *regs, long err, unsigned long *r9_15)
local_irq_enable();
while (1);
}
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
#ifndef CONFIG_MATHEMU
@@ -235,7 +235,21 @@ do_entIF(unsigned long type, struct pt_regs *regs)
{
int signo, code;
- if ((regs->ps & ~IPL_MAX) == 0) {
+ if (type == 3) { /* FEN fault */
+ /* Irritating users can call PAL_clrfen to disable the
+ FPU for the process. The kernel will then trap in
+ do_switch_stack and undo_switch_stack when we try
+ to save and restore the FP registers.
+
+ Given that GCC by default generates code that uses the
+ FP registers, PAL_clrfen is not useful except for DoS
+ attacks. So turn the bleeding FPU back on and be done
+ with it. */
+ current_thread_info()->pcb.flags |= 1;
+ __reload_thread(&current_thread_info()->pcb);
+ return;
+ }
+ if (!user_mode(regs)) {
if (type == 1) {
const unsigned int *data
= (const unsigned int *) regs->pc;
@@ -368,20 +382,6 @@ do_entIF(unsigned long type, struct pt_regs *regs)
}
break;
- case 3: /* FEN fault */
- /* Irritating users can call PAL_clrfen to disable the
- FPU for the process. The kernel will then trap in
- do_switch_stack and undo_switch_stack when we try
- to save and restore the FP registers.
-
- Given that GCC by default generates code that uses the
- FP registers, PAL_clrfen is not useful except for DoS
- attacks. So turn the bleeding FPU back on and be done
- with it. */
- current_thread_info()->pcb.flags |= 1;
- __reload_thread(&current_thread_info()->pcb);
- return;
-
case 5: /* illoc */
default: /* unexpected instruction-fault type */
;
@@ -577,7 +577,7 @@ do_entUna(void * va, unsigned long opcode, unsigned long reg,
printk("Bad unaligned kernel access at %016lx: %p %lx %lu\n",
pc, va, opcode, reg);
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
got_exception:
/* Ok, we caught the exception, but we don't want it. Is there
@@ -632,7 +632,7 @@ do_entUna(void * va, unsigned long opcode, unsigned long reg,
local_irq_enable();
while (1);
}
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
/*
diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
index eee5102..e9193d5 100644
--- a/arch/alpha/mm/fault.c
+++ b/arch/alpha/mm/fault.c
@@ -204,7 +204,7 @@ do_page_fault(unsigned long address, unsigned long mmcsr,
printk(KERN_ALERT "Unable to handle kernel paging request at "
"virtual address %016lx\n", address);
die_if_kernel("Oops", regs, cause, (unsigned long*)regs - 16);
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
/* We ran out of memory, or some other thing happened to us that
made us unable to handle the page fault gracefully. */
diff --git a/arch/arm/boot/dts/e60k02.dtsi b/arch/arm/boot/dts/e60k02.dtsi
index cfb239d..54b4de6 100644
--- a/arch/arm/boot/dts/e60k02.dtsi
+++ b/arch/arm/boot/dts/e60k02.dtsi
@@ -302,6 +302,7 @@ &usdhc3 {
&usbotg1 {
pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg1>;
disable-over-current;
srp-disable;
hnp-disable;
diff --git a/arch/arm/boot/dts/exynos3250-rinato.dts b/arch/arm/boot/dts/exynos3250-rinato.dts
index f6ba5e4..7562497 100644
--- a/arch/arm/boot/dts/exynos3250-rinato.dts
+++ b/arch/arm/boot/dts/exynos3250-rinato.dts
@@ -249,7 +249,7 @@ &fimd {
i80-if-timings {
cs-setup = <0>;
wr-setup = <0>;
- wr-act = <1>;
+ wr-active = <1>;
wr-hold = <0>;
};
};
diff --git a/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi b/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
index 021d9fc..27a1a89 100644
--- a/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
+++ b/arch/arm/boot/dts/exynos4-cpu-thermal.dtsi
@@ -10,7 +10,7 @@
/ {
thermal-zones {
cpu_thermal: cpu-thermal {
- thermal-sensors = <&tmu 0>;
+ thermal-sensors = <&tmu>;
polling-delay-passive = <0>;
polling-delay = <0>;
trips {
diff --git a/arch/arm/boot/dts/exynos4.dtsi b/arch/arm/boot/dts/exynos4.dtsi
index eab77a6..201e2fe 100644
--- a/arch/arm/boot/dts/exynos4.dtsi
+++ b/arch/arm/boot/dts/exynos4.dtsi
@@ -605,7 +605,7 @@ i2c_8: i2c@138e0000 {
status = "disabled";
hdmi_i2c_phy: hdmiphy@38 {
- compatible = "exynos4210-hdmiphy";
+ compatible = "samsung,exynos4210-hdmiphy";
reg = <0x38>;
};
};
diff --git a/arch/arm/boot/dts/exynos4210.dtsi b/arch/arm/boot/dts/exynos4210.dtsi
index 7e7d65c..ac62d8d 100644
--- a/arch/arm/boot/dts/exynos4210.dtsi
+++ b/arch/arm/boot/dts/exynos4210.dtsi
@@ -393,7 +393,6 @@ &cpu_alert2 {
&cpu_thermal {
polling-delay-passive = <0>;
polling-delay = <0>;
- thermal-sensors = <&tmu 0>;
};
&gic {
diff --git a/arch/arm/boot/dts/exynos5250.dtsi b/arch/arm/boot/dts/exynos5250.dtsi
index 4ffa925..de0275d 100644
--- a/arch/arm/boot/dts/exynos5250.dtsi
+++ b/arch/arm/boot/dts/exynos5250.dtsi
@@ -1119,7 +1119,7 @@ timer {
&cpu_thermal {
polling-delay-passive = <0>;
polling-delay = <0>;
- thermal-sensors = <&tmu 0>;
+ thermal-sensors = <&tmu>;
cooling-maps {
map0 {
diff --git a/arch/arm/boot/dts/exynos5410-odroidxu.dts b/arch/arm/boot/dts/exynos5410-odroidxu.dts
index 884fef5..3765f5b 100644
--- a/arch/arm/boot/dts/exynos5410-odroidxu.dts
+++ b/arch/arm/boot/dts/exynos5410-odroidxu.dts
@@ -120,7 +120,6 @@ &clock_audss {
};
&cpu0_thermal {
- thermal-sensors = <&tmu_cpu0 0>;
polling-delay-passive = <0>;
polling-delay = <0>;
diff --git a/arch/arm/boot/dts/exynos5420.dtsi b/arch/arm/boot/dts/exynos5420.dtsi
index e23e8ff..4fb4804 100644
--- a/arch/arm/boot/dts/exynos5420.dtsi
+++ b/arch/arm/boot/dts/exynos5420.dtsi
@@ -602,7 +602,7 @@ dp_phy: dp-video-phy {
};
mipi_phy: mipi-video-phy {
- compatible = "samsung,s5pv210-mipi-video-phy";
+ compatible = "samsung,exynos5420-mipi-video-phy";
syscon = <&pmu_system_controller>;
#phy-cells = <1>;
};
diff --git a/arch/arm/boot/dts/exynos5422-odroidhc1.dts b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
index d91f7fa..e57d3e4 100644
--- a/arch/arm/boot/dts/exynos5422-odroidhc1.dts
+++ b/arch/arm/boot/dts/exynos5422-odroidhc1.dts
@@ -29,7 +29,7 @@ led-1 {
thermal-zones {
cpu0_thermal: cpu0-thermal {
- thermal-sensors = <&tmu_cpu0 0>;
+ thermal-sensors = <&tmu_cpu0>;
trips {
cpu0_alert0: cpu-alert-0 {
temperature = <70000>; /* millicelsius */
@@ -84,7 +84,7 @@ map1 {
};
};
cpu1_thermal: cpu1-thermal {
- thermal-sensors = <&tmu_cpu1 0>;
+ thermal-sensors = <&tmu_cpu1>;
trips {
cpu1_alert0: cpu-alert-0 {
temperature = <70000>;
@@ -128,7 +128,7 @@ map1 {
};
};
cpu2_thermal: cpu2-thermal {
- thermal-sensors = <&tmu_cpu2 0>;
+ thermal-sensors = <&tmu_cpu2>;
trips {
cpu2_alert0: cpu-alert-0 {
temperature = <70000>;
@@ -172,7 +172,7 @@ map1 {
};
};
cpu3_thermal: cpu3-thermal {
- thermal-sensors = <&tmu_cpu3 0>;
+ thermal-sensors = <&tmu_cpu3>;
trips {
cpu3_alert0: cpu-alert-0 {
temperature = <70000>;
@@ -216,7 +216,7 @@ map1 {
};
};
gpu_thermal: gpu-thermal {
- thermal-sensors = <&tmu_gpu 0>;
+ thermal-sensors = <&tmu_gpu>;
trips {
gpu_alert0: gpu-alert-0 {
temperature = <70000>;
diff --git a/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi b/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
index e35af40..0b27e96 100644
--- a/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
+++ b/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
@@ -50,7 +50,7 @@ fan0: pwm-fan {
thermal-zones {
cpu0_thermal: cpu0-thermal {
- thermal-sensors = <&tmu_cpu0 0>;
+ thermal-sensors = <&tmu_cpu0>;
polling-delay-passive = <250>;
polling-delay = <0>;
trips {
@@ -139,7 +139,7 @@ cpu0_cooling_map4: map4 {
};
};
cpu1_thermal: cpu1-thermal {
- thermal-sensors = <&tmu_cpu1 0>;
+ thermal-sensors = <&tmu_cpu1>;
polling-delay-passive = <250>;
polling-delay = <0>;
trips {
@@ -212,7 +212,7 @@ cpu1_cooling_map4: map4 {
};
};
cpu2_thermal: cpu2-thermal {
- thermal-sensors = <&tmu_cpu2 0>;
+ thermal-sensors = <&tmu_cpu2>;
polling-delay-passive = <250>;
polling-delay = <0>;
trips {
@@ -285,7 +285,7 @@ cpu2_cooling_map4: map4 {
};
};
cpu3_thermal: cpu3-thermal {
- thermal-sensors = <&tmu_cpu3 0>;
+ thermal-sensors = <&tmu_cpu3>;
polling-delay-passive = <250>;
polling-delay = <0>;
trips {
@@ -358,7 +358,7 @@ cpu3_cooling_map4: map4 {
};
};
gpu_thermal: gpu-thermal {
- thermal-sensors = <&tmu_gpu 0>;
+ thermal-sensors = <&tmu_gpu>;
polling-delay-passive = <250>;
polling-delay = <0>;
trips {
diff --git a/arch/arm/boot/dts/imx53-ppd.dts b/arch/arm/boot/dts/imx53-ppd.dts
index 37d0cff..70c4a48 100644
--- a/arch/arm/boot/dts/imx53-ppd.dts
+++ b/arch/arm/boot/dts/imx53-ppd.dts
@@ -488,7 +488,7 @@ &i2c1 {
scl-gpios = <&gpio3 21 GPIO_ACTIVE_HIGH>;
status = "okay";
- i2c-switch@70 {
+ i2c-mux@70 {
compatible = "nxp,pca9547";
#address-cells = <1>;
#size-cells = <0>;
diff --git a/arch/arm/boot/dts/imx6qdl-gw560x.dtsi b/arch/arm/boot/dts/imx6qdl-gw560x.dtsi
index 4bc4371..4b81a97 100644
--- a/arch/arm/boot/dts/imx6qdl-gw560x.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-gw560x.dtsi
@@ -632,7 +632,6 @@ &ssi1 {
&uart1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_uart1>;
- uart-has-rtscts;
rts-gpios = <&gpio7 1 GPIO_ACTIVE_HIGH>;
status = "okay";
};
diff --git a/arch/arm/boot/dts/imx6sl-tolino-shine2hd.dts b/arch/arm/boot/dts/imx6sl-tolino-shine2hd.dts
index a17b8bb..f2231cb 100644
--- a/arch/arm/boot/dts/imx6sl-tolino-shine2hd.dts
+++ b/arch/arm/boot/dts/imx6sl-tolino-shine2hd.dts
@@ -597,6 +597,7 @@ &usdhc3 {
&usbotg1 {
pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg1>;
disable-over-current;
srp-disable;
hnp-disable;
diff --git a/arch/arm/boot/dts/imx6ul-pico-dwarf.dts b/arch/arm/boot/dts/imx6ul-pico-dwarf.dts
index 162dc25..5a74c7f 100644
--- a/arch/arm/boot/dts/imx6ul-pico-dwarf.dts
+++ b/arch/arm/boot/dts/imx6ul-pico-dwarf.dts
@@ -32,7 +32,7 @@ sys_mclk: clock-sys-mclk {
};
&i2c2 {
- clock_frequency = <100000>;
+ clock-frequency = <100000>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_i2c2>;
status = "okay";
diff --git a/arch/arm/boot/dts/imx7d-pico-dwarf.dts b/arch/arm/boot/dts/imx7d-pico-dwarf.dts
index 5162fe2..fdc1056 100644
--- a/arch/arm/boot/dts/imx7d-pico-dwarf.dts
+++ b/arch/arm/boot/dts/imx7d-pico-dwarf.dts
@@ -32,7 +32,7 @@ sys_mclk: clock-sys-mclk {
};
&i2c1 {
- clock_frequency = <100000>;
+ clock-frequency = <100000>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_i2c1>;
status = "okay";
@@ -52,7 +52,7 @@ pressure-sensor@60 {
};
&i2c4 {
- clock_frequency = <100000>;
+ clock-frequency = <100000>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_i2c1>;
status = "okay";
diff --git a/arch/arm/boot/dts/imx7d-pico-nymph.dts b/arch/arm/boot/dts/imx7d-pico-nymph.dts
index 104a852..5afb167 100644
--- a/arch/arm/boot/dts/imx7d-pico-nymph.dts
+++ b/arch/arm/boot/dts/imx7d-pico-nymph.dts
@@ -43,7 +43,7 @@ sys_mclk: clock-sys-mclk {
};
&i2c1 {
- clock_frequency = <100000>;
+ clock-frequency = <100000>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_i2c1>;
status = "okay";
@@ -64,7 +64,7 @@ adc@52 {
};
&i2c2 {
- clock_frequency = <100000>;
+ clock-frequency = <100000>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_i2c2>;
status = "okay";
diff --git a/arch/arm/boot/dts/imx7s.dtsi b/arch/arm/boot/dts/imx7s.dtsi
index 95f2251..f4d2009 100644
--- a/arch/arm/boot/dts/imx7s.dtsi
+++ b/arch/arm/boot/dts/imx7s.dtsi
@@ -497,7 +497,7 @@ gpr: iomuxc-gpr@30340000 {
mux: mux-controller {
compatible = "mmio-mux";
- #mux-control-cells = <0>;
+ #mux-control-cells = <1>;
mux-reg-masks = <0x14 0x00000010>;
};
diff --git a/arch/arm/boot/dts/qcom-sdx55.dtsi b/arch/arm/boot/dts/qcom-sdx55.dtsi
index 0e76d03..9d62487 100644
--- a/arch/arm/boot/dts/qcom-sdx55.dtsi
+++ b/arch/arm/boot/dts/qcom-sdx55.dtsi
@@ -502,7 +502,7 @@ pil-reloc@94c {
};
apps_smmu: iommu@15000000 {
- compatible = "qcom,sdx55-smmu-500", "arm,mmu-500";
+ compatible = "qcom,sdx55-smmu-500", "qcom,smmu-500", "arm,mmu-500";
reg = <0x15000000 0x20000>;
#iommu-cells = <2>;
#global-interrupts = <1>;
diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
index 66ff5db..8670c94 100644
--- a/arch/arm/boot/dts/rk3288.dtsi
+++ b/arch/arm/boot/dts/rk3288.dtsi
@@ -1180,6 +1180,7 @@ edp: dp@ff970000 {
clock-names = "dp", "pclk";
phys = <&edp_phy>;
phy-names = "dp";
+ power-domains = <&power RK3288_PD_VIO>;
resets = <&cru SRST_EDP>;
reset-names = "dp";
rockchip,grf = <&grf>;
diff --git a/arch/arm/boot/dts/sam9x60.dtsi b/arch/arm/boot/dts/sam9x60.dtsi
index ec45ced..e1e0dec 100644
--- a/arch/arm/boot/dts/sam9x60.dtsi
+++ b/arch/arm/boot/dts/sam9x60.dtsi
@@ -567,7 +567,7 @@ pmecc: ecc-engine@ffffe000 {
mpddrc: mpddrc@ffffe800 {
compatible = "microchip,sam9x60-ddramc", "atmel,sama5d3-ddramc";
reg = <0xffffe800 0x200>;
- clocks = <&pmc PMC_TYPE_SYSTEM 2>, <&pmc PMC_TYPE_CORE PMC_MCK>;
+ clocks = <&pmc PMC_TYPE_SYSTEM 2>, <&pmc PMC_TYPE_PERIPHERAL 49>;
clock-names = "ddrck", "mpddr";
};
diff --git a/arch/arm/boot/dts/spear320-hmi.dts b/arch/arm/boot/dts/spear320-hmi.dts
index 367ba48..5c562fb 100644
--- a/arch/arm/boot/dts/spear320-hmi.dts
+++ b/arch/arm/boot/dts/spear320-hmi.dts
@@ -242,7 +242,7 @@ stmpe811@41 {
irq-trigger = <0x1>;
stmpegpio: stmpe-gpio {
- compatible = "stmpe,gpio";
+ compatible = "st,stmpe-gpio";
reg = <0>;
gpio-controller;
#gpio-cells = <2>;
diff --git a/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts b/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts
index 8e7dfcf..355f784 100644
--- a/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts
+++ b/arch/arm/boot/dts/sun8i-h3-nanopi-duo2.dts
@@ -57,7 +57,7 @@ reg_vdd_cpux: vdd-cpux-regulator {
regulator-ramp-delay = <50>; /* 4ms */
enable-active-high;
- enable-gpio = <&r_pio 0 8 GPIO_ACTIVE_HIGH>; /* PL8 */
+ enable-gpios = <&r_pio 0 8 GPIO_ACTIVE_HIGH>; /* PL8 */
gpios = <&r_pio 0 6 GPIO_ACTIVE_HIGH>; /* PL6 */
gpios-states = <0x1>;
states = <1100000 0>, <1300000 1>;
diff --git a/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts b/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
index 043ddd7..36d5299 100644
--- a/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
+++ b/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
@@ -343,7 +343,7 @@ gpio6: io-expander@22 {
};
&i2c2 {
- tca9548@70 {
+ i2c-mux@70 {
compatible = "nxp,pca9548";
pinctrl-0 = <&pinctrl_i2c_mux_reset>;
pinctrl-names = "default";
diff --git a/arch/arm/boot/dts/vf610-zii-dev-rev-c.dts b/arch/arm/boot/dts/vf610-zii-dev-rev-c.dts
index de79dcf..ba2001f 100644
--- a/arch/arm/boot/dts/vf610-zii-dev-rev-c.dts
+++ b/arch/arm/boot/dts/vf610-zii-dev-rev-c.dts
@@ -340,7 +340,7 @@ eeprom@50 {
};
&i2c2 {
- tca9548@70 {
+ i2c-mux@70 {
compatible = "nxp,pca9548";
pinctrl-0 = <&pinctrl_i2c_mux_reset>;
pinctrl-names = "default";
diff --git a/arch/arm/configs/bcm2835_defconfig b/arch/arm/configs/bcm2835_defconfig
index 383c632..1e244a9 100644
--- a/arch/arm/configs/bcm2835_defconfig
+++ b/arch/arm/configs/bcm2835_defconfig
@@ -108,6 +108,7 @@
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_DRM=y
CONFIG_DRM_VC4=y
+CONFIG_FB=y
CONFIG_FB_SIMPLE=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_SOUND=y
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index 54abd87..91e757b 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -334,7 +334,7 @@ static void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
if (panic_on_oops)
panic("Fatal exception");
if (signr)
- do_exit(signr);
+ make_task_dead(signr);
}
/*
diff --git a/arch/arm/mach-imx/cpu-imx25.c b/arch/arm/mach-imx/cpu-imx25.c
index b2e1963..2ee2d28 100644
--- a/arch/arm/mach-imx/cpu-imx25.c
+++ b/arch/arm/mach-imx/cpu-imx25.c
@@ -23,6 +23,7 @@ static int mx25_read_cpu_rev(void)
np = of_find_compatible_node(NULL, NULL, "fsl,imx25-iim");
iim_base = of_iomap(np, 0);
+ of_node_put(np);
BUG_ON(!iim_base);
rev = readl(iim_base + MXC_IIMSREV);
iounmap(iim_base);
diff --git a/arch/arm/mach-imx/cpu-imx27.c b/arch/arm/mach-imx/cpu-imx27.c
index bf70e13..1d289390 100644
--- a/arch/arm/mach-imx/cpu-imx27.c
+++ b/arch/arm/mach-imx/cpu-imx27.c
@@ -28,6 +28,7 @@ static int mx27_read_cpu_rev(void)
np = of_find_compatible_node(NULL, NULL, "fsl,imx27-ccm");
ccm_base = of_iomap(np, 0);
+ of_node_put(np);
BUG_ON(!ccm_base);
/*
* now we have access to the IO registers. As we need
diff --git a/arch/arm/mach-imx/cpu-imx31.c b/arch/arm/mach-imx/cpu-imx31.c
index b9c24b8..35c5449 100644
--- a/arch/arm/mach-imx/cpu-imx31.c
+++ b/arch/arm/mach-imx/cpu-imx31.c
@@ -39,6 +39,7 @@ static int mx31_read_cpu_rev(void)
np = of_find_compatible_node(NULL, NULL, "fsl,imx31-iim");
iim_base = of_iomap(np, 0);
+ of_node_put(np);
BUG_ON(!iim_base);
/* read SREV register from IIM module */
diff --git a/arch/arm/mach-imx/cpu-imx35.c b/arch/arm/mach-imx/cpu-imx35.c
index 80e7d8a..1fe75b3 100644
--- a/arch/arm/mach-imx/cpu-imx35.c
+++ b/arch/arm/mach-imx/cpu-imx35.c
@@ -21,6 +21,7 @@ static int mx35_read_cpu_rev(void)
np = of_find_compatible_node(NULL, NULL, "fsl,imx35-iim");
iim_base = of_iomap(np, 0);
+ of_node_put(np);
BUG_ON(!iim_base);
rev = imx_readl(iim_base + MXC_IIMSREV);
diff --git a/arch/arm/mach-imx/cpu-imx5.c b/arch/arm/mach-imx/cpu-imx5.c
index ad56263..a67c89b 100644
--- a/arch/arm/mach-imx/cpu-imx5.c
+++ b/arch/arm/mach-imx/cpu-imx5.c
@@ -28,6 +28,7 @@ static u32 imx5_read_srev_reg(const char *compat)
np = of_find_compatible_node(NULL, NULL, compat);
iim_base = of_iomap(np, 0);
+ of_node_put(np);
WARN_ON(!iim_base);
srev = readl(iim_base + IIM_SREV) & 0xff;
diff --git a/arch/arm/mach-imx/mmdc.c b/arch/arm/mach-imx/mmdc.c
index af12668..b9efe9d 100644
--- a/arch/arm/mach-imx/mmdc.c
+++ b/arch/arm/mach-imx/mmdc.c
@@ -99,6 +99,7 @@ struct mmdc_pmu {
cpumask_t cpu;
struct hrtimer hrtimer;
unsigned int active_events;
+ int id;
struct device *dev;
struct perf_event *mmdc_events[MMDC_NUM_COUNTERS];
struct hlist_node node;
@@ -433,8 +434,6 @@ static enum hrtimer_restart mmdc_pmu_timer_handler(struct hrtimer *hrtimer)
static int mmdc_pmu_init(struct mmdc_pmu *pmu_mmdc,
void __iomem *mmdc_base, struct device *dev)
{
- int mmdc_num;
-
*pmu_mmdc = (struct mmdc_pmu) {
.pmu = (struct pmu) {
.task_ctx_nr = perf_invalid_context,
@@ -452,15 +451,16 @@ static int mmdc_pmu_init(struct mmdc_pmu *pmu_mmdc,
.active_events = 0,
};
- mmdc_num = ida_simple_get(&mmdc_ida, 0, 0, GFP_KERNEL);
+ pmu_mmdc->id = ida_simple_get(&mmdc_ida, 0, 0, GFP_KERNEL);
- return mmdc_num;
+ return pmu_mmdc->id;
}
static int imx_mmdc_remove(struct platform_device *pdev)
{
struct mmdc_pmu *pmu_mmdc = platform_get_drvdata(pdev);
+ ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
perf_pmu_unregister(&pmu_mmdc->pmu);
iounmap(pmu_mmdc->mmdc_base);
@@ -474,7 +474,6 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
{
struct mmdc_pmu *pmu_mmdc;
char *name;
- int mmdc_num;
int ret;
const struct of_device_id *of_id =
of_match_device(imx_mmdc_dt_ids, &pdev->dev);
@@ -497,14 +496,14 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
cpuhp_mmdc_state = ret;
}
- mmdc_num = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
- pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
- if (mmdc_num == 0)
- name = "mmdc";
- else
- name = devm_kasprintf(&pdev->dev,
- GFP_KERNEL, "mmdc%d", mmdc_num);
+ ret = mmdc_pmu_init(pmu_mmdc, mmdc_base, &pdev->dev);
+ if (ret < 0)
+ goto pmu_free;
+ name = devm_kasprintf(&pdev->dev,
+ GFP_KERNEL, "mmdc%d", ret);
+
+ pmu_mmdc->mmdc_ipg_clk = mmdc_ipg_clk;
pmu_mmdc->devtype_data = (struct fsl_mmdc_devtype_data *)of_id->data;
hrtimer_init(&pmu_mmdc->hrtimer, CLOCK_MONOTONIC,
@@ -525,6 +524,7 @@ static int imx_mmdc_perf_init(struct platform_device *pdev, void __iomem *mmdc_b
pmu_register_err:
pr_warn("MMDC Perf PMU failed (%d), disabled\n", ret);
+ ida_simple_remove(&mmdc_ida, pmu_mmdc->id);
cpuhp_state_remove_instance_nocalls(cpuhp_mmdc_state, &pmu_mmdc->node);
hrtimer_cancel(&pmu_mmdc->hrtimer);
pmu_free:
diff --git a/arch/arm/mach-omap1/timer.c b/arch/arm/mach-omap1/timer.c
index 0411d55..7046d7f 100644
--- a/arch/arm/mach-omap1/timer.c
+++ b/arch/arm/mach-omap1/timer.c
@@ -165,7 +165,7 @@ static int __init omap1_dm_timer_init(void)
kfree(pdata);
err_free_pdev:
- platform_device_unregister(pdev);
+ platform_device_put(pdev);
return ret;
}
diff --git a/arch/arm/mach-omap2/timer.c b/arch/arm/mach-omap2/timer.c
index 620ba69..5677c4a 100644
--- a/arch/arm/mach-omap2/timer.c
+++ b/arch/arm/mach-omap2/timer.c
@@ -76,6 +76,7 @@ static void __init realtime_counter_init(void)
}
rate = clk_get_rate(sys_clk);
+ clk_put(sys_clk);
if (soc_is_dra7xx()) {
/*
diff --git a/arch/arm/mach-s3c/s3c64xx.c b/arch/arm/mach-s3c/s3c64xx.c
index 4dfb648..17f0065 100644
--- a/arch/arm/mach-s3c/s3c64xx.c
+++ b/arch/arm/mach-s3c/s3c64xx.c
@@ -173,7 +173,8 @@ static struct samsung_pwm_variant s3c64xx_pwm_variant = {
.tclk_mask = (1 << 7) | (1 << 6) | (1 << 5),
};
-void __init s3c64xx_set_timer_source(unsigned int event, unsigned int source)
+void __init s3c64xx_set_timer_source(enum s3c64xx_timer_mode event,
+ enum s3c64xx_timer_mode source)
{
s3c64xx_pwm_variant.output_mask = BIT(SAMSUNG_PWM_NUM) - 1;
s3c64xx_pwm_variant.output_mask &= ~(BIT(event) | BIT(source));
diff --git a/arch/arm/mach-zynq/slcr.c b/arch/arm/mach-zynq/slcr.c
index 3770761..9765b3f 100644
--- a/arch/arm/mach-zynq/slcr.c
+++ b/arch/arm/mach-zynq/slcr.c
@@ -213,6 +213,7 @@ int __init zynq_early_slcr_init(void)
zynq_slcr_regmap = syscon_regmap_lookup_by_compatible("xlnx,zynq-slcr");
if (IS_ERR(zynq_slcr_regmap)) {
pr_err("%s: failed to find zynq-slcr\n", __func__);
+ of_node_put(np);
return -ENODEV;
}
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index efa4020..af51778 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -125,7 +125,7 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
show_pte(KERN_ALERT, mm, addr);
die("Oops", regs, fsr);
bust_spinlocks(0);
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
}
/*
diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index 88feffe..8061367 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -161,7 +161,7 @@ void __init paging_init(const struct machine_desc *mdesc)
mpu_setup();
/* allocate the zero page. */
- zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ zero_page = (void *)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
if (!zero_page)
panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
__func__, PAGE_SIZE, PAGE_SIZE);
diff --git a/arch/arm/probes/kprobes/core.c b/arch/arm/probes/kprobes/core.c
index 9d8634e..9bcae72 100644
--- a/arch/arm/probes/kprobes/core.c
+++ b/arch/arm/probes/kprobes/core.c
@@ -11,6 +11,8 @@
* Copyright (C) 2007 Marvell Ltd.
*/
+#define pr_fmt(fmt) "kprobes: " fmt
+
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/module.h>
@@ -278,7 +280,7 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
break;
case KPROBE_REENTER:
/* A nested probe was hit in FIQ, it is a BUG */
- pr_warn("Unrecoverable kprobe detected.\n");
+ pr_warn("Failed to recover from reentered kprobes.\n");
dump_kprobe(p);
fallthrough;
default:
diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
index 3f5254e..db5a1f4 100644
--- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
@@ -152,7 +152,7 @@ scpi {
scpi_clocks: clocks {
compatible = "arm,scpi-clocks";
- scpi_dvfs: clock-controller {
+ scpi_dvfs: clocks-0 {
compatible = "arm,scpi-dvfs-clocks";
#clock-cells = <1>;
clock-indices = <0>;
@@ -161,7 +161,7 @@ scpi_dvfs: clock-controller {
};
scpi_sensors: sensors {
- compatible = "amlogic,meson-gxbb-scpi-sensors";
+ compatible = "amlogic,meson-gxbb-scpi-sensors", "arm,scpi-sensors";
#thermal-sensor-cells = <1>;
};
};
@@ -1885,7 +1885,7 @@ apb: bus@ffe00000 {
sd_emmc_b: sd@5000 {
compatible = "amlogic,meson-axg-mmc";
reg = <0x0 0x5000 0x0 0x800>;
- interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <GIC_SPI 217 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
clocks = <&clkc CLKID_SD_EMMC_B>,
<&clkc CLKID_SD_EMMC_B_CLK0>,
@@ -1897,7 +1897,7 @@ sd_emmc_b: sd@5000 {
sd_emmc_c: mmc@7000 {
compatible = "amlogic,meson-axg-mmc";
reg = <0x0 0x7000 0x0 0x800>;
- interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <GIC_SPI 218 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
clocks = <&clkc CLKID_SD_EMMC_C>,
<&clkc CLKID_SD_EMMC_C_CLK0>,
diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
index a3a1ea0..899cfe4 100644
--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
@@ -1733,7 +1733,7 @@ int_mdio: mdio@1 {
#address-cells = <1>;
#size-cells = <0>;
- internal_ephy: ethernet_phy@8 {
+ internal_ephy: ethernet-phy@8 {
compatible = "ethernet-phy-id0180.3301",
"ethernet-phy-ieee802.3-c22";
interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>;
@@ -2330,7 +2330,7 @@ uart_A: serial@24000 {
sd_emmc_a: sd@ffe03000 {
compatible = "amlogic,meson-axg-mmc";
reg = <0x0 0xffe03000 0x0 0x800>;
- interrupts = <GIC_SPI 189 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
clocks = <&clkc CLKID_SD_EMMC_A>,
<&clkc CLKID_SD_EMMC_A_CLK0>,
@@ -2342,7 +2342,7 @@ sd_emmc_a: sd@ffe03000 {
sd_emmc_b: sd@ffe05000 {
compatible = "amlogic,meson-axg-mmc";
reg = <0x0 0xffe05000 0x0 0x800>;
- interrupts = <GIC_SPI 190 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
clocks = <&clkc CLKID_SD_EMMC_B>,
<&clkc CLKID_SD_EMMC_B_CLK0>,
@@ -2354,7 +2354,7 @@ sd_emmc_b: sd@ffe05000 {
sd_emmc_c: mmc@ffe07000 {
compatible = "amlogic,meson-axg-mmc";
reg = <0x0 0xffe07000 0x0 0x800>;
- interrupts = <GIC_SPI 191 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
clocks = <&clkc CLKID_SD_EMMC_C>,
<&clkc CLKID_SD_EMMC_C_CLK0>,
diff --git a/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi
index fb0ab27..6eaceb7 100644
--- a/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-g12a.dtsi
@@ -57,26 +57,6 @@ cpu_opp_table: opp-table {
compatible = "operating-points-v2";
opp-shared;
- opp-100000000 {
- opp-hz = /bits/ 64 <100000000>;
- opp-microvolt = <731000>;
- };
-
- opp-250000000 {
- opp-hz = /bits/ 64 <250000000>;
- opp-microvolt = <731000>;
- };
-
- opp-500000000 {
- opp-hz = /bits/ 64 <500000000>;
- opp-microvolt = <731000>;
- };
-
- opp-667000000 {
- opp-hz = /bits/ 64 <666666666>;
- opp-microvolt = <731000>;
- };
-
opp-1000000000 {
opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <731000>;
diff --git a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
index 2d7032f..772c220 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gx-libretech-pc.dtsi
@@ -17,7 +17,7 @@ adc-keys {
io-channel-names = "buttons";
keyup-threshold-microvolt = <1800000>;
- update-button {
+ button-update {
label = "update";
linux,code = <KEY_VENDOR>;
press-threshold-microvolt = <1300000>;
diff --git a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
index aa14ea0..32cc9fa 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gx.dtsi
@@ -232,7 +232,7 @@ sn: sn@14 {
reg = <0x14 0x10>;
};
- eth_mac: eth_mac@34 {
+ eth_mac: eth-mac@34 {
reg = <0x34 0x10>;
};
@@ -249,7 +249,7 @@ scpi {
scpi_clocks: clocks {
compatible = "arm,scpi-clocks";
- scpi_dvfs: scpi_clocks@0 {
+ scpi_dvfs: clocks-0 {
compatible = "arm,scpi-dvfs-clocks";
#clock-cells = <1>;
clock-indices = <0>;
@@ -531,7 +531,7 @@ periphs: bus@c8834000 {
#size-cells = <2>;
ranges = <0x0 0x0 0x0 0xc8834000 0x0 0x2000>;
- hwrng: rng {
+ hwrng: rng@0 {
compatible = "amlogic,meson-rng";
reg = <0x0 0x0 0x0 0x4>;
};
@@ -602,21 +602,21 @@ apb: apb@d0000000 {
sd_emmc_a: mmc@70000 {
compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
reg = <0x0 0x70000 0x0 0x800>;
- interrupts = <GIC_SPI 216 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <GIC_SPI 216 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
sd_emmc_b: mmc@72000 {
compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
reg = <0x0 0x72000 0x0 0x800>;
- interrupts = <GIC_SPI 217 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <GIC_SPI 217 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
sd_emmc_c: mmc@74000 {
compatible = "amlogic,meson-gx-mmc", "amlogic,meson-gxbb-mmc";
reg = <0x0 0x74000 0x0 0x800>;
- interrupts = <GIC_SPI 218 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <GIC_SPI 218 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
index e8394a8..802faf7 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
@@ -16,7 +16,7 @@ / {
leds {
compatible = "gpio-leds";
- status {
+ led {
gpios = <&gpio_ao GPIOAO_13 GPIO_ACTIVE_LOW>;
default-state = "off";
color = <LED_COLOR_ID_RED>;
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts
index 9ef210f..393d3cb 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts
+++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-phicomm-n1.dts
@@ -18,7 +18,7 @@ cvbs-connector {
leds {
compatible = "gpio-leds";
- status {
+ led {
label = "n1:white:status";
gpios = <&gpio_ao GPIOAO_9 GPIO_ACTIVE_HIGH>;
default-state = "on";
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts
index b331a01..c490dbb 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts
+++ b/arch/arm64/boot/dts/amlogic/meson-gxl-s905d-sml5442tw.dts
@@ -79,6 +79,5 @@ bluetooth {
enable-gpios = <&gpio GPIOX_17 GPIO_ACTIVE_HIGH>;
max-speed = <2000000>;
clocks = <&wifi32k>;
- clock-names = "lpo";
};
};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
index c3ac531..3500229 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
@@ -759,7 +759,7 @@ mux {
};
};
- eth-phy-mux {
+ eth-phy-mux@55c {
compatible = "mdio-mux-mmioreg", "mdio-mux";
#address-cells = <1>;
#size-cells = <0>;
diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
index cadba194..38ebe98 100644
--- a/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
+++ b/arch/arm64/boot/dts/amlogic/meson-sm1-bananapi-m5.dts
@@ -17,13 +17,13 @@ / {
compatible = "bananapi,bpi-m5", "amlogic,sm1";
model = "Banana Pi BPI-M5";
- adc_keys {
+ adc-keys {
compatible = "adc-keys";
io-channels = <&saradc 2>;
io-channel-names = "buttons";
keyup-threshold-microvolt = <1800000>;
- key {
+ button-sw3 {
label = "SW3";
linux,code = <BTN_3>;
press-threshold-microvolt = <1700000>;
@@ -123,7 +123,7 @@ vddio_c: regulator-vddio_c {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
- enable-gpio = <&gpio_ao GPIOE_2 GPIO_ACTIVE_HIGH>;
+ enable-gpio = <&gpio_ao GPIOE_2 GPIO_OPEN_DRAIN>;
enable-active-high;
regulator-always-on;
diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-hc4.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-hc4.dts
index f3f9532..15fece2 100644
--- a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-hc4.dts
+++ b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-hc4.dts
@@ -76,9 +76,17 @@ sound {
};
&cpu_thermal {
+ trips {
+ cpu_active: cpu-active {
+ temperature = <60000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "active";
+ };
+ };
+
cooling-maps {
map {
- trip = <&cpu_passive>;
+ trip = <&cpu_active>;
cooling-device = <&fan0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
};
};
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1012a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls1012a-qds.dts
index e22c5e7..9615f3b 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls1012a-qds.dts
+++ b/arch/arm64/boot/dts/freescale/fsl-ls1012a-qds.dts
@@ -110,7 +110,7 @@ &esdhc1 {
&i2c0 {
status = "okay";
- pca9547@77 {
+ i2c-mux@77 {
compatible = "nxp,pca9547";
reg = <0x77>;
#address-cells = <1>;
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1043a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls1043a-qds.dts
index fea167d..14856bc 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls1043a-qds.dts
+++ b/arch/arm64/boot/dts/freescale/fsl-ls1043a-qds.dts
@@ -70,7 +70,7 @@ fpga: board-control@2,0 {
&i2c0 {
status = "okay";
- pca9547@77 {
+ i2c-mux@77 {
compatible = "nxp,pca9547";
reg = <0x77>;
#address-cells = <1>;
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls1046a-qds.dts
index eec62c6..9ee9928 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a-qds.dts
+++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a-qds.dts
@@ -76,7 +76,7 @@ &duart1 {
&i2c0 {
status = "okay";
- pca9547@77 {
+ i2c-mux@77 {
compatible = "nxp,pca9547";
reg = <0x77>;
#address-cells = <1>;
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1088a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls1088a-qds.dts
index 41d8b15..aa52ff7 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls1088a-qds.dts
+++ b/arch/arm64/boot/dts/freescale/fsl-ls1088a-qds.dts
@@ -53,7 +53,7 @@ flash@2 {
&i2c0 {
status = "okay";
- i2c-switch@77 {
+ i2c-mux@77 {
compatible = "nxp,pca9547";
reg = <0x77>;
#address-cells = <1>;
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1088a-rdb.dts b/arch/arm64/boot/dts/freescale/fsl-ls1088a-rdb.dts
index 1bfbce6..ee8e932 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls1088a-rdb.dts
+++ b/arch/arm64/boot/dts/freescale/fsl-ls1088a-rdb.dts
@@ -136,7 +136,7 @@ mdio2_aquantia_phy: ethernet-phy@0 {
&i2c0 {
status = "okay";
- i2c-switch@77 {
+ i2c-mux@77 {
compatible = "nxp,pca9547";
reg = <0x77>;
#address-cells = <1>;
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1088a-ten64.dts b/arch/arm64/boot/dts/freescale/fsl-ls1088a-ten64.dts
index d3f03dc..a9c6682 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls1088a-ten64.dts
+++ b/arch/arm64/boot/dts/freescale/fsl-ls1088a-ten64.dts
@@ -245,7 +245,7 @@ rx8035: rtc@32 {
&i2c3 {
status = "okay";
- i2c-switch@70 {
+ i2c-mux@70 {
compatible = "nxp,pca9540";
#address-cells = <1>;
#size-cells = <0>;
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls208xa-qds.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls208xa-qds.dtsi
index 10d2fe0..8d96d18 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls208xa-qds.dtsi
+++ b/arch/arm64/boot/dts/freescale/fsl-ls208xa-qds.dtsi
@@ -44,7 +44,7 @@ cpld@3,0 {
&i2c0 {
status = "okay";
- pca9547@77 {
+ i2c-mux@77 {
compatible = "nxp,pca9547";
reg = <0x77>;
#address-cells = <1>;
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls208xa-rdb.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls208xa-rdb.dtsi
index 4b71c4f..787e408 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls208xa-rdb.dtsi
+++ b/arch/arm64/boot/dts/freescale/fsl-ls208xa-rdb.dtsi
@@ -44,7 +44,7 @@ cpld@3,0 {
&i2c0 {
status = "okay";
- pca9547@75 {
+ i2c-mux@75 {
compatible = "nxp,pca9547";
reg = <0x75>;
#address-cells = <1>;
diff --git a/arch/arm64/boot/dts/freescale/fsl-lx2160a-cex7.dtsi b/arch/arm64/boot/dts/freescale/fsl-lx2160a-cex7.dtsi
index afb4552..d32a52a 100644
--- a/arch/arm64/boot/dts/freescale/fsl-lx2160a-cex7.dtsi
+++ b/arch/arm64/boot/dts/freescale/fsl-lx2160a-cex7.dtsi
@@ -54,7 +54,7 @@ &esdhc1 {
&i2c0 {
status = "okay";
- i2c-switch@77 {
+ i2c-mux@77 {
compatible = "nxp,pca9547";
#address-cells = <1>;
#size-cells = <0>;
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi
index 94e5fa8..bb18354 100644
--- a/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mm-beacon-baseboard.dtsi
@@ -70,7 +70,7 @@ sound {
&ecspi2 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_espi2>;
- cs-gpios = <&gpio5 9 GPIO_ACTIVE_LOW>;
+ cs-gpios = <&gpio5 13 GPIO_ACTIVE_LOW>;
status = "okay";
eeprom@0 {
@@ -186,7 +186,7 @@ pinctrl_espi2: espi2grp {
MX8MM_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK 0x82
MX8MM_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI 0x82
MX8MM_IOMUXC_ECSPI2_MISO_ECSPI2_MISO 0x82
- MX8MM_IOMUXC_ECSPI1_SS0_GPIO5_IO9 0x41
+ MX8MM_IOMUXC_ECSPI2_SS0_GPIO5_IO13 0x41
>;
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-nitrogen-r2.dts b/arch/arm64/boot/dts/freescale/imx8mm-nitrogen-r2.dts
index 74c0989..0e8f0d7 100644
--- a/arch/arm64/boot/dts/freescale/imx8mm-nitrogen-r2.dts
+++ b/arch/arm64/boot/dts/freescale/imx8mm-nitrogen-r2.dts
@@ -214,7 +214,7 @@ &i2c3 {
pinctrl-0 = <&pinctrl_i2c3>;
status = "okay";
- i2cmux@70 {
+ i2c-mux@70 {
compatible = "nxp,pca9540";
reg = <0x70>;
#address-cells = <1>;
@@ -247,7 +247,7 @@ wm8960: codec@1a {
compatible = "wlf,wm8960";
reg = <0x1a>;
clocks = <&clk IMX8MM_CLK_SAI1_ROOT>;
- clock-names = "mclk1";
+ clock-names = "mclk";
wlf,shared-lrclk;
#sound-dai-cells = <0>;
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h b/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
index a003e6a..56271ab 100644
--- a/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
+++ b/arch/arm64/boot/dts/freescale/imx8mm-pinfunc.h
@@ -601,7 +601,7 @@
#define MX8MM_IOMUXC_UART1_RXD_GPIO5_IO22 0x234 0x49C 0x000 0x5 0x0
#define MX8MM_IOMUXC_UART1_RXD_TPSMP_HDATA24 0x234 0x49C 0x000 0x7 0x0
#define MX8MM_IOMUXC_UART1_TXD_UART1_DCE_TX 0x238 0x4A0 0x000 0x0 0x0
-#define MX8MM_IOMUXC_UART1_TXD_UART1_DTE_RX 0x238 0x4A0 0x4F4 0x0 0x0
+#define MX8MM_IOMUXC_UART1_TXD_UART1_DTE_RX 0x238 0x4A0 0x4F4 0x0 0x1
#define MX8MM_IOMUXC_UART1_TXD_ECSPI3_MOSI 0x238 0x4A0 0x000 0x1 0x0
#define MX8MM_IOMUXC_UART1_TXD_GPIO5_IO23 0x238 0x4A0 0x000 0x5 0x0
#define MX8MM_IOMUXC_UART1_TXD_TPSMP_HDATA25 0x238 0x4A0 0x000 0x7 0x0
diff --git a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts
index bafd5c8..f7e41e5c2c 100644
--- a/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts
+++ b/arch/arm64/boot/dts/freescale/imx8mm-venice-gw7901.dts
@@ -675,6 +675,7 @@ &usbotg1 {
&usbotg2 {
dr_mode = "host";
vbus-supply = <&reg_usb2_vbus>;
+ over-current-active-low;
status = "okay";
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mm.dtsi b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
index 67e91fd..2a67122 100644
--- a/arch/arm64/boot/dts/freescale/imx8mm.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mm.dtsi
@@ -530,7 +530,7 @@ ocotp: efuse@30350000 {
#address-cells = <1>;
#size-cells = <1>;
- imx8mm_uid: unique-id@410 {
+ imx8mm_uid: unique-id@4 {
reg = <0x4 0x8>;
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mn.dtsi b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
index 6dcead5b..16a5efb 100644
--- a/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
@@ -265,6 +265,7 @@ spba2: spba-bus@30000000 {
sai2: sai@30020000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x30020000 0x10000>;
+ #sound-dai-cells = <0>;
interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI2_IPG>,
<&clk IMX8MN_CLK_DUMMY>,
@@ -279,6 +280,7 @@ sai2: sai@30020000 {
sai3: sai@30030000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x30030000 0x10000>;
+ #sound-dai-cells = <0>;
interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI3_IPG>,
<&clk IMX8MN_CLK_DUMMY>,
@@ -293,6 +295,7 @@ sai3: sai@30030000 {
sai5: sai@30050000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x30050000 0x10000>;
+ #sound-dai-cells = <0>;
interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI5_IPG>,
<&clk IMX8MN_CLK_DUMMY>,
@@ -309,6 +312,7 @@ sai5: sai@30050000 {
sai6: sai@30060000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x30060000 0x10000>;
+ #sound-dai-cells = <0>;
interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI6_IPG>,
<&clk IMX8MN_CLK_DUMMY>,
@@ -366,6 +370,7 @@ spdif1: spdif@30090000 {
sai7: sai@300b0000 {
compatible = "fsl,imx8mn-sai", "fsl,imx8mq-sai";
reg = <0x300b0000 0x10000>;
+ #sound-dai-cells = <0>;
interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk IMX8MN_CLK_SAI7_IPG>,
<&clk IMX8MN_CLK_DUMMY>,
@@ -533,7 +538,7 @@ ocotp: efuse@30350000 {
#address-cells = <1>;
#size-cells = <1>;
- imx8mn_uid: unique-id@410 {
+ imx8mn_uid: unique-id@4 {
reg = <0x4 0x8>;
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mp-phycore-som.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-phycore-som.dtsi
index fc178ee..8e189d8 100644
--- a/arch/arm64/boot/dts/freescale/imx8mp-phycore-som.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mp-phycore-som.dtsi
@@ -98,7 +98,6 @@ pmic: pmic@25 {
regulators {
buck1: BUCK1 {
- regulator-compatible = "BUCK1";
regulator-min-microvolt = <600000>;
regulator-max-microvolt = <2187500>;
regulator-boot-on;
@@ -107,7 +106,6 @@ buck1: BUCK1 {
};
buck2: BUCK2 {
- regulator-compatible = "BUCK2";
regulator-min-microvolt = <600000>;
regulator-max-microvolt = <2187500>;
regulator-boot-on;
@@ -116,7 +114,6 @@ buck2: BUCK2 {
};
buck4: BUCK4 {
- regulator-compatible = "BUCK4";
regulator-min-microvolt = <600000>;
regulator-max-microvolt = <3400000>;
regulator-boot-on;
@@ -124,7 +121,6 @@ buck4: BUCK4 {
};
buck5: BUCK5 {
- regulator-compatible = "BUCK5";
regulator-min-microvolt = <600000>;
regulator-max-microvolt = <3400000>;
regulator-boot-on;
@@ -132,7 +128,6 @@ buck5: BUCK5 {
};
buck6: BUCK6 {
- regulator-compatible = "BUCK6";
regulator-min-microvolt = <600000>;
regulator-max-microvolt = <3400000>;
regulator-boot-on;
@@ -140,7 +135,6 @@ buck6: BUCK6 {
};
ldo1: LDO1 {
- regulator-compatible = "LDO1";
regulator-min-microvolt = <1600000>;
regulator-max-microvolt = <3300000>;
regulator-boot-on;
@@ -148,7 +142,6 @@ ldo1: LDO1 {
};
ldo2: LDO2 {
- regulator-compatible = "LDO2";
regulator-min-microvolt = <800000>;
regulator-max-microvolt = <1150000>;
regulator-boot-on;
@@ -156,7 +149,6 @@ ldo2: LDO2 {
};
ldo3: LDO3 {
- regulator-compatible = "LDO3";
regulator-min-microvolt = <800000>;
regulator-max-microvolt = <3300000>;
regulator-boot-on;
@@ -164,7 +156,6 @@ ldo3: LDO3 {
};
ldo4: LDO4 {
- regulator-compatible = "LDO4";
regulator-min-microvolt = <800000>;
regulator-max-microvolt = <3300000>;
regulator-boot-on;
@@ -172,7 +163,6 @@ ldo4: LDO4 {
};
ldo5: LDO5 {
- regulator-compatible = "LDO5";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
index 664177e..ab670b5 100644
--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi
@@ -358,7 +358,7 @@ ocotp: efuse@30350000 {
#address-cells = <1>;
#size-cells = <1>;
- imx8mp_uid: unique-id@420 {
+ imx8mp_uid: unique-id@8 {
reg = <0x8 0x8>;
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mq-nitrogen.dts b/arch/arm64/boot/dts/freescale/imx8mq-nitrogen.dts
index f70fb32..cf14ab5 100644
--- a/arch/arm64/boot/dts/freescale/imx8mq-nitrogen.dts
+++ b/arch/arm64/boot/dts/freescale/imx8mq-nitrogen.dts
@@ -133,7 +133,7 @@ &i2c1 {
pinctrl-0 = <&pinctrl_i2c1>;
status = "okay";
- i2cmux@70 {
+ i2c-mux@70 {
compatible = "nxp,pca9546";
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_i2c1_pca9546>;
@@ -216,7 +216,7 @@ &i2c4 {
pinctrl-0 = <&pinctrl_i2c4>;
status = "okay";
- pca9546: i2cmux@70 {
+ pca9546: i2c-mux@70 {
compatible = "nxp,pca9546";
reg = <0x70>;
#address-cells = <1>;
diff --git a/arch/arm64/boot/dts/freescale/imx8mq-thor96.dts b/arch/arm64/boot/dts/freescale/imx8mq-thor96.dts
index 5d5aa65..6e61827 100644
--- a/arch/arm64/boot/dts/freescale/imx8mq-thor96.dts
+++ b/arch/arm64/boot/dts/freescale/imx8mq-thor96.dts
@@ -339,7 +339,7 @@ &usdhc1 {
bus-width = <4>;
non-removable;
no-sd;
- no-emmc;
+ no-mmc;
status = "okay";
brcmf: wifi@1 {
@@ -359,7 +359,7 @@ &usdhc2 {
cd-gpios = <&gpio2 12 GPIO_ACTIVE_LOW>;
bus-width = <4>;
no-sdio;
- no-emmc;
+ no-mmc;
disable-wp;
status = "okay";
};
diff --git a/arch/arm64/boot/dts/freescale/imx8mq.dtsi b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
index fd38092..2a698c5 100644
--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
@@ -557,7 +557,7 @@ ocotp: efuse@30350000 {
#address-cells = <1>;
#size-cells = <1>;
- imx8mq_uid: soc-uid@410 {
+ imx8mq_uid: soc-uid@4 {
reg = <0x4 0x8>;
};
diff --git a/arch/arm64/boot/dts/freescale/imx8qxp-mek.dts b/arch/arm64/boot/dts/freescale/imx8qxp-mek.dts
index 863232a..4497763 100644
--- a/arch/arm64/boot/dts/freescale/imx8qxp-mek.dts
+++ b/arch/arm64/boot/dts/freescale/imx8qxp-mek.dts
@@ -61,7 +61,7 @@ &i2c1 {
pinctrl-0 = <&pinctrl_lpi2c1 &pinctrl_ioexp_rst>;
status = "okay";
- i2c-switch@71 {
+ i2c-mux@71 {
compatible = "nxp,pca9646", "nxp,pca9546";
#address-cells = <1>;
#size-cells = <0>;
diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
index 890a942..a4c48b2 100644
--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
@@ -428,6 +428,7 @@ uart3: serial@11005000 {
pwm: pwm@11006000 {
compatible = "mediatek,mt7622-pwm";
reg = <0 0x11006000 0 0x1000>;
+ #pwm-cells = <2>;
interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_LOW>;
clocks = <&topckgen CLK_TOP_PWM_SEL>,
<&pericfg CLK_PERI_PWM_PD>,
diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
index f4e0bea..81fde34 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
@@ -299,6 +299,15 @@ psci {
method = "smc";
};
+ clk13m: fixed-factor-clock-13m {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clocks = <&clk26m>;
+ clock-div = <2>;
+ clock-mult = <1>;
+ clock-output-names = "clk13m";
+ };
+
clk26m: oscillator {
compatible = "fixed-clock";
#clock-cells = <0>;
@@ -610,8 +619,7 @@ systimer: timer@10017000 {
"mediatek,mt6765-timer";
reg = <0 0x10017000 0 0x1000>;
interrupts = <GIC_SPI 200 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&topckgen CLK_TOP_CLK13M>;
- clock-names = "clk13m";
+ clocks = <&clk13m>;
};
iommu: iommu@10205000 {
diff --git a/arch/arm64/boot/dts/mediatek/mt8192.dtsi b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
index d1e6352..9ed1a72 100644
--- a/arch/arm64/boot/dts/mediatek/mt8192.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8192.dtsi
@@ -135,19 +135,16 @@ core2 {
core3 {
cpu = <&cpu3>;
};
- };
-
- cluster1 {
- core0 {
+ core4 {
cpu = <&cpu4>;
};
- core1 {
+ core5 {
cpu = <&cpu5>;
};
- core2 {
+ core6 {
cpu = <&cpu6>;
};
- core3 {
+ core7 {
cpu = <&cpu7>;
};
};
diff --git a/arch/arm64/boot/dts/qcom/ipq8074.dtsi b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
index 9d4019e..68e82c7 100644
--- a/arch/arm64/boot/dts/qcom/ipq8074.dtsi
+++ b/arch/arm64/boot/dts/qcom/ipq8074.dtsi
@@ -114,7 +114,7 @@ usb1_ssphy: phy@58200 {
#phy-cells = <0>;
clocks = <&gcc GCC_USB1_PIPE_CLK>;
clock-names = "pipe0";
- clock-output-names = "gcc_usb1_pipe_clk_src";
+ clock-output-names = "usb3phy_1_cc_pipe_clk";
};
};
@@ -157,7 +157,7 @@ usb0_ssphy: phy@78200 {
#phy-cells = <0>;
clocks = <&gcc GCC_USB0_PIPE_CLK>;
clock-names = "pipe0";
- clock-output-names = "gcc_usb0_pipe_clk_src";
+ clock-output-names = "usb3phy_0_cc_pipe_clk";
};
};
@@ -174,34 +174,61 @@ qusb_phy_0: phy@79000 {
status = "disabled";
};
- pcie_phy0: phy@86000 {
- compatible = "qcom,ipq8074-qmp-pcie-phy";
- reg = <0x00086000 0x1000>;
- #phy-cells = <0>;
- clocks = <&gcc GCC_PCIE0_PIPE_CLK>;
- clock-names = "pipe_clk";
- clock-output-names = "pcie20_phy0_pipe_clk";
+ pcie_qmp0: phy@84000 {
+ compatible = "qcom,ipq8074-qmp-gen3-pcie-phy";
+ reg = <0x00084000 0x1bc>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ clocks = <&gcc GCC_PCIE0_AUX_CLK>,
+ <&gcc GCC_PCIE0_AHB_CLK>;
+ clock-names = "aux", "cfg_ahb";
resets = <&gcc GCC_PCIE0_PHY_BCR>,
<&gcc GCC_PCIE0PHY_PHY_BCR>;
reset-names = "phy",
"common";
status = "disabled";
+
+ pcie_phy0: phy@84200 {
+ reg = <0x84200 0x16c>,
+ <0x84400 0x200>,
+ <0x84800 0x1f0>,
+ <0x84c00 0xf4>;
+ #phy-cells = <0>;
+ #clock-cells = <0>;
+ clocks = <&gcc GCC_PCIE0_PIPE_CLK>;
+ clock-names = "pipe0";
+ clock-output-names = "pcie20_phy0_pipe_clk";
+ };
};
- pcie_phy1: phy@8e000 {
+ pcie_qmp1: phy@8e000 {
compatible = "qcom,ipq8074-qmp-pcie-phy";
- reg = <0x0008e000 0x1000>;
- #phy-cells = <0>;
- clocks = <&gcc GCC_PCIE1_PIPE_CLK>;
- clock-names = "pipe_clk";
- clock-output-names = "pcie20_phy1_pipe_clk";
+ reg = <0x0008e000 0x1c4>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ clocks = <&gcc GCC_PCIE1_AUX_CLK>,
+ <&gcc GCC_PCIE1_AHB_CLK>;
+ clock-names = "aux", "cfg_ahb";
resets = <&gcc GCC_PCIE1_PHY_BCR>,
<&gcc GCC_PCIE1PHY_PHY_BCR>;
reset-names = "phy",
"common";
status = "disabled";
+
+ pcie_phy1: phy@8e200 {
+ reg = <0x8e200 0x130>,
+ <0x8e400 0x200>,
+ <0x8e800 0x1f8>;
+ #phy-cells = <0>;
+ #clock-cells = <0>;
+ clocks = <&gcc GCC_PCIE1_PIPE_CLK>;
+ clock-names = "pipe0";
+ clock-output-names = "pcie20_phy1_pipe_clk";
+ };
};
prng: rng@e3000 {
@@ -635,9 +662,9 @@ pcie1: pci@10000000 {
phy-names = "pciephy";
ranges = <0x81000000 0 0x10200000 0x10200000
- 0 0x100000 /* downstream I/O */
- 0x82000000 0 0x10300000 0x10300000
- 0 0xd00000>; /* non-prefetchable memory */
+ 0 0x10000>, /* downstream I/O */
+ <0x82000000 0 0x10220000 0x10220000
+ 0 0xfde0000>; /* non-prefetchable memory */
interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
@@ -680,16 +707,18 @@ IRQ_TYPE_LEVEL_HIGH>, /* int_c */
};
pcie0: pci@20000000 {
- compatible = "qcom,pcie-ipq8074";
+ compatible = "qcom,pcie-ipq8074-gen3";
reg = <0x20000000 0xf1d>,
<0x20000f20 0xa8>,
- <0x00080000 0x2000>,
+ <0x20001000 0x1000>,
+ <0x00080000 0x4000>,
<0x20100000 0x1000>;
- reg-names = "dbi", "elbi", "parf", "config";
+ reg-names = "dbi", "elbi", "atu", "parf", "config";
device_type = "pci";
linux,pci-domain = <0>;
bus-range = <0x00 0xff>;
num-lanes = <1>;
+ max-link-speed = <3>;
#address-cells = <3>;
#size-cells = <2>;
@@ -697,9 +726,9 @@ pcie0: pci@20000000 {
phy-names = "pciephy";
ranges = <0x81000000 0 0x20200000 0x20200000
- 0 0x100000 /* downstream I/O */
- 0x82000000 0 0x20300000 0x20300000
- 0 0xd00000>; /* non-prefetchable memory */
+ 0 0x10000>, /* downstream I/O */
+ <0x82000000 0 0x20220000 0x20220000
+ 0 0xfde0000>; /* non-prefetchable memory */
interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi";
@@ -717,28 +746,30 @@ IRQ_TYPE_LEVEL_HIGH>, /* int_c */
clocks = <&gcc GCC_SYS_NOC_PCIE0_AXI_CLK>,
<&gcc GCC_PCIE0_AXI_M_CLK>,
<&gcc GCC_PCIE0_AXI_S_CLK>,
- <&gcc GCC_PCIE0_AHB_CLK>,
- <&gcc GCC_PCIE0_AUX_CLK>;
-
+ <&gcc GCC_PCIE0_AXI_S_BRIDGE_CLK>,
+ <&gcc GCC_PCIE0_RCHNG_CLK>;
clock-names = "iface",
"axi_m",
"axi_s",
- "ahb",
- "aux";
+ "axi_bridge",
+ "rchng";
+
resets = <&gcc GCC_PCIE0_PIPE_ARES>,
<&gcc GCC_PCIE0_SLEEP_ARES>,
<&gcc GCC_PCIE0_CORE_STICKY_ARES>,
<&gcc GCC_PCIE0_AXI_MASTER_ARES>,
<&gcc GCC_PCIE0_AXI_SLAVE_ARES>,
<&gcc GCC_PCIE0_AHB_ARES>,
- <&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>;
+ <&gcc GCC_PCIE0_AXI_MASTER_STICKY_ARES>,
+ <&gcc GCC_PCIE0_AXI_SLAVE_STICKY_ARES>;
reset-names = "pipe",
"sleep",
"sticky",
"axi_m",
"axi_s",
"ahb",
- "axi_m_sticky";
+ "axi_m_sticky",
+ "axi_s_sticky";
status = "disabled";
};
};
diff --git a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
index c7d191dc..60fcb02 100644
--- a/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
+++ b/arch/arm64/boot/dts/qcom/msm8992-bullhead-rev-101.dts
@@ -1,7 +1,8 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2015, LGE Inc. All rights reserved.
* Copyright (c) 2016, The Linux Foundation. All rights reserved.
- * Copyright (c) 2021, Petr Vorel <petr.vorel@gmail.com>
+ * Copyright (c) 2021-2022, Petr Vorel <petr.vorel@gmail.com>
+ * Copyright (c) 2022, Dominik Kobinski <dominikkobinski314@gmail.com>
*/
/dts-v1/;
@@ -13,6 +14,9 @@
/* cont_splash_mem has different memory mapping */
/delete-node/ &cont_splash_mem;
+/* disabled on downstream, conflicts with cont_splash_mem */
+/delete-node/ &dfps_data_mem;
+
/ {
model = "LG Nexus 5X";
compatible = "lg,bullhead", "qcom,msm8992";
@@ -47,7 +51,17 @@ ramoops@1ff00000 {
};
cont_splash_mem: memory@3400000 {
- reg = <0 0x03400000 0 0x1200000>;
+ reg = <0 0x03400000 0 0xc00000>;
+ no-map;
+ };
+
+ reserved@5000000 {
+ reg = <0x0 0x05000000 0x0 0x1a00000>;
+ no-map;
+ };
+
+ reserved@6c00000 {
+ reg = <0x0 0x06c00000 0x0 0x400000>;
no-map;
};
};
diff --git a/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts b/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts
index a3d6340..d08659c 100644
--- a/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts
+++ b/arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts
@@ -11,6 +11,12 @@
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/input/gpio-keys.h>
+/delete-node/ &adsp_mem;
+/delete-node/ &audio_mem;
+/delete-node/ &mpss_mem;
+/delete-node/ &peripheral_region;
+/delete-node/ &rmtfs_mem;
+
/ {
model = "Xiaomi Mi 4C";
compatible = "xiaomi,libra", "qcom,msm8992";
@@ -60,25 +66,67 @@ reserved-memory {
#size-cells = <2>;
ranges;
- /* This is for getting crash logs using Android downstream kernels */
+ memory_hole: hole@6400000 {
+ reg = <0 0x06400000 0 0x600000>;
+ no-map;
+ };
+
+ memory_hole2: hole2@6c00000 {
+ reg = <0 0x06c00000 0 0x2400000>;
+ no-map;
+ };
+
+ mpss_mem: mpss@9000000 {
+ reg = <0 0x09000000 0 0x5a00000>;
+ no-map;
+ };
+
+ tzapp: tzapp@ea00000 {
+ reg = <0 0x0ea00000 0 0x1900000>;
+ no-map;
+ };
+
+ mdm_rfsa_mem: mdm-rfsa@ca0b0000 {
+ reg = <0 0xca0b0000 0 0x10000>;
+ no-map;
+ };
+
+ rmtfs_mem: rmtfs@ca100000 {
+ compatible = "qcom,rmtfs-mem";
+ reg = <0 0xca100000 0 0x180000>;
+ no-map;
+
+ qcom,client-id = <1>;
+ };
+
+ audio_mem: audio@cb400000 {
+ reg = <0 0xcb000000 0 0x400000>;
+ no-mem;
+ };
+
+ qseecom_mem: qseecom@cb400000 {
+ reg = <0 0xcb400000 0 0x1c00000>;
+ no-mem;
+ };
+
+ adsp_rfsa_mem: adsp-rfsa@cd000000 {
+ reg = <0 0xcd000000 0 0x10000>;
+ no-map;
+ };
+
+ sensor_rfsa_mem: sensor-rfsa@cd010000 {
+ reg = <0 0xcd010000 0 0x10000>;
+ no-map;
+ };
+
ramoops@dfc00000 {
compatible = "ramoops";
- reg = <0x0 0xdfc00000 0x0 0x40000>;
+ reg = <0 0xdfc00000 0 0x40000>;
console-size = <0x10000>;
record-size = <0x10000>;
ftrace-size = <0x10000>;
pmsg-size = <0x20000>;
};
-
- modem_region: modem_region@9000000 {
- reg = <0x0 0x9000000 0x0 0x5a00000>;
- no-map;
- };
-
- tzapp: modem_region@ea00000 {
- reg = <0x0 0xea00000 0x0 0x1900000>;
- no-map;
- };
};
};
@@ -120,9 +168,21 @@ &blsp2_uart2 {
status = "okay";
};
-&peripheral_region {
- reg = <0x0 0x7400000 0x0 0x1c00000>;
- no-map;
+&pm8994_spmi_regulators {
+ VDD_APC0: s8 {
+ regulator-min-microvolt = <680000>;
+ regulator-max-microvolt = <1180000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ /* APC1 is 3-phase, but quoting downstream, s11 is "the gang leader" */
+ VDD_APC1: s11 {
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1225000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
};
&rpm_requests {
diff --git a/arch/arm64/boot/dts/qcom/msm8992.dtsi b/arch/arm64/boot/dts/qcom/msm8992.dtsi
index 58fe58c..765e1f1 100644
--- a/arch/arm64/boot/dts/qcom/msm8992.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8992.dtsi
@@ -14,10 +14,6 @@ &rpmcc {
compatible = "qcom,rpmcc-msm8992";
};
-&tcsr_mutex {
- compatible = "qcom,sfpb-mutex";
-};
-
&timer {
interrupts = <GIC_PPI 2 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 3 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
diff --git a/arch/arm64/boot/dts/qcom/msm8996-sony-xperia-tone.dtsi b/arch/arm64/boot/dts/qcom/msm8996-sony-xperia-tone.dtsi
index 507396c..7802aba 100644
--- a/arch/arm64/boot/dts/qcom/msm8996-sony-xperia-tone.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8996-sony-xperia-tone.dtsi
@@ -938,10 +938,6 @@ touch_int_sleep: touch-int-sleep {
};
};
-/*
- * For reasons that are currently unknown (but probably related to fusb301), USB takes about
- * 6 minutes to wake up (nothing interesting in kernel logs), but then it works as it should.
- */
&usb3 {
status = "okay";
qcom,select-utmi-as-pipe-clk;
@@ -950,6 +946,7 @@ &usb3 {
&usb3_dwc3 {
extcon = <&usb3_id>;
dr_mode = "peripheral";
+ maximum-speed = "high-speed";
phys = <&hsusb_phy1>;
phy-names = "usb2-phy";
snps,hird-threshold = /bits/ 8 <0>;
diff --git a/arch/arm64/boot/dts/qcom/pmk8350.dtsi b/arch/arm64/boot/dts/qcom/pmk8350.dtsi
index 04fc263..fc38f77 100644
--- a/arch/arm64/boot/dts/qcom/pmk8350.dtsi
+++ b/arch/arm64/boot/dts/qcom/pmk8350.dtsi
@@ -16,8 +16,9 @@ pmk8350: pmic@0 {
#size-cells = <0>;
pmk8350_pon: pon@1300 {
- compatible = "qcom,pm8998-pon";
- reg = <0x1300>;
+ compatible = "qcom,pmk8350-pon";
+ reg = <0x1300>, <0x800>;
+ reg-names = "hlos", "pbs";
pwrkey {
compatible = "qcom,pmk8350-pwrkey";
diff --git a/arch/arm64/boot/dts/qcom/qcs404.dtsi b/arch/arm64/boot/dts/qcom/qcs404.dtsi
index 18cc8e3b..fd0d634 100644
--- a/arch/arm64/boot/dts/qcom/qcs404.dtsi
+++ b/arch/arm64/boot/dts/qcom/qcs404.dtsi
@@ -775,7 +775,7 @@ pcie_phy: phy@7786000 {
clocks = <&gcc GCC_PCIE_0_PIPE_CLK>;
resets = <&gcc GCC_PCIEPHY_0_PHY_BCR>,
- <&gcc 21>;
+ <&gcc GCC_PCIE_0_PIPE_ARES>;
reset-names = "phy", "pipe";
clock-output-names = "pcie_0_pipe_clk";
@@ -1305,12 +1305,12 @@ pcie: pci@10000000 {
<&gcc GCC_PCIE_0_SLV_AXI_CLK>;
clock-names = "iface", "aux", "master_bus", "slave_bus";
- resets = <&gcc 18>,
- <&gcc 17>,
- <&gcc 15>,
- <&gcc 19>,
+ resets = <&gcc GCC_PCIE_0_AXI_MASTER_ARES>,
+ <&gcc GCC_PCIE_0_AXI_SLAVE_ARES>,
+ <&gcc GCC_PCIE_0_AXI_MASTER_STICKY_ARES>,
+ <&gcc GCC_PCIE_0_CORE_STICKY_ARES>,
<&gcc GCC_PCIE_0_BCR>,
- <&gcc 16>;
+ <&gcc GCC_PCIE_0_AHB_ARES>;
reset-names = "axi_m",
"axi_s",
"axi_m_sticky",
diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
index de86ae3..12816d6 100644
--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
@@ -3241,8 +3241,8 @@ spmi_bus: spmi@c440000 {
interrupts-extended = <&pdc 1 IRQ_TYPE_LEVEL_HIGH>;
qcom,ee = <0>;
qcom,channel = <0>;
- #address-cells = <1>;
- #size-cells = <1>;
+ #address-cells = <2>;
+ #size-cells = <0>;
interrupt-controller;
#interrupt-cells = <4>;
cell-index = <0>;
diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
index b795a999..fb6473a0 100644
--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
+++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
@@ -1494,8 +1494,8 @@ spmi_bus: spmi@c440000 {
interrupts-extended = <&pdc 1 IRQ_TYPE_LEVEL_HIGH>;
qcom,ee = <0>;
qcom,channel = <0>;
- #address-cells = <1>;
- #size-cells = <1>;
+ #address-cells = <2>;
+ #size-cells = <0>;
interrupt-controller;
#interrupt-cells = <4>;
};
diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
index 146d3cd..5ce270f 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
@@ -896,7 +896,7 @@ sdc2_card_det_n: sd-card-det-n {
};
wcd_intr_default: wcd_intr_default {
- pins = <54>;
+ pins = "gpio54";
function = "gpio";
input-enable;
diff --git a/arch/arm64/boot/dts/qcom/sm6125.dtsi b/arch/arm64/boot/dts/qcom/sm6125.dtsi
index dc3bddc..2e4fe2bc 100644
--- a/arch/arm64/boot/dts/qcom/sm6125.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm6125.dtsi
@@ -392,9 +392,9 @@ hsusb_phy1: phy@1613000 {
reg = <0x01613000 0x180>;
#phy-cells = <0>;
- clocks = <&rpmcc RPM_SMD_XO_CLK_SRC>,
- <&gcc GCC_AHB2PHY_USB_CLK>;
- clock-names = "ref", "cfg_ahb";
+ clocks = <&gcc GCC_AHB2PHY_USB_CLK>,
+ <&rpmcc RPM_SMD_XO_CLK_SRC>;
+ clock-names = "cfg_ahb", "ref";
resets = <&gcc GCC_QUSB2PHY_PRIM_BCR>;
status = "disabled";
diff --git a/arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi b/arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi
index fb6e5a1..04c71f7 100644
--- a/arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi
@@ -33,9 +33,10 @@ chosen {
framebuffer: framebuffer@9c000000 {
compatible = "simple-framebuffer";
reg = <0 0x9c000000 0 0x2300000>;
- width = <1644>;
- height = <3840>;
- stride = <(1644 * 4)>;
+ /* Griffin BL initializes in 2.5k mode, not 4k */
+ width = <1096>;
+ height = <2560>;
+ stride = <(1096 * 4)>;
format = "a8r8g8b8";
/*
* That's (going to be) a lot of clocks, but it's necessary due
diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
index 48e0c04..f1ab494 100644
--- a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
+++ b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi
@@ -432,20 +432,6 @@ wm8962_endpoint: endpoint {
};
};
- /* 0 - lcd_reset */
- /* 1 - lcd_pwr */
- /* 2 - lcd_select */
- /* 3 - backlight-enable */
- /* 4 - Touch_shdwn */
- /* 5 - LCD_H_pol */
- /* 6 - lcd_V_pol */
- gpio_exp1: gpio@20 {
- compatible = "onnn,pca9654";
- reg = <0x20>;
- gpio-controller;
- #gpio-cells = <2>;
- };
-
touchscreen@26 {
compatible = "ilitek,ili2117";
reg = <0x26>;
@@ -477,6 +463,16 @@ hd3ss3220_out_ep: endpoint {
};
};
};
+
+ gpio_exp1: gpio@70 {
+ compatible = "nxp,pca9538";
+ reg = <0x70>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-line-names = "lcd_reset", "lcd_pwr", "lcd_select",
+ "backlight-enable", "Touch_shdwn",
+ "LCD_H_pol", "lcd_V_pol";
+ };
};
&lvds0 {
diff --git a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
index aa22a0c..5d5d957 100644
--- a/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
+++ b/arch/arm64/boot/dts/rockchip/rk3328-roc-cc.dts
@@ -96,7 +96,6 @@ power_led: led-0 {
linux,default-trigger = "heartbeat";
gpios = <&rk805 1 GPIO_ACTIVE_LOW>;
default-state = "on";
- mode = <0x23>;
};
user_led: led-1 {
@@ -104,7 +103,6 @@ user_led: led-1 {
linux,default-trigger = "mmc1";
gpios = <&rk805 0 GPIO_ACTIVE_LOW>;
default-state = "off";
- mode = <0x05>;
};
};
};
diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
index c3406e7..4417fe8 100644
--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
@@ -77,7 +77,7 @@ vdd_sd_dv: gpio-regulator-TLV71033 {
};
};
-&wkup_pmx0 {
+&wkup_pmx2 {
mcu_cpsw_pins_default: mcu-cpsw-pins-default {
pinctrl-single,pins = <
J721E_WKUP_IOPAD(0x0068, PIN_OUTPUT, 0) /* MCU_RGMII1_TX_CTL */
diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
index 1044ec6..8185c16 100644
--- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
@@ -56,7 +56,34 @@ chipid@43000014 {
wkup_pmx0: pinctrl@4301c000 {
compatible = "pinctrl-single";
/* Proxy 0 addressing */
- reg = <0x00 0x4301c000 0x00 0x178>;
+ reg = <0x00 0x4301c000 0x00 0x34>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+ };
+
+ wkup_pmx1: pinctrl@0x4301c038 {
+ compatible = "pinctrl-single";
+ /* Proxy 0 addressing */
+ reg = <0x00 0x4301c038 0x00 0x8>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+ };
+
+ wkup_pmx2: pinctrl@0x4301c068 {
+ compatible = "pinctrl-single";
+ /* Proxy 0 addressing */
+ reg = <0x00 0x4301c068 0x00 0xec>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+ };
+
+ wkup_pmx3: pinctrl@0x4301c174 {
+ compatible = "pinctrl-single";
+ /* Proxy 0 addressing */
+ reg = <0x00 0x4301c174 0x00 0x20>;
#pinctrl-cells = <1>;
pinctrl-single,register-width = <32>;
pinctrl-single,function-mask = <0xffffffff>;
diff --git a/arch/arm64/configs/google/xfstest.config b/arch/arm64/configs/google/xfstest.config
new file mode 100644
index 0000000..4460c03
--- /dev/null
+++ b/arch/arm64/configs/google/xfstest.config
@@ -0,0 +1,18 @@
+# Configurations required to run xfs tests
+CONFIG_MODULE_SIG=n
+CONFIG_MODULE_SIG_ALL=n
+CONFIG_SECURITY_LOADPIN=n
+CONFIG_SECURITY_LOADPIN_ENFORCE=n
+CONFIG_SECURITY_YAMA=n
+CONFIG_SECURITY_LOCKDOWN_LSM=n
+CONFIG_LOCK_DOWN_KERNEL_FORCE_NONE=n
+CONFIG_LSM=""
+CONFIG_SYSTEM_TRUSTED_KEYRING=n
+CONFIG_SECONDARY_TRUSTED_KEYRING=n
+CONFIG_VFAT_FS=y
+CONFIG_NLS_DEFAULT="utf8"
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_NLS_UTF8=y
+CONFIG_FAT_DEFAULT_UTF8=y
diff --git a/arch/arm64/configs/lakitu_defconfig b/arch/arm64/configs/lakitu_defconfig
index 23b4687..d6ec57f 100644
--- a/arch/arm64/configs/lakitu_defconfig
+++ b/arch/arm64/configs/lakitu_defconfig
@@ -1,8 +1,8 @@
#
# Automatically generated file; DO NOT EDIT.
-# Linux/arm64 5.15.57 Kernel Configuration
+# Linux/arm64 5.15.90 Kernel Configuration
#
-CONFIG_CC_VERSION_TEXT="Chromium OS 14.0_pre445002_p20220217-r1 clang version 14.0.0 (/var/tmp/portage/sys-devel/llvm-14.0_pre445002_p20220217-r1/work/llvm-14.0_pre445002_p20220217/clang 18308e171b5b1dd99627a4d88c7d6c5ff21b8c96)"
+CONFIG_CC_VERSION_TEXT="Chromium OS 14.0_pre445002_p20220217-r3 clang version 14.0.0 (/var/tmp/portage/sys-devel/llvm-14.0_pre445002_p20220217-r3/work/llvm-14.0_pre445002_p20220217/clang 18308e171b5b1dd99627a4d88c7d6c5ff21b8c96)"
CONFIG_GCC_VERSION=0
CONFIG_CC_IS_CLANG=y
CONFIG_CLANG_VERSION=140000
@@ -345,6 +345,7 @@
CONFIG_ARM64_ERRATUM_824069=y
CONFIG_ARM64_ERRATUM_819472=y
CONFIG_ARM64_ERRATUM_832075=y
+CONFIG_ARM64_ERRATUM_1742098=y
CONFIG_ARM64_ERRATUM_845719=y
CONFIG_ARM64_ERRATUM_843419=y
CONFIG_ARM64_LD_HAS_FIX_ERRATUM_843419=y
@@ -355,6 +356,7 @@
CONFIG_ARM64_ERRATUM_1319367=y
CONFIG_ARM64_ERRATUM_1530923=y
CONFIG_ARM64_WORKAROUND_REPEAT_TLBI=y
+CONFIG_ARM64_ERRATUM_2441007=y
CONFIG_ARM64_ERRATUM_1286807=y
CONFIG_ARM64_ERRATUM_1463225=y
CONFIG_ARM64_ERRATUM_1542419=y
@@ -647,6 +649,7 @@
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
+CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
@@ -983,7 +986,7 @@
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
-# CONFIG_XFRM_INTERFACE is not set
+CONFIG_XFRM_INTERFACE=m
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
CONFIG_XFRM_STATISTICS=y
@@ -1021,13 +1024,14 @@
# CONFIG_INET_ESP_OFFLOAD is not set
# CONFIG_INET_ESPINTCP is not set
CONFIG_INET_IPCOMP=m
+CONFIG_INET_TABLE_PERTURB_ORDER=16
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
# CONFIG_INET_RAW_DIAG is not set
-# CONFIG_INET_DIAG_DESTROY is not set
+CONFIG_INET_DIAG_DESTROY=y
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
CONFIG_TCP_CONG_CUBIC=y
@@ -1450,7 +1454,6 @@
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
-CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
@@ -1708,7 +1711,6 @@
# end of ARM System Control and Management Interface Protocol
# CONFIG_ARM_SCPI_PROTOCOL is not set
-# CONFIG_ARM_SDE_INTERFACE is not set
# CONFIG_FIRMWARE_MEMMAP is not set
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
@@ -3409,6 +3411,7 @@
# Memory initialization
#
CONFIG_CC_HAS_AUTO_VAR_INIT_PATTERN=y
+CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER=y
CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO=y
# CONFIG_INIT_STACK_NONE is not set
# CONFIG_INIT_STACK_ALL_PATTERN is not set
@@ -3507,7 +3510,6 @@
# CONFIG_CRYPTO_CRC32 is not set
# CONFIG_CRYPTO_XXHASH is not set
# CONFIG_CRYPTO_BLAKE2B is not set
-# CONFIG_CRYPTO_BLAKE2S is not set
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_GHASH=y
# CONFIG_CRYPTO_POLY1305 is not set
@@ -3764,6 +3766,8 @@
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options
+CONFIG_AS_HAS_NON_CONST_LEB128=y
+
#
# Compile-time checks and compiler options
#
@@ -3774,6 +3778,8 @@
# CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_DEBUG_INFO_BTF=y
+CONFIG_PAHOLE_HAS_SPLIT_BTF=y
+CONFIG_DEBUG_INFO_BTF_MODULES=y
# CONFIG_GDB_SCRIPTS is not set
CONFIG_FRAME_WARN=2048
# CONFIG_STRIP_ASM_SYMS is not set
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index ad55079..53cbbb9 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -25,6 +25,7 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
({ \
efi_virtmap_load(); \
__efi_fpsimd_begin(); \
+ raw_spin_lock(&efi_rt_lock); \
})
#define arch_efi_call_virt(p, f, args...) \
@@ -36,10 +37,12 @@ int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
#define arch_efi_call_virt_teardown() \
({ \
+ raw_spin_unlock(&efi_rt_lock); \
__efi_fpsimd_end(); \
efi_virtmap_unload(); \
})
+extern raw_spinlock_t efi_rt_lock;
efi_status_t __efi_rt_asm_wrapper(void *, const char *, ...);
#define ARCH_EFI_IRQ_FLAGS_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S
index 75691a2..2d3c4b0 100644
--- a/arch/arm64/kernel/efi-rt-wrapper.S
+++ b/arch/arm64/kernel/efi-rt-wrapper.S
@@ -4,6 +4,7 @@
*/
#include <linux/linkage.h>
+#include <asm/assembler.h>
SYM_FUNC_START(__efi_rt_asm_wrapper)
stp x29, x30, [sp, #-32]!
@@ -16,6 +17,12 @@
*/
stp x1, x18, [sp, #16]
+ ldr_l x16, efi_rt_stack_top
+ mov sp, x16
+#ifdef CONFIG_SHADOW_CALL_STACK
+ str x18, [sp, #-16]!
+#endif
+
/*
* We are lucky enough that no EFI runtime services take more than
* 5 arguments, so all are passed in registers rather than via the
@@ -29,6 +36,7 @@
mov x4, x6
blr x8
+ mov sp, x29
ldp x1, x2, [sp, #16]
cmp x2, x18
ldp x29, x30, [sp], #32
@@ -42,6 +50,10 @@
* called with preemption disabled and a separate shadow stack is used
* for interrupts.
*/
- mov x18, x2
+#ifdef CONFIG_SHADOW_CALL_STACK
+ ldr_l x18, efi_rt_stack_top
+ ldr x18, [x18, #-16]
+#endif
+
b efi_handle_corrupted_x18 // tail call
SYM_FUNC_END(__efi_rt_asm_wrapper)
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index a908a37..9669f3f 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -144,3 +144,30 @@ asmlinkage efi_status_t efi_handle_corrupted_x18(efi_status_t s, const char *f)
pr_err_ratelimited(FW_BUG "register x18 corrupted by EFI %s\n", f);
return s;
}
+
+DEFINE_RAW_SPINLOCK(efi_rt_lock);
+
+asmlinkage u64 *efi_rt_stack_top __ro_after_init;
+
+/* EFI requires 8 KiB of stack space for runtime services */
+static_assert(THREAD_SIZE >= SZ_8K);
+
+static int __init arm64_efi_rt_init(void)
+{
+ void *p;
+
+ if (!efi_enabled(EFI_RUNTIME_SERVICES))
+ return 0;
+
+ p = __vmalloc_node(THREAD_SIZE, THREAD_ALIGN, GFP_KERNEL,
+ NUMA_NO_NODE, &&l);
+l: if (!p) {
+ pr_warn("Failed to allocate EFI runtime stack\n");
+ clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
+ return -ENOMEM;
+ }
+
+ efi_rt_stack_top = p + THREAD_SIZE;
+ return 0;
+}
+core_initcall(arm64_efi_rt_init);
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index b7404db..2162b6f 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -7,6 +7,9 @@
* Copyright (C) 2013 Linaro Limited.
* Author: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
*/
+
+#define pr_fmt(fmt) "kprobes: " fmt
+
#include <linux/extable.h>
#include <linux/kasan.h>
#include <linux/kernel.h>
@@ -218,7 +221,7 @@ static int __kprobes reenter_kprobe(struct kprobe *p,
break;
case KPROBE_HIT_SS:
case KPROBE_REENTER:
- pr_warn("Unrecoverable kprobe detected.\n");
+ pr_warn("Failed to recover from reentered kprobes.\n");
dump_kprobe(p);
BUG();
break;
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index f859cc8..21e69a9 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -235,7 +235,7 @@ void die(const char *str, struct pt_regs *regs, int err)
raw_spin_unlock_irqrestore(&die_lock, flags);
if (ret != NOTIFY_STOP)
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
static void arm64_show_signal(int signo, const char *str)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 69bd173..9b465cd 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -468,14 +468,33 @@ static int get_user_mapping_size(struct kvm *kvm, u64 addr)
CONFIG_PGTABLE_LEVELS),
.mm_ops = &kvm_user_mm_ops,
};
+ unsigned long flags;
kvm_pte_t pte = 0; /* Keep GCC quiet... */
u32 level = ~0;
int ret;
+ /*
+ * Disable IRQs so that we hazard against a concurrent
+ * teardown of the userspace page tables (which relies on
+ * IPI-ing threads).
+ */
+ local_irq_save(flags);
ret = kvm_pgtable_get_leaf(&pgt, addr, &pte, &level);
- VM_BUG_ON(ret);
- VM_BUG_ON(level >= KVM_PGTABLE_MAX_LEVELS);
- VM_BUG_ON(!(pte & PTE_VALID));
+ local_irq_restore(flags);
+
+ if (ret)
+ return ret;
+
+ /*
+ * Not seeing an error, but not updating level? Something went
+ * deeply wrong...
+ */
+ if (WARN_ON(level >= KVM_PGTABLE_MAX_LEVELS))
+ return -EFAULT;
+
+ /* Oops, the userspace PTs are gone... Replay the fault */
+ if (!kvm_pte_valid(pte))
+ return -EAGAIN;
return BIT(ARM64_HW_PGTABLE_LEVEL_SHIFT(level));
}
@@ -826,7 +845,7 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
*
* Returns the size of the mapping.
*/
-static unsigned long
+static long
transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
unsigned long hva, kvm_pfn_t *pfnp,
phys_addr_t *ipap)
@@ -838,8 +857,15 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
* sure that the HVA and IPA are sufficiently aligned and that the
* block map is contained within the memslot.
*/
- if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE) &&
- get_user_mapping_size(kvm, hva) >= PMD_SIZE) {
+ if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
+ int sz = get_user_mapping_size(kvm, hva);
+
+ if (sz < 0)
+ return sz;
+
+ if (sz < PMD_SIZE)
+ return PAGE_SIZE;
+
/*
* The address we faulted on is backed by a transparent huge
* page. However, because we map the compound huge page and
@@ -957,7 +983,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
kvm_pfn_t pfn;
bool logging_active = memslot_is_logging(memslot);
unsigned long fault_level = kvm_vcpu_trap_get_fault_level(vcpu);
- unsigned long vma_pagesize, fault_granule;
+ long vma_pagesize, fault_granule;
enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
struct kvm_pgtable *pgt;
@@ -1104,6 +1130,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
hva, &pfn,
&fault_ipa);
+
+ if (vma_pagesize < 0) {
+ ret = vma_pagesize;
+ goto out_unlock;
+ }
}
if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 21a6207..8eb7045 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -347,26 +347,23 @@ int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq)
* The deactivation of the doorbell interrupt will trigger the
* unmapping of the associated vPE.
*/
-static void unmap_all_vpes(struct vgic_dist *dist)
+static void unmap_all_vpes(struct kvm *kvm)
{
- struct irq_desc *desc;
+ struct vgic_dist *dist = &kvm->arch.vgic;
int i;
- for (i = 0; i < dist->its_vm.nr_vpes; i++) {
- desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
- irq_domain_deactivate_irq(irq_desc_get_irq_data(desc));
- }
+ for (i = 0; i < dist->its_vm.nr_vpes; i++)
+ free_irq(dist->its_vm.vpes[i]->irq, kvm_get_vcpu(kvm, i));
}
-static void map_all_vpes(struct vgic_dist *dist)
+static void map_all_vpes(struct kvm *kvm)
{
- struct irq_desc *desc;
+ struct vgic_dist *dist = &kvm->arch.vgic;
int i;
- for (i = 0; i < dist->its_vm.nr_vpes; i++) {
- desc = irq_to_desc(dist->its_vm.vpes[i]->irq);
- irq_domain_activate_irq(irq_desc_get_irq_data(desc), false);
- }
+ for (i = 0; i < dist->its_vm.nr_vpes; i++)
+ WARN_ON(vgic_v4_request_vpe_irq(kvm_get_vcpu(kvm, i),
+ dist->its_vm.vpes[i]->irq));
}
/**
@@ -391,7 +388,7 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
* and enabling of the doorbells have already been done.
*/
if (kvm_vgic_global_state.has_gicv4_1) {
- unmap_all_vpes(dist);
+ unmap_all_vpes(kvm);
vlpi_avail = true;
}
@@ -441,7 +438,7 @@ int vgic_v3_save_pending_tables(struct kvm *kvm)
out:
if (vlpi_avail)
- map_all_vpes(dist);
+ map_all_vpes(kvm);
return ret;
}
diff --git a/arch/arm64/kvm/vgic/vgic-v4.c b/arch/arm64/kvm/vgic/vgic-v4.c
index c1845d8..f507e3f 100644
--- a/arch/arm64/kvm/vgic/vgic-v4.c
+++ b/arch/arm64/kvm/vgic/vgic-v4.c
@@ -222,6 +222,11 @@ void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val)
*val = !!(*ptr & mask);
}
+int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq)
+{
+ return request_irq(irq, vgic_v4_doorbell_handler, 0, "vcpu", vcpu);
+}
+
/**
* vgic_v4_init - Initialize the GICv4 data structures
* @kvm: Pointer to the VM being initialized
@@ -282,8 +287,7 @@ int vgic_v4_init(struct kvm *kvm)
irq_flags &= ~IRQ_NOAUTOEN;
irq_set_status_flags(irq, irq_flags);
- ret = request_irq(irq, vgic_v4_doorbell_handler,
- 0, "vcpu", vcpu);
+ ret = vgic_v4_request_vpe_irq(vcpu, irq);
if (ret) {
kvm_err("failed to allocate vcpu IRQ%d\n", irq);
/*
diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
index 14a9218..36021c3 100644
--- a/arch/arm64/kvm/vgic/vgic.h
+++ b/arch/arm64/kvm/vgic/vgic.h
@@ -321,5 +321,6 @@ int vgic_v4_init(struct kvm *kvm);
void vgic_v4_teardown(struct kvm *kvm);
void vgic_v4_configure_vsgis(struct kvm *kvm);
void vgic_v4_get_vlpi_state(struct vgic_irq *irq, bool *val);
+int vgic_v4_request_vpe_irq(struct kvm_vcpu *vcpu, int irq);
#endif
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index d09b21f..97a93ee 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -302,7 +302,7 @@ static void die_kernel_fault(const char *msg, unsigned long addr,
show_pte(addr);
die("Oops", regs, esr);
bust_spinlocks(0);
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
}
#ifdef CONFIG_KASAN_HW_TAGS
diff --git a/arch/csky/abiv1/alignment.c b/arch/csky/abiv1/alignment.c
index cb2a0d9..2df115d0 100644
--- a/arch/csky/abiv1/alignment.c
+++ b/arch/csky/abiv1/alignment.c
@@ -294,7 +294,7 @@ void csky_alignment(struct pt_regs *regs)
__func__, opcode, rz, rx, imm, addr);
show_regs(regs);
bust_spinlocks(0);
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
}
force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)addr);
diff --git a/arch/csky/kernel/probes/kprobes.c b/arch/csky/kernel/probes/kprobes.c
index 584ed9f3..bd92ac3 100644
--- a/arch/csky/kernel/probes/kprobes.c
+++ b/arch/csky/kernel/probes/kprobes.c
@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0+
+#define pr_fmt(fmt) "kprobes: " fmt
+
#include <linux/kprobes.h>
#include <linux/extable.h>
#include <linux/slab.h>
@@ -77,10 +79,8 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
{
unsigned long probe_addr = (unsigned long)p->addr;
- if (probe_addr & 0x1) {
- pr_warn("Address not aligned.\n");
- return -EINVAL;
- }
+ if (probe_addr & 0x1)
+ return -EILSEQ;
/* copy instruction */
p->opcode = le32_to_cpu(*p->addr);
@@ -229,7 +229,7 @@ static int __kprobes reenter_kprobe(struct kprobe *p,
break;
case KPROBE_HIT_SS:
case KPROBE_REENTER:
- pr_warn("Unrecoverable kprobe detected.\n");
+ pr_warn("Failed to recover from reentered kprobes.\n");
dump_kprobe(p);
BUG();
break;
diff --git a/arch/csky/kernel/traps.c b/arch/csky/kernel/traps.c
index 2020af8..6e426fb 100644
--- a/arch/csky/kernel/traps.c
+++ b/arch/csky/kernel/traps.c
@@ -109,7 +109,7 @@ void die(struct pt_regs *regs, const char *str)
if (panic_on_oops)
panic("Fatal exception");
if (ret != NOTIFY_STOP)
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
index 466ad949..7215a46 100644
--- a/arch/csky/mm/fault.c
+++ b/arch/csky/mm/fault.c
@@ -67,7 +67,7 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
pr_alert("Unable to handle kernel paging request at virtual "
"addr 0x%08lx, pc: 0x%08lx\n", addr, regs->pc);
die(regs, "Oops");
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
}
static inline void mm_fault_error(struct pt_regs *regs, unsigned long addr, vm_fault_t fault)
diff --git a/arch/h8300/kernel/traps.c b/arch/h8300/kernel/traps.c
index bdbe988..a92c39e 100644
--- a/arch/h8300/kernel/traps.c
+++ b/arch/h8300/kernel/traps.c
@@ -17,6 +17,7 @@
#include <linux/types.h>
#include <linux/sched.h>
#include <linux/sched/debug.h>
+#include <linux/sched/task.h>
#include <linux/mm_types.h>
#include <linux/kernel.h>
#include <linux/errno.h>
@@ -106,7 +107,7 @@ void die(const char *str, struct pt_regs *fp, unsigned long err)
dump(fp);
spin_unlock_irq(&die_lock);
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
static int kstack_depth_to_print = 24;
diff --git a/arch/h8300/mm/fault.c b/arch/h8300/mm/fault.c
index d4bc9c1..b465441 100644
--- a/arch/h8300/mm/fault.c
+++ b/arch/h8300/mm/fault.c
@@ -51,7 +51,7 @@ asmlinkage int do_page_fault(struct pt_regs *regs, unsigned long address,
printk(" at virtual address %08lx\n", address);
if (!user_mode(regs))
die("Oops", regs, error_code);
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
return 1;
}
diff --git a/arch/hexagon/kernel/traps.c b/arch/hexagon/kernel/traps.c
index edfc35d..1240f038 100644
--- a/arch/hexagon/kernel/traps.c
+++ b/arch/hexagon/kernel/traps.c
@@ -214,7 +214,7 @@ int die(const char *str, struct pt_regs *regs, long err)
panic("Fatal exception");
oops_exit();
- do_exit(err);
+ make_task_dead(err);
return 0;
}
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 1e33666..b1f2b6a 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -323,7 +323,7 @@
depends on PROC_KCORE
config IA64_MCA_RECOVERY
- tristate "MCA recovery from errors other than TLB."
+ bool "MCA recovery from errors other than TLB."
config IA64_PALINFO
tristate "/proc/pal support"
diff --git a/arch/ia64/kernel/iosapic.c b/arch/ia64/kernel/iosapic.c
index 35adcf8..9930085 100644
--- a/arch/ia64/kernel/iosapic.c
+++ b/arch/ia64/kernel/iosapic.c
@@ -834,7 +834,7 @@ iosapic_unregister_intr (unsigned int gsi)
if (iosapic_intr_info[irq].count == 0) {
#ifdef CONFIG_SMP
/* Clear affinity */
- cpumask_setall(irq_get_affinity_mask(irq));
+ irq_data_update_affinity(irq_get_irq_data(irq), cpu_all_mask);
#endif
/* Clear the interrupt information */
iosapic_intr_info[irq].dest = 0;
diff --git a/arch/ia64/kernel/irq.c b/arch/ia64/kernel/irq.c
index ecef17c..275b9ea 100644
--- a/arch/ia64/kernel/irq.c
+++ b/arch/ia64/kernel/irq.c
@@ -57,8 +57,8 @@ static char irq_redir [NR_IRQS]; // = { [0 ... NR_IRQS-1] = 1 };
void set_irq_affinity_info (unsigned int irq, int hwid, int redir)
{
if (irq < NR_IRQS) {
- cpumask_copy(irq_get_affinity_mask(irq),
- cpumask_of(cpu_logical_id(hwid)));
+ irq_data_update_affinity(irq_get_irq_data(irq),
+ cpumask_of(cpu_logical_id(hwid)));
irq_redir[irq] = (char) (redir & 0xff);
}
}
diff --git a/arch/ia64/kernel/mca_drv.c b/arch/ia64/kernel/mca_drv.c
index 5bfc79b..23c2036 100644
--- a/arch/ia64/kernel/mca_drv.c
+++ b/arch/ia64/kernel/mca_drv.c
@@ -176,7 +176,7 @@ mca_handler_bh(unsigned long paddr, void *iip, unsigned long ipsr)
spin_unlock(&mca_bh_lock);
/* This process is about to be killed itself */
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
}
/**
diff --git a/arch/ia64/kernel/msi_ia64.c b/arch/ia64/kernel/msi_ia64.c
index df5c28f..025e513 100644
--- a/arch/ia64/kernel/msi_ia64.c
+++ b/arch/ia64/kernel/msi_ia64.c
@@ -37,7 +37,7 @@ static int ia64_set_msi_irq_affinity(struct irq_data *idata,
msg.data = data;
pci_write_msi_msg(irq, &msg);
- cpumask_copy(irq_data_get_affinity_mask(idata), cpumask_of(cpu));
+ irq_data_update_affinity(idata, cpumask_of(cpu));
return 0;
}
@@ -132,7 +132,7 @@ static int dmar_msi_set_affinity(struct irq_data *data,
msg.address_lo |= MSI_ADDR_DEST_ID_CPU(cpu_physical_id(cpu));
dmar_msi_write(irq, &msg);
- cpumask_copy(irq_data_get_affinity_mask(data), mask);
+ irq_data_update_affinity(data, mask);
return 0;
}
diff --git a/arch/ia64/kernel/traps.c b/arch/ia64/kernel/traps.c
index e13cb905..7536423 100644
--- a/arch/ia64/kernel/traps.c
+++ b/arch/ia64/kernel/traps.c
@@ -85,7 +85,7 @@ die (const char *str, struct pt_regs *regs, long err)
if (panic_on_oops)
panic("Fatal exception");
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
return 0;
}
diff --git a/arch/ia64/mm/fault.c b/arch/ia64/mm/fault.c
index 02de2e7..4796ccc 100644
--- a/arch/ia64/mm/fault.c
+++ b/arch/ia64/mm/fault.c
@@ -259,7 +259,7 @@ ia64_do_page_fault (unsigned long address, unsigned long isr, struct pt_regs *re
regs = NULL;
bust_spinlocks(0);
if (regs)
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
return;
out_of_memory:
diff --git a/arch/m68k/68000/entry.S b/arch/m68k/68000/entry.S
index 997b549..7d63e2f 100644
--- a/arch/m68k/68000/entry.S
+++ b/arch/m68k/68000/entry.S
@@ -45,6 +45,8 @@
jbsr syscall_trace_enter
RESTORE_SWITCH_STACK
addql #4,%sp
+ addql #1,%d0
+ jeq ret_from_exception
movel %sp@(PT_OFF_ORIG_D0),%d1
movel #-ENOSYS,%d0
cmpl #NR_syscalls,%d1
diff --git a/arch/m68k/Kconfig.devices b/arch/m68k/Kconfig.devices
index 6a87b4a..e6e3efa 100644
--- a/arch/m68k/Kconfig.devices
+++ b/arch/m68k/Kconfig.devices
@@ -19,6 +19,7 @@
# We have a dedicated heartbeat LED. :-)
config PROC_HARDWARE
bool "/proc/hardware support"
+ depends on PROC_FS
help
Say Y here to support the /proc/hardware file, which gives you
access to information about the machine you're running on,
diff --git a/arch/m68k/coldfire/entry.S b/arch/m68k/coldfire/entry.S
index 9f337c7..35104c5 100644
--- a/arch/m68k/coldfire/entry.S
+++ b/arch/m68k/coldfire/entry.S
@@ -90,6 +90,8 @@
jbsr syscall_trace_enter
RESTORE_SWITCH_STACK
addql #4,%sp
+ addql #1,%d0
+ jeq ret_from_exception
movel %d3,%a0
jbsr %a0@
movel %d0,%sp@(PT_OFF_D0) /* save the return value */
diff --git a/arch/m68k/kernel/entry.S b/arch/m68k/kernel/entry.S
index 9434fca..9f3663f 100644
--- a/arch/m68k/kernel/entry.S
+++ b/arch/m68k/kernel/entry.S
@@ -184,9 +184,12 @@
jbsr syscall_trace
RESTORE_SWITCH_STACK
addql #4,%sp
+ addql #1,%d0 | optimization for cmpil #-1,%d0
+ jeq ret_from_syscall
movel %sp@(PT_OFF_ORIG_D0),%d0
cmpl #NR_syscalls,%d0
jcs syscall
+ jra ret_from_syscall
badsys:
movel #-ENOSYS,%sp@(PT_OFF_D0)
jra ret_from_syscall
diff --git a/arch/m68k/kernel/traps.c b/arch/m68k/kernel/traps.c
index 34d6458..6f64774 100644
--- a/arch/m68k/kernel/traps.c
+++ b/arch/m68k/kernel/traps.c
@@ -30,6 +30,7 @@
#include <linux/init.h>
#include <linux/ptrace.h>
#include <linux/kallsyms.h>
+#include <linux/extable.h>
#include <asm/setup.h>
#include <asm/fpu.h>
@@ -544,7 +545,8 @@ static inline void bus_error030 (struct frame *fp)
errorcode |= 2;
if (mmusr & (MMU_I | MMU_WP)) {
- if (ssw & 4) {
+ /* We might have an exception table for this PC */
+ if (ssw & 4 && !search_exception_tables(fp->ptregs.pc)) {
pr_err("Data %s fault at %#010lx in %s (pc=%#lx)\n",
ssw & RW ? "read" : "write",
fp->un.fmtb.daddr,
@@ -1131,7 +1133,7 @@ void die_if_kernel (char *str, struct pt_regs *fp, int nr)
pr_crit("%s: %08x\n", str, nr);
show_registers(fp);
add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
asmlinkage void set_esp0(unsigned long ssp)
diff --git a/arch/m68k/mm/fault.c b/arch/m68k/mm/fault.c
index ef46e77..fcb3a0d 100644
--- a/arch/m68k/mm/fault.c
+++ b/arch/m68k/mm/fault.c
@@ -48,7 +48,7 @@ int send_fault_sig(struct pt_regs *regs)
pr_alert("Unable to handle kernel access");
pr_cont(" at virtual address %p\n", addr);
die_if_kernel("Oops", regs, 0 /*error_code*/);
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
}
return 1;
diff --git a/arch/microblaze/kernel/exceptions.c b/arch/microblaze/kernel/exceptions.c
index 9087884..fd153d5 100644
--- a/arch/microblaze/kernel/exceptions.c
+++ b/arch/microblaze/kernel/exceptions.c
@@ -44,10 +44,10 @@ void die(const char *str, struct pt_regs *fp, long err)
pr_warn("Oops: %s, sig: %ld\n", str, err);
show_regs(fp);
spin_unlock_irq(&die_lock);
- /* do_exit() should take care of panic'ing from an interrupt
+ /* make_task_dead() should take care of panic'ing from an interrupt
* context so we don't handle it here
*/
- do_exit(err);
+ make_task_dead(err);
}
/* for user application debugging */
diff --git a/arch/mips/bmips/dma.c b/arch/mips/bmips/dma.c
index 915ce4b1..76c5d8e 100644
--- a/arch/mips/bmips/dma.c
+++ b/arch/mips/bmips/dma.c
@@ -64,6 +64,8 @@ phys_addr_t dma_to_phys(struct device *dev, dma_addr_t dma_addr)
return dma_addr;
}
+bool bmips_rac_flush_disable;
+
void arch_sync_dma_for_cpu_all(void)
{
void __iomem *cbr = BMIPS_GET_CBR();
@@ -74,6 +76,9 @@ void arch_sync_dma_for_cpu_all(void)
boot_cpu_type() != CPU_BMIPS4380)
return;
+ if (unlikely(bmips_rac_flush_disable))
+ return;
+
/* Flush stale data out of the readahead cache */
cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
__raw_writel(cfg | 0x100, cbr + BMIPS_RAC_CONFIG);
diff --git a/arch/mips/bmips/setup.c b/arch/mips/bmips/setup.c
index 31bcfa4..45c7cf5 100644
--- a/arch/mips/bmips/setup.c
+++ b/arch/mips/bmips/setup.c
@@ -34,6 +34,8 @@
#define REG_BCM6328_OTP ((void __iomem *)CKSEG1ADDR(0x1000062c))
#define BCM6328_TP1_DISABLED BIT(9)
+extern bool bmips_rac_flush_disable;
+
static const unsigned long kbase = VMLINUX_LOAD_ADDRESS & 0xfff00000;
struct bmips_quirk {
@@ -103,6 +105,12 @@ static void bcm6358_quirks(void)
* disable SMP for now
*/
bmips_smp_enabled = 0;
+
+ /*
+ * RAC flush causes kernel panics on BCM6358 when booting from TP1
+ * because the bootloader is not initializing it properly.
+ */
+ bmips_rac_flush_disable = !!(read_c0_brcm_cmt_local() & (1 << 31));
}
static void bcm6368_quirks(void)
diff --git a/arch/mips/boot/dts/ingenic/ci20.dts b/arch/mips/boot/dts/ingenic/ci20.dts
index a688809..74d49dc 100644
--- a/arch/mips/boot/dts/ingenic/ci20.dts
+++ b/arch/mips/boot/dts/ingenic/ci20.dts
@@ -99,7 +99,7 @@ otg_power: fixedregulator@2 {
regulator-min-microvolt = <5000000>;
regulator-max-microvolt = <5000000>;
- gpio = <&gpf 14 GPIO_ACTIVE_LOW>;
+ gpio = <&gpf 15 GPIO_ACTIVE_LOW>;
enable-active-high;
};
};
diff --git a/arch/mips/include/asm/mach-rc32434/pci.h b/arch/mips/include/asm/mach-rc32434/pci.h
index 9a6eefd..3eb767c 100644
--- a/arch/mips/include/asm/mach-rc32434/pci.h
+++ b/arch/mips/include/asm/mach-rc32434/pci.h
@@ -374,7 +374,7 @@ struct pci_msu {
PCI_CFG04_STAT_SSE | \
PCI_CFG04_STAT_PE)
-#define KORINA_CNFG1 ((KORINA_STAT<<16)|KORINA_CMD)
+#define KORINA_CNFG1 (KORINA_STAT | KORINA_CMD)
#define KORINA_REVID 0
#define KORINA_CLASS_CODE 0
diff --git a/arch/mips/include/asm/syscall.h b/arch/mips/include/asm/syscall.h
index 25fa651..ebdf4d9 100644
--- a/arch/mips/include/asm/syscall.h
+++ b/arch/mips/include/asm/syscall.h
@@ -38,7 +38,7 @@ static inline bool mips_syscall_is_indirect(struct task_struct *task,
static inline long syscall_get_nr(struct task_struct *task,
struct pt_regs *regs)
{
- return current_thread_info()->syscall;
+ return task_thread_info(task)->syscall;
}
static inline void mips_syscall_update_nr(struct task_struct *task,
diff --git a/arch/mips/kernel/kprobes.c b/arch/mips/kernel/kprobes.c
index 75bff0f..b0934a0 100644
--- a/arch/mips/kernel/kprobes.c
+++ b/arch/mips/kernel/kprobes.c
@@ -11,6 +11,8 @@
* Copyright (C) IBM Corporation, 2002, 2004
*/
+#define pr_fmt(fmt) "kprobes: " fmt
+
#include <linux/kprobes.h>
#include <linux/preempt.h>
#include <linux/uaccess.h>
@@ -80,8 +82,7 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p)
insn = p->addr[0];
if (insn_has_ll_or_sc(insn)) {
- pr_notice("Kprobes for ll and sc instructions are not"
- "supported\n");
+ pr_notice("Kprobes for ll and sc instructions are not supported\n");
ret = -EINVAL;
goto out;
}
@@ -219,7 +220,7 @@ static int evaluate_branch_instruction(struct kprobe *p, struct pt_regs *regs,
return 0;
unaligned:
- pr_notice("%s: unaligned epc - sending SIGBUS.\n", current->comm);
+ pr_notice("Failed to emulate branch instruction because of unaligned epc - sending SIGBUS to %s.\n", current->comm);
force_sig(SIGBUS);
return -EFAULT;
@@ -238,10 +239,8 @@ static void prepare_singlestep(struct kprobe *p, struct pt_regs *regs,
regs->cp0_epc = (unsigned long)p->addr;
else if (insn_has_delayslot(p->opcode)) {
ret = evaluate_branch_instruction(p, regs, kcb);
- if (ret < 0) {
- pr_notice("Kprobes: Error in evaluating branch\n");
+ if (ret < 0)
return;
- }
}
regs->cp0_epc = (unsigned long)&p->ainsn.insn[0];
}
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index edd9343..afb2c95 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -416,7 +416,7 @@ void __noreturn die(const char *str, struct pt_regs *regs)
if (regs && kexec_should_crash(current))
crash_kexec(regs);
- do_exit(sig);
+ make_task_dead(sig);
}
extern struct exception_table_entry __start___dbe_table[];
diff --git a/arch/nds32/kernel/fpu.c b/arch/nds32/kernel/fpu.c
index 9edd7ed..701c09a 100644
--- a/arch/nds32/kernel/fpu.c
+++ b/arch/nds32/kernel/fpu.c
@@ -223,7 +223,7 @@ inline void handle_fpu_exception(struct pt_regs *regs)
}
} else if (fpcsr & FPCSR_mskRIT) {
if (!user_mode(regs))
- do_exit(SIGILL);
+ make_task_dead(SIGILL);
si_signo = SIGILL;
}
diff --git a/arch/nds32/kernel/traps.c b/arch/nds32/kernel/traps.c
index f06421c..b90030e 100644
--- a/arch/nds32/kernel/traps.c
+++ b/arch/nds32/kernel/traps.c
@@ -141,7 +141,7 @@ void die(const char *str, struct pt_regs *regs, int err)
bust_spinlocks(0);
spin_unlock_irq(&die_lock);
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
EXPORT_SYMBOL(die);
@@ -240,7 +240,7 @@ void unhandled_interruption(struct pt_regs *regs)
pr_emerg("unhandled_interruption\n");
show_regs(regs);
if (!user_mode(regs))
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
force_sig(SIGKILL);
}
@@ -251,7 +251,7 @@ void unhandled_exceptions(unsigned long entry, unsigned long addr,
addr, type);
show_regs(regs);
if (!user_mode(regs))
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
force_sig(SIGKILL);
}
@@ -278,7 +278,7 @@ void do_revinsn(struct pt_regs *regs)
pr_emerg("Reserved Instruction\n");
show_regs(regs);
if (!user_mode(regs))
- do_exit(SIGILL);
+ make_task_dead(SIGILL);
force_sig(SIGILL);
}
diff --git a/arch/nios2/kernel/traps.c b/arch/nios2/kernel/traps.c
index 596986a..85ac49d 100644
--- a/arch/nios2/kernel/traps.c
+++ b/arch/nios2/kernel/traps.c
@@ -37,10 +37,10 @@ void die(const char *str, struct pt_regs *regs, long err)
show_regs(regs);
spin_unlock_irq(&die_lock);
/*
- * do_exit() should take care of panic'ing from an interrupt
+ * make_task_dead() should take care of panic'ing from an interrupt
* context so we don't handle it here
*/
- do_exit(err);
+ make_task_dead(err);
}
void _exception(int signo, struct pt_regs *regs, int code, unsigned long addr)
diff --git a/arch/openrisc/kernel/traps.c b/arch/openrisc/kernel/traps.c
index aa1e709..9df1d85 100644
--- a/arch/openrisc/kernel/traps.c
+++ b/arch/openrisc/kernel/traps.c
@@ -212,7 +212,7 @@ void die(const char *str, struct pt_regs *regs, long err)
__asm__ __volatile__("l.nop 1");
do {} while (1);
#endif
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
/* This is normally the 'Oops' routine */
diff --git a/arch/parisc/kernel/firmware.c b/arch/parisc/kernel/firmware.c
index 7034227..8e5a906 100644
--- a/arch/parisc/kernel/firmware.c
+++ b/arch/parisc/kernel/firmware.c
@@ -1230,7 +1230,7 @@ static char __attribute__((aligned(64))) iodc_dbuf[4096];
*/
int pdc_iodc_print(const unsigned char *str, unsigned count)
{
- unsigned int i;
+ unsigned int i, found = 0;
unsigned long flags;
for (i = 0; i < count;) {
@@ -1239,6 +1239,7 @@ int pdc_iodc_print(const unsigned char *str, unsigned count)
iodc_dbuf[i+0] = '\r';
iodc_dbuf[i+1] = '\n';
i += 2;
+ found = 1;
goto print;
default:
iodc_dbuf[i] = str[i];
@@ -1255,7 +1256,7 @@ int pdc_iodc_print(const unsigned char *str, unsigned count)
__pa(iodc_retbuf), 0, __pa(iodc_dbuf), i, 0);
spin_unlock_irqrestore(&pdc_lock, flags);
- return i;
+ return i - found;
}
#if !defined(BOOTLOADER)
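The found counter introduced above keeps pdc_iodc_print() from reporting the injected '\r' as a consumed source character, so callers resume printing from the right position. A minimal user-space sketch of that accounting, assuming a local buffer instead of the firmware call (buffer size and function name are hypothetical):

#include <stdio.h>
#include <string.h>

#define DBUF_SIZE 64	/* hypothetical; the firmware buffer above is 4096 bytes */

/*
 * Expand the first '\n' into "\r\n" the way pdc_iodc_print() does and return
 * how many *source* characters were consumed (simplified: no firmware call,
 * just a local buffer).
 */
static int iodc_print_sketch(const char *str, unsigned int count, char *dbuf)
{
	unsigned int i, found = 0;

	for (i = 0; i < count && i < DBUF_SIZE - 1;) {
		if (str[i] == '\n') {
			dbuf[i + 0] = '\r';
			dbuf[i + 1] = '\n';
			i += 2;
			found = 1;
			break;
		}
		dbuf[i] = str[i];
		i++;
	}
	/* i output bytes were produced, but the '\r' was not in the source */
	return i - found;
}

int main(void)
{
	char dbuf[DBUF_SIZE];
	const char *msg = "hi\nthere";
	int consumed = iodc_print_sketch(msg, strlen(msg), dbuf);

	printf("consumed %d source chars, output \"%.*s\"\n",
	       consumed, consumed + 1, dbuf);
	return 0;
}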
diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
index 0d46b19..e6cc38e 100644
--- a/arch/parisc/kernel/irq.c
+++ b/arch/parisc/kernel/irq.c
@@ -333,7 +333,7 @@ unsigned long txn_affinity_addr(unsigned int irq, int cpu)
{
#ifdef CONFIG_SMP
struct irq_data *d = irq_get_irq_data(irq);
- cpumask_copy(irq_data_get_affinity_mask(d), cpumask_of(cpu));
+ irq_data_update_affinity(d, cpumask_of(cpu));
#endif
return per_cpu(cpu_data, cpu).txn_addr;
diff --git a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c
index 65de6c4..b9398e8 100644
--- a/arch/parisc/kernel/ptrace.c
+++ b/arch/parisc/kernel/ptrace.c
@@ -127,6 +127,12 @@ long arch_ptrace(struct task_struct *child, long request,
unsigned long tmp;
long ret = -EIO;
+ unsigned long user_regs_struct_size = sizeof(struct user_regs_struct);
+#ifdef CONFIG_64BIT
+ if (is_compat_task())
+ user_regs_struct_size /= 2;
+#endif
+
switch (request) {
/* Read the word at location addr in the USER area. For ptraced
@@ -182,14 +188,14 @@ long arch_ptrace(struct task_struct *child, long request,
return copy_regset_to_user(child,
task_user_regset_view(current),
REGSET_GENERAL,
- 0, sizeof(struct user_regs_struct),
+ 0, user_regs_struct_size,
datap);
case PTRACE_SETREGS: /* Set all gp regs in the child. */
return copy_regset_from_user(child,
task_user_regset_view(current),
REGSET_GENERAL,
- 0, sizeof(struct user_regs_struct),
+ 0, user_regs_struct_size,
datap);
case PTRACE_GETFPREGS: /* Get the child FPU state. */
@@ -303,6 +309,11 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
}
}
break;
+ case PTRACE_GETREGS:
+ case PTRACE_SETREGS:
+ case PTRACE_GETFPREGS:
+ case PTRACE_SETFPREGS:
+ return arch_ptrace(child, request, addr, data);
default:
ret = compat_ptrace_request(child, request, addr, data);
diff --git a/arch/parisc/kernel/traps.c b/arch/parisc/kernel/traps.c
index 6fe5a3e..70ace36 100644
--- a/arch/parisc/kernel/traps.c
+++ b/arch/parisc/kernel/traps.c
@@ -268,7 +268,7 @@ void die_if_kernel(char *str, struct pt_regs *regs, long err)
panic("Fatal exception");
oops_exit();
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
/* gdb uses break 4,8 */
diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 7859ae5..a8e52e6 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -92,7 +92,7 @@
ifeq ($(HAS_BIARCH),y)
KBUILD_CFLAGS += -m$(BITS)
-KBUILD_AFLAGS += -m$(BITS) -Wl,-a$(BITS)
+KBUILD_AFLAGS += -m$(BITS)
KBUILD_LDFLAGS += -m elf$(BITS)$(LDEMULATION)
endif
diff --git a/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-2.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-2.dtsi
new file mode 100644
index 0000000..437dab3
--- /dev/null
+++ b/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-2.dtsi
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * QorIQ FMan v3 10g port #2 device tree stub [ controller @ offset 0x400000 ]
+ *
+ * Copyright 2022 Sean Anderson <sean.anderson@seco.com>
+ * Copyright 2012 - 2015 Freescale Semiconductor Inc.
+ */
+
+fman@400000 {
+ fman0_rx_0x08: port@88000 {
+ cell-index = <0x8>;
+ compatible = "fsl,fman-v3-port-rx";
+ reg = <0x88000 0x1000>;
+ fsl,fman-10g-port;
+ };
+
+ fman0_tx_0x28: port@a8000 {
+ cell-index = <0x28>;
+ compatible = "fsl,fman-v3-port-tx";
+ reg = <0xa8000 0x1000>;
+ fsl,fman-10g-port;
+ };
+
+ ethernet@e0000 {
+ cell-index = <0>;
+ compatible = "fsl,fman-memac";
+ reg = <0xe0000 0x1000>;
+ fsl,fman-ports = <&fman0_rx_0x08 &fman0_tx_0x28>;
+ ptp-timer = <&ptp_timer0>;
+ pcsphy-handle = <&pcsphy0>;
+ };
+
+ mdio@e1000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
+ reg = <0xe1000 0x1000>;
+ fsl,erratum-a011043; /* must ignore read errors */
+
+ pcsphy0: ethernet-phy@0 {
+ reg = <0x0>;
+ };
+ };
+};
diff --git a/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-3.dtsi b/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-3.dtsi
new file mode 100644
index 0000000..ad116b1
--- /dev/null
+++ b/arch/powerpc/boot/dts/fsl/qoriq-fman3-0-10g-3.dtsi
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0-or-later
+/*
+ * QorIQ FMan v3 10g port #3 device tree stub [ controller @ offset 0x400000 ]
+ *
+ * Copyright 2022 Sean Anderson <sean.anderson@seco.com>
+ * Copyright 2012 - 2015 Freescale Semiconductor Inc.
+ */
+
+fman@400000 {
+ fman0_rx_0x09: port@89000 {
+ cell-index = <0x9>;
+ compatible = "fsl,fman-v3-port-rx";
+ reg = <0x89000 0x1000>;
+ fsl,fman-10g-port;
+ };
+
+ fman0_tx_0x29: port@a9000 {
+ cell-index = <0x29>;
+ compatible = "fsl,fman-v3-port-tx";
+ reg = <0xa9000 0x1000>;
+ fsl,fman-10g-port;
+ };
+
+ ethernet@e2000 {
+ cell-index = <1>;
+ compatible = "fsl,fman-memac";
+ reg = <0xe2000 0x1000>;
+ fsl,fman-ports = <&fman0_rx_0x09 &fman0_tx_0x29>;
+ ptp-timer = <&ptp_timer0>;
+ pcsphy-handle = <&pcsphy1>;
+ };
+
+ mdio@e3000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
+ reg = <0xe3000 0x1000>;
+ fsl,erratum-a011043; /* must ignore read errors */
+
+ pcsphy1: ethernet-phy@0 {
+ reg = <0x0>;
+ };
+ };
+};
diff --git a/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
index 73f8c99..d4f5f15 100644
--- a/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
+++ b/arch/powerpc/boot/dts/fsl/t1040rdb-rev-a.dts
@@ -10,7 +10,6 @@
/ {
model = "fsl,T1040RDB-REV-A";
- compatible = "fsl,T1040RDB-REV-A";
};
&seville_port0 {
diff --git a/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi b/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi
index ecbb44792..27714dc 100644
--- a/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi
+++ b/arch/powerpc/boot/dts/fsl/t2081si-post.dtsi
@@ -609,8 +609,8 @@ usb1: usb@211000 {
/include/ "qoriq-bman1.dtsi"
/include/ "qoriq-fman3-0.dtsi"
-/include/ "qoriq-fman3-0-1g-0.dtsi"
-/include/ "qoriq-fman3-0-1g-1.dtsi"
+/include/ "qoriq-fman3-0-10g-2.dtsi"
+/include/ "qoriq-fman3-0-10g-3.dtsi"
/include/ "qoriq-fman3-0-1g-2.dtsi"
/include/ "qoriq-fman3-0-1g-3.dtsi"
/include/ "qoriq-fman3-0-1g-4.dtsi"
@@ -659,3 +659,19 @@ L2_1: l2-cache-controller@c20000 {
interrupts = <16 2 1 9>;
};
};
+
+&fman0_rx_0x08 {
+ /delete-property/ fsl,fman-10g-port;
+};
+
+&fman0_tx_0x28 {
+ /delete-property/ fsl,fman-10g-port;
+};
+
+&fman0_rx_0x09 {
+ /delete-property/ fsl,fman-10g-port;
+};
+
+&fman0_tx_0x29 {
+ /delete-property/ fsl,fman-10g-port;
+};
diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h
index 6e4af44..e92d39c0 100644
--- a/arch/powerpc/include/asm/sections.h
+++ b/arch/powerpc/include/asm/sections.h
@@ -6,22 +6,10 @@
#include <linux/elf.h>
#include <linux/uaccess.h>
-#define arch_is_kernel_initmem_freed arch_is_kernel_initmem_freed
-
#include <asm-generic/sections.h>
-extern bool init_mem_is_free;
-
-static inline int arch_is_kernel_initmem_freed(unsigned long addr)
-{
- if (!init_mem_is_free)
- return 0;
-
- return addr >= (unsigned long)__init_begin &&
- addr < (unsigned long)__init_end;
-}
-
extern char __head_end[];
+extern char __srwx_boundary[];
#ifdef __powerpc64__
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index df048e3..8703df7 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -53,16 +53,18 @@ static inline bool exit_must_hard_disable(void)
*/
static notrace __always_inline bool prep_irq_for_enabled_exit(bool restartable)
{
+ bool must_hard_disable = (exit_must_hard_disable() || !restartable);
+
/* This must be done with RI=1 because tracing may touch vmaps */
trace_hardirqs_on();
- if (exit_must_hard_disable() || !restartable)
+ if (must_hard_disable)
__hard_EE_RI_disable();
#ifdef CONFIG_PPC64
/* This pattern matches prep_irq_for_idle */
if (unlikely(lazy_irq_pending_nocheck())) {
- if (exit_must_hard_disable() || !restartable) {
+ if (must_hard_disable) {
local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
__hard_RI_enable();
}
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index a67fd54..8bea336 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -68,11 +68,9 @@ static void iommu_debugfs_add(struct iommu_table *tbl)
static void iommu_debugfs_del(struct iommu_table *tbl)
{
char name[10];
- struct dentry *liobn_entry;
sprintf(name, "%08lx", tbl->it_index);
- liobn_entry = debugfs_lookup(name, iommu_debugfs_dir);
- debugfs_remove(liobn_entry);
+ debugfs_lookup_and_remove(name, iommu_debugfs_dir);
}
#else
static void iommu_debugfs_add(struct iommu_table *tbl){}
diff --git a/arch/powerpc/kernel/ptrace/ptrace-view.c b/arch/powerpc/kernel/ptrace/ptrace-view.c
index b8be1d6..54dfa6a 100644
--- a/arch/powerpc/kernel/ptrace/ptrace-view.c
+++ b/arch/powerpc/kernel/ptrace/ptrace-view.c
@@ -290,6 +290,9 @@ static int gpr_set(struct task_struct *target, const struct user_regset *regset,
static int ppr_get(struct task_struct *target, const struct user_regset *regset,
struct membuf to)
{
+ if (!target->thread.regs)
+ return -EINVAL;
+
return membuf_write(&to, &target->thread.regs->ppr, sizeof(u64));
}
@@ -297,6 +300,9 @@ static int ppr_set(struct task_struct *target, const struct user_regset *regset,
unsigned int pos, unsigned int count, const void *kbuf,
const void __user *ubuf)
{
+ if (!target->thread.regs)
+ return -EINVAL;
+
return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&target->thread.regs->ppr, 0, sizeof(u64));
}
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 934d8ae6..4406d7a 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -450,7 +450,7 @@ void vtime_flush(struct task_struct *tsk)
#define calc_cputime_factors()
#endif
-void __delay(unsigned long loops)
+void __no_kcsan __delay(unsigned long loops)
{
unsigned long start;
@@ -471,7 +471,7 @@ void __delay(unsigned long loops)
}
EXPORT_SYMBOL(__delay);
-void udelay(unsigned long usecs)
+void __no_kcsan udelay(unsigned long usecs)
{
__delay(tb_ticks_per_usec * usecs);
}
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 1174170..a08bb7c 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -245,7 +245,7 @@ static void oops_end(unsigned long flags, struct pt_regs *regs,
if (panic_on_oops)
panic("Fatal exception");
- do_exit(signr);
+ make_task_dead(signr);
}
NOKPROBE_SYMBOL(oops_end);
@@ -792,9 +792,9 @@ int machine_check_generic(struct pt_regs *regs)
void die_mce(const char *str, struct pt_regs *regs, long err)
{
/*
- * The machine check wants to kill the interrupted context, but
- * do_exit() checks for in_interrupt() and panics in that case, so
- * exit the irq/nmi before calling die.
+ * The machine check wants to kill the interrupted context,
+ * but make_task_dead() checks for in_interrupt() and panics
+ * in that case, so exit the irq/nmi before calling die.
*/
if (in_nmi())
nmi_exit();
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 1a63e37..d453190 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -8,6 +8,7 @@
#define BSS_FIRST_SECTIONS *(.bss.prominit)
#define EMITS_PT_NOTE
#define RO_EXCEPTION_TABLE_ALIGN 0
+#define RUNTIME_DISCARD_EXIT
#define SOFT_MASK_TABLE(align) \
. = ALIGN(align); \
@@ -32,6 +33,10 @@
#define STRICT_ALIGN_SIZE (1 << CONFIG_DATA_SHIFT)
+#if STRICT_ALIGN_SIZE < PAGE_SIZE
+#error "CONFIG_DATA_SHIFT must be >= PAGE_SHIFT"
+#endif
+
ENTRY(_stext)
PHDRS {
@@ -204,12 +209,16 @@
}
#endif
+ /*
+ * Various code relies on __init_begin being at the strict RWX boundary.
+ */
+ . = ALIGN(STRICT_ALIGN_SIZE);
+ __srwx_boundary = .;
+ __init_begin = .;
+
/*
* Init sections discarded at runtime
*/
- . = ALIGN(STRICT_ALIGN_SIZE);
- __init_begin = .;
- . = ALIGN(PAGE_SIZE);
.init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
_sinittext = .;
INIT_TEXT
@@ -399,9 +408,12 @@
DISCARDS
/DISCARD/ : {
*(*.EMB.apuinfo)
- *(.glink .iplt .plt .rela* .comment)
+ *(.glink .iplt .plt .comment)
*(.gnu.version*)
*(.gnu.attributes)
*(.eh_frame)
+#ifndef CONFIG_RELOCATABLE
+ *(.rela*)
+#endif
}
}
diff --git a/arch/powerpc/mm/book3s32/mmu.c b/arch/powerpc/mm/book3s32/mmu.c
index bfca0af..692c336 100644
--- a/arch/powerpc/mm/book3s32/mmu.c
+++ b/arch/powerpc/mm/book3s32/mmu.c
@@ -159,7 +159,7 @@ static unsigned long __init __mmu_mapin_ram(unsigned long base, unsigned long to
unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
{
unsigned long done;
- unsigned long border = (unsigned long)__init_begin - PAGE_OFFSET;
+ unsigned long border = (unsigned long)__srwx_boundary - PAGE_OFFSET;
unsigned long size;
size = roundup_pow_of_two((unsigned long)_einittext - PAGE_OFFSET);
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index a339cb5..feb2431 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -232,6 +232,14 @@ void radix__mark_rodata_ro(void)
end = (unsigned long)__init_begin;
radix__change_memory_range(start, end, _PAGE_WRITE);
+
+ for (start = PAGE_OFFSET; start < (unsigned long)_stext; start += PAGE_SIZE) {
+ end = start + PAGE_SIZE;
+ if (overlaps_interrupt_vector_text(start, end))
+ radix__change_memory_range(start, end, _PAGE_WRITE);
+ else
+ break;
+ }
}
void radix__mark_initmem_nx(void)
@@ -260,8 +268,24 @@ print_mapping(unsigned long start, unsigned long end, unsigned long size, bool e
static unsigned long next_boundary(unsigned long addr, unsigned long end)
{
#ifdef CONFIG_STRICT_KERNEL_RWX
- if (addr < __pa_symbol(__init_begin))
- return __pa_symbol(__init_begin);
+ unsigned long stext_phys;
+
+ stext_phys = __pa_symbol(_stext);
+
+ // Relocatable kernel running at non-zero real address
+ if (stext_phys != 0) {
+ // The end of interrupts code at zero is a rodata boundary
+ unsigned long end_intr = __pa_symbol(__end_interrupts) - stext_phys;
+ if (addr < end_intr)
+ return end_intr;
+
+ // Start of relocated kernel text is a rodata boundary
+ if (addr < stext_phys)
+ return stext_phys;
+ }
+
+ if (addr < __pa_symbol(__srwx_boundary))
+ return __pa_symbol(__srwx_boundary);
#endif
return end;
}
diff --git a/arch/powerpc/mm/book3s64/radix_tlb.c b/arch/powerpc/mm/book3s64/radix_tlb.c
index 7724af1..5172d5c 100644
--- a/arch/powerpc/mm/book3s64/radix_tlb.c
+++ b/arch/powerpc/mm/book3s64/radix_tlb.c
@@ -1171,15 +1171,12 @@ static inline void __radix__flush_tlb_range(struct mm_struct *mm,
}
}
} else {
- bool hflush = false;
+ bool hflush;
unsigned long hstart, hend;
- if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
- hstart = (start + PMD_SIZE - 1) & PMD_MASK;
- hend = end & PMD_MASK;
- if (hstart < hend)
- hflush = true;
- }
+ hstart = (start + PMD_SIZE - 1) & PMD_MASK;
+ hend = end & PMD_MASK;
+ hflush = IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hstart < hend;
if (type == FLUSH_TYPE_LOCAL) {
asm volatile("ptesync": : :"memory");
diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
index 6ec93ee..b8a100b 100644
--- a/arch/powerpc/perf/imc-pmu.c
+++ b/arch/powerpc/perf/imc-pmu.c
@@ -21,7 +21,7 @@
* Used to avoid races in counting the nest-pmu units during hotplug
* register and unregister
*/
-static DEFINE_SPINLOCK(nest_init_lock);
+static DEFINE_MUTEX(nest_init_lock);
static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc);
static struct imc_pmu **per_nest_pmu_arr;
static cpumask_t nest_imc_cpumask;
@@ -1622,7 +1622,7 @@ static void imc_common_mem_free(struct imc_pmu *pmu_ptr)
static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr)
{
if (pmu_ptr->domain == IMC_DOMAIN_NEST) {
- spin_lock(&nest_init_lock);
+ mutex_lock(&nest_init_lock);
if (nest_pmus == 1) {
cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE);
kfree(nest_imc_refc);
@@ -1632,7 +1632,7 @@ static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr)
if (nest_pmus > 0)
nest_pmus--;
- spin_unlock(&nest_init_lock);
+ mutex_unlock(&nest_init_lock);
}
/* Free core_imc memory */
@@ -1789,11 +1789,11 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
* rest. To handle the cpuhotplug callback unregister, we track
* the number of nest pmus in "nest_pmus".
*/
- spin_lock(&nest_init_lock);
+ mutex_lock(&nest_init_lock);
if (nest_pmus == 0) {
ret = init_nest_pmu_ref();
if (ret) {
- spin_unlock(&nest_init_lock);
+ mutex_unlock(&nest_init_lock);
kfree(per_nest_pmu_arr);
per_nest_pmu_arr = NULL;
goto err_free_mem;
@@ -1801,7 +1801,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
/* Register for cpu hotplug notification. */
ret = nest_pmu_cpumask_init();
if (ret) {
- spin_unlock(&nest_init_lock);
+ mutex_unlock(&nest_init_lock);
kfree(nest_imc_refc);
kfree(per_nest_pmu_arr);
per_nest_pmu_arr = NULL;
@@ -1809,7 +1809,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id
}
}
nest_pmus++;
- spin_unlock(&nest_init_lock);
+ mutex_unlock(&nest_init_lock);
break;
case IMC_DOMAIN_CORE:
ret = core_imc_pmu_cpumask_init();
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 1b21f33..bf602e38 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -361,6 +361,28 @@
endmenu
+config TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
+ def_bool y
+ # https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=aed44286efa8ae8717a77d94b51ac3614e2ca6dc
+ depends on AS_IS_GNU && AS_VERSION >= 23800
+ help
+ Newer binutils versions default to ISA spec version 20191213 which
+ moves some instructions from the I extension to the Zicsr and Zifencei
+ extensions.
+
+config TOOLCHAIN_NEEDS_OLD_ISA_SPEC
+ def_bool y
+ depends on TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI
+ # https://github.com/llvm/llvm-project/commit/22e199e6afb1263c943c0c0d4498694e15bf8a16
+ depends on CC_IS_CLANG && CLANG_VERSION < 170000
+ help
+ Certain versions of clang do not support zicsr and zifencei via -march
+ but newer versions of binutils require it for the reasons noted in the
+ help text of CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI. This
+ option causes an older ISA spec compatible with these older versions
+ of clang to be passed to GAS, which has the same result as passing zicsr
+ and zifencei to -march.
+
config FPU
bool "FPU support"
default y
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index dc77857..0f17c6b 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -13,7 +13,11 @@
ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
LDFLAGS_vmlinux := --no-relax
KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
- CC_FLAGS_FTRACE := -fpatchable-function-entry=8
+ifeq ($(CONFIG_RISCV_ISA_C),y)
+ CC_FLAGS_FTRACE := -fpatchable-function-entry=4
+else
+ CC_FLAGS_FTRACE := -fpatchable-function-entry=2
+endif
endif
ifeq ($(CONFIG_CMODEL_MEDLOW),y)
@@ -55,10 +59,12 @@
riscv-march-$(CONFIG_FPU) := $(riscv-march-y)fd
riscv-march-$(CONFIG_RISCV_ISA_C) := $(riscv-march-y)c
-# Newer binutils versions default to ISA spec version 20191213 which moves some
-# instructions from the I extension to the Zicsr and Zifencei extensions.
-toolchain-need-zicsr-zifencei := $(call cc-option-yn, -march=$(riscv-march-y)_zicsr_zifencei)
-riscv-march-$(toolchain-need-zicsr-zifencei) := $(riscv-march-y)_zicsr_zifencei
+ifdef CONFIG_TOOLCHAIN_NEEDS_OLD_ISA_SPEC
+KBUILD_CFLAGS += -Wa,-misa-spec=2.2
+KBUILD_AFLAGS += -Wa,-misa-spec=2.2
+else
+riscv-march-$(CONFIG_TOOLCHAIN_NEEDS_EXPLICIT_ZICSR_ZIFENCEI) := $(riscv-march-y)_zicsr_zifencei
+endif
KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
KBUILD_AFLAGS += -march=$(riscv-march-y)
@@ -76,6 +82,9 @@
KBUILD_CFLAGS += -fno-omit-frame-pointer
endif
+# Avoid generating .eh_frame sections.
+KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables
+
KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax)
diff --git a/arch/riscv/boot/dts/sifive/fu740-c000.dtsi b/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
index 454079a..f72bb15 100644
--- a/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
+++ b/arch/riscv/boot/dts/sifive/fu740-c000.dtsi
@@ -328,7 +328,7 @@ pcie@e00000000 {
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x60080000 0x0 0x60080000 0x0 0x10000>, /* I/O */
<0x82000000 0x0 0x60090000 0x0 0x60090000 0x0 0xff70000>, /* mem */
- <0x82000000 0x0 0x70000000 0x0 0x70000000 0x0 0x1000000>, /* mem */
+ <0x82000000 0x0 0x70000000 0x0 0x70000000 0x0 0x10000000>, /* mem */
<0xc3000000 0x20 0x00000000 0x20 0x00000000 0x20 0x00000000>; /* mem prefetchable */
num-lanes = <0x8>;
interrupts = <56>, <57>, <58>, <59>, <60>, <61>, <62>, <63>, <64>;
diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index 04dad33..d47d87c 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -42,6 +42,14 @@ struct dyn_arch_ftrace {
* 2) jalr: setting low-12 offset to ra, jump to ra, and set ra to
* return address (original pc + 4)
*
+ *<ftrace enable>:
+ * 0: auipc t0/ra, 0x?
+ * 4: jalr t0/ra, ?(t0/ra)
+ *
+ *<ftrace disable>:
+ * 0: nop
+ * 4: nop
+ *
* Dynamic ftrace generates probes to call sites, so we must deal with
* both auipc and jalr at the same time.
*/
@@ -52,25 +60,43 @@ struct dyn_arch_ftrace {
#define AUIPC_OFFSET_MASK (0xfffff000)
#define AUIPC_PAD (0x00001000)
#define JALR_SHIFT 20
-#define JALR_BASIC (0x000080e7)
-#define AUIPC_BASIC (0x00000097)
+#define JALR_RA (0x000080e7)
+#define AUIPC_RA (0x00000097)
+#define JALR_T0 (0x000282e7)
+#define AUIPC_T0 (0x00000297)
#define NOP4 (0x00000013)
-#define make_call(caller, callee, call) \
+#define to_jalr_t0(offset) \
+ (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_T0)
+
+#define to_auipc_t0(offset) \
+ ((offset & JALR_SIGN_MASK) ? \
+ (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_T0) : \
+ ((offset & AUIPC_OFFSET_MASK) | AUIPC_T0))
+
+#define make_call_t0(caller, callee, call) \
do { \
- call[0] = to_auipc_insn((unsigned int)((unsigned long)callee - \
- (unsigned long)caller)); \
- call[1] = to_jalr_insn((unsigned int)((unsigned long)callee - \
- (unsigned long)caller)); \
+ unsigned int offset = \
+ (unsigned long) callee - (unsigned long) caller; \
+ call[0] = to_auipc_t0(offset); \
+ call[1] = to_jalr_t0(offset); \
} while (0)
-#define to_jalr_insn(offset) \
- (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_BASIC)
+#define to_jalr_ra(offset) \
+ (((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_RA)
-#define to_auipc_insn(offset) \
+#define to_auipc_ra(offset) \
((offset & JALR_SIGN_MASK) ? \
- (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_BASIC) : \
- ((offset & AUIPC_OFFSET_MASK) | AUIPC_BASIC))
+ (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_RA) : \
+ ((offset & AUIPC_OFFSET_MASK) | AUIPC_RA))
+
+#define make_call_ra(caller, callee, call) \
+do { \
+ unsigned int offset = \
+ (unsigned long) callee - (unsigned long) caller; \
+ call[0] = to_auipc_ra(offset); \
+ call[1] = to_jalr_ra(offset); \
+} while (0)
/*
* Let auipc+jalr be the basic *mcount unit*, so we make it 8 bytes here.
@@ -83,6 +109,6 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
#define ftrace_init_nop ftrace_init_nop
#endif
-#endif
+#endif /* CONFIG_DYNAMIC_FTRACE */
#endif /* _ASM_RISCV_FTRACE_H */
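make_call_t0() and make_call_ra() above split a PC-relative offset into an auipc immediate (the upper 20 bits, bumped by 0x1000 when bit 11 is set, since jalr sign-extends its 12-bit offset) plus a jalr immediate. A minimal user-space sketch of that encoding; JALR_SIGN_MASK and JALR_OFFSET_MASK are assumed to be the usual 0x800 and 0xfff (they are defined just outside this hunk), and the addresses in main() are hypothetical:

#include <stdint.h>
#include <stdio.h>

/* Constants mirrored from the hunk above; JALR_SIGN_MASK/JALR_OFFSET_MASK are
 * assumed values, not shown in this diff. */
#define JALR_SIGN_MASK    (0x00000800)
#define JALR_OFFSET_MASK  (0x00000fff)
#define AUIPC_OFFSET_MASK (0xfffff000)
#define AUIPC_PAD         (0x00001000)
#define JALR_SHIFT        20
#define JALR_T0           (0x000282e7)	/* jalr t0, ?(t0) */
#define AUIPC_T0          (0x00000297)	/* auipc t0, ?    */

/* C function mirroring the make_call_t0() macro in the hunk above. */
static void sketch_make_call_t0(unsigned long caller, unsigned long callee,
				uint32_t call[2])
{
	uint32_t offset = (uint32_t)(callee - caller);

	call[0] = (offset & JALR_SIGN_MASK) ?
		  (((offset & AUIPC_OFFSET_MASK) + AUIPC_PAD) | AUIPC_T0) :
		  ((offset & AUIPC_OFFSET_MASK) | AUIPC_T0);
	call[1] = ((offset & JALR_OFFSET_MASK) << JALR_SHIFT) | JALR_T0;
}

int main(void)
{
	uint32_t call[2];

	/* Hypothetical addresses: a call site at 0x80002000 targeting 0x80000800. */
	sketch_make_call_t0(0x80002000UL, 0x80000800UL, call);
	printf("auipc %08x  jalr %08x\n", (unsigned)call[0], (unsigned)call[1]);
	return 0;
}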
diff --git a/arch/riscv/include/asm/jump_label.h b/arch/riscv/include/asm/jump_label.h
index 38af2ec..729991e 100644
--- a/arch/riscv/include/asm/jump_label.h
+++ b/arch/riscv/include/asm/jump_label.h
@@ -18,6 +18,7 @@ static __always_inline bool arch_static_branch(struct static_key *key,
bool branch)
{
asm_volatile_goto(
+ " .align 2 \n\t"
" .option push \n\t"
" .option norelax \n\t"
" .option norvc \n\t"
@@ -39,6 +40,7 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key,
bool branch)
{
asm_volatile_goto(
+ " .align 2 \n\t"
" .option push \n\t"
" .option norelax \n\t"
" .option norvc \n\t"
diff --git a/arch/riscv/include/asm/mmu.h b/arch/riscv/include/asm/mmu.h
index 5ff1f19..0099dc1 100644
--- a/arch/riscv/include/asm/mmu.h
+++ b/arch/riscv/include/asm/mmu.h
@@ -19,8 +19,6 @@ typedef struct {
#ifdef CONFIG_SMP
/* A local icache flush is needed before user execution can resume. */
cpumask_t icache_stale_mask;
- /* A local tlb flush is needed before user execution can resume. */
- cpumask_t tlb_stale_mask;
#endif
} mm_context_t;
diff --git a/arch/riscv/include/asm/parse_asm.h b/arch/riscv/include/asm/parse_asm.h
index f36368de..3cd0033 100644
--- a/arch/riscv/include/asm/parse_asm.h
+++ b/arch/riscv/include/asm/parse_asm.h
@@ -3,6 +3,9 @@
* Copyright (C) 2020 SiFive
*/
+#ifndef _ASM_RISCV_INSN_H
+#define _ASM_RISCV_INSN_H
+
#include <linux/bits.h>
/* The bit field of immediate value in I-type instruction */
@@ -217,3 +220,5 @@ static inline bool is_ ## INSN_NAME ## _insn(long insn) \
(RVC_X(x_, RVC_B_IMM_5_OPOFF, RVC_B_IMM_5_MASK) << RVC_B_IMM_5_OFF) | \
(RVC_X(x_, RVC_B_IMM_7_6_OPOFF, RVC_B_IMM_7_6_MASK) << RVC_B_IMM_7_6_OFF) | \
(RVC_IMM_SIGN(x_) << RVC_B_IMM_SIGN_OFF); })
+
+#endif /* _ASM_RISCV_INSN_H */
diff --git a/arch/riscv/include/asm/patch.h b/arch/riscv/include/asm/patch.h
index 9a7d734..98d9de0 100644
--- a/arch/riscv/include/asm/patch.h
+++ b/arch/riscv/include/asm/patch.h
@@ -9,4 +9,6 @@
int patch_text_nosync(void *addr, const void *insns, size_t len);
int patch_text(void *addr, u32 insn);
+extern int riscv_patch_in_stop_machine;
+
#endif /* _ASM_RISCV_PATCH_H */
diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
index 799c16e..39b5503 100644
--- a/arch/riscv/include/asm/pgtable.h
+++ b/arch/riscv/include/asm/pgtable.h
@@ -386,7 +386,7 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
* Relying on flush_tlb_fix_spurious_fault would suffice, but
* the extra traps reduce performance. So, eagerly SFENCE.VMA.
*/
- flush_tlb_page(vma, address);
+ local_flush_tlb_page(address);
}
static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index e3866ff..42d9704 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -43,6 +43,7 @@
#ifndef __ASSEMBLY__
extern long shadow_stack[SHADOW_OVERFLOW_STACK_SIZE / sizeof(long)];
+extern unsigned long spin_shadow_stack;
#include <asm/processor.h>
#include <asm/csr.h>
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 907b9ef..a09196f 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -12,6 +12,8 @@
#include <asm/errata_list.h>
#ifdef CONFIG_MMU
+extern unsigned long asid_mask;
+
static inline void local_flush_tlb_all(void)
{
__asm__ __volatile__ ("sfence.vma" : : : "memory");
@@ -22,24 +24,6 @@ static inline void local_flush_tlb_page(unsigned long addr)
{
ALT_FLUSH_TLB_PAGE(__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory"));
}
-
-static inline void local_flush_tlb_all_asid(unsigned long asid)
-{
- __asm__ __volatile__ ("sfence.vma x0, %0"
- :
- : "r" (asid)
- : "memory");
-}
-
-static inline void local_flush_tlb_page_asid(unsigned long addr,
- unsigned long asid)
-{
- __asm__ __volatile__ ("sfence.vma %0, %1"
- :
- : "r" (addr), "r" (asid)
- : "memory");
-}
-
#else /* CONFIG_MMU */
#define local_flush_tlb_all() do { } while (0)
#define local_flush_tlb_page(addr) do { } while (0)
diff --git a/arch/riscv/include/uapi/asm/setup.h b/arch/riscv/include/uapi/asm/setup.h
new file mode 100644
index 0000000..66b13a5
--- /dev/null
+++ b/arch/riscv/include/uapi/asm/setup.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+
+#ifndef _UAPI_ASM_RISCV_SETUP_H
+#define _UAPI_ASM_RISCV_SETUP_H
+
+#define COMMAND_LINE_SIZE 1024
+
+#endif /* _UAPI_ASM_RISCV_SETUP_H */
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 7f1e520..1bf92cf 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -15,11 +15,21 @@
int ftrace_arch_code_modify_prepare(void) __acquires(&text_mutex)
{
mutex_lock(&text_mutex);
+
+ /*
+ * The code sequences we use for ftrace can't be patched while the
+ * kernel is running, so we need to use stop_machine() to modify them
+	 * for now. This doesn't play nice with text_mutex, so we use this flag
+ * to elide the check.
+ */
+ riscv_patch_in_stop_machine = true;
+
return 0;
}
int ftrace_arch_code_modify_post_process(void) __releases(&text_mutex)
{
+ riscv_patch_in_stop_machine = false;
mutex_unlock(&text_mutex);
return 0;
}
@@ -57,12 +67,15 @@ static int ftrace_check_current_call(unsigned long hook_pos,
}
static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
- bool enable)
+ bool enable, bool ra)
{
unsigned int call[2];
unsigned int nops[2] = {NOP4, NOP4};
- make_call(hook_pos, target, call);
+ if (ra)
+ make_call_ra(hook_pos, target, call);
+ else
+ make_call_t0(hook_pos, target, call);
/* Replace the auipc-jalr pair at once. Return -EPERM on write error. */
if (patch_text_nosync
@@ -72,42 +85,13 @@ static int __ftrace_modify_call(unsigned long hook_pos, unsigned long target,
return 0;
}
-/*
- * Put 5 instructions with 16 bytes at the front of function within
- * patchable function entry nops' area.
- *
- * 0: REG_S ra, -SZREG(sp)
- * 1: auipc ra, 0x?
- * 2: jalr -?(ra)
- * 3: REG_L ra, -SZREG(sp)
- *
- * So the opcodes is:
- * 0: 0xfe113c23 (sd)/0xfe112e23 (sw)
- * 1: 0x???????? -> auipc
- * 2: 0x???????? -> jalr
- * 3: 0xff813083 (ld)/0xffc12083 (lw)
- */
-#if __riscv_xlen == 64
-#define INSN0 0xfe113c23
-#define INSN3 0xff813083
-#elif __riscv_xlen == 32
-#define INSN0 0xfe112e23
-#define INSN3 0xffc12083
-#endif
-
-#define FUNC_ENTRY_SIZE 16
-#define FUNC_ENTRY_JMP 4
-
int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
- unsigned int call[4] = {INSN0, 0, 0, INSN3};
- unsigned long target = addr;
- unsigned long caller = rec->ip + FUNC_ENTRY_JMP;
+ unsigned int call[2];
- call[1] = to_auipc_insn((unsigned int)(target - caller));
- call[2] = to_jalr_insn((unsigned int)(target - caller));
+ make_call_t0(rec->ip, addr, call);
- if (patch_text_nosync((void *)rec->ip, call, FUNC_ENTRY_SIZE))
+ if (patch_text_nosync((void *)rec->ip, call, MCOUNT_INSN_SIZE))
return -EPERM;
return 0;
@@ -116,15 +100,14 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
unsigned long addr)
{
- unsigned int nops[4] = {NOP4, NOP4, NOP4, NOP4};
+ unsigned int nops[2] = {NOP4, NOP4};
- if (patch_text_nosync((void *)rec->ip, nops, FUNC_ENTRY_SIZE))
+ if (patch_text_nosync((void *)rec->ip, nops, MCOUNT_INSN_SIZE))
return -EPERM;
return 0;
}
-
/*
* This is called early on, and isn't wrapped by
 * ftrace_arch_code_modify_{prepare,post_process}() and therefore doesn't hold
@@ -136,9 +119,9 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
{
int out;
- ftrace_arch_code_modify_prepare();
+ mutex_lock(&text_mutex);
out = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
- ftrace_arch_code_modify_post_process();
+ mutex_unlock(&text_mutex);
return out;
}
@@ -146,10 +129,10 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
int ftrace_update_ftrace_func(ftrace_func_t func)
{
int ret = __ftrace_modify_call((unsigned long)&ftrace_call,
- (unsigned long)func, true);
+ (unsigned long)func, true, true);
if (!ret) {
ret = __ftrace_modify_call((unsigned long)&ftrace_regs_call,
- (unsigned long)func, true);
+ (unsigned long)func, true, true);
}
return ret;
@@ -166,16 +149,16 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
unsigned long addr)
{
unsigned int call[2];
- unsigned long caller = rec->ip + FUNC_ENTRY_JMP;
+ unsigned long caller = rec->ip;
int ret;
- make_call(caller, old_addr, call);
+ make_call_t0(caller, old_addr, call);
ret = ftrace_check_current_call(caller, call);
if (ret)
return ret;
- return __ftrace_modify_call(caller, addr, true);
+ return __ftrace_modify_call(caller, addr, true, false);
}
#endif
@@ -210,12 +193,12 @@ int ftrace_enable_ftrace_graph_caller(void)
int ret;
ret = __ftrace_modify_call((unsigned long)&ftrace_graph_call,
- (unsigned long)&prepare_ftrace_return, true);
+ (unsigned long)&prepare_ftrace_return, true, true);
if (ret)
return ret;
return __ftrace_modify_call((unsigned long)&ftrace_graph_regs_call,
- (unsigned long)&prepare_ftrace_return, true);
+ (unsigned long)&prepare_ftrace_return, true, true);
}
int ftrace_disable_ftrace_graph_caller(void)
@@ -223,12 +206,12 @@ int ftrace_disable_ftrace_graph_caller(void)
int ret;
ret = __ftrace_modify_call((unsigned long)&ftrace_graph_call,
- (unsigned long)&prepare_ftrace_return, false);
+ (unsigned long)&prepare_ftrace_return, false, true);
if (ret)
return ret;
return __ftrace_modify_call((unsigned long)&ftrace_graph_regs_call,
- (unsigned long)&prepare_ftrace_return, false);
+ (unsigned long)&prepare_ftrace_return, false, true);
}
#endif /* CONFIG_DYNAMIC_FTRACE */
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index d171eca..125de81 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -13,8 +13,8 @@
.text
-#define FENTRY_RA_OFFSET 12
-#define ABI_SIZE_ON_STACK 72
+#define FENTRY_RA_OFFSET 8
+#define ABI_SIZE_ON_STACK 80
#define ABI_A0 0
#define ABI_A1 8
#define ABI_A2 16
@@ -23,10 +23,10 @@
#define ABI_A5 40
#define ABI_A6 48
#define ABI_A7 56
-#define ABI_RA 64
+#define ABI_T0 64
+#define ABI_RA 72
.macro SAVE_ABI
- addi sp, sp, -SZREG
addi sp, sp, -ABI_SIZE_ON_STACK
REG_S a0, ABI_A0(sp)
@@ -37,6 +37,7 @@
REG_S a5, ABI_A5(sp)
REG_S a6, ABI_A6(sp)
REG_S a7, ABI_A7(sp)
+ REG_S t0, ABI_T0(sp)
REG_S ra, ABI_RA(sp)
.endm
@@ -49,24 +50,18 @@
REG_L a5, ABI_A5(sp)
REG_L a6, ABI_A6(sp)
REG_L a7, ABI_A7(sp)
+ REG_L t0, ABI_T0(sp)
REG_L ra, ABI_RA(sp)
addi sp, sp, ABI_SIZE_ON_STACK
- addi sp, sp, SZREG
.endm
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
.macro SAVE_ALL
- addi sp, sp, -SZREG
addi sp, sp, -PT_SIZE_ON_STACK
- REG_S x1, PT_EPC(sp)
- addi sp, sp, PT_SIZE_ON_STACK
- REG_L x1, (sp)
- addi sp, sp, -PT_SIZE_ON_STACK
+ REG_S t0, PT_EPC(sp)
REG_S x1, PT_RA(sp)
- REG_L x1, PT_EPC(sp)
-
REG_S x2, PT_SP(sp)
REG_S x3, PT_GP(sp)
REG_S x4, PT_TP(sp)
@@ -100,15 +95,11 @@
.endm
.macro RESTORE_ALL
+ REG_L t0, PT_EPC(sp)
REG_L x1, PT_RA(sp)
- addi sp, sp, PT_SIZE_ON_STACK
- REG_S x1, (sp)
- addi sp, sp, -PT_SIZE_ON_STACK
- REG_L x1, PT_EPC(sp)
REG_L x2, PT_SP(sp)
REG_L x3, PT_GP(sp)
REG_L x4, PT_TP(sp)
- REG_L x5, PT_T0(sp)
REG_L x6, PT_T1(sp)
REG_L x7, PT_T2(sp)
REG_L x8, PT_S0(sp)
@@ -137,17 +128,16 @@
REG_L x31, PT_T6(sp)
addi sp, sp, PT_SIZE_ON_STACK
- addi sp, sp, SZREG
.endm
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
ENTRY(ftrace_caller)
SAVE_ABI
- addi a0, ra, -FENTRY_RA_OFFSET
+ addi a0, t0, -FENTRY_RA_OFFSET
la a1, function_trace_op
REG_L a2, 0(a1)
- REG_L a1, ABI_SIZE_ON_STACK(sp)
+ mv a1, ra
mv a3, sp
ftrace_call:
@@ -155,8 +145,8 @@
call ftrace_stub
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
- addi a0, sp, ABI_SIZE_ON_STACK
- REG_L a1, ABI_RA(sp)
+ addi a0, sp, ABI_RA
+ REG_L a1, ABI_T0(sp)
addi a1, a1, -FENTRY_RA_OFFSET
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
mv a2, s0
@@ -166,17 +156,17 @@
call ftrace_stub
#endif
RESTORE_ABI
- ret
+ jr t0
ENDPROC(ftrace_caller)
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
ENTRY(ftrace_regs_caller)
SAVE_ALL
- addi a0, ra, -FENTRY_RA_OFFSET
+ addi a0, t0, -FENTRY_RA_OFFSET
la a1, function_trace_op
REG_L a2, 0(a1)
- REG_L a1, PT_SIZE_ON_STACK(sp)
+ mv a1, ra
mv a3, sp
ftrace_regs_call:
@@ -196,6 +186,6 @@
#endif
RESTORE_ALL
- ret
+ jr t0
ENDPROC(ftrace_regs_caller)
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 765004b..e099961 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -11,6 +11,7 @@
#include <asm/kprobes.h>
#include <asm/cacheflush.h>
#include <asm/fixmap.h>
+#include <asm/ftrace.h>
#include <asm/patch.h>
struct patch_insn {
@@ -19,6 +20,8 @@ struct patch_insn {
atomic_t cpu_count;
};
+int riscv_patch_in_stop_machine = false;
+
#ifdef CONFIG_MMU
/*
* The fix_to_virt(, idx) needs a const value (not a dynamic variable of
@@ -59,8 +62,15 @@ static int patch_insn_write(void *addr, const void *insn, size_t len)
* Before reaching here, it was expected to lock the text_mutex
* already, so we don't need to give another lock here and could
 * ensure that it was safe between cores.
+ *
+ * We're currently using stop_machine() for ftrace & kprobes, and while
+ * that ensures text_mutex is held before installing the mappings it
+ * does not ensure text_mutex is held by the calling thread. That's
+ * safe but triggers a lockdep failure, so just elide it for that
+ * specific case.
*/
- lockdep_assert_held(&text_mutex);
+ if (!riscv_patch_in_stop_machine)
+ lockdep_assert_held(&text_mutex);
if (across_pages)
patch_map(addr + len, FIX_TEXT_POKE1);
@@ -121,13 +131,25 @@ NOKPROBE_SYMBOL(patch_text_cb);
int patch_text(void *addr, u32 insn)
{
+ int ret;
struct patch_insn patch = {
.addr = addr,
.insn = insn,
.cpu_count = ATOMIC_INIT(0),
};
- return stop_machine_cpuslocked(patch_text_cb,
- &patch, cpu_online_mask);
+ /*
+	 * kprobes takes text_mutex before calling patch_text(), but as patch_text()
+	 * calls stop_machine(), the lockdep assertion in patch_insn_write()
+ * gets confused by the context in which the lock is taken.
+ * Instead, ensure the lock is held before calling stop_machine(), and
+ * set riscv_patch_in_stop_machine to skip the check in
+ * patch_insn_write().
+ */
+ lockdep_assert_held(&text_mutex);
+ riscv_patch_in_stop_machine = true;
+ ret = stop_machine_cpuslocked(patch_text_cb, &patch, cpu_online_mask);
+ riscv_patch_in_stop_machine = false;
+ return ret;
}
NOKPROBE_SYMBOL(patch_text);
diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
index 00088dc..7548b1d 100644
--- a/arch/riscv/kernel/probes/kprobes.c
+++ b/arch/riscv/kernel/probes/kprobes.c
@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0+
+#define pr_fmt(fmt) "kprobes: " fmt
+
#include <linux/kprobes.h>
#include <linux/extable.h>
#include <linux/slab.h>
@@ -46,18 +48,35 @@ static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
post_kprobe_handler(p, kcb, regs);
}
-int __kprobes arch_prepare_kprobe(struct kprobe *p)
+static bool __kprobes arch_check_kprobe(struct kprobe *p)
{
- unsigned long probe_addr = (unsigned long)p->addr;
+ unsigned long tmp = (unsigned long)p->addr - p->offset;
+ unsigned long addr = (unsigned long)p->addr;
- if (probe_addr & 0x1) {
- pr_warn("Address not aligned.\n");
+ while (tmp <= addr) {
+ if (tmp == addr)
+ return true;
- return -EINVAL;
+ tmp += GET_INSN_LENGTH(*(u16 *)tmp);
}
+ return false;
+}
+
+int __kprobes arch_prepare_kprobe(struct kprobe *p)
+{
+ u16 *insn = (u16 *)p->addr;
+
+ if ((unsigned long)insn & 0x1)
+ return -EILSEQ;
+
+ if (!arch_check_kprobe(p))
+ return -EILSEQ;
+
/* copy instruction */
- p->opcode = *p->addr;
+ p->opcode = (kprobe_opcode_t)(*insn++);
+ if (GET_INSN_LENGTH(p->opcode) == 4)
+ p->opcode |= (kprobe_opcode_t)(*insn) << 16;
/* decode instruction */
switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
@@ -191,7 +210,7 @@ static int __kprobes reenter_kprobe(struct kprobe *p,
break;
case KPROBE_HIT_SS:
case KPROBE_REENTER:
- pr_warn("Unrecoverable kprobe detected.\n");
+ pr_warn("Failed to recover from reentered kprobes.\n");
dump_kprobe(p);
BUG();
break;
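arch_check_kprobe() above walks forward from the probed function's entry point in instruction-length steps, so a probe can only be placed on a real instruction boundary. A minimal user-space sketch of that walk; GET_INSN_LENGTH is re-implemented here under the standard RISC-V assumption that a 16-bit parcel with both low bits set begins a 32-bit instruction, and the sample function body is made up:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed RISC-V rule: low two bits == 0b11 means a 32-bit instruction,
 * anything else is a 16-bit compressed instruction. */
static unsigned int insn_length(uint16_t parcel)
{
	return (parcel & 0x3) == 0x3 ? 4 : 2;
}

/* Walk from the start of the function and check whether 'offset' lands on an
 * instruction boundary, mirroring arch_check_kprobe() above. */
static bool offset_is_insn_boundary(const uint16_t *func, unsigned int offset)
{
	unsigned int pos = 0;

	while (pos <= offset) {
		if (pos == offset)
			return true;
		pos += insn_length(func[pos / 2]);
	}
	return false;
}

int main(void)
{
	/* Hypothetical body: c.li a0,0 ; addi a0,a0,0 (32-bit) ; c.ret */
	const uint16_t func[] = { 0x4501, 0x0513, 0x0005, 0x8082 };

	printf("offset 2: %d\n", offset_is_insn_boundary(func, 2));	/* 1 */
	printf("offset 4: %d\n", offset_is_insn_boundary(func, 4));	/* 0 */
	return 0;
}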
diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c
index d73e96f..a20568b 100644
--- a/arch/riscv/kernel/probes/simulate-insn.c
+++ b/arch/riscv/kernel/probes/simulate-insn.c
@@ -71,11 +71,11 @@ bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *reg
u32 rd_index = (opcode >> 7) & 0x1f;
u32 rs1_index = (opcode >> 15) & 0x1f;
- ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
+ ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
if (!ret)
return ret;
- ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
+ ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
if (!ret)
return ret;
diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
index 811e837..894ae66 100644
--- a/arch/riscv/kernel/stacktrace.c
+++ b/arch/riscv/kernel/stacktrace.c
@@ -32,6 +32,7 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
fp = (unsigned long)__builtin_frame_address(0);
sp = sp_in_global;
pc = (unsigned long)walk_stackframe;
+ level = -1;
} else {
/* task blocked in __switch_to */
fp = task->thread.s[0];
@@ -43,7 +44,7 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
unsigned long low, high;
struct stackframe *frame;
- if (unlikely(!__kernel_text_address(pc) || (level++ >= 1 && !fn(arg, pc))))
+ if (unlikely(!__kernel_text_address(pc) || (level++ >= 0 && !fn(arg, pc))))
break;
/* Validate frame pointer */
@@ -93,7 +94,7 @@ void notrace walk_stackframe(struct task_struct *task,
while (!kstack_end(ksp)) {
if (__kernel_text_address(pc) && unlikely(!fn(arg, pc)))
break;
- pc = (*ksp++) - 0x4;
+ pc = READ_ONCE_NOCHECK(*ksp++) - 0x4;
}
}
diff --git a/arch/riscv/kernel/time.c b/arch/riscv/kernel/time.c
index 8217b0f..1cf21db 100644
--- a/arch/riscv/kernel/time.c
+++ b/arch/riscv/kernel/time.c
@@ -5,6 +5,7 @@
*/
#include <linux/of_clk.h>
+#include <linux/clockchips.h>
#include <linux/clocksource.h>
#include <linux/delay.h>
#include <asm/sbi.h>
@@ -29,6 +30,8 @@ void __init time_init(void)
of_clk_init(NULL);
timer_probe();
+
+ tick_setup_hrtimer_broadcast();
}
void clocksource_arch_init(struct clocksource *cs)
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index 4102c97..4f38b3c 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -33,25 +33,29 @@ void die(struct pt_regs *regs, const char *str)
{
static int die_counter;
int ret;
+ long cause;
+ unsigned long flags;
oops_enter();
- spin_lock_irq(&die_lock);
+ spin_lock_irqsave(&die_lock, flags);
console_verbose();
bust_spinlocks(1);
pr_emerg("%s [#%d]\n", str, ++die_counter);
print_modules();
- show_regs(regs);
+ if (regs)
+ show_regs(regs);
- ret = notify_die(DIE_OOPS, str, regs, 0, regs->cause, SIGSEGV);
+ cause = regs ? regs->cause : -1;
+ ret = notify_die(DIE_OOPS, str, regs, 0, cause, SIGSEGV);
- if (regs && kexec_should_crash(current))
+ if (kexec_should_crash(current))
crash_kexec(regs);
bust_spinlocks(0);
add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
- spin_unlock_irq(&die_lock);
+ spin_unlock_irqrestore(&die_lock, flags);
oops_exit();
if (in_interrupt())
@@ -59,7 +63,7 @@ void die(struct pt_regs *regs, const char *str)
if (panic_on_oops)
panic("Fatal exception");
if (ret != NOTIFY_STOP)
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
diff --git a/arch/riscv/mm/cacheflush.c b/arch/riscv/mm/cacheflush.c
index 89f8106..2ae1201 100644
--- a/arch/riscv/mm/cacheflush.c
+++ b/arch/riscv/mm/cacheflush.c
@@ -85,7 +85,9 @@ void flush_icache_pte(pte_t pte)
{
struct page *page = pte_page(pte);
- if (!test_and_set_bit(PG_dcache_clean, &page->flags))
+ if (!test_bit(PG_dcache_clean, &page->flags)) {
flush_icache_all();
+ set_bit(PG_dcache_clean, &page->flags);
+ }
}
#endif /* CONFIG_MMU */
diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index cc4a47b..64bfb45 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -22,7 +22,7 @@ DEFINE_STATIC_KEY_FALSE(use_asid_allocator);
static unsigned long asid_bits;
static unsigned long num_asids;
-static unsigned long asid_mask;
+unsigned long asid_mask;
static atomic_long_t current_version;
@@ -196,16 +196,6 @@ static void set_mm_asid(struct mm_struct *mm, unsigned int cpu)
if (need_flush_tlb)
local_flush_tlb_all();
-#ifdef CONFIG_SMP
- else {
- cpumask_t *mask = &mm->context.tlb_stale_mask;
-
- if (cpumask_test_cpu(cpu, mask)) {
- cpumask_clear_cpu(cpu, mask);
- local_flush_tlb_all_asid(cntx & asid_mask);
- }
- }
-#endif
}
static void set_mm_noasid(struct mm_struct *mm)
@@ -215,12 +205,24 @@ static void set_mm_noasid(struct mm_struct *mm)
local_flush_tlb_all();
}
-static inline void set_mm(struct mm_struct *mm, unsigned int cpu)
+static inline void set_mm(struct mm_struct *prev,
+ struct mm_struct *next, unsigned int cpu)
{
- if (static_branch_unlikely(&use_asid_allocator))
- set_mm_asid(mm, cpu);
- else
- set_mm_noasid(mm);
+ /*
+ * The mm_cpumask indicates which harts' TLBs contain the virtual
+	 * address mapping of the mm. Unlike the noasid case, using an asid
+	 * cannot guarantee that stale TLB entries are invalidated, because
+	 * the asid mechanism does not flush the TLB on every switch_mm for
+	 * performance reasons. So when using an asid, keep every CPU's
+	 * footprint in mm_cpumask() until the mm is reset.
+ */
+ cpumask_set_cpu(cpu, mm_cpumask(next));
+ if (static_branch_unlikely(&use_asid_allocator)) {
+ set_mm_asid(next, cpu);
+ } else {
+ cpumask_clear_cpu(cpu, mm_cpumask(prev));
+ set_mm_noasid(next);
+ }
}
static int __init asids_init(void)
@@ -272,7 +274,8 @@ static int __init asids_init(void)
}
early_initcall(asids_init);
#else
-static inline void set_mm(struct mm_struct *mm, unsigned int cpu)
+static inline void set_mm(struct mm_struct *prev,
+ struct mm_struct *next, unsigned int cpu)
{
/* Nothing to do here when there is no MMU */
}
@@ -325,10 +328,7 @@ void switch_mm(struct mm_struct *prev, struct mm_struct *next,
*/
cpu = smp_processor_id();
- cpumask_clear_cpu(cpu, mm_cpumask(prev));
- cpumask_set_cpu(cpu, mm_cpumask(next));
-
- set_mm(next, cpu);
+ set_mm(prev, next, cpu);
flush_icache_deferred(next, cpu);
}
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 7cfaf36..884a3c76 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -31,7 +31,7 @@ static void die_kernel_fault(const char *msg, unsigned long addr,
bust_spinlocks(0);
die(regs, "Oops");
- do_exit(SIGKILL);
+ make_task_dead(SIGKILL);
}
static inline void no_context(struct pt_regs *regs, unsigned long addr)
@@ -271,10 +271,12 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
if (user_mode(regs))
flags |= FAULT_FLAG_USER;
- if (!user_mode(regs) && addr < TASK_SIZE &&
- unlikely(!(regs->status & SR_SUM)))
- die_kernel_fault("access to user memory without uaccess routines",
- addr, regs);
+ if (!user_mode(regs) && addr < TASK_SIZE && unlikely(!(regs->status & SR_SUM))) {
+ if (fixup_exception(regs))
+ return;
+
+ die_kernel_fault("access to user memory without uaccess routines", addr, regs);
+ }
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index efefc39..39d18fc 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -5,7 +5,23 @@
#include <linux/sched.h>
#include <asm/sbi.h>
#include <asm/mmu_context.h>
-#include <asm/tlbflush.h>
+
+static inline void local_flush_tlb_all_asid(unsigned long asid)
+{
+ __asm__ __volatile__ ("sfence.vma x0, %0"
+ :
+ : "r" (asid)
+ : "memory");
+}
+
+static inline void local_flush_tlb_page_asid(unsigned long addr,
+ unsigned long asid)
+{
+ __asm__ __volatile__ ("sfence.vma %0, %1"
+ :
+ : "r" (addr), "r" (asid)
+ : "memory");
+}
void flush_tlb_all(void)
{
@@ -15,7 +31,6 @@ void flush_tlb_all(void)
static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
unsigned long size, unsigned long stride)
{
- struct cpumask *pmask = &mm->context.tlb_stale_mask;
struct cpumask *cmask = mm_cpumask(mm);
struct cpumask hmask;
unsigned int cpuid;
@@ -28,16 +43,7 @@ static void __sbi_tlb_flush_range(struct mm_struct *mm, unsigned long start,
/* check if the tlbflush needs to be sent to other CPUs */
broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
if (static_branch_unlikely(&use_asid_allocator)) {
- unsigned long asid = atomic_long_read(&mm->context.id);
-
- /*
- * TLB will be immediately flushed on harts concurrently
- * executing this MM context. TLB flush on other harts
- * is deferred until this MM context migrates there.
- */
- cpumask_setall(pmask);
- cpumask_clear_cpu(cpuid, pmask);
- cpumask_andnot(pmask, pmask, cmask);
+ unsigned long asid = atomic_long_read(&mm->context.id) & asid_mask;
if (broadcast) {
riscv_cpuid_to_hartid_mask(cmask, &hmask);
diff --git a/arch/s390/boot/compressed/decompressor.c b/arch/s390/boot/compressed/decompressor.c
index e27c214..623f677 100644
--- a/arch/s390/boot/compressed/decompressor.c
+++ b/arch/s390/boot/compressed/decompressor.c
@@ -80,6 +80,6 @@ void *decompress_kernel(void)
void *output = (void *)decompress_offset;
__decompress(_compressed_start, _compressed_end - _compressed_start,
- NULL, NULL, output, 0, NULL, error);
+ NULL, NULL, output, vmlinux.image_size, NULL, error);
return output;
}
diff --git a/arch/s390/boot/ipl_report.c b/arch/s390/boot/ipl_report.c
index 9b14045..74b5cd2 100644
--- a/arch/s390/boot/ipl_report.c
+++ b/arch/s390/boot/ipl_report.c
@@ -57,11 +57,19 @@ static unsigned long find_bootdata_space(struct ipl_rb_components *comps,
if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_data.start && initrd_data.size &&
intersects(initrd_data.start, initrd_data.size, safe_addr, size))
safe_addr = initrd_data.start + initrd_data.size;
+ if (intersects(safe_addr, size, (unsigned long)comps, comps->len)) {
+ safe_addr = (unsigned long)comps + comps->len;
+ goto repeat;
+ }
for_each_rb_entry(comp, comps)
if (intersects(safe_addr, size, comp->addr, comp->len)) {
safe_addr = comp->addr + comp->len;
goto repeat;
}
+ if (intersects(safe_addr, size, (unsigned long)certs, certs->len)) {
+ safe_addr = (unsigned long)certs + certs->len;
+ goto repeat;
+ }
for_each_rb_entry(cert, certs)
if (intersects(safe_addr, size, cert->addr, cert->len)) {
safe_addr = cert->addr + cert->len;
diff --git a/arch/s390/boot/mem_detect.c b/arch/s390/boot/mem_detect.c
index 2f949cd..17a3270 100644
--- a/arch/s390/boot/mem_detect.c
+++ b/arch/s390/boot/mem_detect.c
@@ -165,7 +165,7 @@ static void search_mem_end(void)
unsigned long detect_memory(void)
{
- unsigned long max_physmem_end;
+ unsigned long max_physmem_end = 0;
sclp_early_get_memsize(&max_physmem_end);
diff --git a/arch/s390/include/asm/ap.h b/arch/s390/include/asm/ap.h
index 3afbee2..859e6d8 100644
--- a/arch/s390/include/asm/ap.h
+++ b/arch/s390/include/asm/ap.h
@@ -236,7 +236,10 @@ static inline struct ap_queue_status ap_aqic(ap_qid_t qid,
union {
unsigned long value;
struct ap_qirq_ctrl qirqctrl;
- struct ap_queue_status status;
+ struct {
+ u32 _pad;
+ struct ap_queue_status status;
+ };
} reg1;
void *reg2 = ind;
@@ -250,7 +253,7 @@ static inline struct ap_queue_status ap_aqic(ap_qid_t qid,
" lgr %[reg1],1\n" /* gr1 (status) into reg1 */
: [reg1] "+&d" (reg1)
: [reg0] "d" (reg0), [reg2] "d" (reg2)
- : "cc", "0", "1", "2");
+ : "cc", "memory", "0", "1", "2");
return reg1.status;
}
@@ -287,7 +290,10 @@ static inline struct ap_queue_status ap_qact(ap_qid_t qid, int ifbit,
unsigned long reg0 = qid | (5UL << 24) | ((ifbit & 0x01) << 22);
union {
unsigned long value;
- struct ap_queue_status status;
+ struct {
+ u32 _pad;
+ struct ap_queue_status status;
+ };
} reg1;
unsigned long reg2;
diff --git a/arch/s390/include/asm/debug.h b/arch/s390/include/asm/debug.h
index 19a55e1..5fc91a9 100644
--- a/arch/s390/include/asm/debug.h
+++ b/arch/s390/include/asm/debug.h
@@ -4,8 +4,8 @@
*
* Copyright IBM Corp. 1999, 2020
*/
-#ifndef DEBUG_H
-#define DEBUG_H
+#ifndef _ASM_S390_DEBUG_H
+#define _ASM_S390_DEBUG_H
#include <linux/string.h>
#include <linux/spinlock.h>
@@ -487,4 +487,4 @@ void debug_register_static(debug_info_t *id, int pages_per_area, int nr_areas);
#endif /* MODULE */
-#endif /* DEBUG_H */
+#endif /* _ASM_S390_DEBUG_H */
diff --git a/arch/s390/kernel/dumpstack.c b/arch/s390/kernel/dumpstack.c
index db1bc00..272ef85 100644
--- a/arch/s390/kernel/dumpstack.c
+++ b/arch/s390/kernel/dumpstack.c
@@ -224,5 +224,5 @@ void die(struct pt_regs *regs, const char *str)
if (panic_on_oops)
panic("Fatal exception: panic_on_oops");
oops_exit();
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c
index 1d94ffd..5d0c45c 100644
--- a/arch/s390/kernel/ftrace.c
+++ b/arch/s390/kernel/ftrace.c
@@ -80,17 +80,6 @@ asm(
#ifdef CONFIG_MODULES
static char *ftrace_plt;
-
-asm(
- " .data\n"
- "ftrace_plt_template:\n"
- " basr %r1,%r0\n"
- " lg %r1,0f-.(%r1)\n"
- " br %r1\n"
- "0: .quad ftrace_caller\n"
- "ftrace_plt_template_end:\n"
- " .previous\n"
-);
#endif /* CONFIG_MODULES */
static const char *ftrace_shared_hotpatch_trampoline(const char **end)
@@ -116,7 +105,7 @@ static const char *ftrace_shared_hotpatch_trampoline(const char **end)
bool ftrace_need_init_nop(void)
{
- return ftrace_shared_hotpatch_trampoline(NULL);
+ return true;
}
int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
@@ -175,28 +164,6 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
return 0;
}
-static void ftrace_generate_nop_insn(struct ftrace_insn *insn)
-{
- /* brcl 0,0 */
- insn->opc = 0xc004;
- insn->disp = 0;
-}
-
-static void ftrace_generate_call_insn(struct ftrace_insn *insn,
- unsigned long ip)
-{
- unsigned long target;
-
- /* brasl r0,ftrace_caller */
- target = FTRACE_ADDR;
-#ifdef CONFIG_MODULES
- if (is_module_addr((void *)ip))
- target = (unsigned long)ftrace_plt;
-#endif /* CONFIG_MODULES */
- insn->opc = 0xc005;
- insn->disp = (target - ip) / 2;
-}
-
static void brcl_disable(void *brcl)
{
u8 op = 0x04; /* set mask field to zero */
@@ -207,23 +174,7 @@ static void brcl_disable(void *brcl)
int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
unsigned long addr)
{
- struct ftrace_insn orig, new, old;
-
- if (ftrace_shared_hotpatch_trampoline(NULL)) {
- brcl_disable((void *)rec->ip);
- return 0;
- }
-
- if (copy_from_kernel_nofault(&old, (void *) rec->ip, sizeof(old)))
- return -EFAULT;
- /* Replace ftrace call with a nop. */
- ftrace_generate_call_insn(&orig, rec->ip);
- ftrace_generate_nop_insn(&new);
-
- /* Verify that the to be replaced code matches what we expect. */
- if (memcmp(&orig, &old, sizeof(old)))
- return -EINVAL;
- s390_kernel_write((void *) rec->ip, &new, sizeof(new));
+ brcl_disable((void *)rec->ip);
return 0;
}
@@ -236,23 +187,7 @@ static void brcl_enable(void *brcl)
int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
- struct ftrace_insn orig, new, old;
-
- if (ftrace_shared_hotpatch_trampoline(NULL)) {
- brcl_enable((void *)rec->ip);
- return 0;
- }
-
- if (copy_from_kernel_nofault(&old, (void *) rec->ip, sizeof(old)))
- return -EFAULT;
- /* Replace nop with an ftrace call. */
- ftrace_generate_nop_insn(&orig);
- ftrace_generate_call_insn(&new, rec->ip);
-
- /* Verify that the to be replaced code matches what we expect. */
- if (memcmp(&orig, &old, sizeof(old)))
- return -EINVAL;
- s390_kernel_write((void *) rec->ip, &new, sizeof(new));
+ brcl_enable((void *)rec->ip);
return 0;
}
@@ -269,10 +204,7 @@ int __init ftrace_dyn_arch_init(void)
void arch_ftrace_update_code(int command)
{
- if (ftrace_shared_hotpatch_trampoline(NULL))
- ftrace_modify_all_code(command);
- else
- ftrace_run_stop_machine(command);
+ ftrace_modify_all_code(command);
}
static void __ftrace_sync(void *dummy)
@@ -281,10 +213,8 @@ static void __ftrace_sync(void *dummy)
int ftrace_arch_code_modify_post_process(void)
{
- if (ftrace_shared_hotpatch_trampoline(NULL)) {
- /* Send SIGP to the other CPUs, so they see the new code. */
- smp_call_function(__ftrace_sync, NULL, 1);
- }
+ /* Send SIGP to the other CPUs, so they see the new code. */
+ smp_call_function(__ftrace_sync, NULL, 1);
return 0;
}
@@ -299,10 +229,6 @@ static int __init ftrace_plt_init(void)
panic("cannot allocate ftrace plt\n");
start = ftrace_shared_hotpatch_trampoline(&end);
- if (!start) {
- start = ftrace_plt_template;
- end = ftrace_plt_template_end;
- }
memcpy(ftrace_plt, start, end - start);
set_memory_ro((unsigned long)ftrace_plt, 1);
return 0;
diff --git a/arch/s390/kernel/idle.c b/arch/s390/kernel/idle.c
index 4bf1ee29..a0da049 100644
--- a/arch/s390/kernel/idle.c
+++ b/arch/s390/kernel/idle.c
@@ -44,7 +44,7 @@ void account_idle_time_irq(void)
S390_lowcore.last_update_timer = idle->timer_idle_exit;
}
-void arch_cpu_idle(void)
+void noinstr arch_cpu_idle(void)
{
struct s390_idle_data *idle = this_cpu_ptr(&s390_idle);
unsigned long idle_time;
diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c
index 52d056a..fbc0bf4 100644
--- a/arch/s390/kernel/kprobes.c
+++ b/arch/s390/kernel/kprobes.c
@@ -7,6 +7,8 @@
* s390 port, used ppc64 as template. Mike Grundy <grundym@us.ibm.com>
*/
+#define pr_fmt(fmt) "kprobes: " fmt
+
#include <linux/moduleloader.h>
#include <linux/kprobes.h>
#include <linux/ptrace.h>
@@ -231,6 +233,7 @@ static void pop_kprobe(struct kprobe_ctlblk *kcb)
{
__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
kcb->kprobe_status = kcb->prev_kprobe.status;
+ kcb->prev_kprobe.kp = NULL;
}
NOKPROBE_SYMBOL(pop_kprobe);
@@ -259,7 +262,7 @@ static void kprobe_reenter_check(struct kprobe_ctlblk *kcb, struct kprobe *p)
* is a BUG. The code path resides in the .kprobes.text
* section and is executed with interrupts disabled.
*/
- pr_err("Invalid kprobe detected.\n");
+ pr_err("Failed to recover from reentered kprobes.\n");
dump_kprobe(p);
BUG();
}
@@ -392,12 +395,11 @@ static int post_kprobe_handler(struct pt_regs *regs)
if (!p)
return 0;
+ resume_execution(p, regs);
if (kcb->kprobe_status != KPROBE_REENTER && p->post_handler) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
p->post_handler(p, regs, 0);
}
-
- resume_execution(p, regs);
pop_kprobe(kcb);
preempt_enable_no_resched();
diff --git a/arch/s390/kernel/nmi.c b/arch/s390/kernel/nmi.c
index 383b479..d4f071e 100644
--- a/arch/s390/kernel/nmi.c
+++ b/arch/s390/kernel/nmi.c
@@ -175,7 +175,7 @@ void __s390_handle_mcck(void)
"malfunction (code 0x%016lx).\n", mcck.mcck_code);
printk(KERN_EMERG "mcck: task: %s, pid: %d.\n",
current->comm, current->pid);
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
}
diff --git a/arch/s390/kernel/vdso32/Makefile b/arch/s390/kernel/vdso32/Makefile
index e3e6ac5..245bddf 100644
--- a/arch/s390/kernel/vdso32/Makefile
+++ b/arch/s390/kernel/vdso32/Makefile
@@ -22,7 +22,7 @@
KBUILD_CFLAGS_32 := $(filter-out -m64,$(KBUILD_CFLAGS))
KBUILD_CFLAGS_32 += -m31 -fPIC -shared -fno-common -fno-builtin
-LDFLAGS_vdso32.so.dbg += -fPIC -shared -nostdlib -soname=linux-vdso32.so.1 \
+LDFLAGS_vdso32.so.dbg += -fPIC -shared -soname=linux-vdso32.so.1 \
--hash-style=both --build-id=sha1 -melf_s390 -T
$(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_32)
diff --git a/arch/s390/kernel/vdso64/Makefile b/arch/s390/kernel/vdso64/Makefile
index 0dea82b..1605ba4 100644
--- a/arch/s390/kernel/vdso64/Makefile
+++ b/arch/s390/kernel/vdso64/Makefile
@@ -25,8 +25,8 @@
KBUILD_AFLAGS_64 += -m64 -s
KBUILD_CFLAGS_64 := $(filter-out -m64,$(KBUILD_CFLAGS))
-KBUILD_CFLAGS_64 += -m64 -fPIC -shared -fno-common -fno-builtin
-ldflags-y := -fPIC -shared -nostdlib -soname=linux-vdso64.so.1 \
+KBUILD_CFLAGS_64 += -m64 -fPIC -fno-common -fno-builtin
+ldflags-y := -fPIC -shared -soname=linux-vdso64.so.1 \
--hash-style=both --build-id=sha1 -T
$(targets:%=$(obj)/%.dbg): KBUILD_CFLAGS = $(KBUILD_CFLAGS_64)
diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
index b508cca..853b807 100644
--- a/arch/s390/kernel/vmlinux.lds.S
+++ b/arch/s390/kernel/vmlinux.lds.S
@@ -17,6 +17,8 @@
/* Handle ro_after_init data on our own. */
#define RO_AFTER_INIT_DATA
+#define RUNTIME_DISCARD_EXIT
+
#define EMITS_PT_NOTE
#include <asm-generic/vmlinux.lds.h>
@@ -80,6 +82,7 @@
_end_amode31_refs = .;
}
+ . = ALIGN(PAGE_SIZE);
_edata = .; /* End of data section */
/* will be freed after init */
@@ -194,6 +197,7 @@
BSS_SECTION(PAGE_SIZE, 4 * PAGE_SIZE, PAGE_SIZE)
+ . = ALIGN(PAGE_SIZE);
_end = . ;
/*
@@ -225,5 +229,6 @@
DISCARDS
/DISCARD/ : {
*(.eh_frame)
+ *(.interp)
}
}
diff --git a/arch/s390/kvm/intercept.c b/arch/s390/kvm/intercept.c
index 8ca301f..aeb0e08 100644
--- a/arch/s390/kvm/intercept.c
+++ b/arch/s390/kvm/intercept.c
@@ -271,10 +271,18 @@ static int handle_prog(struct kvm_vcpu *vcpu)
* handle_external_interrupt - used for external interruption interceptions
* @vcpu: virtual cpu
*
- * This interception only occurs if the CPUSTAT_EXT_INT bit was set, or if
- * the new PSW does not have external interrupts disabled. In the first case,
- * we've got to deliver the interrupt manually, and in the second case, we
- * drop to userspace to handle the situation there.
+ * This interception occurs if:
+ * - the CPUSTAT_EXT_INT bit was already set when the external interrupt
+ * occurred. In this case, the interrupt needs to be injected manually to
+ * preserve interrupt priority.
+ * - the external new PSW has external interrupts enabled, which will cause an
+ * interruption loop. We drop to userspace in this case.
+ *
+ * The latter case can be detected by inspecting the external mask bit in the
+ * external new psw.
+ *
+ * Under PV, only the latter case can occur, since interrupt priorities are
+ * handled in the ultravisor.
*/
static int handle_external_interrupt(struct kvm_vcpu *vcpu)
{
@@ -285,10 +293,18 @@ static int handle_external_interrupt(struct kvm_vcpu *vcpu)
vcpu->stat.exit_external_interrupt++;
- rc = read_guest_lc(vcpu, __LC_EXT_NEW_PSW, &newpsw, sizeof(psw_t));
- if (rc)
- return rc;
- /* We can not handle clock comparator or timer interrupt with bad PSW */
+ if (kvm_s390_pv_cpu_is_protected(vcpu)) {
+ newpsw = vcpu->arch.sie_block->gpsw;
+ } else {
+ rc = read_guest_lc(vcpu, __LC_EXT_NEW_PSW, &newpsw, sizeof(psw_t));
+ if (rc)
+ return rc;
+ }
+
+ /*
+ * Clock comparator or timer interrupt with external interrupt enabled
+ * will cause interrupt loop. Drop to userspace.
+ */
if ((eic == EXT_IRQ_CLK_COMP || eic == EXT_IRQ_CPU_TIMER) &&
(newpsw.mask & PSW_MASK_EXT))
return -EOPNOTSUPP;
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 8ce03a5..ca7d09f 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -81,8 +81,9 @@ static int sca_inject_ext_call(struct kvm_vcpu *vcpu, int src_id)
struct esca_block *sca = vcpu->kvm->arch.sca;
union esca_sigp_ctrl *sigp_ctrl =
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
- union esca_sigp_ctrl new_val = {0}, old_val = *sigp_ctrl;
+ union esca_sigp_ctrl new_val = {0}, old_val;
+ old_val = READ_ONCE(*sigp_ctrl);
new_val.scn = src_id;
new_val.c = 1;
old_val.c = 0;
@@ -93,8 +94,9 @@ static int sca_inject_ext_call(struct kvm_vcpu *vcpu, int src_id)
struct bsca_block *sca = vcpu->kvm->arch.sca;
union bsca_sigp_ctrl *sigp_ctrl =
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
- union bsca_sigp_ctrl new_val = {0}, old_val = *sigp_ctrl;
+ union bsca_sigp_ctrl new_val = {0}, old_val;
+ old_val = READ_ONCE(*sigp_ctrl);
new_val.scn = src_id;
new_val.c = 1;
old_val.c = 0;
@@ -124,16 +126,18 @@ static void sca_clear_ext_call(struct kvm_vcpu *vcpu)
struct esca_block *sca = vcpu->kvm->arch.sca;
union esca_sigp_ctrl *sigp_ctrl =
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
- union esca_sigp_ctrl old = *sigp_ctrl;
+ union esca_sigp_ctrl old;
+ old = READ_ONCE(*sigp_ctrl);
expect = old.value;
rc = cmpxchg(&sigp_ctrl->value, old.value, 0);
} else {
struct bsca_block *sca = vcpu->kvm->arch.sca;
union bsca_sigp_ctrl *sigp_ctrl =
&(sca->cpu[vcpu->vcpu_id].sigp_ctrl);
- union bsca_sigp_ctrl old = *sigp_ctrl;
+ union bsca_sigp_ctrl old;
+ old = READ_ONCE(*sigp_ctrl);
expect = old.value;
rc = cmpxchg(&sigp_ctrl->value, old.value, 0);
}
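
The hunk above replaces plain dereferences of the shared sigp_ctrl word with READ_ONCE() so that the value used to build the new control word and the value handed to cmpxchg() come from a single read. A minimal userspace sketch of that snapshot-then-CAS pattern follows; the field layout, bit positions and names are invented for illustration and are not the kernel code:

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint16_t sigp_ctrl;	/* shared control word, written by other CPUs */

static int inject_ext_call(uint8_t src_id)
{
	/* one snapshot; a second read here could observe a different value */
	uint16_t old = atomic_load_explicit(&sigp_ctrl, memory_order_relaxed);
	uint16_t expect = old & (uint16_t)~0x8000;	/* expected: 'c' bit clear */
	uint16_t newval = (uint16_t)(0x8000 | src_id);	/* c = 1, scn = src_id */

	/* fails (EBUSY-like) if another writer raced between the load and the CAS */
	return atomic_compare_exchange_strong(&sigp_ctrl, &expect, newval) ? 0 : -16;
}

int main(void)
{
	printf("first inject: %d\n", inject_ext_call(3));
	printf("second inject: %d\n", inject_ext_call(5));	/* busy: 'c' already set */
	return 0;
}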
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index c61533e..d7aa442 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -5066,6 +5066,23 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
/* When we are protected, we should not change the memory slots */
if (kvm_s390_pv_get_handle(kvm))
return -EINVAL;
+
+ if (!kvm->arch.migration_mode)
+ return 0;
+
+ /*
+ * Turn off migration mode when:
+ * - userspace creates a new memslot with dirty logging off,
+ * - userspace modifies an existing memslot (MOVE or FLAGS_ONLY) and
+ * dirty logging is turned off.
+ * Migration mode expects dirty page logging being enabled to store
+ * its dirty bitmap.
+ */
+ if (change != KVM_MR_DELETE &&
+ !(mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
+ WARN(kvm_s390_vm_stop_migration(kvm),
+ "Failed to stop migration mode");
+
return 0;
}
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index 0b012ce..25be142 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -227,7 +227,7 @@ static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size
"4: slgr %0,%0\n"
"5:\n"
EX_TABLE(0b,2b) EX_TABLE(6b,2b) EX_TABLE(3b,5b) EX_TABLE(7b,5b)
- : "+a" (size), "+a" (to), "+a" (tmp1), "=a" (tmp2)
+ : "+&a" (size), "+&a" (to), "+a" (tmp1), "=&a" (tmp2)
: "a" (empty_zero_page), [spec] "K" (0x81UL)
: "cc", "memory", "0");
return size;
diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
index 5060956..1bc42ce 100644
--- a/arch/s390/mm/extmem.c
+++ b/arch/s390/mm/extmem.c
@@ -289,15 +289,17 @@ segment_overlaps_others (struct dcss_segment *seg)
/*
* real segment loading function, called from segment_load
+ * Must return either an error code < 0, or the segment type code >= 0
*/
static int
__segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long *end)
{
unsigned long start_addr, end_addr, dummy;
struct dcss_segment *seg;
- int rc, diag_cc;
+ int rc, diag_cc, segtype;
start_addr = end_addr = 0;
+ segtype = -1;
seg = kmalloc(sizeof(*seg), GFP_KERNEL | GFP_DMA);
if (seg == NULL) {
rc = -ENOMEM;
@@ -326,9 +328,9 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
seg->res_name[8] = '\0';
strlcat(seg->res_name, " (DCSS)", sizeof(seg->res_name));
seg->res->name = seg->res_name;
- rc = seg->vm_segtype;
- if (rc == SEG_TYPE_SC ||
- ((rc == SEG_TYPE_SR || rc == SEG_TYPE_ER) && !do_nonshared))
+ segtype = seg->vm_segtype;
+ if (segtype == SEG_TYPE_SC ||
+ ((segtype == SEG_TYPE_SR || segtype == SEG_TYPE_ER) && !do_nonshared))
seg->res->flags |= IORESOURCE_READONLY;
/* Check for overlapping resources before adding the mapping. */
@@ -386,7 +388,7 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
out_free:
kfree(seg);
out:
- return rc;
+ return rc < 0 ? rc : segtype;
}
/*
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index 2b1c6d9..3991262 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -297,7 +297,7 @@ static void try_free_pmd_table(pud_t *pud, unsigned long start)
if (end > VMALLOC_START)
return;
#ifdef CONFIG_KASAN
- if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
+ if (start < KASAN_SHADOW_END && end > KASAN_SHADOW_START)
return;
#endif
pmd = pmd_offset(pud, start);
@@ -372,7 +372,7 @@ static void try_free_pud_table(p4d_t *p4d, unsigned long start)
if (end > VMALLOC_START)
return;
#ifdef CONFIG_KASAN
- if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
+ if (start < KASAN_SHADOW_END && end > KASAN_SHADOW_START)
return;
#endif
@@ -426,7 +426,7 @@ static void try_free_p4d_table(pgd_t *pgd, unsigned long start)
if (end > VMALLOC_START)
return;
#ifdef CONFIG_KASAN
- if (start < KASAN_SHADOW_END && KASAN_SHADOW_START > end)
+ if (start < KASAN_SHADOW_END && end > KASAN_SHADOW_START)
return;
#endif
diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index 639924d..56c4cec 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -503,8 +503,7 @@ static struct resource *__alloc_res(struct zpci_dev *zdev, unsigned long start,
return r;
}
-int zpci_setup_bus_resources(struct zpci_dev *zdev,
- struct list_head *resources)
+int zpci_setup_bus_resources(struct zpci_dev *zdev)
{
unsigned long addr, size, flags;
struct resource *res;
@@ -540,7 +539,6 @@ int zpci_setup_bus_resources(struct zpci_dev *zdev,
return -ENOMEM;
}
zdev->bars[i].res = res;
- pci_add_resource(resources, res);
}
zdev->has_resources = 1;
@@ -549,17 +547,23 @@ int zpci_setup_bus_resources(struct zpci_dev *zdev,
static void zpci_cleanup_bus_resources(struct zpci_dev *zdev)
{
+ struct resource *res;
int i;
+ pci_lock_rescan_remove();
for (i = 0; i < PCI_STD_NUM_BARS; i++) {
- if (!zdev->bars[i].size || !zdev->bars[i].res)
+ res = zdev->bars[i].res;
+ if (!res)
continue;
+ release_resource(res);
+ pci_bus_remove_resource(zdev->zbus->bus, res);
zpci_free_iomap(zdev, zdev->bars[i].map_idx);
- release_resource(zdev->bars[i].res);
- kfree(zdev->bars[i].res);
+ zdev->bars[i].res = NULL;
+ kfree(res);
}
zdev->has_resources = 0;
+ pci_unlock_rescan_remove();
}
int pcibios_add_device(struct pci_dev *pdev)
diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
index 5d77acb..cc7e5b2 100644
--- a/arch/s390/pci/pci_bus.c
+++ b/arch/s390/pci/pci_bus.c
@@ -41,9 +41,7 @@ static int zpci_nb_devices;
*/
static int zpci_bus_prepare_device(struct zpci_dev *zdev)
{
- struct resource_entry *window, *n;
- struct resource *res;
- int rc;
+ int rc, i;
if (!zdev_enabled(zdev)) {
rc = zpci_enable_device(zdev);
@@ -57,10 +55,10 @@ static int zpci_bus_prepare_device(struct zpci_dev *zdev)
}
if (!zdev->has_resources) {
- zpci_setup_bus_resources(zdev, &zdev->zbus->resources);
- resource_list_for_each_entry_safe(window, n, &zdev->zbus->resources) {
- res = window->res;
- pci_bus_add_resource(zdev->zbus->bus, res, 0);
+ zpci_setup_bus_resources(zdev);
+ for (i = 0; i < PCI_STD_NUM_BARS; i++) {
+ if (zdev->bars[i].res)
+ pci_bus_add_resource(zdev->zbus->bus, zdev->bars[i].res, 0);
}
}
diff --git a/arch/s390/pci/pci_bus.h b/arch/s390/pci/pci_bus.h
index ecef3a9..c5aa9a2 100644
--- a/arch/s390/pci/pci_bus.h
+++ b/arch/s390/pci/pci_bus.h
@@ -30,8 +30,7 @@ static inline void zpci_zdev_get(struct zpci_dev *zdev)
int zpci_alloc_domain(int domain);
void zpci_free_domain(int domain);
-int zpci_setup_bus_resources(struct zpci_dev *zdev,
- struct list_head *resources);
+int zpci_setup_bus_resources(struct zpci_dev *zdev);
static inline struct zpci_dev *get_zdev_by_bus(struct pci_bus *bus,
unsigned int devfn)
diff --git a/arch/sh/include/asm/processor_32.h b/arch/sh/include/asm/processor_32.h
index aa92cc93..6c7966e 100644
--- a/arch/sh/include/asm/processor_32.h
+++ b/arch/sh/include/asm/processor_32.h
@@ -50,6 +50,7 @@
#define SR_FD 0x00008000
#define SR_MD 0x40000000
+#define SR_USER_MASK 0x00000303 // M, Q, S, T bits
/*
* DSP structure and data
*/
diff --git a/arch/sh/kernel/signal_32.c b/arch/sh/kernel/signal_32.c
index dd30929..dc13702 100644
--- a/arch/sh/kernel/signal_32.c
+++ b/arch/sh/kernel/signal_32.c
@@ -115,6 +115,7 @@ static int
restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *r0_p)
{
unsigned int err = 0;
+ unsigned int sr = regs->sr & ~SR_USER_MASK;
#define COPY(x) err |= __get_user(regs->x, &sc->sc_##x)
COPY(regs[1]);
@@ -130,6 +131,8 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, int *r0_p
COPY(sr); COPY(pc);
#undef COPY
+ regs->sr = (regs->sr & SR_USER_MASK) | sr;
+
#ifdef CONFIG_SH_FPU
if (boot_cpu_data.flags & CPU_HAS_FPU) {
int owned_fp;
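
The restore_sigcontext() change above keeps the privileged SR bits from the live register and lets only the user-visible M, Q, S and T bits come from the (untrusted) signal frame. A small illustrative sketch of that mask-and-merge, with made-up input values:

#include <stdint.h>
#include <stdio.h>

#define SR_USER_MASK	0x00000303u	/* M, Q, S, T bits */

static uint32_t restore_sr(uint32_t live_sr, uint32_t saved_sr)
{
	uint32_t kept = live_sr & ~SR_USER_MASK;	/* privileged bits stay as-is */
	uint32_t user = saved_sr & SR_USER_MASK;	/* only user bits are restored */

	return kept | user;
}

int main(void)
{
	/* a forged frame trying to set the privileged MD bit (0x40000000) is ignored */
	printf("restored SR = %#x\n", restore_sr(0x40000000u, 0xffffffffu));
	return 0;
}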
diff --git a/arch/sh/kernel/traps.c b/arch/sh/kernel/traps.c
index e76b221..361b764 100644
--- a/arch/sh/kernel/traps.c
+++ b/arch/sh/kernel/traps.c
@@ -57,7 +57,7 @@ void die(const char *str, struct pt_regs *regs, long err)
if (panic_on_oops)
panic("Fatal exception");
- do_exit(SIGSEGV);
+ make_task_dead(SIGSEGV);
}
void die_if_kernel(const char *str, struct pt_regs *regs, long err)
diff --git a/arch/sh/kernel/vmlinux.lds.S b/arch/sh/kernel/vmlinux.lds.S
index 3161b9cc..b6276a3 100644
--- a/arch/sh/kernel/vmlinux.lds.S
+++ b/arch/sh/kernel/vmlinux.lds.S
@@ -4,6 +4,7 @@
* Written by Niibe Yutaka and Paul Mundt
*/
OUTPUT_ARCH(sh)
+#define RUNTIME_DISCARD_EXIT
#include <asm/thread_info.h>
#include <asm/cache.h>
#include <asm/vmlinux.lds.h>
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index b120ed9..eff9116 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -286,7 +286,7 @@
This config option is actually maximum order plus one. For example,
a value of 13 means that the largest free memory block is 2^12 pages.
-if SPARC64
+if SPARC64 || COMPILE_TEST
source "kernel/power/Kconfig"
endif
diff --git a/arch/sparc/kernel/traps_32.c b/arch/sparc/kernel/traps_32.c
index 5630e5a..179aabf 100644
--- a/arch/sparc/kernel/traps_32.c
+++ b/arch/sparc/kernel/traps_32.c
@@ -86,9 +86,7 @@ void __noreturn die_if_kernel(char *str, struct pt_regs *regs)
}
printk("Instruction DUMP:");
instruction_dump ((unsigned long *) regs->pc);
- if(regs->psr & PSR_PS)
- do_exit(SIGKILL);
- do_exit(SIGSEGV);
+ make_task_dead((regs->psr & PSR_PS) ? SIGKILL : SIGSEGV);
}
void do_hw_interrupt(struct pt_regs *regs, unsigned long type)
diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c
index 6863025..2107782 100644
--- a/arch/sparc/kernel/traps_64.c
+++ b/arch/sparc/kernel/traps_64.c
@@ -2559,9 +2559,7 @@ void __noreturn die_if_kernel(char *str, struct pt_regs *regs)
}
if (panic_on_oops)
panic("Fatal exception");
- if (regs->tstate & TSTATE_PRIV)
- do_exit(SIGKILL);
- do_exit(SIGSEGV);
+ make_task_dead((regs->tstate & TSTATE_PRIV)? SIGKILL : SIGSEGV);
}
EXPORT_SYMBOL(die_if_kernel);
diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
index cde6db1..45a4bcd 100644
--- a/arch/um/drivers/vector_kern.c
+++ b/arch/um/drivers/vector_kern.c
@@ -770,6 +770,7 @@ static int vector_config(char *str, char **error_out)
if (parsed == NULL) {
*error_out = "vector_config failed to parse parameters";
+ kfree(params);
return -EINVAL;
}
diff --git a/arch/um/drivers/virt-pci.c b/arch/um/drivers/virt-pci.c
index 0ab5801..d762d72 100644
--- a/arch/um/drivers/virt-pci.c
+++ b/arch/um/drivers/virt-pci.c
@@ -131,8 +131,11 @@ static int um_pci_send_cmd(struct um_pci_device *dev,
out ? 1 : 0,
posted ? cmd : HANDLE_NO_FREE(cmd),
GFP_ATOMIC);
- if (ret)
+ if (ret) {
+ if (posted)
+ kfree(cmd);
goto out;
+ }
if (posted) {
virtqueue_kick(dev->cmd_vq);
@@ -615,22 +618,33 @@ static void um_pci_virtio_remove(struct virtio_device *vdev)
struct um_pci_device *dev = vdev->priv;
int i;
- /* Stop all virtqueues */
- vdev->config->reset(vdev);
- vdev->config->del_vqs(vdev);
-
device_set_wakeup_enable(&vdev->dev, false);
mutex_lock(&um_pci_mtx);
for (i = 0; i < MAX_DEVICES; i++) {
if (um_pci_devices[i].dev != dev)
continue;
+
um_pci_devices[i].dev = NULL;
irq_free_desc(dev->irq);
+
+ break;
}
mutex_unlock(&um_pci_mtx);
- um_pci_rescan();
+ if (i < MAX_DEVICES) {
+ struct pci_dev *pci_dev;
+
+ pci_dev = pci_get_slot(bridge->bus, i);
+ if (pci_dev)
+ pci_stop_and_remove_bus_device_locked(pci_dev);
+ }
+
+ /* Stop all virtqueues */
+ virtio_reset_device(vdev);
+ dev->cmd_vq = NULL;
+ dev->irq_vq = NULL;
+ vdev->config->del_vqs(vdev);
kfree(dev);
}
diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
index 82ff378..204e9df 100644
--- a/arch/um/drivers/virtio_uml.c
+++ b/arch/um/drivers/virtio_uml.c
@@ -168,7 +168,8 @@ static void vhost_user_check_reset(struct virtio_uml_device *vu_dev,
if (!vu_dev->registered)
return;
- virtio_break_device(&vu_dev->vdev);
+ vu_dev->registered = 0;
+
schedule_work(&pdata->conn_broken_wk);
}
@@ -1132,6 +1133,15 @@ void virtio_uml_set_no_vq_suspend(struct virtio_device *vdev,
static void vu_of_conn_broken(struct work_struct *wk)
{
+ struct virtio_uml_platform_data *pdata;
+ struct virtio_uml_device *vu_dev;
+
+ pdata = container_of(wk, struct virtio_uml_platform_data, conn_broken_wk);
+
+ vu_dev = platform_get_drvdata(pdata->pdev);
+
+ virtio_break_device(&vu_dev->vdev);
+
/*
* We can't remove the device from the devicetree so the only thing we
* can do is warn.
@@ -1262,8 +1272,14 @@ static int vu_unregister_cmdline_device(struct device *dev, void *data)
static void vu_conn_broken(struct work_struct *wk)
{
struct virtio_uml_platform_data *pdata;
+ struct virtio_uml_device *vu_dev;
pdata = container_of(wk, struct virtio_uml_platform_data, conn_broken_wk);
+
+ vu_dev = platform_get_drvdata(pdata->pdev);
+
+ virtio_break_device(&vu_dev->vdev);
+
vu_unregister_cmdline_device(&pdata->pdev->dev, NULL);
}
diff --git a/arch/um/kernel/vmlinux.lds.S b/arch/um/kernel/vmlinux.lds.S
index 16e49bf..53d719c 100644
--- a/arch/um/kernel/vmlinux.lds.S
+++ b/arch/um/kernel/vmlinux.lds.S
@@ -1,4 +1,4 @@
-
+#define RUNTIME_DISCARD_EXIT
KERNEL_STACK_SIZE = 4096 * (1 << CONFIG_KERNEL_STACK_ORDER);
#ifdef CONFIG_LD_SCRIPT_STATIC
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0f2234c..a08ce63 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1320,17 +1320,16 @@
If you select this option, microcode patch loading support for AMD
processors will be enabled.
-config MICROCODE_OLD_INTERFACE
- bool "Ancient loading interface (DEPRECATED)"
+config MICROCODE_LATE_LOADING
+ bool "Late microcode loading (DANGEROUS)"
default n
depends on MICROCODE
help
- DO NOT USE THIS! This is the ancient /dev/cpu/microcode interface
- which was used by userspace tools like iucode_tool and microcode.ctl.
- It is inadequate because it runs too late to be able to properly
- load microcode on a machine and it needs special tools. Instead, you
- should've switched to the early loading method with the initrd or
- builtin microcode by now: Documentation/x86/microcode.rst
+ Loading microcode late, when the system is up and executing instructions,
+ is a tricky business and should be avoided if possible. Just the sequence
+ of synchronizing all cores and SMT threads is a fragile dance which does
+ not guarantee that cores will not soft-lock after the load. Therefore,
+ use this at your own risk. Late loading taints the kernel too.
config X86_MSR
tristate "/dev/cpu/*/msr - Model-specific register support"
diff --git a/arch/x86/configs/google/xfstest.config b/arch/x86/configs/google/xfstest.config
new file mode 100644
index 0000000..2c0732c
--- /dev/null
+++ b/arch/x86/configs/google/xfstest.config
@@ -0,0 +1,18 @@
+#Configurations required to run xfs tests
+CONFIG_MODULE_SIG=n
+CONFIG_MODULE_SIG_ALL=n
+CONFIG_SECURITY_LOADPIN=n
+CONFIG_SECURITY_LOADPIN_ENFORCE=n
+CONFIG_SECURITY_YAMA=n
+CONFIG_SECURITY_LOCKDOWN_LSM=n
+CONFIG_LOCK_DOWN_KERNEL_FORCE_NONE=n
+CONFIG_LSM=""
+CONFIG_SYSTEM_TRUSTED_KEYRING=n
+CONFIG_SECONDARY_TRUSTED_KEYRING=n
+CONFIG_VFAT_FS=y
+CONFIG_NLS_DEFAULT="utf8"
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_NLS_UTF8=y
+CONFIG_FAT_DEFAULT_UTF8=y
diff --git a/arch/x86/configs/lakitu_defconfig b/arch/x86/configs/lakitu_defconfig
index 1425df3..d900fa5 100644
--- a/arch/x86/configs/lakitu_defconfig
+++ b/arch/x86/configs/lakitu_defconfig
@@ -1,8 +1,8 @@
#
# Automatically generated file; DO NOT EDIT.
-# Linux/x86_64 5.15.58 Kernel Configuration
+# Linux/x86_64 5.15.90 Kernel Configuration
#
-CONFIG_CC_VERSION_TEXT="Chromium OS 14.0_pre445002_p20220217-r1 clang version 14.0.0 (/var/tmp/portage/sys-devel/llvm-14.0_pre445002_p20220217-r1/work/llvm-14.0_pre445002_p20220217/clang 18308e171b5b1dd99627a4d88c7d6c5ff21b8c96)"
+CONFIG_CC_VERSION_TEXT="Chromium OS 14.0_pre445002_p20220217-r3 clang version 14.0.0 (/var/tmp/portage/sys-devel/llvm-14.0_pre445002_p20220217-r3/work/llvm-14.0_pre445002_p20220217/clang 18308e171b5b1dd99627a4d88c7d6c5ff21b8c96)"
CONFIG_GCC_VERSION=0
CONFIG_CC_IS_CLANG=y
CONFIG_CLANG_VERSION=140000
@@ -486,9 +486,12 @@
# end of Processor type and features
CONFIG_CC_HAS_SLS=y
+CONFIG_CC_HAS_RETURN_THUNK=y
CONFIG_SPECULATION_MITIGATIONS=y
CONFIG_PAGE_TABLE_ISOLATION=y
CONFIG_RETPOLINE=y
+CONFIG_RETHUNK=y
+CONFIG_CPU_UNRET_ENTRY=y
CONFIG_CPU_IBPB_ENTRY=y
CONFIG_CPU_IBRS_ENTRY=y
# CONFIG_SLS is not set
@@ -669,6 +672,7 @@
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
+CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
@@ -977,7 +981,7 @@
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_USER_COMPAT is not set
-# CONFIG_XFRM_INTERFACE is not set
+CONFIG_XFRM_INTERFACE=m
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
CONFIG_XFRM_STATISTICS=y
@@ -1015,13 +1019,14 @@
# CONFIG_INET_ESP_OFFLOAD is not set
# CONFIG_INET_ESPINTCP is not set
CONFIG_INET_IPCOMP=m
+CONFIG_INET_TABLE_PERTURB_ORDER=16
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
# CONFIG_INET_RAW_DIAG is not set
-# CONFIG_INET_DIAG_DESTROY is not set
+CONFIG_INET_DIAG_DESTROY=y
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
CONFIG_TCP_CONG_CUBIC=y
@@ -1444,7 +1449,6 @@
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
-CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
@@ -3445,6 +3449,7 @@
# Memory initialization
#
CONFIG_CC_HAS_AUTO_VAR_INIT_PATTERN=y
+CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO_ENABLER=y
CONFIG_CC_HAS_AUTO_VAR_INIT_ZERO=y
# CONFIG_INIT_STACK_NONE is not set
# CONFIG_INIT_STACK_ALL_PATTERN is not set
@@ -3550,8 +3555,7 @@
# CONFIG_CRYPTO_CRC32_PCLMUL is not set
# CONFIG_CRYPTO_XXHASH is not set
# CONFIG_CRYPTO_BLAKE2B is not set
-# CONFIG_CRYPTO_BLAKE2S is not set
-CONFIG_CRYPTO_BLAKE2S_X86=m
+CONFIG_CRYPTO_BLAKE2S_X86=y
CONFIG_CRYPTO_CRCT10DIF=y
# CONFIG_CRYPTO_CRCT10DIF_PCLMUL is not set
CONFIG_CRYPTO_GHASH=y
@@ -3824,6 +3828,8 @@
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options
+CONFIG_AS_HAS_NON_CONST_LEB128=y
+
#
# Compile-time checks and compiler options
#
@@ -3834,6 +3840,8 @@
# CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is not set
CONFIG_DEBUG_INFO_DWARF4=y
CONFIG_DEBUG_INFO_BTF=y
+CONFIG_PAHOLE_HAS_SPLIT_BTF=y
+CONFIG_DEBUG_INFO_BTF_MODULES=y
# CONFIG_GDB_SCRIPTS is not set
CONFIG_FRAME_WARN=2048
# CONFIG_STRIP_ASM_SYMS is not set
@@ -4064,7 +4072,6 @@
#
# x86 Debugging
#
-CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_EARLY_PRINTK_USB=y
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
index 1f1a95f..c0ab0ff 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
+++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
@@ -19,6 +19,7 @@
#include <crypto/internal/simd.h>
#include <asm/cpu_device_id.h>
#include <asm/simd.h>
+#include <asm/unaligned.h>
#define GHASH_BLOCK_SIZE 16
#define GHASH_DIGEST_SIZE 16
@@ -54,15 +55,14 @@ static int ghash_setkey(struct crypto_shash *tfm,
const u8 *key, unsigned int keylen)
{
struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
- be128 *x = (be128 *)key;
u64 a, b;
if (keylen != GHASH_BLOCK_SIZE)
return -EINVAL;
/* perform multiplication by 'x' in GF(2^128) */
- a = be64_to_cpu(x->a);
- b = be64_to_cpu(x->b);
+ a = get_unaligned_be64(key);
+ b = get_unaligned_be64(key + 8);
ctx->shash.a = (b << 1) | (a >> 63);
ctx->shash.b = (a << 1) | (b >> 63);
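
The setkey() fix above stops casting the caller-supplied key to a be128 pointer (which may be unaligned) and uses get_unaligned_be64() instead. A rough, self-contained sketch of what such an alignment-safe big-endian load does; load_be64() is an invented stand-in, not the kernel helper:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t load_be64(const uint8_t *p)
{
	uint64_t v = 0;

	for (size_t i = 0; i < 8; i++)
		v = (v << 8) | p[i];	/* most significant byte first, any alignment */
	return v;
}

int main(void)
{
	uint8_t buf[17] = { 0 };
	uint8_t *key = buf + 1;		/* deliberately misaligned 16-byte key */

	key[7] = 0x2a;
	printf("a = %#llx, b = %#llx\n",
	       (unsigned long long)load_be64(key),
	       (unsigned long long)load_be64(key + 8));
	return 0;
}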
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 6b44263..e309e71 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1239,14 +1239,14 @@
SYM_CODE_END(asm_exc_nmi)
.pushsection .text, "ax"
-SYM_CODE_START(rewind_stack_do_exit)
+SYM_CODE_START(rewind_stack_and_make_dead)
/* Prevent any naive code from trying to unwind to our caller. */
xorl %ebp, %ebp
movl PER_CPU_VAR(cpu_current_top_of_stack), %esi
leal -TOP_OF_KERNEL_STACK_PADDING-PTREGS_SIZE(%esi), %esp
- call do_exit
+ call make_task_dead
1: jmp 1b
-SYM_CODE_END(rewind_stack_do_exit)
+SYM_CODE_END(rewind_stack_and_make_dead)
.popsection
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index a3af2a9..9f1333a 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1487,7 +1487,7 @@
#endif
.pushsection .text, "ax"
-SYM_CODE_START(rewind_stack_do_exit)
+SYM_CODE_START(rewind_stack_and_make_dead)
UNWIND_HINT_FUNC
/* Prevent any naive code from trying to unwind to our caller. */
xorl %ebp, %ebp
@@ -1496,6 +1496,6 @@
leaq -PTREGS_SIZE(%rax), %rsp
UNWIND_HINT_REGS
- call do_exit
-SYM_CODE_END(rewind_stack_do_exit)
+ call make_task_dead
+SYM_CODE_END(rewind_stack_and_make_dead)
.popsection
diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 9687a8a..d93d098 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -976,7 +976,7 @@ static int __init amd_core_pmu_init(void)
* numbered counter following it.
*/
for (i = 0; i < x86_pmu.num_counters - 1; i += 2)
- even_ctr_mask |= 1 << i;
+ even_ctr_mask |= BIT_ULL(i);
pair_constraint = (struct event_constraint)
__EVENT_CONSTRAINT(0, even_ctr_mask, 0,
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index f36fda2..b70e152 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -6109,6 +6109,7 @@ __init int intel_pmu_init(void)
break;
case INTEL_FAM6_SAPPHIRERAPIDS_X:
+ case INTEL_FAM6_EMERALDRAPIDS_X:
pmem = true;
x86_pmu.late_ack = true;
memcpy(hw_cache_event_ids, spr_hw_cache_event_ids, sizeof(hw_cache_event_ids));
diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index c72e368..7e16c59 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -1829,6 +1829,7 @@ static const struct x86_cpu_id intel_uncore_match[] __initconst = {
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &adl_uncore_init),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &adl_uncore_init),
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &spr_uncore_init),
+ X86_MATCH_INTEL_FAM6_MODEL(EMERALDRAPIDS_X, &spr_uncore_init),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_D, &snr_uncore_init),
{},
};
diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
index 96c775a..d23b552 100644
--- a/arch/x86/events/msr.c
+++ b/arch/x86/events/msr.c
@@ -69,6 +69,7 @@ static bool test_intel(int idx, void *data)
case INTEL_FAM6_BROADWELL_G:
case INTEL_FAM6_BROADWELL_X:
case INTEL_FAM6_SAPPHIRERAPIDS_X:
+ case INTEL_FAM6_EMERALDRAPIDS_X:
case INTEL_FAM6_ATOM_SILVERMONT:
case INTEL_FAM6_ATOM_SILVERMONT_D:
diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
index 77e3a47..840ee43 100644
--- a/arch/x86/events/rapl.c
+++ b/arch/x86/events/rapl.c
@@ -804,6 +804,8 @@ static const struct x86_cpu_id rapl_model_match[] __initconst = {
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, &model_hsx),
X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE_L, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, &model_skl),
+ X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, &model_skl),
+ X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &model_skl),
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &model_spr),
diff --git a/arch/x86/events/zhaoxin/core.c b/arch/x86/events/zhaoxin/core.c
index 949d845..3e9acda 100644
--- a/arch/x86/events/zhaoxin/core.c
+++ b/arch/x86/events/zhaoxin/core.c
@@ -541,7 +541,13 @@ __init int zhaoxin_pmu_init(void)
switch (boot_cpu_data.x86) {
case 0x06:
- if (boot_cpu_data.x86_model == 0x0f || boot_cpu_data.x86_model == 0x19) {
+ /*
+ * Support Zhaoxin CPU from ZXC series, exclude Nano series through FMS.
+ * Nano FMS: Family=6, Model=F, Stepping=[0-A][C-D]
+ * ZXC FMS: Family=6, Model=F, Stepping=E-F OR Family=6, Model=0x19, Stepping=0-3
+ */
+ if ((boot_cpu_data.x86_model == 0x0f && boot_cpu_data.x86_stepping >= 0x0e) ||
+ boot_cpu_data.x86_model == 0x19) {
x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f3cb8c8..e31c7e7 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -452,5 +452,6 @@
#define X86_BUG_MMIO_UNKNOWN X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
#define X86_BUG_RETBLEED X86_BUG(27) /* CPU is affected by RETBleed */
#define X86_BUG_EIBRS_PBRSB X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
+#define X86_BUG_SMT_RSB X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */
#endif /* _ASM_X86_CPUFEATURES_H */
diff --git a/arch/x86/include/asm/debugreg.h b/arch/x86/include/asm/debugreg.h
index cfdf307..9ed8343 100644
--- a/arch/x86/include/asm/debugreg.h
+++ b/arch/x86/include/asm/debugreg.h
@@ -39,7 +39,20 @@ static __always_inline unsigned long native_get_debugreg(int regno)
asm("mov %%db6, %0" :"=r" (val));
break;
case 7:
- asm("mov %%db7, %0" :"=r" (val));
+ /*
+ * Apply __FORCE_ORDER to DR7 reads to forbid re-ordering them
+ * with other code.
+ *
+ * This is needed because a DR7 access can cause a #VC exception
+ * when running under SEV-ES. Taking a #VC exception is not a
+ * safe thing to do just anywhere in the entry code and
+ * re-ordering might place the access into an unsafe location.
+ *
+ * This happened in the NMI handler, where the DR7 read was
+ * re-ordered to happen before the call to sev_es_ist_enter(),
+ * causing stack recursion.
+ */
+ asm volatile("mov %%db7, %0" : "=r" (val) : __FORCE_ORDER);
break;
default:
BUG();
@@ -66,7 +79,16 @@ static __always_inline void native_set_debugreg(int regno, unsigned long value)
asm("mov %0, %%db6" ::"r" (value));
break;
case 7:
- asm("mov %0, %%db7" ::"r" (value));
+ /*
+ * Apply __FORCE_ORDER to DR7 writes to forbid re-ordering them
+ * with other code.
+ *
+ * While it didn't happen with a DR7 write (see the DR7 read
+ * comment above which explains where it happened), add the
+ * __FORCE_ORDER here too to avoid similar problems in the
+ * future.
+ */
+ asm volatile("mov %0, %%db7" ::"r" (value), __FORCE_ORDER);
break;
default:
BUG();
diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h
index 1392296..b8e7ea9 100644
--- a/arch/x86/include/asm/intel-family.h
+++ b/arch/x86/include/asm/intel-family.h
@@ -113,6 +113,8 @@
#define INTEL_FAM6_ALDERLAKE 0x97 /* Golden Cove / Gracemont */
#define INTEL_FAM6_ALDERLAKE_L 0x9A /* Golden Cove / Gracemont */
+#define INTEL_FAM6_LUNARLAKE_M 0xBD
+
/* "Small Core" Processors (Atom/E-Core) */
#define INTEL_FAM6_ATOM_BONNELL 0x1C /* Diamondville, Pineview */
diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
index d130d21..1bf064a 100644
--- a/arch/x86/include/asm/microcode.h
+++ b/arch/x86/include/asm/microcode.h
@@ -130,14 +130,14 @@ static inline unsigned int x86_cpuid_family(void)
#ifdef CONFIG_MICROCODE
extern void __init load_ucode_bsp(void);
extern void load_ucode_ap(void);
-void reload_early_microcode(void);
+void reload_early_microcode(unsigned int cpu);
extern bool get_builtin_firmware(struct cpio_data *cd, const char *name);
extern bool initrd_gone;
void microcode_bsp_resume(void);
#else
static inline void __init load_ucode_bsp(void) { }
static inline void load_ucode_ap(void) { }
-static inline void reload_early_microcode(void) { }
+static inline void reload_early_microcode(unsigned int cpu) { }
static inline void microcode_bsp_resume(void) { }
static inline bool
get_builtin_firmware(struct cpio_data *cd, const char *name) { return false; }
diff --git a/arch/x86/include/asm/microcode_amd.h b/arch/x86/include/asm/microcode_amd.h
index 7063b5a..a645b25 100644
--- a/arch/x86/include/asm/microcode_amd.h
+++ b/arch/x86/include/asm/microcode_amd.h
@@ -47,12 +47,12 @@ struct microcode_amd {
extern void __init load_ucode_amd_bsp(unsigned int family);
extern void load_ucode_amd_ap(unsigned int family);
extern int __init save_microcode_in_initrd_amd(unsigned int family);
-void reload_ucode_amd(void);
+void reload_ucode_amd(unsigned int cpu);
#else
static inline void __init load_ucode_amd_bsp(unsigned int family) {}
static inline void load_ucode_amd_ap(unsigned int family) {}
static inline int __init
save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
-static inline void reload_ucode_amd(void) {}
+static inline void reload_ucode_amd(unsigned int cpu) {}
#endif
#endif /* _ASM_X86_MICROCODE_AMD_H */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index a9452ad..e8b8991 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -54,6 +54,10 @@
#define SPEC_CTRL_RRSBA_DIS_S_SHIFT 6 /* Disable RRSBA behavior */
#define SPEC_CTRL_RRSBA_DIS_S BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
+/* A mask for bits which the kernel toggles when controlling mitigations */
+#define SPEC_CTRL_MITIGATIONS_MASK (SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD \
+ | SPEC_CTRL_RRSBA_DIS_S)
+
#define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
#define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 577f342..3e3bd5b 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -834,8 +834,9 @@ bool xen_set_default_idle(void);
#define xen_set_default_idle 0
#endif
-void stop_this_cpu(void *dummy);
-void microcode_check(void);
+void __noreturn stop_this_cpu(void *dummy);
+void microcode_check(struct cpuinfo_x86 *prev_info);
+void store_cpu_caps(struct cpuinfo_x86 *info);
enum l1tf_mitigations {
L1TF_MITIGATION_OFF,
diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h
index 04c17be9..bc5b4d7 100644
--- a/arch/x86/include/asm/reboot.h
+++ b/arch/x86/include/asm/reboot.h
@@ -25,6 +25,8 @@ void __noreturn machine_real_restart(unsigned int type);
#define MRR_BIOS 0
#define MRR_APM 1
+void cpu_emergency_disable_virtualization(void);
+
typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
void nmi_panic_self_stop(struct pt_regs *regs);
void nmi_shootdown_cpus(nmi_shootdown_cb callback);
diff --git a/arch/x86/include/asm/resctrl.h b/arch/x86/include/asm/resctrl.h
index d60ed06..b9ccdf5 100644
--- a/arch/x86/include/asm/resctrl.h
+++ b/arch/x86/include/asm/resctrl.h
@@ -51,7 +51,7 @@ DECLARE_STATIC_KEY_FALSE(rdt_mon_enable_key);
* simple as possible.
* Must be called with preemption disabled.
*/
-static void __resctrl_sched_in(void)
+static inline void __resctrl_sched_in(struct task_struct *tsk)
{
struct resctrl_pqr_state *state = this_cpu_ptr(&pqr_state);
u32 closid = state->default_closid;
@@ -63,13 +63,13 @@ static void __resctrl_sched_in(void)
* Else use the closid/rmid assigned to this cpu.
*/
if (static_branch_likely(&rdt_alloc_enable_key)) {
- tmp = READ_ONCE(current->closid);
+ tmp = READ_ONCE(tsk->closid);
if (tmp)
closid = tmp;
}
if (static_branch_likely(&rdt_mon_enable_key)) {
- tmp = READ_ONCE(current->rmid);
+ tmp = READ_ONCE(tsk->rmid);
if (tmp)
rmid = tmp;
}
@@ -81,17 +81,17 @@ static void __resctrl_sched_in(void)
}
}
-static inline void resctrl_sched_in(void)
+static inline void resctrl_sched_in(struct task_struct *tsk)
{
if (static_branch_likely(&rdt_enable_key))
- __resctrl_sched_in();
+ __resctrl_sched_in(tsk);
}
void resctrl_cpu_detect(struct cpuinfo_x86 *c);
#else
-static inline void resctrl_sched_in(void) {}
+static inline void resctrl_sched_in(struct task_struct *tsk) {}
static inline void resctrl_cpu_detect(struct cpuinfo_x86 *c) {}
#endif /* CONFIG_X86_CPU_RESCTRL */
diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h
index 8757078..3b12e6b 100644
--- a/arch/x86/include/asm/virtext.h
+++ b/arch/x86/include/asm/virtext.h
@@ -126,7 +126,21 @@ static inline void cpu_svm_disable(void)
wrmsrl(MSR_VM_HSAVE_PA, 0);
rdmsrl(MSR_EFER, efer);
- wrmsrl(MSR_EFER, efer & ~EFER_SVME);
+ if (efer & EFER_SVME) {
+ /*
+ * Force GIF=1 prior to disabling SVM to ensure INIT and NMI
+ * aren't blocked, e.g. if a fatal error occurred between CLGI
+ * and STGI. Note, STGI may #UD if SVM is disabled from NMI
+ * context between reading EFER and executing STGI. In that
+ * case, GIF must already be set, otherwise the NMI would have
+ * been blocked, so just eat the fault.
+ */
+ asm_volatile_goto("1: stgi\n\t"
+ _ASM_EXTABLE(1b, %l[fault])
+ ::: "memory" : fault);
+fault:
+ wrmsrl(MSR_EFER, efer & ~EFER_SVME);
+ }
}
/** Makes sure SVM is disabled, if it is supported on the CPU
diff --git a/arch/x86/kernel/acpi/cstate.c b/arch/x86/kernel/acpi/cstate.c
index 7de599e..7945eae 100644
--- a/arch/x86/kernel/acpi/cstate.c
+++ b/arch/x86/kernel/acpi/cstate.c
@@ -79,6 +79,21 @@ void acpi_processor_power_init_bm_check(struct acpi_processor_flags *flags,
*/
flags->bm_control = 0;
}
+ if (c->x86_vendor == X86_VENDOR_AMD && c->x86 >= 0x17) {
+ /*
+ * For all AMD Zen or newer CPUs that support C3, caches
+ * should not be flushed by software while entering C3
+ * type state. Set bm->check to 1 so that kernel doesn't
+ * need to execute cache flush operation.
+ */
+ flags->bm_check = 1;
+ /*
+ * In current AMD C state implementation ARB_DIS is no longer
+ * used. So set bm_control to zero to indicate ARB_DIS is not
+ * required while entering C3 type state.
+ */
+ flags->bm_control = 0;
+ }
}
EXPORT_SYMBOL(acpi_processor_power_init_bm_check);
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index c30e320..83bf26e 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -904,6 +904,15 @@ void init_spectral_chicken(struct cpuinfo_x86 *c)
}
}
#endif
+ /*
+ * Work around Erratum 1386. The XSAVES instruction malfunctions in
+ * certain circumstances on Zen1/2 uarch, and not all parts have had
+ * updated microcode at the time of writing (March 2023).
+ *
+ * Affected parts all have no supervisor XSAVE states, meaning that
+ * the XSAVEC instruction (which works fine) is equivalent.
+ */
+ clear_cpu_cap(c, X86_FEATURE_XSAVES);
}
static void init_amd_zn(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 544e6c6..2627e97 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -144,9 +144,17 @@ void __init check_bugs(void)
* have unknown values. AMD64_LS_CFG MSR is cached in the early AMD
* init code as it is not enumerated and depends on the family.
*/
- if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
+ if (cpu_feature_enabled(X86_FEATURE_MSR_SPEC_CTRL)) {
rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+ /*
+ * Previously running kernel (kexec), may have some controls
+ * turned ON. Clear them and let the mitigations setup below
+ * rediscover them based on configuration.
+ */
+ x86_spec_ctrl_base &= ~SPEC_CTRL_MITIGATIONS_MASK;
+ }
+
/* Select the proper CPU mitigations before patching alternatives: */
spectre_v1_select_mitigation();
spectre_v2_select_mitigation();
@@ -1095,14 +1103,18 @@ spectre_v2_parse_user_cmdline(void)
return SPECTRE_V2_USER_CMD_AUTO;
}
-static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
+static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
{
- return mode == SPECTRE_V2_IBRS ||
- mode == SPECTRE_V2_EIBRS ||
+ return mode == SPECTRE_V2_EIBRS ||
mode == SPECTRE_V2_EIBRS_RETPOLINE ||
mode == SPECTRE_V2_EIBRS_LFENCE;
}
+static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
+{
+ return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
+}
+
static void __init
spectre_v2_user_select_mitigation(void)
{
@@ -1165,12 +1177,19 @@ spectre_v2_user_select_mitigation(void)
}
/*
- * If no STIBP, IBRS or enhanced IBRS is enabled, or SMT impossible,
- * STIBP is not required.
+ * If no STIBP, enhanced IBRS is enabled, or SMT impossible, STIBP
+ * is not required.
+ *
+ * Enhanced IBRS also protects against cross-thread branch target
+ * injection in user-mode as the IBRS bit remains always set which
+ * implicitly enables cross-thread protections. However, in legacy IBRS
+ * mode, the IBRS bit is set only on kernel entry and cleared on return
+ * to userspace. This disables the implicit cross-thread protection,
+ * so allow for STIBP to be selected in that case.
*/
if (!boot_cpu_has(X86_FEATURE_STIBP) ||
!smt_possible ||
- spectre_v2_in_ibrs_mode(spectre_v2_enabled))
+ spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return;
/*
@@ -2297,7 +2316,7 @@ static ssize_t mmio_stale_data_show_state(char *buf)
static char *stibp_state(void)
{
- if (spectre_v2_in_ibrs_mode(spectre_v2_enabled))
+ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return "";
switch (spectre_v2_user_stibp) {
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 82d1483..ac6a1ad 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1126,6 +1126,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
#define MMIO_SBDS BIT(2)
/* CPU is affected by RETbleed, speculating where you would not expect it */
#define RETBLEED BIT(3)
+/* CPU is affected by SMT (cross-thread) return predictions */
+#define SMT_RSB BIT(4)
static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS),
@@ -1157,8 +1159,8 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
VULNBL_AMD(0x15, RETBLEED),
VULNBL_AMD(0x16, RETBLEED),
- VULNBL_AMD(0x17, RETBLEED),
- VULNBL_HYGON(0x18, RETBLEED),
+ VULNBL_AMD(0x17, RETBLEED | SMT_RSB),
+ VULNBL_HYGON(0x18, RETBLEED | SMT_RSB),
{}
};
@@ -1276,6 +1278,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
!(ia32_cap & ARCH_CAP_PBRSB_NO))
setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+ if (cpu_matches(cpu_vuln_blacklist, SMT_RSB))
+ setup_force_cpu_bug(X86_BUG_SMT_RSB);
+
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;
@@ -2148,35 +2153,52 @@ void cpu_init_secondary(void)
}
#endif
-/*
+#ifdef CONFIG_MICROCODE_LATE_LOADING
+/**
+ * store_cpu_caps() - Store a snapshot of CPU capabilities
+ * @curr_info: Where to store the capability snapshot
+ *
+ * Return: None
+ */
+void store_cpu_caps(struct cpuinfo_x86 *curr_info)
+{
+ /* Reload CPUID max function as it might've changed. */
+ curr_info->cpuid_level = cpuid_eax(0);
+
+ /* Copy all capability leafs and pick up the synthetic ones. */
+ memcpy(&curr_info->x86_capability, &boot_cpu_data.x86_capability,
+ sizeof(curr_info->x86_capability));
+
+ /* Get the hardware CPUID leafs */
+ get_cpu_cap(curr_info);
+}
+
+/**
+ * microcode_check() - Check if any CPU capabilities changed after an update.
+ * @prev_info: CPU capabilities stored before an update.
+ *
* The microcode loader calls this upon late microcode load to recheck features,
* only when microcode has been updated. Caller holds microcode_mutex and CPU
* hotplug lock.
+ *
+ * Return: None
*/
-void microcode_check(void)
+void microcode_check(struct cpuinfo_x86 *prev_info)
{
- struct cpuinfo_x86 info;
+ struct cpuinfo_x86 curr_info;
perf_check_microcode();
- /* Reload CPUID max function as it might've changed. */
- info.cpuid_level = cpuid_eax(0);
+ store_cpu_caps(&curr_info);
- /*
- * Copy all capability leafs to pick up the synthetic ones so that
- * memcmp() below doesn't fail on that. The ones coming from CPUID will
- * get overwritten in get_cpu_cap().
- */
- memcpy(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability));
-
- get_cpu_cap(&info);
-
- if (!memcmp(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability)))
+ if (!memcmp(&prev_info->x86_capability, &curr_info.x86_capability,
+ sizeof(prev_info->x86_capability)))
return;
pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
}
+#endif
/*
* Invoked from core CPU hotplug code after hotplug operations
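
The common.c refactor above splits the capability snapshot out into store_cpu_caps() so the caller can capture the pre-update state and microcode_check() can compare against it. A minimal userspace sketch of that snapshot-then-compare flow; the structure layout and the stand-in for CPUID are invented for illustration:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NCAPINTS 4

struct cpu_caps {
	uint32_t x86_capability[NCAPINTS];
};

static void store_cpu_caps(struct cpu_caps *c)
{
	/* stand-in for re-reading CPUID and the synthetic capability bits */
	static const uint32_t current[NCAPINTS] = { 0x1234, 0x0, 0x80, 0x1 };

	memcpy(c->x86_capability, current, sizeof(current));
}

static void microcode_check(const struct cpu_caps *prev)
{
	struct cpu_caps curr;

	store_cpu_caps(&curr);
	if (memcmp(prev->x86_capability, curr.x86_capability, sizeof(curr.x86_capability)))
		fprintf(stderr, "CPU features changed after loading microcode\n");
}

int main(void)
{
	struct cpu_caps before;

	store_cpu_caps(&before);	/* snapshot taken before the late load */
	/* ... late microcode load would happen here ... */
	microcode_check(&before);
	return 0;
}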
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 5ee82fd3..a072772 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -2302,6 +2302,7 @@ static void mce_restart(void)
{
mce_timer_delete_all();
on_each_cpu(mce_cpu_restart, NULL, 1);
+ mce_schedule_work();
}
/* Toggle features for corrected errors */
diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
index 7c758f1..9bffe40 100644
--- a/arch/x86/kernel/cpu/microcode/amd.c
+++ b/arch/x86/kernel/cpu/microcode/amd.c
@@ -55,7 +55,9 @@ struct cont_desc {
};
static u32 ucode_new_rev;
-static u8 amd_ucode_patch[PATCH_MAX_SIZE];
+
+/* One blob per node. */
+static u8 amd_ucode_patch[MAX_NUMNODES][PATCH_MAX_SIZE];
/*
* Microcode patch container file is prepended to the initrd in cpio
@@ -428,7 +430,7 @@ apply_microcode_early_amd(u32 cpuid_1_eax, void *ucode, size_t size, bool save_p
patch = (u8 (*)[PATCH_MAX_SIZE])__pa_nodebug(&amd_ucode_patch);
#else
new_rev = &ucode_new_rev;
- patch = &amd_ucode_patch;
+ patch = &amd_ucode_patch[0];
#endif
desc.cpuid_1_eax = cpuid_1_eax;
@@ -547,8 +549,7 @@ void load_ucode_amd_ap(unsigned int cpuid_1_eax)
apply_microcode_early_amd(cpuid_1_eax, cp.data, cp.size, false);
}
-static enum ucode_state
-load_microcode_amd(bool save, u8 family, const u8 *data, size_t size);
+static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size);
int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
{
@@ -566,19 +567,19 @@ int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
if (!desc.mc)
return -EINVAL;
- ret = load_microcode_amd(true, x86_family(cpuid_1_eax), desc.data, desc.size);
+ ret = load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
if (ret > UCODE_UPDATED)
return -EINVAL;
return 0;
}
-void reload_ucode_amd(void)
+void reload_ucode_amd(unsigned int cpu)
{
- struct microcode_amd *mc;
u32 rev, dummy __always_unused;
+ struct microcode_amd *mc;
- mc = (struct microcode_amd *)amd_ucode_patch;
+ mc = (struct microcode_amd *)amd_ucode_patch[cpu_to_node(cpu)];
rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
@@ -844,9 +845,10 @@ static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
return UCODE_OK;
}
-static enum ucode_state
-load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
+static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size)
{
+ struct cpuinfo_x86 *c;
+ unsigned int nid, cpu;
struct ucode_patch *p;
enum ucode_state ret;
@@ -859,23 +861,23 @@ load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
return ret;
}
- p = find_patch(0);
- if (!p) {
- return ret;
- } else {
- if (boot_cpu_data.microcode >= p->patch_id)
- return ret;
+ for_each_node(nid) {
+ cpu = cpumask_first(cpumask_of_node(nid));
+ c = &cpu_data(cpu);
+
+ p = find_patch(cpu);
+ if (!p)
+ continue;
+
+ if (c->microcode >= p->patch_id)
+ continue;
ret = UCODE_NEW;
+
+ memset(&amd_ucode_patch[nid], 0, PATCH_MAX_SIZE);
+ memcpy(&amd_ucode_patch[nid], p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
}
- /* save BSP's matching patch for early load */
- if (!save)
- return ret;
-
- memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
- memcpy(amd_ucode_patch, p->data, min_t(u32, p->size, PATCH_MAX_SIZE));
-
return ret;
}
@@ -900,12 +902,11 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device,
{
char fw_name[36] = "amd-ucode/microcode_amd.bin";
struct cpuinfo_x86 *c = &cpu_data(cpu);
- bool bsp = c->cpu_index == boot_cpu_data.cpu_index;
enum ucode_state ret = UCODE_NFOUND;
const struct firmware *fw;
/* reload ucode container only on the boot cpu */
- if (!refresh_fw || !bsp)
+ if (!refresh_fw)
return UCODE_OK;
if (c->x86 >= 0x15)
@@ -920,7 +921,7 @@ static enum ucode_state request_microcode_amd(int cpu, struct device *device,
if (!verify_container(fw->data, fw->size, false))
goto fw_release;
- ret = load_microcode_amd(bsp, c->x86, fw->data, fw->size);
+ ret = load_microcode_amd(c->x86, fw->data, fw->size);
fw_release:
release_firmware(fw);
diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index 150ebfb..d2f00d7 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -315,7 +315,7 @@ struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa)
#endif
}
-void reload_early_microcode(void)
+void reload_early_microcode(unsigned int cpu)
{
int vendor, family;
@@ -329,7 +329,7 @@ void reload_early_microcode(void)
break;
case X86_VENDOR_AMD:
if (family >= 0x10)
- reload_ucode_amd();
+ reload_ucode_amd(cpu);
break;
default:
break;
@@ -390,101 +390,10 @@ static int apply_microcode_on_target(int cpu)
return ret;
}
-#ifdef CONFIG_MICROCODE_OLD_INTERFACE
-static int do_microcode_update(const void __user *buf, size_t size)
-{
- int error = 0;
- int cpu;
-
- for_each_online_cpu(cpu) {
- struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
- enum ucode_state ustate;
-
- if (!uci->valid)
- continue;
-
- ustate = microcode_ops->request_microcode_user(cpu, buf, size);
- if (ustate == UCODE_ERROR) {
- error = -1;
- break;
- } else if (ustate == UCODE_NEW) {
- apply_microcode_on_target(cpu);
- }
- }
-
- return error;
-}
-
-static int microcode_open(struct inode *inode, struct file *file)
-{
- return capable(CAP_SYS_RAWIO) ? stream_open(inode, file) : -EPERM;
-}
-
-static ssize_t microcode_write(struct file *file, const char __user *buf,
- size_t len, loff_t *ppos)
-{
- ssize_t ret = -EINVAL;
- unsigned long nr_pages = totalram_pages();
-
- if ((len >> PAGE_SHIFT) > nr_pages) {
- pr_err("too much data (max %ld pages)\n", nr_pages);
- return ret;
- }
-
- cpus_read_lock();
- mutex_lock(&microcode_mutex);
-
- if (do_microcode_update(buf, len) == 0)
- ret = (ssize_t)len;
-
- if (ret > 0)
- perf_check_microcode();
-
- mutex_unlock(&microcode_mutex);
- cpus_read_unlock();
-
- return ret;
-}
-
-static const struct file_operations microcode_fops = {
- .owner = THIS_MODULE,
- .write = microcode_write,
- .open = microcode_open,
- .llseek = no_llseek,
-};
-
-static struct miscdevice microcode_dev = {
- .minor = MICROCODE_MINOR,
- .name = "microcode",
- .nodename = "cpu/microcode",
- .fops = &microcode_fops,
-};
-
-static int __init microcode_dev_init(void)
-{
- int error;
-
- error = misc_register(&microcode_dev);
- if (error) {
- pr_err("can't misc_register on minor=%d\n", MICROCODE_MINOR);
- return error;
- }
-
- return 0;
-}
-
-static void __exit microcode_dev_exit(void)
-{
- misc_deregister(&microcode_dev);
-}
-#else
-#define microcode_dev_init() 0
-#define microcode_dev_exit() do { } while (0)
-#endif
-
/* fake device for request_firmware */
static struct platform_device *microcode_pdev;
+#ifdef CONFIG_MICROCODE_LATE_LOADING
/*
* Late loading dance. Why the heavy-handed stomp_machine effort?
*
@@ -599,16 +508,27 @@ static int __reload_late(void *info)
*/
static int microcode_reload_late(void)
{
- int ret;
+ int old = boot_cpu_data.microcode, ret;
+ struct cpuinfo_x86 prev_info;
atomic_set(&late_cpus_in, 0);
atomic_set(&late_cpus_out, 0);
- ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
- if (ret == 0)
- microcode_check();
+ /*
+ * Take a snapshot before the microcode update in order to compare and
+ * check whether any bits changed after an update.
+ */
+ store_cpu_caps(&prev_info);
- pr_info("Reload completed, microcode revision: 0x%x\n", boot_cpu_data.microcode);
+ ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
+ if (!ret) {
+ pr_info("Reload succeeded, microcode revision: 0x%x -> 0x%x\n",
+ old, boot_cpu_data.microcode);
+ microcode_check(&prev_info);
+ } else {
+ pr_info("Reload failed, current microcode revision: 0x%x\n",
+ boot_cpu_data.microcode);
+ }
return ret;
}
@@ -652,6 +572,9 @@ static ssize_t reload_store(struct device *dev,
return ret;
}
+static DEVICE_ATTR_WO(reload);
+#endif
+
static ssize_t version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -668,7 +591,6 @@ static ssize_t pf_show(struct device *dev,
return sprintf(buf, "0x%x\n", uci->cpu_sig.pf);
}
-static DEVICE_ATTR_WO(reload);
static DEVICE_ATTR(version, 0444, version_show, NULL);
static DEVICE_ATTR(processor_flags, 0444, pf_show, NULL);
@@ -785,7 +707,7 @@ void microcode_bsp_resume(void)
if (uci->valid && uci->mc)
microcode_ops->apply_microcode(cpu);
else if (!uci->mc)
- reload_early_microcode();
+ reload_early_microcode(cpu);
}
static struct syscore_ops mc_syscore_ops = {
@@ -821,7 +743,9 @@ static int mc_cpu_down_prep(unsigned int cpu)
}
static struct attribute *cpu_root_microcode_attrs[] = {
+#ifdef CONFIG_MICROCODE_LATE_LOADING
&dev_attr_reload.attr,
+#endif
NULL
};
@@ -873,10 +797,6 @@ static int __init microcode_init(void)
goto out_driver;
}
- error = microcode_dev_init();
- if (error)
- goto out_ucode_group;
-
register_syscore_ops(&mc_syscore_ops);
cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:starting",
mc_cpu_starting, NULL);
@@ -887,10 +807,6 @@ static int __init microcode_init(void)
return 0;
- out_ucode_group:
- sysfs_remove_group(&cpu_subsys.dev_root->kobj,
- &cpu_root_microcode_group);
-
out_driver:
cpus_read_lock();
mutex_lock(&microcode_mutex);
diff --git a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
index 8766627..000e146 100644
--- a/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
+++ b/arch/x86/kernel/cpu/resctrl/ctrlmondata.c
@@ -353,7 +353,6 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
{
struct resctrl_schema *s;
struct rdtgroup *rdtgrp;
- struct rdt_domain *dom;
struct rdt_resource *r;
char *tok, *resname;
int ret = 0;
@@ -382,10 +381,7 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
goto out;
}
- list_for_each_entry(s, &resctrl_schema_all, list) {
- list_for_each_entry(dom, &s->res->domains, list)
- memset(dom->staged_config, 0, sizeof(dom->staged_config));
- }
+ rdt_staged_configs_clear();
while ((tok = strsep(&buf, "\n")) != NULL) {
resname = strim(strsep(&tok, ":"));
@@ -422,6 +418,7 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
}
out:
+ rdt_staged_configs_clear();
rdtgroup_kn_unlock(of->kn);
cpus_read_unlock();
return ret ?: nbytes;
diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
index 1d64718..218d888 100644
--- a/arch/x86/kernel/cpu/resctrl/internal.h
+++ b/arch/x86/kernel/cpu/resctrl/internal.h
@@ -550,5 +550,6 @@ bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d);
void __check_limbo(struct rdt_domain *d, bool force_free);
void rdt_domain_reconfigure_cdp(struct rdt_resource *r);
void __init thread_throttle_mode_init(void);
+void rdt_staged_configs_clear(void);
#endif /* _ASM_X86_RESCTRL_INTERNAL_H */
diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index 4f5d79e..421d317 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -78,6 +78,19 @@ void rdt_last_cmd_printf(const char *fmt, ...)
va_end(ap);
}
+void rdt_staged_configs_clear(void)
+{
+ struct rdt_resource *r;
+ struct rdt_domain *dom;
+
+ lockdep_assert_held(&rdtgroup_mutex);
+
+ for_each_alloc_capable_rdt_resource(r) {
+ list_for_each_entry(dom, &r->domains, list)
+ memset(dom->staged_config, 0, sizeof(dom->staged_config));
+ }
+}
+
/*
* Trivial allocator for CLOSIDs. Since h/w only supports a small number,
* we can keep a bitmap of free CLOSIDs in a single integer.
@@ -314,7 +327,7 @@ static void update_cpu_closid_rmid(void *info)
* executing task might have its own closid selected. Just reuse
* the context switch code.
*/
- resctrl_sched_in();
+ resctrl_sched_in(current);
}
/*
@@ -535,7 +548,7 @@ static void _update_task_closid_rmid(void *task)
* Otherwise, the MSR is updated when the task is scheduled in.
*/
if (task == current)
- resctrl_sched_in();
+ resctrl_sched_in(task);
}
static void update_task_closid_rmid(struct task_struct *t)
@@ -2813,7 +2826,9 @@ static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
{
struct resctrl_schema *s;
struct rdt_resource *r;
- int ret;
+ int ret = 0;
+
+ rdt_staged_configs_clear();
list_for_each_entry(s, &resctrl_schema_all, list) {
r = s->res;
@@ -2822,20 +2837,22 @@ static int rdtgroup_init_alloc(struct rdtgroup *rdtgrp)
} else {
ret = rdtgroup_init_cat(s, rdtgrp->closid);
if (ret < 0)
- return ret;
+ goto out;
}
ret = resctrl_arch_update_domains(r, rdtgrp->closid);
if (ret < 0) {
rdt_last_cmd_puts("Failed to initialize allocations\n");
- return ret;
+ goto out;
}
}
rdtgrp->mode = RDT_MODE_SHAREABLE;
- return 0;
+out:
+ rdt_staged_configs_clear();
+ return ret;
}
static int mkdir_rdt_prepare(struct kernfs_node *parent_kn,
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 03a454d..97b9212 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -37,7 +37,6 @@
#include <linux/kdebug.h>
#include <asm/cpu.h>
#include <asm/reboot.h>
-#include <asm/virtext.h>
#include <asm/intel_pt.h>
#include <asm/crash.h>
#include <asm/cmdline.h>
@@ -81,15 +80,6 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
*/
cpu_crash_vmclear_loaded_vmcss();
- /* Disable VMX or SVM if needed.
- *
- * We need to disable virtualization on all CPUs.
- * Having VMX or SVM enabled on any CPU may break rebooting
- * after the kdump kernel has finished its task.
- */
- cpu_emergency_vmxoff();
- cpu_emergency_svm_disable();
-
/*
* Disable Intel PT to stop its logging
*/
@@ -148,12 +138,7 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
*/
cpu_crash_vmclear_loaded_vmcss();
- /* Booting kdump kernel with VMX or SVM enabled won't work,
- * because (among other limitations) we can't disable paging
- * with the virt flags.
- */
- cpu_emergency_vmxoff();
- cpu_emergency_svm_disable();
+ cpu_emergency_disable_virtualization();
/*
* Disable Intel PT to stop its logging
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index ea4fe19..53de044 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -351,7 +351,7 @@ unsigned long oops_begin(void)
}
NOKPROBE_SYMBOL(oops_begin);
-void __noreturn rewind_stack_do_exit(int signr);
+void __noreturn rewind_stack_and_make_dead(int signr);
void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
{
@@ -386,7 +386,7 @@ void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
* reuse the task stack and that existing poisons are invalid.
*/
kasan_unpoison_task_stack(current);
- rewind_stack_do_exit(signr);
+ rewind_stack_and_make_dead(signr);
}
NOKPROBE_SYMBOL(oops_end);
diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
index 64e2992..c949424 100644
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -138,9 +138,6 @@ static void __init fpu__init_system_generic(void)
unsigned int fpu_kernel_xstate_size __ro_after_init;
EXPORT_SYMBOL_GPL(fpu_kernel_xstate_size);
-/* Get alignment of the TYPE. */
-#define TYPE_ALIGN(TYPE) offsetof(struct { char x; TYPE test; }, test)
-
/*
* Enforce that 'MEMBER' is the last field of 'TYPE'.
*
@@ -148,8 +145,8 @@ EXPORT_SYMBOL_GPL(fpu_kernel_xstate_size);
* because that's how C aligns structs.
*/
#define CHECK_MEMBER_AT_END_OF(TYPE, MEMBER) \
- BUILD_BUG_ON(sizeof(TYPE) != ALIGN(offsetofend(TYPE, MEMBER), \
- TYPE_ALIGN(TYPE)))
+ BUILD_BUG_ON(sizeof(TYPE) != \
+ ALIGN(offsetofend(TYPE, MEMBER), _Alignof(TYPE)))
/*
* We append the 'struct fpu' to the task_struct:
diff --git a/arch/x86/kernel/i8259.c b/arch/x86/kernel/i8259.c
index 15aefa3..f91e5e3 100644
--- a/arch/x86/kernel/i8259.c
+++ b/arch/x86/kernel/i8259.c
@@ -114,6 +114,7 @@ static void make_8259A_irq(unsigned int irq)
disable_irq_nosync(irq);
io_apic_irqs &= ~(1<<irq);
irq_set_chip_and_handler(irq, &i8259A_chip, handle_level_irq);
+ irq_set_status_flags(irq, IRQ_LEVEL);
enable_irq(irq);
lapic_assign_legacy_vector(irq, true);
}
diff --git a/arch/x86/kernel/irqinit.c b/arch/x86/kernel/irqinit.c
index beb1bad..c683666 100644
--- a/arch/x86/kernel/irqinit.c
+++ b/arch/x86/kernel/irqinit.c
@@ -65,8 +65,10 @@ void __init init_ISA_irqs(void)
legacy_pic->init(0);
- for (i = 0; i < nr_legacy_irqs(); i++)
+ for (i = 0; i < nr_legacy_irqs(); i++) {
irq_set_chip_and_handler(i, chip, handle_level_irq);
+ irq_set_status_flags(i, IRQ_LEVEL);
+ }
}
void __init init_IRQ(void)
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index a912107..98d0e20 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -46,8 +46,8 @@ unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr)
/* This function only handles jump-optimized kprobe */
if (kp && kprobe_optimized(kp)) {
op = container_of(kp, struct optimized_kprobe, kp);
- /* If op->list is not empty, op is under optimizing */
- if (list_empty(&op->list))
+ /* If op is optimized or under unoptimizing */
+ if (list_empty(&op->list) || optprobe_queued_unopt(op))
goto found;
}
}
@@ -346,7 +346,7 @@ int arch_check_optimized_kprobe(struct optimized_kprobe *op)
for (i = 1; i < op->optinsn.size; i++) {
p = get_kprobe(op->kp.addr + i);
- if (p && !kprobe_disabled(p))
+ if (p && !kprobe_disarmed(p))
return -EEXIST;
}
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index bc9b4b9..e6b28c6 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -731,7 +731,7 @@ bool xen_set_default_idle(void)
}
#endif
-void stop_this_cpu(void *dummy)
+void __noreturn stop_this_cpu(void *dummy)
{
local_irq_disable();
/*
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
index 4f2f54e..d4a1303 100644
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -216,7 +216,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
switch_fpu_finish(next_fpu);
/* Load the Intel cache allocation PQR MSR. */
- resctrl_sched_in();
+ resctrl_sched_in(next_p);
return prev_p;
}
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index ec0d836..b8fe38c 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -656,7 +656,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
}
/* Load the Intel cache allocation PQR MSR. */
- resctrl_sched_in();
+ resctrl_sched_in(next_p);
return prev_p;
}
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index fa700b4..deedd77 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -528,33 +528,29 @@ static inline void kb_wait(void)
}
}
-static void vmxoff_nmi(int cpu, struct pt_regs *regs)
-{
- cpu_emergency_vmxoff();
-}
+static inline void nmi_shootdown_cpus_on_restart(void);
-/* Use NMIs as IPIs to tell all CPUs to disable virtualization */
-static void emergency_vmx_disable_all(void)
+static void emergency_reboot_disable_virtualization(void)
{
/* Just make sure we won't change CPUs while doing this */
local_irq_disable();
/*
- * Disable VMX on all CPUs before rebooting, otherwise we risk hanging
- * the machine, because the CPU blocks INIT when it's in VMX root.
+ * Disable virtualization on all CPUs before rebooting to avoid hanging
+ * the system, as VMX and SVM block INIT when running in the host.
*
* We can't take any locks and we may be on an inconsistent state, so
- * use NMIs as IPIs to tell the other CPUs to exit VMX root and halt.
+ * use NMIs as IPIs to tell the other CPUs to disable VMX/SVM and halt.
*
- * Do the NMI shootdown even if VMX if off on _this_ CPU, as that
- * doesn't prevent a different CPU from being in VMX root operation.
+ * Do the NMI shootdown even if virtualization is off on _this_ CPU, as
+ * other CPUs may have virtualization enabled.
*/
- if (cpu_has_vmx()) {
- /* Safely force _this_ CPU out of VMX root operation. */
- __cpu_emergency_vmxoff();
+ if (cpu_has_vmx() || cpu_has_svm(NULL)) {
+ /* Safely force _this_ CPU out of VMX/SVM operation. */
+ cpu_emergency_disable_virtualization();
- /* Halt and exit VMX root operation on the other CPUs. */
- nmi_shootdown_cpus(vmxoff_nmi);
+ /* Disable VMX/SVM and halt on other CPUs. */
+ nmi_shootdown_cpus_on_restart();
}
}
@@ -590,7 +586,7 @@ static void native_machine_emergency_restart(void)
unsigned short mode;
if (reboot_emergency)
- emergency_vmx_disable_all();
+ emergency_reboot_disable_virtualization();
tboot_shutdown(TB_SHUTDOWN_REBOOT);
@@ -795,6 +791,17 @@ void machine_crash_shutdown(struct pt_regs *regs)
/* This is the CPU performing the emergency shutdown work. */
int crashing_cpu = -1;
+/*
+ * Disable virtualization, i.e. VMX or SVM, to ensure INIT is recognized during
+ * reboot. VMX blocks INIT if the CPU is post-VMXON, and SVM blocks INIT if
+ * GIF=0, i.e. if the crash occurred between CLGI and STGI.
+ */
+void cpu_emergency_disable_virtualization(void)
+{
+ cpu_emergency_vmxoff();
+ cpu_emergency_svm_disable();
+}
+
#if defined(CONFIG_SMP)
static nmi_shootdown_cb shootdown_callback;
@@ -817,7 +824,14 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
return NMI_HANDLED;
local_irq_disable();
- shootdown_callback(cpu, regs);
+ if (shootdown_callback)
+ shootdown_callback(cpu, regs);
+
+ /*
+ * Prepare the CPU for reboot _after_ invoking the callback so that the
+ * callback can safely use virtualization instructions, e.g. VMCLEAR.
+ */
+ cpu_emergency_disable_virtualization();
atomic_dec(&waiting_for_crash_ipi);
/* Assume hlt works */
@@ -828,18 +842,32 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
return NMI_HANDLED;
}
-/*
- * Halt all other CPUs, calling the specified function on each of them
+/**
+ * nmi_shootdown_cpus - Stop other CPUs via NMI
+ * @callback: Optional callback to be invoked from the NMI handler
*
- * This function can be used to halt all other CPUs on crash
- * or emergency reboot time. The function passed as parameter
- * will be called inside a NMI handler on all CPUs.
+ * The NMI handler on the remote CPUs invokes @callback, if not
+ * NULL, first and then disables virtualization to ensure that
+ * INIT is recognized during reboot.
+ *
+ * nmi_shootdown_cpus() can only be invoked once. After the first
+ * invocation all other CPUs are stuck in crash_nmi_callback() and
+ * cannot respond to a second NMI.
*/
void nmi_shootdown_cpus(nmi_shootdown_cb callback)
{
unsigned long msecs;
+
local_irq_disable();
+ /*
+ * Avoid certain doom if a shootdown already occurred; re-registering
+ * the NMI handler will cause list corruption, modifying the callback
+ * will do who knows what, etc...
+ */
+ if (WARN_ON_ONCE(crash_ipi_issued))
+ return;
+
/* Make a note of crashing cpu. Will be used in NMI callback. */
crashing_cpu = safe_smp_processor_id();
@@ -867,7 +895,17 @@ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
msecs--;
}
- /* Leave the nmi callback set */
+ /*
+ * Leave the nmi callback set, shootdown is a one-time thing. Clearing
+ * the callback could result in a NULL pointer dereference if a CPU
+ * (finally) responds after the timeout expires.
+ */
+}
+
+static inline void nmi_shootdown_cpus_on_restart(void)
+{
+ if (!crash_ipi_issued)
+ nmi_shootdown_cpus(NULL);
}
/*
@@ -897,6 +935,8 @@ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
/* No other CPUs to shoot down */
}
+static inline void nmi_shootdown_cpus_on_restart(void) { }
+
void run_crash_ipi_callback(struct pt_regs *regs)
{
}
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 06db901..375b33e 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -32,7 +32,7 @@
#include <asm/mce.h>
#include <asm/trace/irq_vectors.h>
#include <asm/kexec.h>
-#include <asm/virtext.h>
+#include <asm/reboot.h>
/*
* Some notes on x86 processor bugs affecting SMP operation:
@@ -122,7 +122,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
return NMI_HANDLED;
- cpu_emergency_vmxoff();
+ cpu_emergency_disable_virtualization();
stop_this_cpu(NULL);
return NMI_HANDLED;
@@ -134,7 +134,7 @@ static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
DEFINE_IDTENTRY_SYSVEC(sysvec_reboot)
{
ack_APIC_irq();
- cpu_emergency_vmxoff();
+ cpu_emergency_disable_virtualization();
stop_this_cpu(NULL);
}
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 762b43f..a067c7c 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1846,16 +1846,19 @@ static u64 kvm_hv_flush_tlb(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
cpumask_clear(&hv_vcpu->tlb_flush);
- vcpu_mask = all_cpus ? NULL :
- sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
- vp_bitmap, vcpu_bitmap);
-
/*
* vcpu->arch.cr3 may not be up-to-date for running vCPUs so we can't
* analyze it here, flush TLB regardless of the specified address space.
*/
- kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
- NULL, vcpu_mask, &hv_vcpu->tlb_flush);
+ if (all_cpus) {
+ kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH_GUEST);
+ } else {
+ vcpu_mask = sparse_set_to_vcpu_mask(kvm, sparse_banks, valid_bank_mask,
+ vp_bitmap, vcpu_bitmap);
+
+ kvm_make_vcpus_request_mask(kvm, KVM_REQ_TLB_FLUSH_GUEST,
+ NULL, vcpu_mask, &hv_vcpu->tlb_flush);
+ }
ret_success:
/* We always do full TLB flush, set 'Reps completed' = 'Rep Count' */
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 8ea4658..40fc187 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1295,6 +1295,7 @@ void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high)
kvm_irq_delivery_to_apic(apic->vcpu->kvm, apic, &irq, NULL);
}
+EXPORT_SYMBOL_GPL(kvm_apic_send_ipi);
static u32 apic_get_tmcct(struct kvm_lapic *apic)
{
@@ -2127,10 +2128,14 @@ int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
break;
case APIC_SELF_IPI:
- if (apic_x2apic_mode(apic))
- kvm_apic_send_ipi(apic, APIC_DEST_SELF | (val & APIC_VECTOR_MASK), 0);
- else
+ /*
+ * Self-IPI exists only when x2APIC is enabled. Bits 7:0 hold
+ * the vector, everything else is reserved.
+ */
+ if (!apic_x2apic_mode(apic) || (val & ~APIC_VECTOR_MASK))
ret = 1;
+ else
+ kvm_apic_send_ipi(apic, APIC_DEST_SELF | val, 0);
break;
default:
ret = 1;
@@ -2311,6 +2316,7 @@ void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
apic->irr_pending = (apic_search_irr(apic) != -1);
apic->isr_count = count_vectors(apic->regs + APIC_ISR);
}
+ apic->highest_isr_cache = -1;
}
EXPORT_SYMBOL_GPL(kvm_apic_update_apicv);
@@ -2363,7 +2369,6 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
kvm_lapic_set_reg(apic, APIC_TMR + 0x10 * i, 0);
}
kvm_apic_update_apicv(vcpu);
- apic->highest_isr_cache = -1;
update_divide_count(apic);
atomic_set(&apic->lapic_timer.pending, 0);
@@ -2633,7 +2638,6 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
__start_apic_timer(apic, APIC_TMCCT);
kvm_lapic_set_reg(apic, APIC_TMCCT, 0);
kvm_apic_update_apicv(vcpu);
- apic->highest_isr_cache = -1;
if (vcpu->arch.apicv_active) {
static_call(kvm_x86_apicv_post_state_restore)(vcpu);
static_call(kvm_x86_hwapic_irr_update)(vcpu,
@@ -2797,6 +2801,10 @@ int kvm_x2apic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data)
/* if this is ICR write vector before command */
if (reg == APIC_ICR)
kvm_lapic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
+ else if (data >> 32)
+ /* Bits 63:32 are reserved in all other registers. */
+ return 1;
+
return kvm_lapic_reg_write(apic, reg, (u32)data);
}
@@ -2831,6 +2839,10 @@ int kvm_hv_vapic_msr_write(struct kvm_vcpu *vcpu, u32 reg, u64 data)
/* if this is ICR write vector before command */
if (reg == APIC_ICR)
kvm_lapic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
+ else if (data >> 32)
+ /* Bits 63:32 are reserved in all other registers. */
+ return 1;
+
return kvm_lapic_reg_write(apic, reg, (u32)data);
}
diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 3d3f8df..b595a33 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -318,20 +318,24 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu)
trace_kvm_avic_incomplete_ipi(vcpu->vcpu_id, icrh, icrl, id, index);
switch (id) {
+ case AVIC_IPI_FAILURE_INVALID_TARGET:
case AVIC_IPI_FAILURE_INVALID_INT_TYPE:
/*
- * AVIC hardware handles the generation of
- * IPIs when the specified Message Type is Fixed
- * (also known as fixed delivery mode) and
- * the Trigger Mode is edge-triggered. The hardware
- * also supports self and broadcast delivery modes
- * specified via the Destination Shorthand(DSH)
- * field of the ICRL. Logical and physical APIC ID
- * formats are supported. All other IPI types cause
- * a #VMEXIT, which needs to emulated.
+ * Emulate IPIs that are not handled by AVIC hardware, which
+ * only virtualizes Fixed, Edge-Triggered INTRs, and falls over
+ * if _any_ targets are invalid, e.g. if the logical mode mask
+ * is a superset of running vCPUs.
+ *
+ * The exit is a trap, e.g. ICR holds the correct value and RIP
+ * has been advanced, KVM is responsible only for emulating the
+ * IPI. Sadly, hardware may sometimes leave the BUSY flag set,
+ * in which case KVM needs to emulate the ICR write as well in
+ * order to clear the BUSY flag.
*/
- kvm_lapic_reg_write(apic, APIC_ICR2, icrh);
- kvm_lapic_reg_write(apic, APIC_ICR, icrl);
+ if (icrl & APIC_ICR_BUSY)
+ kvm_apic_write_nodecode(vcpu, APIC_ICR);
+ else
+ kvm_apic_send_ipi(apic, icrl, icrh);
break;
case AVIC_IPI_FAILURE_TARGET_NOT_RUNNING:
/*
@@ -341,8 +345,6 @@ int avic_incomplete_ipi_interception(struct kvm_vcpu *vcpu)
*/
avic_kick_target_vcpus(vcpu->kvm, apic, icrl, icrh);
break;
- case AVIC_IPI_FAILURE_INVALID_TARGET:
- break;
case AVIC_IPI_FAILURE_INVALID_BACKING_PAGE:
WARN_ONCE(1, "Invalid backing page\n");
break;
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 58ca93a..6d7bddc 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1277,7 +1277,7 @@ static int sev_send_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
/* Check if we are crossing the page boundary */
offset = params.guest_uaddr & (PAGE_SIZE - 1);
- if ((params.guest_len + offset > PAGE_SIZE))
+ if (params.guest_len > PAGE_SIZE || (params.guest_len + offset) > PAGE_SIZE)
return -EINVAL;
/* Pin guest memory */
@@ -1457,7 +1457,7 @@ static int sev_receive_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
/* Check if we are crossing the page boundary */
offset = params.guest_uaddr & (PAGE_SIZE - 1);
- if ((params.guest_len + offset > PAGE_SIZE))
+ if (params.guest_len > PAGE_SIZE || (params.guest_len + offset) > PAGE_SIZE)
return -EINVAL;
hdr = psp_copy_user_blob(params.hdr_uaddr, params.hdr_len);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 541f983..c046c0e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -466,11 +466,24 @@ static int has_svm(void)
return 1;
}
+void __svm_write_tsc_multiplier(u64 multiplier)
+{
+ preempt_disable();
+
+ if (multiplier == __this_cpu_read(current_tsc_ratio))
+ goto out;
+
+ wrmsrl(MSR_AMD64_TSC_RATIO, multiplier);
+ __this_cpu_write(current_tsc_ratio, multiplier);
+out:
+ preempt_enable();
+}
+
static void svm_hardware_disable(void)
{
/* Make sure we clean up behind us */
if (static_cpu_has(X86_FEATURE_TSCRATEMSR))
- wrmsrl(MSR_AMD64_TSC_RATIO, TSC_RATIO_DEFAULT);
+ __svm_write_tsc_multiplier(TSC_RATIO_DEFAULT);
cpu_svm_disable();
@@ -512,8 +525,11 @@ static int svm_hardware_enable(void)
wrmsrl(MSR_VM_HSAVE_PA, __sme_page_pa(sd->save_area));
if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
- wrmsrl(MSR_AMD64_TSC_RATIO, TSC_RATIO_DEFAULT);
- __this_cpu_write(current_tsc_ratio, TSC_RATIO_DEFAULT);
+ /*
+ * Set the default value, even if we don't use TSC scaling
+ * to avoid having stale value in the msr
+ */
+ __svm_write_tsc_multiplier(TSC_RATIO_DEFAULT);
}
@@ -1126,9 +1142,10 @@ static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset)
static void svm_write_tsc_multiplier(struct kvm_vcpu *vcpu, u64 multiplier)
{
- wrmsrl(MSR_AMD64_TSC_RATIO, multiplier);
+ __svm_write_tsc_multiplier(multiplier);
}
+
/* Evaluate instruction intercepts that depend on guest CPUID features. */
static void svm_recalc_instruction_intercepts(struct kvm_vcpu *vcpu,
struct vcpu_svm *svm)
@@ -1452,13 +1469,8 @@ static void svm_prepare_guest_switch(struct kvm_vcpu *vcpu)
vmsave(__sme_page_pa(sd->save_area));
}
- if (static_cpu_has(X86_FEATURE_TSCRATEMSR)) {
- u64 tsc_ratio = vcpu->arch.tsc_scaling_ratio;
- if (tsc_ratio != __this_cpu_read(current_tsc_ratio)) {
- __this_cpu_write(current_tsc_ratio, tsc_ratio);
- wrmsrl(MSR_AMD64_TSC_RATIO, tsc_ratio);
- }
- }
+ if (static_cpu_has(X86_FEATURE_TSCRATEMSR))
+ __svm_write_tsc_multiplier(vcpu->arch.tsc_scaling_ratio);
if (likely(tsc_aux_uret_slot >= 0))
kvm_set_user_return_msr(tsc_aux_uret_slot, svm->tsc_aux, -1ull);
@@ -3729,8 +3741,14 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
{
- if (to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
- to_svm(vcpu)->vmcb->control.exit_info_1)
+ struct vmcb_control_area *control = &to_svm(vcpu)->vmcb->control;
+
+ /*
+ * Note, the next RIP must be provided as SRCU isn't held, i.e. KVM
+ * can't read guest memory (dereference memslots) to decode the WRMSR.
+ */
+ if (control->exit_code == SVM_EXIT_MSR && control->exit_info_1 &&
+ nrips && control->next_rip)
return handle_fastpath_set_msr_irqoff(vcpu);
return EXIT_FASTPATH_NONE;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 41adc3e..4737d50 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -491,6 +491,7 @@ int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
int nested_svm_exit_special(struct vcpu_svm *svm);
void nested_load_control_from_vmcb12(struct vcpu_svm *svm,
struct vmcb_control_area *control);
+void __svm_write_tsc_multiplier(u64 multiplier);
void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm);
void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb);
diff --git a/arch/x86/kvm/svm/svm_onhyperv.h b/arch/x86/kvm/svm/svm_onhyperv.h
index c53b8bf..3a0c381 100644
--- a/arch/x86/kvm/svm/svm_onhyperv.h
+++ b/arch/x86/kvm/svm/svm_onhyperv.h
@@ -48,7 +48,7 @@ static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
hve->hv_enlightenments_control.enlightened_npt_tlb = 1;
}
-static inline void svm_hv_hardware_setup(void)
+static inline __init void svm_hv_hardware_setup(void)
{
if (npt_enabled &&
ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB) {
@@ -112,7 +112,7 @@ static inline void svm_hv_init_vmcb(struct vmcb *vmcb)
{
}
-static inline void svm_hv_hardware_setup(void)
+static inline __init void svm_hv_hardware_setup(void)
{
}
diff --git a/arch/x86/kvm/vmx/evmcs.h b/arch/x86/kvm/vmx/evmcs.h
index b43976e..57451cf 100644
--- a/arch/x86/kvm/vmx/evmcs.h
+++ b/arch/x86/kvm/vmx/evmcs.h
@@ -162,16 +162,6 @@ static inline u16 evmcs_read16(unsigned long field)
return *(u16 *)((char *)current_evmcs + offset);
}
-static inline void evmcs_touch_msr_bitmap(void)
-{
- if (unlikely(!current_evmcs))
- return;
-
- if (current_evmcs->hv_enlightenments_control.msr_bitmap)
- current_evmcs->hv_clean_fields &=
- ~HV_VMX_ENLIGHTENED_CLEAN_FIELD_MSR_BITMAP;
-}
-
static inline void evmcs_load(u64 phys_addr)
{
struct hv_vp_assist_page *vp_ap =
@@ -192,7 +182,6 @@ static inline u64 evmcs_read64(unsigned long field) { return 0; }
static inline u32 evmcs_read32(unsigned long field) { return 0; }
static inline u16 evmcs_read16(unsigned long field) { return 0; }
static inline void evmcs_load(u64 phys_addr) {}
-static inline void evmcs_touch_msr_bitmap(void) {}
#endif /* IS_ENABLED(CONFIG_HYPERV) */
#define EVMPTR_INVALID (-1ULL)
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index cdebece..e4e4c1d 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2991,7 +2991,7 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12,
enum vm_entry_failure_code *entry_failure_code)
{
- bool ia32e;
+ bool ia32e = !!(vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE);
*entry_failure_code = ENTRY_FAIL_DEFAULT;
@@ -3017,6 +3017,13 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
vmcs12->guest_ia32_perf_global_ctrl)))
return -EINVAL;
+ if (CC((vmcs12->guest_cr0 & (X86_CR0_PG | X86_CR0_PE)) == X86_CR0_PG))
+ return -EINVAL;
+
+ if (CC(ia32e && !(vmcs12->guest_cr4 & X86_CR4_PAE)) ||
+ CC(ia32e && !(vmcs12->guest_cr0 & X86_CR0_PG)))
+ return -EINVAL;
+
/*
* If the load IA32_EFER VM-entry control is 1, the following checks
* are performed on the field for the IA32_EFER MSR:
@@ -3028,7 +3035,6 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
*/
if (to_vmx(vcpu)->nested.nested_run_pending &&
(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_EFER)) {
- ia32e = (vmcs12->vm_entry_controls & VM_ENTRY_IA32E_MODE) != 0;
if (CC(!kvm_valid_efer(vcpu, vmcs12->guest_ia32_efer)) ||
CC(ia32e != !!(vmcs12->guest_ia32_efer & EFER_LMA)) ||
CC(((vmcs12->guest_cr0 & X86_CR0_PG) &&
@@ -4617,6 +4623,17 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
vmx_switch_vmcs(vcpu, &vmx->vmcs01);
+ /*
+ * If IBRS is advertised to the vCPU, KVM must flush the indirect
+ * branch predictors when transitioning from L2 to L1, as L1 expects
+ * hardware (KVM in this case) to provide separate predictor modes.
+ * Bare metal isolates VMX root (host) from VMX non-root (guest), but
+ * doesn't isolate different VMCSs, i.e. in this case, doesn't provide
+ * separate modes for L2 vs L1.
+ */
+ if (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
+ indirect_branch_prediction_barrier();
+
/* Update any VMCS fields that might have changed while L2 ran */
vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.host.nr);
vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.guest.nr);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 6918f59..c95c367 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -1332,8 +1332,10 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
/*
* No indirect branch prediction barrier needed when switching
- * the active VMCS within a guest, e.g. on nested VM-Enter.
- * The L1 VMM can protect itself with retpolines, IBPB or IBRS.
+ * the active VMCS within a vCPU, unless IBRS is advertised to
+ * the vCPU. To minimize the number of IBPBs executed, KVM
+ * performs IBPB on nested VM-Exit (a single nested transition
+ * may switch the active VMCS multiple times).
*/
if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
indirect_branch_prediction_barrier();
@@ -2737,15 +2739,6 @@ int alloc_loaded_vmcs(struct loaded_vmcs *loaded_vmcs)
if (!loaded_vmcs->msr_bitmap)
goto out_vmcs;
memset(loaded_vmcs->msr_bitmap, 0xff, PAGE_SIZE);
-
- if (IS_ENABLED(CONFIG_HYPERV) &&
- static_branch_unlikely(&enable_evmcs) &&
- (ms_hyperv.nested_features & HV_X64_NESTED_MSR_BITMAP)) {
- struct hv_enlightened_vmcs *evmcs =
- (struct hv_enlightened_vmcs *)loaded_vmcs->vmcs;
-
- evmcs->hv_enlightenments_control.msr_bitmap = 1;
- }
}
memset(&loaded_vmcs->host_state, 0, sizeof(struct vmcs_host_state));
@@ -3361,18 +3354,15 @@ static u32 vmx_segment_access_rights(struct kvm_segment *var)
{
u32 ar;
- if (var->unusable || !var->present)
- ar = 1 << 16;
- else {
- ar = var->type & 15;
- ar |= (var->s & 1) << 4;
- ar |= (var->dpl & 3) << 5;
- ar |= (var->present & 1) << 7;
- ar |= (var->avl & 1) << 12;
- ar |= (var->l & 1) << 13;
- ar |= (var->db & 1) << 14;
- ar |= (var->g & 1) << 15;
- }
+ ar = var->type & 15;
+ ar |= (var->s & 1) << 4;
+ ar |= (var->dpl & 3) << 5;
+ ar |= (var->present & 1) << 7;
+ ar |= (var->avl & 1) << 12;
+ ar |= (var->l & 1) << 13;
+ ar |= (var->db & 1) << 14;
+ ar |= (var->g & 1) << 15;
+ ar |= (var->unusable || !var->present) << 16;
return ar;
}
@@ -3782,6 +3772,22 @@ void free_vpid(int vpid)
spin_unlock(&vmx_vpid_lock);
}
+static void vmx_msr_bitmap_l01_changed(struct vcpu_vmx *vmx)
+{
+ /*
+ * When KVM is a nested hypervisor on top of Hyper-V and uses
+ * 'Enlightened MSR Bitmap' feature L0 needs to know that MSR
+ * bitmap has changed.
+ */
+ if (IS_ENABLED(CONFIG_HYPERV) && static_branch_unlikely(&enable_evmcs)) {
+ struct hv_enlightened_vmcs *evmcs = (void *)vmx->vmcs01.vmcs;
+
+ if (evmcs->hv_enlightenments_control.msr_bitmap)
+ evmcs->hv_clean_fields &=
+ ~HV_VMX_ENLIGHTENED_CLEAN_FIELD_MSR_BITMAP;
+ }
+}
+
void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
@@ -3790,8 +3796,7 @@ void vmx_disable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
if (!cpu_has_vmx_msr_bitmap())
return;
- if (static_branch_unlikely(&enable_evmcs))
- evmcs_touch_msr_bitmap();
+ vmx_msr_bitmap_l01_changed(vmx);
/*
* Mark the desired intercept state in shadow bitmap, this is needed
@@ -3835,8 +3840,7 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type)
if (!cpu_has_vmx_msr_bitmap())
return;
- if (static_branch_unlikely(&enable_evmcs))
- evmcs_touch_msr_bitmap();
+ vmx_msr_bitmap_l01_changed(vmx);
/*
* Mark the desired intercept state in shadow bitmap, this is needed
@@ -6970,6 +6974,19 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
if (err < 0)
goto free_pml;
+ /*
+ * Use Hyper-V 'Enlightened MSR Bitmap' feature when KVM runs as a
+ * nested (L1) hypervisor and Hyper-V in L0 supports it. Enable the
+ * feature only for vmcs01, KVM currently isn't equipped to realize any
+ * performance benefits from enabling it for vmcs02.
+ */
+ if (IS_ENABLED(CONFIG_HYPERV) && static_branch_unlikely(&enable_evmcs) &&
+ (ms_hyperv.nested_features & HV_X64_NESTED_MSR_BITMAP)) {
+ struct hv_enlightened_vmcs *evmcs = (void *)vmx->vmcs01.vmcs;
+
+ evmcs->hv_enlightenments_control.msr_bitmap = 1;
+ }
+
/* The MSR bitmap starts with all ones */
bitmap_fill(vmx->shadow_msr_intercept.read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
bitmap_fill(vmx->shadow_msr_intercept.write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
@@ -7580,17 +7597,11 @@ static int vmx_pre_block(struct kvm_vcpu *vcpu)
if (pi_pre_block(vcpu))
return 1;
- if (kvm_lapic_hv_timer_in_use(vcpu))
- kvm_lapic_switch_to_sw_timer(vcpu);
-
return 0;
}
static void vmx_post_block(struct kvm_vcpu *vcpu)
{
- if (kvm_x86_ops.set_hv_timer)
- kvm_lapic_switch_to_hv_timer(vcpu);
-
pi_post_block(vcpu);
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fcfa3fe..5cb4af4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -184,6 +184,10 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
int __read_mostly pi_inject_timer = -1;
module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);
+/* Enable/disable SMT_RSB bug mitigation */
+bool __read_mostly mitigate_smt_rsb;
+module_param(mitigate_smt_rsb, bool, 0444);
+
/*
* Restoring the host value for MSRs that are only consumed when running in
* usermode, e.g. SYSCALL MSRs and TSC_AUX, can be deferred until the CPU
@@ -4164,10 +4168,15 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = KVM_CLOCK_TSC_STABLE;
break;
case KVM_CAP_X86_DISABLE_EXITS:
- r |= KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_PAUSE |
- KVM_X86_DISABLE_EXITS_CSTATE;
- if(kvm_can_mwait_in_guest())
- r |= KVM_X86_DISABLE_EXITS_MWAIT;
+ r = KVM_X86_DISABLE_EXITS_PAUSE;
+
+ if (!mitigate_smt_rsb) {
+ r |= KVM_X86_DISABLE_EXITS_HLT |
+ KVM_X86_DISABLE_EXITS_CSTATE;
+
+ if (kvm_can_mwait_in_guest())
+ r |= KVM_X86_DISABLE_EXITS_MWAIT;
+ }
break;
case KVM_CAP_X86_SMM:
/* SMBASE is usually relocated above 1M on modern chipsets,
@@ -4812,12 +4821,11 @@ static void kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
{
unsigned long val;
+ memset(dbgregs, 0, sizeof(*dbgregs));
memcpy(dbgregs->db, vcpu->arch.db, sizeof(vcpu->arch.db));
kvm_get_dr(vcpu, 6, &val);
dbgregs->dr6 = val;
dbgregs->dr7 = vcpu->arch.dr7;
- dbgregs->flags = 0;
- memset(&dbgregs->reserved, 0, sizeof(dbgregs->reserved));
}
static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
@@ -5746,15 +5754,26 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
if (cap->args[0] & ~KVM_X86_DISABLE_VALID_EXITS)
break;
- if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
- kvm_can_mwait_in_guest())
- kvm->arch.mwait_in_guest = true;
- if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
- kvm->arch.hlt_in_guest = true;
if (cap->args[0] & KVM_X86_DISABLE_EXITS_PAUSE)
kvm->arch.pause_in_guest = true;
- if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
- kvm->arch.cstate_in_guest = true;
+
+#define SMT_RSB_MSG "This processor is affected by the Cross-Thread Return Predictions vulnerability. " \
+ "KVM_CAP_X86_DISABLE_EXITS should only be used with SMT disabled or trusted guests."
+
+ if (!mitigate_smt_rsb) {
+ if (boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible() &&
+ (cap->args[0] & ~KVM_X86_DISABLE_EXITS_PAUSE))
+ pr_warn_once(SMT_RSB_MSG);
+
+ if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
+ kvm_can_mwait_in_guest())
+ kvm->arch.mwait_in_guest = true;
+ if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
+ kvm->arch.hlt_in_guest = true;
+ if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
+ kvm->arch.cstate_in_guest = true;
+ }
+
r = 0;
break;
case KVM_CAP_MSR_PLATFORM_INFO:
@@ -8097,7 +8116,9 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
write_fault_to_spt,
emulation_type))
return 1;
- if (ctxt->have_exception) {
+
+ if (ctxt->have_exception &&
+ !(emulation_type & EMULTYPE_SKIP)) {
/*
* #UD should result in just EMULATION_FAILED, and trap-like
* exception should not be encountered during decode.
@@ -10022,12 +10043,28 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
{
+ bool hv_timer;
+
if (!kvm_arch_vcpu_runnable(vcpu) &&
(!kvm_x86_ops.pre_block || static_call(kvm_x86_pre_block)(vcpu) == 0)) {
+ /*
+ * Switch to the software timer before halt-polling/blocking as
+ * the guest's timer may be a break event for the vCPU, and the
+ * hypervisor timer runs only when the CPU is in guest mode.
+ * Switch before halt-polling so that KVM recognizes an expired
+ * timer before blocking.
+ */
+ hv_timer = kvm_lapic_hv_timer_in_use(vcpu);
+ if (hv_timer)
+ kvm_lapic_switch_to_sw_timer(vcpu);
+
srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
kvm_vcpu_block(vcpu);
vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
+ if (hv_timer)
+ kvm_lapic_switch_to_hv_timer(vcpu);
+
if (kvm_x86_ops.post_block)
static_call(kvm_x86_post_block)(vcpu);
@@ -10266,6 +10303,11 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
r = -EINTR;
goto out;
}
+ /*
+ * It should be impossible for the hypervisor timer to be in
+ * use before KVM has ever run the vCPU.
+ */
+ WARN_ON_ONCE(kvm_lapic_hv_timer_in_use(vcpu));
kvm_vcpu_block(vcpu);
if (kvm_apic_accept_events(vcpu) < 0) {
r = 0;
@@ -12796,6 +12838,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_msr_protocol_exit);
static int __init kvm_x86_init(void)
{
kvm_mmu_x86_module_init();
+ mitigate_smt_rsb &= boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible();
return 0;
}
module_init(kvm_x86_init);
diff --git a/arch/x86/lib/iomap_copy_64.S b/arch/x86/lib/iomap_copy_64.S
index a1f9416..6ff2f56 100644
--- a/arch/x86/lib/iomap_copy_64.S
+++ b/arch/x86/lib/iomap_copy_64.S
@@ -10,6 +10,6 @@
*/
SYM_FUNC_START(__iowrite32_copy)
movl %edx,%ecx
- rep movsd
+ rep movsl
RET
SYM_FUNC_END(__iowrite32_copy)
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 2f723e10..2e05707 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -600,7 +600,8 @@ void __init sme_enable(struct boot_params *bp)
cmdline_ptr = (const char *)((u64)bp->hdr.cmd_line_ptr |
((u64)bp->ext_cmd_line_ptr << 32));
- cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer));
+ if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0)
+ return;
if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
sme_me_mask = me_mask;
diff --git a/arch/x86/um/vdso/um_vdso.c b/arch/x86/um/vdso/um_vdso.c
index 2112b8d..ff0f3b4 100644
--- a/arch/x86/um/vdso/um_vdso.c
+++ b/arch/x86/um/vdso/um_vdso.c
@@ -17,8 +17,10 @@ int __vdso_clock_gettime(clockid_t clock, struct __kernel_old_timespec *ts)
{
long ret;
- asm("syscall" : "=a" (ret) :
- "0" (__NR_clock_gettime), "D" (clock), "S" (ts) : "memory");
+ asm("syscall"
+ : "=a" (ret)
+ : "0" (__NR_clock_gettime), "D" (clock), "S" (ts)
+ : "rcx", "r11", "memory");
return ret;
}
@@ -29,8 +31,10 @@ int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
{
long ret;
- asm("syscall" : "=a" (ret) :
- "0" (__NR_gettimeofday), "D" (tv), "S" (tz) : "memory");
+ asm("syscall"
+ : "=a" (ret)
+ : "0" (__NR_gettimeofday), "D" (tv), "S" (tz)
+ : "rcx", "r11", "memory");
return ret;
}
diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile
index 4953260..40b5779 100644
--- a/arch/x86/xen/Makefile
+++ b/arch/x86/xen/Makefile
@@ -45,7 +45,7 @@
obj-$(CONFIG_XEN_DEBUG_FS) += debugfs.o
-obj-$(CONFIG_XEN_PV_DOM0) += vga.o
+obj-$(CONFIG_XEN_DOM0) += vga.o
obj-$(CONFIG_SWIOTLB_XEN) += pci-swiotlb-xen.o
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 561aad1..998db02 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1353,7 +1353,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
x86_platform.set_legacy_features =
xen_dom0_set_legacy_features;
- xen_init_vga(info, xen_start_info->console.dom0.info_size);
+ xen_init_vga(info, xen_start_info->console.dom0.info_size,
+ &boot_params.screen_info);
xen_start_info->console.domU.mfn = 0;
xen_start_info->console.domU.evtchn = 0;
diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
index bcae606..ada3868 100644
--- a/arch/x86/xen/enlighten_pvh.c
+++ b/arch/x86/xen/enlighten_pvh.c
@@ -43,6 +43,19 @@ void __init xen_pvh_init(struct boot_params *boot_params)
x86_init.oem.banner = xen_banner;
xen_efi_init(boot_params);
+
+ if (xen_initial_domain()) {
+ struct xen_platform_op op = {
+ .cmd = XENPF_get_dom0_console,
+ };
+ int ret = HYPERVISOR_platform_op(&op);
+
+ if (ret > 0)
+ xen_init_vga(&op.u.dom0_console,
+ min(ret * sizeof(char),
+ sizeof(op.u.dom0_console)),
+ &boot_params->screen_info);
+ }
}
void __init mem_map_via_hcall(struct boot_params *boot_params_p)
diff --git a/arch/x86/xen/vga.c b/arch/x86/xen/vga.c
index e336f22..93697109 100644
--- a/arch/x86/xen/vga.c
+++ b/arch/x86/xen/vga.c
@@ -9,10 +9,9 @@
#include "xen-ops.h"
-void __init xen_init_vga(const struct dom0_vga_console_info *info, size_t size)
+void __init xen_init_vga(const struct dom0_vga_console_info *info, size_t size,
+ struct screen_info *screen_info)
{
- struct screen_info *screen_info = &boot_params.screen_info;
-
/* This is drawn from a dump from vgacon:startup in
* standard Linux. */
screen_info->orig_video_mode = 3;
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 16aed4b..71f3103 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -110,11 +110,12 @@ static inline void xen_uninit_lock_cpu(int cpu)
struct dom0_vga_console_info;
-#ifdef CONFIG_XEN_PV_DOM0
-void __init xen_init_vga(const struct dom0_vga_console_info *, size_t size);
+#ifdef CONFIG_XEN_DOM0
+void __init xen_init_vga(const struct dom0_vga_console_info *, size_t size,
+ struct screen_info *);
#else
static inline void __init xen_init_vga(const struct dom0_vga_console_info *info,
- size_t size)
+ size_t size, struct screen_info *si)
{
}
#endif
diff --git a/arch/xtensa/kernel/traps.c b/arch/xtensa/kernel/traps.c
index 874b6ef..5624a64 100644
--- a/arch/xtensa/kernel/traps.c
+++ b/arch/xtensa/kernel/traps.c
@@ -510,7 +510,7 @@ static size_t kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
void show_stack(struct task_struct *task, unsigned long *sp, const char *loglvl)
{
- size_t len;
+ size_t len, off = 0;
if (!sp)
sp = stack_pointer(task);
@@ -519,9 +519,17 @@ void show_stack(struct task_struct *task, unsigned long *sp, const char *loglvl)
kstack_depth_to_print * STACK_DUMP_ENTRY_SIZE);
printk("%sStack:\n", loglvl);
- print_hex_dump(loglvl, " ", DUMP_PREFIX_NONE,
- STACK_DUMP_LINE_SIZE, STACK_DUMP_ENTRY_SIZE,
- sp, len, false);
+ while (off < len) {
+ u8 line[STACK_DUMP_LINE_SIZE];
+ size_t line_len = len - off > STACK_DUMP_LINE_SIZE ?
+ STACK_DUMP_LINE_SIZE : len - off;
+
+ __memcpy(line, (u8 *)sp + off, line_len);
+ print_hex_dump(loglvl, " ", DUMP_PREFIX_NONE,
+ STACK_DUMP_LINE_SIZE, STACK_DUMP_ENTRY_SIZE,
+ line, line_len, false);
+ off += STACK_DUMP_LINE_SIZE;
+ }
show_trace(task, sp, loglvl);
}
@@ -552,5 +560,5 @@ void die(const char * str, struct pt_regs * regs, long err)
if (panic_on_oops)
panic("Fatal exception");
- do_exit(err);
+ make_task_dead(err);
}
diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index a8d0b4c..53e275e3 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -713,15 +713,15 @@ static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
struct bfq_io_cq *bic,
struct bfq_group *bfqg)
{
- struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
- struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
+ struct bfq_queue *async_bfqq = bic_to_bfqq(bic, false);
+ struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, true);
struct bfq_entity *entity;
if (async_bfqq) {
entity = &async_bfqq->entity;
if (entity->sched_data != &bfqg->sched_data) {
- bic_set_bfqq(bic, NULL, 0);
+ bic_set_bfqq(bic, NULL, false);
bfq_release_process_ref(bfqd, async_bfqq);
}
}
@@ -756,8 +756,8 @@ static void *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
* request from the old cgroup.
*/
bfq_put_cooperator(sync_bfqq);
+ bic_set_bfqq(bic, NULL, true);
bfq_release_process_ref(bfqd, sync_bfqq);
- bic_set_bfqq(bic, NULL, 1);
}
}
}
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 85120d7..f545549 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -3046,7 +3046,7 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
/*
* Merge queues (that is, let bic redirect its requests to new_bfqq)
*/
- bic_set_bfqq(bic, new_bfqq, 1);
+ bic_set_bfqq(bic, new_bfqq, true);
bfq_mark_bfqq_coop(new_bfqq);
/*
* new_bfqq now belongs to at least two bics (it is a shared queue):
@@ -5359,9 +5359,11 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
bfqq = bic_to_bfqq(bic, false);
if (bfqq) {
- bfq_release_process_ref(bfqd, bfqq);
- bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic, true);
+ struct bfq_queue *old_bfqq = bfqq;
+
+ bfqq = bfq_get_queue(bfqd, bio, false, bic, true);
bic_set_bfqq(bic, bfqq, false);
+ bfq_release_process_ref(bfqd, old_bfqq);
}
bfqq = bic_to_bfqq(bic, true);
@@ -6475,7 +6477,7 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
return bfqq;
}
- bic_set_bfqq(bic, NULL, 1);
+ bic_set_bfqq(bic, NULL, true);
bfq_put_cooperator(bfqq);
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index 4a7c33e..4f34ac2 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -417,6 +417,7 @@ int bio_integrity_clone(struct bio *bio, struct bio *bio_src,
bip->bip_vcnt = bip_src->bip_vcnt;
bip->bip_iter = bip_src->bip_iter;
+ bip->bip_flags = bip_src->bip_flags & ~BIP_BLOCK_INTEGRITY;
return 0;
}
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index ce5858d..3ee4c12 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1360,6 +1360,10 @@ int blkcg_activate_policy(struct request_queue *q,
list_for_each_entry_reverse(blkg, &q->blkg_list, q_node)
pol->pd_init_fn(blkg->pd[pol->plid]);
+ if (pol->pd_online_fn)
+ list_for_each_entry_reverse(blkg, &q->blkg_list, q_node)
+ pol->pd_online_fn(blkg->pd[pol->plid]);
+
__set_bit(pol->plid, q->blkcg_pols);
ret = 0;
diff --git a/block/blk-core.c b/block/blk-core.c
index 13e1fca..ed6271d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -698,14 +698,10 @@ static inline bool should_fail_request(struct block_device *part,
static inline bool bio_check_ro(struct bio *bio)
{
if (op_is_write(bio_op(bio)) && bdev_read_only(bio->bi_bdev)) {
- char b[BDEVNAME_SIZE];
-
if (op_is_flush(bio->bi_opf) && !bio_sectors(bio))
return false;
-
- WARN_ONCE(1,
- "Trying to write to read-only block-device %s (partno %d)\n",
- bio_devname(bio, b), bio->bi_bdev->bd_partno);
+ pr_warn("Trying to write to read-only block-device %pg\n",
+ bio->bi_bdev);
/* Older lvm-tools actually trigger this */
return false;
}
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index 069193d..bd7e9ff 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -870,9 +870,14 @@ static void calc_lcoefs(u64 bps, u64 seqiops, u64 randiops,
*page = *seqio = *randio = 0;
- if (bps)
- *page = DIV64_U64_ROUND_UP(VTIME_PER_SEC,
- DIV_ROUND_UP_ULL(bps, IOC_PAGE_SIZE));
+ if (bps) {
+ u64 bps_pages = DIV_ROUND_UP_ULL(bps, IOC_PAGE_SIZE);
+
+ if (bps_pages)
+ *page = DIV64_U64_ROUND_UP(VTIME_PER_SEC, bps_pages);
+ else
+ *page = 1;
+ }
if (seqiops) {
v = DIV64_U64_ROUND_UP(VTIME_PER_SEC, seqiops);
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 35770e3..ff1021d 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -45,8 +45,7 @@ void blk_mq_sched_assign_ioc(struct request *rq)
}
/*
- * Mark a hardware queue as needing a restart. For shared queues, maintain
- * a count of how many hardware queues are marked for restart.
+ * Mark a hardware queue as needing a restart.
*/
void blk_mq_sched_mark_restart_hctx(struct blk_mq_hw_ctx *hctx)
{
@@ -110,7 +109,7 @@ static bool blk_mq_dispatch_hctx_list(struct list_head *rq_list)
/*
* Only SCSI implements .get_budget and .put_budget, and SCSI restarts
* its queue by itself in its completion handler, so we don't need to
- * restart queue if .get_budget() returns BLK_STS_NO_RESOURCE.
+ * restart queue if .get_budget() fails to get the budget.
*
* Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
* be run again. This is necessary to avoid starving flushes.
@@ -238,7 +237,7 @@ static struct blk_mq_ctx *blk_mq_next_ctx(struct blk_mq_hw_ctx *hctx,
/*
* Only SCSI implements .get_budget and .put_budget, and SCSI restarts
* its queue by itself in its completion handler, so we don't need to
- * restart queue if .get_budget() returns BLK_STS_NO_RESOURCE.
+ * restart queue if .get_budget() fails to get the budget.
*
* Returns -EAGAIN if hctx->dispatch was found non-empty and run_work has to
* be run again. This is necessary to avoid starving flushes.
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9f53b4c..01e2818 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -457,7 +457,8 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
* allocator for this for the rare use case of a command tied to
* a specific queue.
*/
- if (WARN_ON_ONCE(!(flags & (BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED))))
+ if (WARN_ON_ONCE(!(flags & BLK_MQ_REQ_NOWAIT)) ||
+ WARN_ON_ONCE(!(flags & BLK_MQ_REQ_RESERVED)))
return ERR_PTR(-EINVAL);
if (hctx_idx >= q->nr_hw_queues)
diff --git a/block/fops.c b/block/fops.c
index 1e970c2..6c265a1 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -243,6 +243,24 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
bio_endio(bio);
break;
}
+ if (iocb->ki_flags & IOCB_NOWAIT) {
+ /*
+ * This is nonblocking IO, and we need to allocate
+ * another bio if we have data left to map. As we
+ * cannot guarantee that one of the sub bios will not
+ * fail getting issued FOR NOWAIT and as error results
+ * are coalesced across all of them, be safe and ask for
+ * a retry of this from blocking context.
+ */
+ if (unlikely(iov_iter_count(iter))) {
+ bio_release_pages(bio, false);
+ bio_clear_flag(bio, BIO_REFFED);
+ bio_put(bio);
+ blk_finish_plug(&plug);
+ return -EAGAIN;
+ }
+ bio->bi_opf |= REQ_NOWAIT;
+ }
if (is_read) {
bio->bi_opf = REQ_OP_READ;
@@ -252,9 +270,6 @@ static ssize_t __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
bio->bi_opf = dio_bio_write_op(iocb);
task_io_account_write(bio->bi_iter.bi_size);
}
- if (iocb->ki_flags & IOCB_NOWAIT)
- bio->bi_opf |= REQ_NOWAIT;
-
dio->size += bio->bi_iter.bi_size;
pos += bio->bi_iter.bi_size;
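
The added IOCB_NOWAIT branch returns -EAGAIN as soon as a nonblocking direct I/O would need more than one bio, since a later sub-bio could fail to be issued while earlier ones are already in flight and their statuses are coalesced. A hedged sketch of how a caller is expected to react to that return value; the retry wrapper below is an assumption for illustration, not the kernel's actual call site, and the nr_pages argument is only a placeholder:

/* Sketch: try the nonblocking path first, fall back to a blocking attempt. */
static ssize_t submit_dio(struct kiocb *iocb, struct iov_iter *iter)
{
	ssize_t ret;

	iocb->ki_flags |= IOCB_NOWAIT;
	ret = __blkdev_direct_IO(iocb, iter, 8);
	if (ret == -EAGAIN) {
		/* more than one bio was needed; retry where sleeping is fine */
		iocb->ki_flags &= ~IOCB_NOWAIT;
		ret = __blkdev_direct_IO(iocb, iter, 8);
	}
	return ret;
}
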
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 950c463..aaef508 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -305,7 +305,7 @@ static inline int deadline_check_fifo(struct dd_per_prio *per_prio,
/*
* Check if rq has a sequential request preceding it.
*/
-static bool deadline_is_seq_writes(struct deadline_data *dd, struct request *rq)
+static bool deadline_is_seq_write(struct deadline_data *dd, struct request *rq)
{
struct request *prev = deadline_earlier_request(rq);
@@ -364,7 +364,7 @@ deadline_fifo_request(struct deadline_data *dd, struct dd_per_prio *per_prio,
list_for_each_entry(rq, &per_prio->fifo_list[DD_WRITE], queuelist) {
if (blk_req_can_dispatch_to_zone(rq) &&
(blk_queue_nonrot(rq->q) ||
- !deadline_is_seq_writes(dd, rq)))
+ !deadline_is_seq_write(dd, rq)))
goto out;
}
rq = NULL;
diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index 2f8352e..eca5671 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -186,8 +186,28 @@ static int software_key_query(const struct kernel_pkey_params *params,
len = crypto_akcipher_maxsize(tfm);
info->key_size = len * 8;
- info->max_data_size = len;
- info->max_sig_size = len;
+
+ if (strncmp(pkey->pkey_algo, "ecdsa", 5) == 0) {
+ /*
+ * ECDSA key sizes are much smaller than RSA, and thus could
+ * operate on (hashed) inputs that are larger than key size.
+ * For example SHA384-hashed input used with secp256r1
+ * based keys. Set max_data_size to be at least as large as
+ * the largest supported hash size (SHA512)
+ */
+ info->max_data_size = 64;
+
+ /*
+ * Verify takes ECDSA-Sig (described in RFC 5480) as input,
+ * which is actually 2 'key_size'-bit integers encoded in
+ * ASN.1. Account for the ASN.1 encoding overhead here.
+ */
+ info->max_sig_size = 2 * (len + 3) + 2;
+ } else {
+ info->max_data_size = len;
+ info->max_sig_size = len;
+ }
+
info->max_enc_size = len;
info->max_dec_size = len;
info->supported_ops = (KEYCTL_SUPPORTS_ENCRYPT |
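
The new max_sig_size accounts for the ASN.1/DER framing of an ECDSA-Sig-Value, i.e. a SEQUENCE of two INTEGERs. A worked check for a 256-bit curve such as secp256r1, where crypto_akcipher_maxsize() returns len = 32:

/*
 * each INTEGER:  1 tag byte + 1 length byte + up to (len + 1) value bytes
 *                (one 0x00 pad byte when the top bit is set)    -> len + 3
 * the SEQUENCE:  1 tag byte + 1 length byte                     -> + 2
 *
 * max_sig_size = 2 * (32 + 3) + 2 = 72 bytes,
 * which matches the largest DER-encoded P-256 signature.
 */
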
diff --git a/crypto/essiv.c b/crypto/essiv.c
index 8bcc5bd..3505b07 100644
--- a/crypto/essiv.c
+++ b/crypto/essiv.c
@@ -171,7 +171,12 @@ static void essiv_aead_done(struct crypto_async_request *areq, int err)
struct aead_request *req = areq->data;
struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+ if (err == -EINPROGRESS)
+ goto out;
+
kfree(rctx->assoc);
+
+out:
aead_request_complete(req, err);
}
@@ -247,7 +252,7 @@ static int essiv_aead_crypt(struct aead_request *req, bool enc)
err = enc ? crypto_aead_encrypt(subreq) :
crypto_aead_decrypt(subreq);
- if (rctx->assoc && err != -EINPROGRESS)
+ if (rctx->assoc && err != -EINPROGRESS && err != -EBUSY)
kfree(rctx->assoc);
return err;
}
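
Both hunks above treat -EBUSY like -EINPROGRESS because a request accepted onto the crypto backlog will still invoke the completion callback later, so per-request state must not be freed on the synchronous return path. A minimal sketch of that calling convention; the context structure and names are hypothetical, not taken from this driver:

/* Sketch: generic handling of a possibly backlogged AEAD request. */
static void my_done(struct crypto_async_request *areq, int err)
{
	struct my_ctx *ctx = areq->data;

	if (err == -EINPROGRESS)
		return;			/* backlogged request has started; final completion comes later */

	kfree(ctx->assoc_copy);		/* only now is the request really finished */
	complete(&ctx->done);
}

	...
	err = crypto_aead_encrypt(req);
	if (err == -EINPROGRESS || err == -EBUSY)
		return;			/* ownership stays with my_done() */
	kfree(ctx->assoc_copy);		/* synchronous completion: free here */
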
diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
index 9d80483..a4ebbb8 100644
--- a/crypto/rsa-pkcs1pad.c
+++ b/crypto/rsa-pkcs1pad.c
@@ -214,16 +214,14 @@ static void pkcs1pad_encrypt_sign_complete_cb(
struct crypto_async_request *child_async_req, int err)
{
struct akcipher_request *req = child_async_req->data;
- struct crypto_async_request async_req;
if (err == -EINPROGRESS)
- return;
+ goto out;
- async_req.data = req->base.data;
- async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
- async_req.flags = child_async_req->flags;
- req->base.complete(&async_req,
- pkcs1pad_encrypt_sign_complete(req, err));
+ err = pkcs1pad_encrypt_sign_complete(req, err);
+
+out:
+ akcipher_request_complete(req, err);
}
static int pkcs1pad_encrypt(struct akcipher_request *req)
@@ -332,15 +330,14 @@ static void pkcs1pad_decrypt_complete_cb(
struct crypto_async_request *child_async_req, int err)
{
struct akcipher_request *req = child_async_req->data;
- struct crypto_async_request async_req;
if (err == -EINPROGRESS)
- return;
+ goto out;
- async_req.data = req->base.data;
- async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
- async_req.flags = child_async_req->flags;
- req->base.complete(&async_req, pkcs1pad_decrypt_complete(req, err));
+ err = pkcs1pad_decrypt_complete(req, err);
+
+out:
+ akcipher_request_complete(req, err);
}
static int pkcs1pad_decrypt(struct akcipher_request *req)
@@ -512,15 +509,14 @@ static void pkcs1pad_verify_complete_cb(
struct crypto_async_request *child_async_req, int err)
{
struct akcipher_request *req = child_async_req->data;
- struct crypto_async_request async_req;
if (err == -EINPROGRESS)
- return;
+ goto out;
- async_req.data = req->base.data;
- async_req.tfm = crypto_akcipher_tfm(crypto_akcipher_reqtfm(req));
- async_req.flags = child_async_req->flags;
- req->base.complete(&async_req, pkcs1pad_verify_complete(req, err));
+ err = pkcs1pad_verify_complete(req, err);
+
+out:
+ akcipher_request_complete(req, err);
}
/*
diff --git a/crypto/seqiv.c b/crypto/seqiv.c
index 0899d52..b1bcfe5 100644
--- a/crypto/seqiv.c
+++ b/crypto/seqiv.c
@@ -23,7 +23,7 @@ static void seqiv_aead_encrypt_complete2(struct aead_request *req, int err)
struct aead_request *subreq = aead_request_ctx(req);
struct crypto_aead *geniv;
- if (err == -EINPROGRESS)
+ if (err == -EINPROGRESS || err == -EBUSY)
return;
if (err)
diff --git a/crypto/xts.c b/crypto/xts.c
index 63c85b9..de6cbcf 100644
--- a/crypto/xts.c
+++ b/crypto/xts.c
@@ -203,12 +203,12 @@ static void xts_encrypt_done(struct crypto_async_request *areq, int err)
if (!err) {
struct xts_request_ctx *rctx = skcipher_request_ctx(req);
- rctx->subreq.base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ rctx->subreq.base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
err = xts_xor_tweak_post(req, true);
if (!err && unlikely(req->cryptlen % XTS_BLOCK_SIZE)) {
err = xts_cts_final(req, crypto_skcipher_encrypt);
- if (err == -EINPROGRESS)
+ if (err == -EINPROGRESS || err == -EBUSY)
return;
}
}
@@ -223,12 +223,12 @@ static void xts_decrypt_done(struct crypto_async_request *areq, int err)
if (!err) {
struct xts_request_ctx *rctx = skcipher_request_ctx(req);
- rctx->subreq.base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+ rctx->subreq.base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
err = xts_xor_tweak_post(req, false);
if (!err && unlikely(req->cryptlen % XTS_BLOCK_SIZE)) {
err = xts_cts_final(req, crypto_skcipher_decrypt);
- if (err == -EINPROGRESS)
+ if (err == -EINPROGRESS || err == -EBUSY)
return;
}
}
diff --git a/drivers/accessibility/speakup/spk_ttyio.c b/drivers/accessibility/speakup/spk_ttyio.c
index 08cf8a1..07373b3 100644
--- a/drivers/accessibility/speakup/spk_ttyio.c
+++ b/drivers/accessibility/speakup/spk_ttyio.c
@@ -354,6 +354,9 @@ void spk_ttyio_release(struct spk_synth *in_synth)
{
struct tty_struct *tty = in_synth->dev;
+ if (tty == NULL)
+ return;
+
tty_lock(tty);
if (tty->ops->close)
diff --git a/drivers/acpi/acpica/Makefile b/drivers/acpi/acpica/Makefile
index 5970043..f919811 100644
--- a/drivers/acpi/acpica/Makefile
+++ b/drivers/acpi/acpica/Makefile
@@ -3,7 +3,7 @@
# Makefile for ACPICA Core interpreter
#
-ccflags-y := -Os -D_LINUX -DBUILDING_ACPICA
+ccflags-y := -D_LINUX -DBUILDING_ACPICA
ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT
# use acpi.o to put all files here into acpi.o modparam namespace
diff --git a/drivers/acpi/acpica/hwvalid.c b/drivers/acpi/acpica/hwvalid.c
index e15badf..c6716f9 100644
--- a/drivers/acpi/acpica/hwvalid.c
+++ b/drivers/acpi/acpica/hwvalid.c
@@ -23,8 +23,8 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width);
*
* The table is used to implement the Microsoft port access rules that
* first appeared in Windows XP. Some ports are always illegal, and some
- * ports are only illegal if the BIOS calls _OSI with a win_XP string or
- * later (meaning that the BIOS itelf is post-XP.)
+ * ports are only illegal if the BIOS calls _OSI with nothing newer than
+ * the specific _OSI strings.
*
* This provides ACPICA with the desired port protections and
* Microsoft compatibility.
@@ -145,7 +145,8 @@ acpi_hw_validate_io_request(acpi_io_address address, u32 bit_width)
/* Port illegality may depend on the _OSI calls made by the BIOS */
- if (acpi_gbl_osi_data >= port_info->osi_dependency) {
+ if (port_info->osi_dependency == ACPI_ALWAYS_ILLEGAL ||
+ acpi_gbl_osi_data == port_info->osi_dependency) {
ACPI_DEBUG_PRINT((ACPI_DB_VALUES,
"Denied AML access to port 0x%8.8X%8.8X/%X (%s 0x%.4X-0x%.4X)\n",
ACPI_FORMAT_UINT64(address),
diff --git a/drivers/acpi/acpica/nsrepair.c b/drivers/acpi/acpica/nsrepair.c
index 499067d..1b8677f 100644
--- a/drivers/acpi/acpica/nsrepair.c
+++ b/drivers/acpi/acpica/nsrepair.c
@@ -181,8 +181,9 @@ acpi_ns_simple_repair(struct acpi_evaluate_info *info,
* Try to fix if there was no return object. Warning if failed to fix.
*/
if (!return_object) {
- if (expected_btypes && (!(expected_btypes & ACPI_RTYPE_NONE))) {
- if (package_index != ACPI_NOT_PACKAGE_ELEMENT) {
+ if (expected_btypes) {
+ if (!(expected_btypes & ACPI_RTYPE_NONE) &&
+ package_index != ACPI_NOT_PACKAGE_ELEMENT) {
ACPI_WARN_PREDEFINED((AE_INFO,
info->full_pathname,
ACPI_WARN_ALWAYS,
@@ -196,14 +197,15 @@ acpi_ns_simple_repair(struct acpi_evaluate_info *info,
if (ACPI_SUCCESS(status)) {
return (AE_OK); /* Repair was successful */
}
- } else {
+ }
+
+ if (expected_btypes != ACPI_RTYPE_NONE) {
ACPI_WARN_PREDEFINED((AE_INFO,
info->full_pathname,
ACPI_WARN_ALWAYS,
"Missing expected return value"));
+ return (AE_AML_NO_RETURN_VALUE);
}
-
- return (AE_AML_NO_RETURN_VALUE);
}
}
diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
index 56db7b4..c756915 100644
--- a/drivers/acpi/battery.c
+++ b/drivers/acpi/battery.c
@@ -449,7 +449,7 @@ static int extract_package(struct acpi_battery *battery,
if (element->type == ACPI_TYPE_STRING ||
element->type == ACPI_TYPE_BUFFER)
- strncpy(ptr, element->string.pointer, 32);
+ strscpy(ptr, element->string.pointer, 32);
else if (element->type == ACPI_TYPE_INTEGER) {
strncpy(ptr, (u8 *)&element->integer.value,
sizeof(u64));
diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
index 7dd80ac..2575d6c5 100644
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -3676,8 +3676,8 @@ void acpi_nfit_shutdown(void *data)
mutex_lock(&acpi_desc->init_mutex);
set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
- cancel_delayed_work_sync(&acpi_desc->dwork);
mutex_unlock(&acpi_desc->init_mutex);
+ cancel_delayed_work_sync(&acpi_desc->dwork);
/*
* Bounce the nvdimm bus lock to make sure any in-flight
diff --git a/drivers/acpi/prmt.c b/drivers/acpi/prmt.c
index 89c22bc..09c0af8 100644
--- a/drivers/acpi/prmt.c
+++ b/drivers/acpi/prmt.c
@@ -219,6 +219,11 @@ static acpi_status acpi_platformrt_space_handler(u32 function,
efi_status_t status;
struct prm_context_buffer context;
+ if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
+ pr_err_ratelimited("PRM: EFI runtime services no longer available\n");
+ return AE_NO_HANDLER;
+ }
+
/*
* The returned acpi_status will always be AE_OK. Error values will be
* saved in the first byte of the PRM message buffer to be used by ASL.
@@ -308,6 +313,11 @@ void __init init_prmt(void)
pr_info("PRM: found %u modules\n", mc);
+ if (!efi_enabled(EFI_RUNTIME_SERVICES)) {
+ pr_err("PRM: EFI runtime services unavailable\n");
+ return;
+ }
+
status = acpi_install_address_space_handler(ACPI_ROOT_OBJECT,
ACPI_ADR_SPACE_PLATFORM_RT,
&acpi_platformrt_space_handler,
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index dc880da..e9116db 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -531,10 +531,27 @@ static void wait_for_freeze(void)
/* No delay is needed if we are in guest */
if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
return;
+ /*
+ * Modern (>=Nehalem) Intel systems use ACPI via intel_idle,
+ * not this code. Assume that any Intel systems using this
+ * are ancient and may need the dummy wait. This also assumes
+ * that the motivating chipset issue was Intel-only.
+ */
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ return;
#endif
- /* Dummy wait op - must do something useless after P_LVL2 read
- because chipsets cannot guarantee that STPCLK# signal
- gets asserted in time to freeze execution properly. */
+ /*
+ * Dummy wait op - must do something useless after P_LVL2 read
+ * because chipsets cannot guarantee that STPCLK# signal gets
+ * asserted in time to freeze execution properly
+ *
+ * This workaround has been in place since the original ACPI
+ * implementation was merged, circa 2002.
+ *
+ * If a profile is pointing to this instruction, please first
+ * consider moving your system to a more modern idle
+ * mechanism.
+ */
inl(acpi_gbl_FADT.xpm_timer_block.address);
}
diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
index 3392194..3b9f894 100644
--- a/drivers/acpi/resource.c
+++ b/drivers/acpi/resource.c
@@ -435,17 +435,34 @@ static const struct dmi_system_id lenovo_laptop[] = {
{ }
};
-static const struct dmi_system_id schenker_gm_rg[] = {
+static const struct dmi_system_id tongfang_gm_rg[] = {
{
- .ident = "XMG CORE 15 (M22)",
+ .ident = "TongFang GMxRGxx/XMG CORE 15 (M22)/TUXEDO Stellaris 15 Gen4 AMD",
.matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "SchenkerTechnologiesGmbH"),
DMI_MATCH(DMI_BOARD_NAME, "GMxRGxx"),
},
},
{ }
};
+static const struct dmi_system_id maingear_laptop[] = {
+ {
+ .ident = "MAINGEAR Vector Pro 2 15",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Micro Electronics Inc"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "MG-VCP2-15A3070T"),
+ }
+ },
+ {
+ .ident = "MAINGEAR Vector Pro 2 17",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Micro Electronics Inc"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "MG-VCP2-17A3070T"),
+ },
+ },
+ { }
+};
+
struct irq_override_cmp {
const struct dmi_system_id *system;
unsigned char irq;
@@ -460,7 +477,8 @@ static const struct irq_override_cmp override_table[] = {
{ asus_laptop, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
{ lenovo_laptop, 6, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
{ lenovo_laptop, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, true },
- { schenker_gm_rg, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
+ { tongfang_gm_rg, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
+ { maingear_laptop, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
};
static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
@@ -771,6 +789,23 @@ int acpi_dev_get_dma_resources(struct acpi_device *adev, struct list_head *list)
EXPORT_SYMBOL_GPL(acpi_dev_get_dma_resources);
/**
+ * acpi_dev_get_memory_resources - Get current memory resources of a device.
+ * @adev: ACPI device node to get the resources for.
+ * @list: Head of the resultant list of resources (must be empty).
+ *
+ * This is a helper function that locates all memory type resources of @adev
+ * with acpi_dev_get_resources().
+ *
+ * The number of resources in the output list is returned on success, an error
+ * code reflecting the error condition is returned otherwise.
+ */
+int acpi_dev_get_memory_resources(struct acpi_device *adev, struct list_head *list)
+{
+ return acpi_dev_get_resources(adev, list, is_memory, NULL);
+}
+EXPORT_SYMBOL_GPL(acpi_dev_get_memory_resources);
+
+/**
* acpi_dev_filter_resource_type - Filter ACPI resource according to resource
* types
* @ares: Input ACPI resource object.
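
The new acpi_dev_get_memory_resources() helper simply runs acpi_dev_get_resources() with the existing is_memory() filter. A hedged usage sketch from a caller's point of view (the counting function is illustrative only):

/* Sketch: enumerate an ACPI device's memory resources, then free the list. */
static int example_count_mem_windows(struct acpi_device *adev)
{
	struct resource_entry *entry;
	LIST_HEAD(res_list);
	int count, n = 0;

	count = acpi_dev_get_memory_resources(adev, &res_list);
	if (count < 0)
		return count;			/* error code from the walk */

	resource_list_for_each_entry(entry, &res_list)
		n++;				/* entry->res is one memory window */

	acpi_dev_free_resource_list(&res_list);
	return n;
}
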
diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
index b137131..038542b 100644
--- a/drivers/acpi/video_detect.c
+++ b/drivers/acpi/video_detect.c
@@ -313,7 +313,7 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
.ident = "Lenovo Ideapad Z570",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
- DMI_MATCH(DMI_PRODUCT_NAME, "102434U"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "Ideapad Z570"),
},
},
{
diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c
index 2af1ae1..4a11a38 100644
--- a/drivers/acpi/x86/s2idle.c
+++ b/drivers/acpi/x86/s2idle.c
@@ -86,6 +86,8 @@ struct lpi_device_constraint_amd {
int min_dstate;
};
+static LIST_HEAD(lps0_s2idle_devops_head);
+
static struct lpi_constraints *lpi_constraints_table;
static int lpi_constraints_table_size;
static int rev_id;
@@ -434,6 +436,8 @@ static struct acpi_scan_handler lps0_handler = {
int acpi_s2idle_prepare_late(void)
{
+ struct acpi_s2idle_dev_ops *handler;
+
if (!lps0_device_handle || sleep_no_lps0)
return 0;
@@ -464,14 +468,26 @@ int acpi_s2idle_prepare_late(void)
acpi_sleep_run_lps0_dsm(ACPI_LPS0_MS_ENTRY,
lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
}
+
+ list_for_each_entry(handler, &lps0_s2idle_devops_head, list_node) {
+ if (handler->prepare)
+ handler->prepare();
+ }
+
return 0;
}
void acpi_s2idle_restore_early(void)
{
+ struct acpi_s2idle_dev_ops *handler;
+
if (!lps0_device_handle || sleep_no_lps0)
return;
+ list_for_each_entry(handler, &lps0_s2idle_devops_head, list_node)
+ if (handler->restore)
+ handler->restore();
+
/* Modern standby exit */
if (lps0_dsm_func_mask_microsoft > 0)
acpi_sleep_run_lps0_dsm(ACPI_LPS0_MS_EXIT,
@@ -514,4 +530,28 @@ void acpi_s2idle_setup(void)
s2idle_set_ops(&acpi_s2idle_ops_lps0);
}
+int acpi_register_lps0_dev(struct acpi_s2idle_dev_ops *arg)
+{
+ if (!lps0_device_handle || sleep_no_lps0)
+ return -ENODEV;
+
+ lock_system_sleep();
+ list_add(&arg->list_node, &lps0_s2idle_devops_head);
+ unlock_system_sleep();
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(acpi_register_lps0_dev);
+
+void acpi_unregister_lps0_dev(struct acpi_s2idle_dev_ops *arg)
+{
+ if (!lps0_device_handle || sleep_no_lps0)
+ return;
+
+ lock_system_sleep();
+ list_del(&arg->list_node);
+ unlock_system_sleep();
+}
+EXPORT_SYMBOL_GPL(acpi_unregister_lps0_dev);
+
#endif /* CONFIG_SUSPEND */
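
acpi_register_lps0_dev()/acpi_unregister_lps0_dev() give a driver prepare/restore hooks that run at the end of acpi_s2idle_prepare_late() and at the start of acpi_s2idle_restore_early(). A minimal, hedged consumer sketch; the ops and init/exit functions are hypothetical:

/* Sketch: hooking the LPS0 entry/exit path from a hypothetical driver. */
static void my_s2idle_prepare(void)
{
	/* quiesce device-specific state before modern-standby entry */
}

static void my_s2idle_restore(void)
{
	/* undo it on the way back out */
}

static struct acpi_s2idle_dev_ops my_s2idle_ops = {
	.prepare = my_s2idle_prepare,
	.restore = my_s2idle_restore,
};

static int my_init(void)
{
	/* returns -ENODEV when there is no usable LPS0 device handle */
	return acpi_register_lps0_dev(&my_s2idle_ops);
}

static void my_exit(void)
{
	acpi_unregister_lps0_dev(&my_s2idle_ops);
}
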
diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
index 222b951..f1dd086 100644
--- a/drivers/acpi/x86/utils.c
+++ b/drivers/acpi/x86/utils.c
@@ -191,37 +191,26 @@ bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *s
* a hardcoded allowlist for D3 support, which was used for these platforms.
*
* This allows quirking on Linux in a similar fashion.
+ *
+ * Cezanne systems shouldn't *normally* need this as the BIOS includes
+ * StorageD3Enable. But for two reasons we have added it.
+ * 1) The BIOS on a number of Dell systems have ambiguity
+ * between the same value used for _ADR on ACPI nodes GPP1.DEV0 and GPP1.NVME.
+ * GPP1.NVME is needed to get StorageD3Enable node set properly.
+ * https://bugzilla.kernel.org/show_bug.cgi?id=216440
+ * https://bugzilla.kernel.org/show_bug.cgi?id=216773
+ * https://bugzilla.kernel.org/show_bug.cgi?id=217003
+ * 2) On at least one HP system StorageD3Enable is missing on the second NVME
+ *    disk in the system.
*/
static const struct x86_cpu_id storage_d3_cpu_ids[] = {
X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 96, NULL), /* Renoir */
X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 104, NULL), /* Lucienne */
- {}
-};
-
-static const struct dmi_system_id force_storage_d3_dmi[] = {
- {
- /*
- * _ADR is ambiguous between GPP1.DEV0 and GPP1.NVME
- * but .NVME is needed to get StorageD3Enable node
- * https://bugzilla.kernel.org/show_bug.cgi?id=216440
- */
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 14 7425 2-in-1"),
- }
- },
- {
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 16 5625"),
- }
- },
+ X86_MATCH_VENDOR_FAM_MODEL(AMD, 25, 80, NULL), /* Cezanne */
{}
};
bool force_storage_d3(void)
{
- const struct dmi_system_id *dmi_id = dmi_first_match(force_storage_d3_dmi);
-
- return dmi_id || x86_match_cpu(storage_d3_cpu_ids);
+ return x86_match_cpu(storage_d3_cpu_ids);
}
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 00c6c03..c8d33c5 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -2269,16 +2269,266 @@ static int binder_translate_fd(u32 fd, binder_size_t fd_offset,
return ret;
}
-static int binder_translate_fd_array(struct binder_fd_array_object *fda,
+/**
+ * struct binder_ptr_fixup - data to be fixed-up in target buffer
+ * @offset offset in target buffer to fixup
+ * @skip_size bytes to skip in copy (fixup will be written later)
+ * @fixup_data data to write at fixup offset
+ * @node list node
+ *
+ * This is used for the pointer fixup list (pf) which is created and consumed
+ * during binder_transaction() and is only accessed locally. No
+ * locking is necessary.
+ *
+ * The list is ordered by @offset.
+ */
+struct binder_ptr_fixup {
+ binder_size_t offset;
+ size_t skip_size;
+ binder_uintptr_t fixup_data;
+ struct list_head node;
+};
+
+/**
+ * struct binder_sg_copy - scatter-gather data to be copied
+ * @offset offset in target buffer
+ * @sender_uaddr user address in source buffer
+ * @length bytes to copy
+ * @node list node
+ *
+ * This is used for the sg copy list (sgc) which is created and consumed
+ * during binder_transaction() and is only accessed locally. No
+ * locking is necessary.
+ *
+ * The list is ordered by @offset.
+ */
+struct binder_sg_copy {
+ binder_size_t offset;
+ const void __user *sender_uaddr;
+ size_t length;
+ struct list_head node;
+};
+
+/**
+ * binder_do_deferred_txn_copies() - copy and fixup scatter-gather data
+ * @alloc: binder_alloc associated with @buffer
+ * @buffer: binder buffer in target process
+ * @sgc_head: list_head of scatter-gather copy list
+ * @pf_head: list_head of pointer fixup list
+ *
+ * Processes all elements of @sgc_head, applying fixups from @pf_head
+ * and copying the scatter-gather data from the source process' user
+ * buffer to the target's buffer. It is expected that the list creation
+ * and processing all occurs during binder_transaction() so these lists
+ * are only accessed in local context.
+ *
+ * Return: 0=success, else -errno
+ */
+static int binder_do_deferred_txn_copies(struct binder_alloc *alloc,
+ struct binder_buffer *buffer,
+ struct list_head *sgc_head,
+ struct list_head *pf_head)
+{
+ int ret = 0;
+ struct binder_sg_copy *sgc, *tmpsgc;
+ struct binder_ptr_fixup *tmppf;
+ struct binder_ptr_fixup *pf =
+ list_first_entry_or_null(pf_head, struct binder_ptr_fixup,
+ node);
+
+ list_for_each_entry_safe(sgc, tmpsgc, sgc_head, node) {
+ size_t bytes_copied = 0;
+
+ while (bytes_copied < sgc->length) {
+ size_t copy_size;
+ size_t bytes_left = sgc->length - bytes_copied;
+ size_t offset = sgc->offset + bytes_copied;
+
+ /*
+ * We copy up to the fixup (pointed to by pf)
+ */
+ copy_size = pf ? min(bytes_left, (size_t)pf->offset - offset)
+ : bytes_left;
+ if (!ret && copy_size)
+ ret = binder_alloc_copy_user_to_buffer(
+ alloc, buffer,
+ offset,
+ sgc->sender_uaddr + bytes_copied,
+ copy_size);
+ bytes_copied += copy_size;
+ if (copy_size != bytes_left) {
+ BUG_ON(!pf);
+ /* we stopped at a fixup offset */
+ if (pf->skip_size) {
+ /*
+ * we are just skipping. This is for
+ * BINDER_TYPE_FDA where the translated
+ * fds will be fixed up when we get
+ * to target context.
+ */
+ bytes_copied += pf->skip_size;
+ } else {
+ /* apply the fixup indicated by pf */
+ if (!ret)
+ ret = binder_alloc_copy_to_buffer(
+ alloc, buffer,
+ pf->offset,
+ &pf->fixup_data,
+ sizeof(pf->fixup_data));
+ bytes_copied += sizeof(pf->fixup_data);
+ }
+ list_del(&pf->node);
+ kfree(pf);
+ pf = list_first_entry_or_null(pf_head,
+ struct binder_ptr_fixup, node);
+ }
+ }
+ list_del(&sgc->node);
+ kfree(sgc);
+ }
+ list_for_each_entry_safe(pf, tmppf, pf_head, node) {
+ BUG_ON(pf->skip_size == 0);
+ list_del(&pf->node);
+ kfree(pf);
+ }
+ BUG_ON(!list_empty(sgc_head));
+
+ return ret > 0 ? -EINVAL : ret;
+}
+
+/**
+ * binder_cleanup_deferred_txn_lists() - free specified lists
+ * @sgc_head: list_head of scatter-gather copy list
+ * @pf_head: list_head of pointer fixup list
+ *
+ * Called to clean up @sgc_head and @pf_head if there is an
+ * error.
+ */
+static void binder_cleanup_deferred_txn_lists(struct list_head *sgc_head,
+ struct list_head *pf_head)
+{
+ struct binder_sg_copy *sgc, *tmpsgc;
+ struct binder_ptr_fixup *pf, *tmppf;
+
+ list_for_each_entry_safe(sgc, tmpsgc, sgc_head, node) {
+ list_del(&sgc->node);
+ kfree(sgc);
+ }
+ list_for_each_entry_safe(pf, tmppf, pf_head, node) {
+ list_del(&pf->node);
+ kfree(pf);
+ }
+}
+
+/**
+ * binder_defer_copy() - queue a scatter-gather buffer for copy
+ * @sgc_head: list_head of scatter-gather copy list
+ * @offset: binder buffer offset in target process
+ * @sender_uaddr: user address in source process
+ * @length: bytes to copy
+ *
+ * Specify a scatter-gather block to be copied. The actual copy must
+ * be deferred until all the needed fixups are identified and queued.
+ * Then the copy and fixups are done together so un-translated values
+ * from the source are never visible in the target buffer.
+ *
+ * We are guaranteed that repeated calls to this function will have
+ * monotonically increasing @offset values so the list will naturally
+ * be ordered.
+ *
+ * Return: 0=success, else -errno
+ */
+static int binder_defer_copy(struct list_head *sgc_head, binder_size_t offset,
+ const void __user *sender_uaddr, size_t length)
+{
+ struct binder_sg_copy *bc = kzalloc(sizeof(*bc), GFP_KERNEL);
+
+ if (!bc)
+ return -ENOMEM;
+
+ bc->offset = offset;
+ bc->sender_uaddr = sender_uaddr;
+ bc->length = length;
+ INIT_LIST_HEAD(&bc->node);
+
+ /*
+ * We are guaranteed that the deferred copies are in-order
+ * so just add to the tail.
+ */
+ list_add_tail(&bc->node, sgc_head);
+
+ return 0;
+}
+
+/**
+ * binder_add_fixup() - queue a fixup to be applied to sg copy
+ * @pf_head: list_head of binder ptr fixup list
+ * @offset: binder buffer offset in target process
+ * @fixup: bytes to be copied for fixup
+ * @skip_size: bytes to skip when copying (fixup will be applied later)
+ *
+ * Add the specified fixup to a list ordered by @offset. When copying
+ * the scatter-gather buffers, the fixup will be copied instead of
+ * data from the source buffer. For BINDER_TYPE_FDA fixups, the fixup
+ * will be applied later (in target process context), so we just skip
+ * the bytes specified by @skip_size. If @skip_size is 0, we copy the
+ * value in @fixup.
+ *
+ * This function is called *mostly* in @offset order, but there are
+ * exceptions. Since out-of-order inserts are relatively uncommon,
+ * we insert the new element by searching backward from the tail of
+ * the list.
+ *
+ * Return: 0=success, else -errno
+ */
+static int binder_add_fixup(struct list_head *pf_head, binder_size_t offset,
+ binder_uintptr_t fixup, size_t skip_size)
+{
+ struct binder_ptr_fixup *pf = kzalloc(sizeof(*pf), GFP_KERNEL);
+ struct binder_ptr_fixup *tmppf;
+
+ if (!pf)
+ return -ENOMEM;
+
+ pf->offset = offset;
+ pf->fixup_data = fixup;
+ pf->skip_size = skip_size;
+ INIT_LIST_HEAD(&pf->node);
+
+ /* Fixups are *mostly* added in-order, but there are some
+ * exceptions. Look backwards through list for insertion point.
+ */
+ list_for_each_entry_reverse(tmppf, pf_head, node) {
+ if (tmppf->offset < pf->offset) {
+ list_add(&pf->node, &tmppf->node);
+ return 0;
+ }
+ }
+ /*
+ * if we get here, then the new offset is the lowest so
+ * insert at the head
+ */
+ list_add(&pf->node, pf_head);
+ return 0;
+}
+
+static int binder_translate_fd_array(struct list_head *pf_head,
+ struct binder_fd_array_object *fda,
+ const void __user *sender_ubuffer,
struct binder_buffer_object *parent,
+ struct binder_buffer_object *sender_uparent,
struct binder_transaction *t,
struct binder_thread *thread,
struct binder_transaction *in_reply_to)
{
binder_size_t fdi, fd_buf_size;
binder_size_t fda_offset;
+ const void __user *sender_ufda_base;
struct binder_proc *proc = thread->proc;
- struct binder_proc *target_proc = t->to_proc;
+ int ret;
+
+ if (fda->num_fds == 0)
+ return 0;
fd_buf_size = sizeof(u32) * fda->num_fds;
if (fda->num_fds >= SIZE_MAX / sizeof(u32)) {
@@ -2302,19 +2552,25 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
*/
fda_offset = (parent->buffer - (uintptr_t)t->buffer->user_data) +
fda->parent_offset;
- if (!IS_ALIGNED((unsigned long)fda_offset, sizeof(u32))) {
+ sender_ufda_base = (void __user *)(uintptr_t)sender_uparent->buffer +
+ fda->parent_offset;
+
+ if (!IS_ALIGNED((unsigned long)fda_offset, sizeof(u32)) ||
+ !IS_ALIGNED((unsigned long)sender_ufda_base, sizeof(u32))) {
binder_user_error("%d:%d parent offset not aligned correctly.\n",
proc->pid, thread->pid);
return -EINVAL;
}
+ ret = binder_add_fixup(pf_head, fda_offset, 0, fda->num_fds * sizeof(u32));
+ if (ret)
+ return ret;
+
for (fdi = 0; fdi < fda->num_fds; fdi++) {
u32 fd;
- int ret;
binder_size_t offset = fda_offset + fdi * sizeof(fd);
+ binder_size_t sender_uoffset = fdi * sizeof(fd);
- ret = binder_alloc_copy_from_buffer(&target_proc->alloc,
- &fd, t->buffer,
- offset, sizeof(fd));
+ ret = copy_from_user(&fd, sender_ufda_base + sender_uoffset, sizeof(fd));
if (!ret)
ret = binder_translate_fd(fd, offset, t, thread,
in_reply_to);
@@ -2324,7 +2580,8 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
return 0;
}
-static int binder_fixup_parent(struct binder_transaction *t,
+static int binder_fixup_parent(struct list_head *pf_head,
+ struct binder_transaction *t,
struct binder_thread *thread,
struct binder_buffer_object *bp,
binder_size_t off_start_offset,
@@ -2370,14 +2627,7 @@ static int binder_fixup_parent(struct binder_transaction *t,
}
buffer_offset = bp->parent_offset +
(uintptr_t)parent->buffer - (uintptr_t)b->user_data;
- if (binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset,
- &bp->buffer, sizeof(bp->buffer))) {
- binder_user_error("%d:%d got transaction with invalid parent offset\n",
- proc->pid, thread->pid);
- return -EINVAL;
- }
-
- return 0;
+ return binder_add_fixup(pf_head, buffer_offset, bp->buffer, 0);
}
/**
@@ -2519,8 +2769,12 @@ static void binder_transaction(struct binder_proc *proc,
int t_debug_id = atomic_inc_return(&binder_last_id);
char *secctx = NULL;
u32 secctx_sz = 0;
+ struct list_head sgc_head;
+ struct list_head pf_head;
const void __user *user_buffer = (const void __user *)
(uintptr_t)tr->data.ptr.buffer;
+ INIT_LIST_HEAD(&sgc_head);
+ INIT_LIST_HEAD(&pf_head);
e = binder_transaction_log_add(&binder_transaction_log);
e->debug_id = t_debug_id;
@@ -2987,6 +3241,8 @@ static void binder_transaction(struct binder_proc *proc,
case BINDER_TYPE_FDA: {
struct binder_object ptr_object;
binder_size_t parent_offset;
+ struct binder_object user_object;
+ size_t user_parent_size;
struct binder_fd_array_object *fda =
to_binder_fd_array_object(hdr);
size_t num_valid = (buffer_offset - off_start_offset) /
@@ -3018,8 +3274,27 @@ static void binder_transaction(struct binder_proc *proc,
return_error_line = __LINE__;
goto err_bad_parent;
}
- ret = binder_translate_fd_array(fda, parent, t, thread,
- in_reply_to);
+ /*
+ * We need to read the user version of the parent
+ * object to get the original user offset
+ */
+ user_parent_size =
+ binder_get_object(proc, user_buffer, t->buffer,
+ parent_offset, &user_object);
+ if (user_parent_size != sizeof(user_object.bbo)) {
+ binder_user_error("%d:%d invalid ptr object size: %zd vs %zd\n",
+ proc->pid, thread->pid,
+ user_parent_size,
+ sizeof(user_object.bbo));
+ return_error = BR_FAILED_REPLY;
+ return_error_param = -EINVAL;
+ return_error_line = __LINE__;
+ goto err_bad_parent;
+ }
+ ret = binder_translate_fd_array(&pf_head, fda,
+ user_buffer, parent,
+ &user_object.bbo, t,
+ thread, in_reply_to);
if (!ret)
ret = binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer,
@@ -3049,19 +3324,14 @@ static void binder_transaction(struct binder_proc *proc,
return_error_line = __LINE__;
goto err_bad_offset;
}
- if (binder_alloc_copy_user_to_buffer(
- &target_proc->alloc,
- t->buffer,
- sg_buf_offset,
- (const void __user *)
- (uintptr_t)bp->buffer,
- bp->length)) {
- binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
- proc->pid, thread->pid);
- return_error_param = -EFAULT;
+ ret = binder_defer_copy(&sgc_head, sg_buf_offset,
+ (const void __user *)(uintptr_t)bp->buffer,
+ bp->length);
+ if (ret) {
return_error = BR_FAILED_REPLY;
+ return_error_param = ret;
return_error_line = __LINE__;
- goto err_copy_data_failed;
+ goto err_translate_failed;
}
/* Fixup buffer pointer to target proc address space */
bp->buffer = (uintptr_t)
@@ -3070,7 +3340,8 @@ static void binder_transaction(struct binder_proc *proc,
num_valid = (buffer_offset - off_start_offset) /
sizeof(binder_size_t);
- ret = binder_fixup_parent(t, thread, bp,
+ ret = binder_fixup_parent(&pf_head, t,
+ thread, bp,
off_start_offset,
num_valid,
last_fixup_obj_off,
@@ -3110,6 +3381,17 @@ static void binder_transaction(struct binder_proc *proc,
return_error_line = __LINE__;
goto err_copy_data_failed;
}
+
+ ret = binder_do_deferred_txn_copies(&target_proc->alloc, t->buffer,
+ &sgc_head, &pf_head);
+ if (ret) {
+ binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
+ proc->pid, thread->pid);
+ return_error = BR_FAILED_REPLY;
+ return_error_param = ret;
+ return_error_line = __LINE__;
+ goto err_copy_data_failed;
+ }
if (t->buffer->oneway_spam_suspect)
tcomplete->type = BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT;
else
@@ -3183,6 +3465,7 @@ static void binder_transaction(struct binder_proc *proc,
err_bad_offset:
err_bad_parent:
err_copy_data_failed:
+ binder_cleanup_deferred_txn_lists(&sgc_head, &pf_head);
binder_free_txn_fixups(t);
trace_binder_transaction_failed_buffer_release(t->buffer);
binder_transaction_buffer_release(target_proc, NULL, t->buffer,
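
The new helpers split the scatter-gather handling into a deferral phase and a single combined copy-plus-fixup pass, so untranslated source values are never visible in the target buffer. A condensed sketch of the call ordering inside binder_transaction() that the comments above describe; arguments are abbreviated and error handling is collapsed:

/* Sketch: ordering of the deferred-copy helpers (arguments elided). */
LIST_HEAD(sgc_head);	/* deferred scatter-gather copies, ordered by offset */
LIST_HEAD(pf_head);	/* pointer / fd-array fixups, ordered by offset */

/* while translating objects: */
ret = binder_defer_copy(&sgc_head, sg_buf_offset, sender_uptr, length);
ret = binder_add_fixup(&pf_head, buffer_offset, translated_pointer, 0);
/* BINDER_TYPE_FDA: skip_size > 0 marks a region patched later in target context */
ret = binder_add_fixup(&pf_head, fda_offset, 0, num_fds * sizeof(u32));

/* after all objects are translated: one pass copies data and applies fixups */
ret = binder_do_deferred_txn_copies(&target_proc->alloc, t->buffer,
				    &sgc_head, &pf_head);

/* on any error path the pending entries are released instead */
binder_cleanup_deferred_txn_lists(&sgc_head, &pf_head);
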
diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index c430cd3..025260b 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -3076,7 +3076,7 @@ int sata_down_spd_limit(struct ata_link *link, u32 spd_limit)
*/
if (spd > 1)
mask &= (1 << (spd - 1)) - 1;
- else
+ else if (link->sata_spd)
return -EINVAL;
/* were we already at the bottom? */
diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
index 681cb37..49cb453 100644
--- a/drivers/atm/idt77252.c
+++ b/drivers/atm/idt77252.c
@@ -2909,6 +2909,7 @@ close_card_oam(struct idt77252_dev *card)
recycle_rx_pool_skb(card, &vc->rcv.rx_pool);
}
+ kfree(vc);
}
}
}
@@ -2952,6 +2953,15 @@ open_card_ubr0(struct idt77252_dev *card)
return 0;
}
+static void
+close_card_ubr0(struct idt77252_dev *card)
+{
+ struct vc_map *vc = card->vcs[0];
+
+ free_scq(card, vc->scq);
+ kfree(vc);
+}
+
static int
idt77252_dev_open(struct idt77252_dev *card)
{
@@ -3001,6 +3011,7 @@ static void idt77252_dev_close(struct atm_dev *dev)
struct idt77252_dev *card = dev->dev_data;
u32 conf;
+ close_card_ubr0(card);
close_card_oam(card);
conf = SAR_CFG_RXPTH | /* enable receive path */
diff --git a/drivers/auxdisplay/hd44780.c b/drivers/auxdisplay/hd44780.c
index 8b2a0eb..d56a5d5 100644
--- a/drivers/auxdisplay/hd44780.c
+++ b/drivers/auxdisplay/hd44780.c
@@ -322,8 +322,10 @@ static int hd44780_probe(struct platform_device *pdev)
static int hd44780_remove(struct platform_device *pdev)
{
struct charlcd *lcd = platform_get_drvdata(pdev);
+ struct hd44780_common *hdc = lcd->drvdata;
charlcd_unregister(lcd);
+ kfree(hdc->hd44780);
kfree(lcd->drvdata);
kfree(lcd);
diff --git a/drivers/base/component.c b/drivers/base/component.c
index 870485c..058f1a2 100644
--- a/drivers/base/component.c
+++ b/drivers/base/component.c
@@ -130,7 +130,7 @@ static void component_master_debugfs_add(struct master *m)
static void component_master_debugfs_del(struct master *m)
{
- debugfs_remove(debugfs_lookup(dev_name(m->parent), component_debugfs_dir));
+ debugfs_lookup_and_remove(dev_name(m->parent), component_debugfs_dir);
}
#else
diff --git a/drivers/base/core.c b/drivers/base/core.c
index 10e027e..adf003a 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -3328,7 +3328,7 @@ int device_add(struct device *dev)
/* we require the name to be set before, and pass NULL */
error = kobject_add(&dev->kobj, dev->kobj.parent, NULL);
if (error) {
- glue_dir = get_glue_dir(dev);
+ glue_dir = kobj;
goto Error;
}
@@ -3428,6 +3428,7 @@ int device_add(struct device *dev)
device_pm_remove(dev);
dpm_sysfs_remove(dev);
DPMError:
+ dev->driver = NULL;
bus_remove_device(dev);
BusError:
device_remove_attrs(dev);
diff --git a/drivers/base/dd.c b/drivers/base/dd.c
index 0603481..9cbf086 100644
--- a/drivers/base/dd.c
+++ b/drivers/base/dd.c
@@ -352,7 +352,7 @@ late_initcall(deferred_probe_initcall);
static void __exit deferred_probe_exit(void)
{
- debugfs_remove_recursive(debugfs_lookup("devices_deferred", NULL));
+ debugfs_lookup_and_remove("devices_deferred", NULL);
}
__exitcall(deferred_probe_exit);
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 94fe30c..24a82e2 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -217,13 +217,10 @@ static void genpd_debug_add(struct generic_pm_domain *genpd);
static void genpd_debug_remove(struct generic_pm_domain *genpd)
{
- struct dentry *d;
-
if (!genpd_debugfs_dir)
return;
- d = debugfs_lookup(genpd->name, genpd_debugfs_dir);
- debugfs_remove(d);
+ debugfs_lookup_and_remove(genpd->name, genpd_debugfs_dir);
}
static void genpd_update_accounting(struct generic_pm_domain *genpd)
diff --git a/drivers/base/property.c b/drivers/base/property.c
index 735a23d..17a648d 100644
--- a/drivers/base/property.c
+++ b/drivers/base/property.c
@@ -1055,26 +1055,32 @@ struct fwnode_handle *
fwnode_graph_get_next_endpoint(const struct fwnode_handle *fwnode,
struct fwnode_handle *prev)
{
+ struct fwnode_handle *ep, *port_parent = NULL;
const struct fwnode_handle *parent;
- struct fwnode_handle *ep;
/*
* If this function is in a loop and the previous iteration returned
* an endpoint from fwnode->secondary, then we need to use the secondary
* as parent rather than @fwnode.
*/
- if (prev)
- parent = fwnode_graph_get_port_parent(prev);
- else
+ if (prev) {
+ port_parent = fwnode_graph_get_port_parent(prev);
+ parent = port_parent;
+ } else {
parent = fwnode;
+ }
if (IS_ERR_OR_NULL(parent))
return NULL;
ep = fwnode_call_ptr_op(parent, graph_get_next_endpoint, prev);
if (ep)
- return ep;
+ goto out_put_port_parent;
- return fwnode_graph_get_next_endpoint(parent->secondary, NULL);
+ ep = fwnode_graph_get_next_endpoint(parent->secondary, NULL);
+
+out_put_port_parent:
+ fwnode_handle_put(port_parent);
+ return ep;
}
EXPORT_SYMBOL_GPL(fwnode_graph_get_next_endpoint);
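
The fix above makes sure the reference taken on the previous endpoint's port parent is dropped on every return path. A brief, hedged sketch of the iteration this function usually backs (the fwnode argument is illustrative):

/* Sketch: iterating graph endpoints; each pass hands the previous handle
 * back in as @prev, which is what the put above pairs with. */
struct fwnode_handle *ep;

fwnode_graph_for_each_endpoint(dev_fwnode(dev), ep) {
	/* use ep here; the next iteration consumes it as @prev */
}
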
diff --git a/drivers/base/test/test_async_driver_probe.c b/drivers/base/test/test_async_driver_probe.c
index 3bb7beb..c157a91 100644
--- a/drivers/base/test/test_async_driver_probe.c
+++ b/drivers/base/test/test_async_driver_probe.c
@@ -146,7 +146,7 @@ static int __init test_async_probe_init(void)
calltime = ktime_get();
for_each_online_cpu(cpu) {
nid = cpu_to_node(cpu);
- pdev = &sync_dev[sync_id];
+ pdev = &async_dev[async_id];
*pdev = test_platform_device_register_node("test_async_driver",
async_id,
diff --git a/drivers/base/transport_class.c b/drivers/base/transport_class.c
index ccc8620..09ee2a1 100644
--- a/drivers/base/transport_class.c
+++ b/drivers/base/transport_class.c
@@ -155,12 +155,27 @@ static int transport_add_class_device(struct attribute_container *cont,
struct device *dev,
struct device *classdev)
{
+ struct transport_class *tclass = class_to_transport_class(cont->class);
int error = attribute_container_add_class_device(classdev);
struct transport_container *tcont =
attribute_container_to_transport_container(cont);
- if (!error && tcont->statistics)
+ if (error)
+ goto err_remove;
+
+ if (tcont->statistics) {
error = sysfs_create_group(&classdev->kobj, tcont->statistics);
+ if (error)
+ goto err_del;
+ }
+
+ return 0;
+
+err_del:
+ attribute_container_class_device_del(classdev);
+err_remove:
+ if (tclass->remove)
+ tclass->remove(tcont, dev, classdev);
return error;
}
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 530b312..76ce6f7 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -78,11 +78,9 @@ static struct page *brd_lookup_page(struct brd_device *brd, sector_t sector)
}
/*
- * Look up and return a brd's page for a given sector.
- * If one does not exist, allocate an empty page, and insert that. Then
- * return it.
+ * Insert a new page for a given sector, if one does not already exist.
*/
-static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
+static int brd_insert_page(struct brd_device *brd, sector_t sector)
{
pgoff_t idx;
struct page *page;
@@ -90,7 +88,7 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
page = brd_lookup_page(brd, sector);
if (page)
- return page;
+ return 0;
/*
* Must use NOIO because we don't want to recurse back into the
@@ -99,11 +97,11 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
gfp_flags = GFP_NOIO | __GFP_ZERO | __GFP_HIGHMEM;
page = alloc_page(gfp_flags);
if (!page)
- return NULL;
+ return -ENOMEM;
if (radix_tree_preload(GFP_NOIO)) {
__free_page(page);
- return NULL;
+ return -ENOMEM;
}
spin_lock(&brd->brd_lock);
@@ -120,8 +118,7 @@ static struct page *brd_insert_page(struct brd_device *brd, sector_t sector)
spin_unlock(&brd->brd_lock);
radix_tree_preload_end();
-
- return page;
+ return 0;
}
/*
@@ -174,16 +171,17 @@ static int copy_to_brd_setup(struct brd_device *brd, sector_t sector, size_t n)
{
unsigned int offset = (sector & (PAGE_SECTORS-1)) << SECTOR_SHIFT;
size_t copy;
+ int ret;
copy = min_t(size_t, n, PAGE_SIZE - offset);
- if (!brd_insert_page(brd, sector))
- return -ENOSPC;
+ ret = brd_insert_page(brd, sector);
+ if (ret)
+ return ret;
if (copy < n) {
sector += copy >> SECTOR_SHIFT;
- if (!brd_insert_page(brd, sector))
- return -ENOSPC;
+ ret = brd_insert_page(brd, sector);
}
- return 0;
+ return ret;
}
/*
@@ -372,6 +370,7 @@ static int brd_alloc(int i)
struct brd_device *brd;
struct gendisk *disk;
char buf[DISK_NAME_LEN];
+ int err = -ENOMEM;
mutex_lock(&brd_devices_mutex);
list_for_each_entry(brd, &brd_devices, brd_list) {
@@ -422,16 +421,21 @@ static int brd_alloc(int i)
/* Tell the block layer that this is not a rotational device */
blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, disk->queue);
- add_disk(disk);
+ blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
+ err = add_disk(disk);
+ if (err)
+ goto out_cleanup_disk;
return 0;
+out_cleanup_disk:
+ blk_cleanup_disk(disk);
out_free_dev:
mutex_lock(&brd_devices_mutex);
list_del(&brd->brd_list);
mutex_unlock(&brd_devices_mutex);
kfree(brd);
- return -ENOMEM;
+ return err;
}
static void brd_probe(dev_t dev)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 68a0c0f..c96bdb3 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -736,6 +736,10 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
if (!file)
return -EBADF;
+
+ /* suppress uevents while reconfiguring the device */
+ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 1);
+
is_loop = is_loop_device(file);
error = loop_global_lock_killable(lo, is_loop);
if (error)
@@ -790,13 +794,18 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
fput(old_file);
if (partscan)
loop_reread_partitions(lo);
- return 0;
+
+ error = 0;
+done:
+ /* enable and uncork uevent now that we are done */
+ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
+ return error;
out_err:
loop_global_unlock(lo, is_loop);
out_putf:
fput(file);
- return error;
+ goto done;
}
/* loop sysfs attributes */
@@ -1152,13 +1161,13 @@ loop_set_status_from_info(struct loop_device *lo,
if (err)
return err;
+ /* Avoid assigning overflow values */
+ if (info->lo_offset > LLONG_MAX || info->lo_sizelimit > LLONG_MAX)
+ return -EOVERFLOW;
+
lo->lo_offset = info->lo_offset;
lo->lo_sizelimit = info->lo_sizelimit;
- /* loff_t vars have been assigned __u64 */
- if (lo->lo_offset < 0 || lo->lo_sizelimit < 0)
- return -EOVERFLOW;
-
memcpy(lo->lo_file_name, info->lo_file_name, LO_NAME_SIZE);
memcpy(lo->lo_crypt_name, info->lo_crypt_name, LO_NAME_SIZE);
lo->lo_file_name[LO_NAME_SIZE-1] = 0;
@@ -1256,6 +1265,9 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
goto out_unlock;
}
+ /* suppress uevents while reconfiguring the device */
+ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 1);
+
disk_force_media_change(lo->lo_disk, DISK_EVENT_MEDIA_CHANGE);
set_disk_ro(lo->lo_disk, (lo->lo_flags & LO_FLAGS_READ_ONLY) != 0);
@@ -1304,11 +1316,16 @@ static int loop_configure(struct loop_device *lo, fmode_t mode,
if (partscan)
lo->lo_disk->flags &= ~GENHD_FL_NO_PART_SCAN;
+ /* enable and uncork uevent now that we are done */
+ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
+
loop_global_unlock(lo, is_loop);
if (partscan)
loop_reread_partitions(lo);
+
if (!(mode & FMODE_EXCL))
bd_abort_claiming(bdev, loop_configure);
+
return 0;
out_unlock:
@@ -2188,35 +2205,44 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
static void loop_handle_cmd(struct loop_cmd *cmd)
{
+ struct cgroup_subsys_state *cmd_blkcg_css = cmd->blkcg_css;
+ struct cgroup_subsys_state *cmd_memcg_css = cmd->memcg_css;
struct request *rq = blk_mq_rq_from_pdu(cmd);
const bool write = op_is_write(req_op(rq));
struct loop_device *lo = rq->q->queuedata;
int ret = 0;
struct mem_cgroup *old_memcg = NULL;
+ const bool use_aio = cmd->use_aio;
if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY)) {
ret = -EIO;
goto failed;
}
- if (cmd->blkcg_css)
- kthread_associate_blkcg(cmd->blkcg_css);
- if (cmd->memcg_css)
+ if (cmd_blkcg_css)
+ kthread_associate_blkcg(cmd_blkcg_css);
+ if (cmd_memcg_css)
old_memcg = set_active_memcg(
- mem_cgroup_from_css(cmd->memcg_css));
+ mem_cgroup_from_css(cmd_memcg_css));
+ /*
+ * do_req_filebacked() may call blk_mq_complete_request() synchronously
+ * or asynchronously if using aio. Hence, do not touch 'cmd' after
+ * do_req_filebacked() has returned unless we are sure that 'cmd' has
+ * not yet been completed.
+ */
ret = do_req_filebacked(lo, rq);
- if (cmd->blkcg_css)
+ if (cmd_blkcg_css)
kthread_associate_blkcg(NULL);
- if (cmd->memcg_css) {
+ if (cmd_memcg_css) {
set_active_memcg(old_memcg);
- css_put(cmd->memcg_css);
+ css_put(cmd_memcg_css);
}
failed:
/* complete non-aio request */
- if (!cmd->use_aio || ret) {
+ if (!use_aio || ret) {
if (ret == -EOPNOTSUPP)
cmd->ret = ret;
else
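
The loop_handle_cmd() change above snapshots cmd->blkcg_css, cmd->memcg_css and cmd->use_aio into locals because do_req_filebacked() may complete the request, after which the cmd structure must not be touched. A hedged sketch of the general pattern, with hypothetical names rather than the driver's own:

/* Sketch: copy out what you need before a call that may complete and
 * recycle the object you got it from. */
static void handle(struct work_item *item)
{
	const bool use_aio = item->use_aio;	/* snapshot first */

	submit(item);		/* may free *item through async completion */

	if (!use_aio)
		finish_sync();	/* reads only the local copy */
}
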
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index ec2b5dd..ade8b83 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1167,11 +1167,11 @@ static int nbd_reconnect_socket(struct nbd_device *nbd, unsigned long arg)
return -ENOSPC;
}
-static void nbd_bdev_reset(struct block_device *bdev)
+static void nbd_bdev_reset(struct nbd_device *nbd)
{
- if (bdev->bd_openers > 1)
+ if (nbd->disk->part0->bd_openers > 1)
return;
- set_capacity(bdev->bd_disk, 0);
+ set_capacity(nbd->disk, 0);
}
static void nbd_parse_flags(struct nbd_device *nbd)
@@ -1337,7 +1337,7 @@ static int nbd_start_device(struct nbd_device *nbd)
return nbd_set_size(nbd, config->bytesize, nbd_blksize(config));
}
-static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *bdev)
+static int nbd_start_device_ioctl(struct nbd_device *nbd)
{
struct nbd_config *config = nbd->config;
int ret;
@@ -1358,7 +1358,7 @@ static int nbd_start_device_ioctl(struct nbd_device *nbd, struct block_device *b
flush_workqueue(nbd->recv_workq);
mutex_lock(&nbd->config_lock);
- nbd_bdev_reset(bdev);
+ nbd_bdev_reset(nbd);
/* user requested, ignore socket errors */
if (test_bit(NBD_RT_DISCONNECT_REQUESTED, &config->runtime_flags))
ret = 0;
@@ -1372,7 +1372,7 @@ static void nbd_clear_sock_ioctl(struct nbd_device *nbd,
{
nbd_clear_sock(nbd);
__invalidate_device(bdev, true);
- nbd_bdev_reset(bdev);
+ nbd_bdev_reset(nbd);
if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
&nbd->config->runtime_flags))
nbd_config_put(nbd);
@@ -1418,7 +1418,7 @@ static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
config->flags = arg;
return 0;
case NBD_DO_IT:
- return nbd_start_device_ioctl(nbd, bdev);
+ return nbd_start_device_ioctl(nbd);
case NBD_CLEAR_QUE:
/*
* This is for compatibility only. The queue is always cleared
@@ -1756,17 +1756,7 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
refcount_set(&nbd->refs, 0);
INIT_LIST_HEAD(&nbd->list);
disk->major = NBD_MAJOR;
-
- /* Too big first_minor can cause duplicate creation of
- * sysfs files/links, since index << part_shift might overflow, or
- * MKDEV() expect that the max bits of first_minor is 20.
- */
disk->first_minor = index << part_shift;
- if (disk->first_minor < index || disk->first_minor > MINORMASK) {
- err = -EINVAL;
- goto out_free_work;
- }
-
disk->minors = 1 << part_shift;
disk->fops = &nbd_fops;
disk->private_data = nbd;
@@ -1871,8 +1861,19 @@ static int nbd_genl_connect(struct sk_buff *skb, struct genl_info *info)
if (!netlink_capable(skb, CAP_SYS_ADMIN))
return -EPERM;
- if (info->attrs[NBD_ATTR_INDEX])
+ if (info->attrs[NBD_ATTR_INDEX]) {
index = nla_get_u32(info->attrs[NBD_ATTR_INDEX]);
+
+ /*
+ * Too big first_minor can cause duplicate creation of
+ * sysfs files/links, since index << part_shift might overflow, or
+ * MKDEV() expect that the max bits of first_minor is 20.
+ */
+ if (index < 0 || index > MINORMASK >> part_shift) {
+ printk(KERN_ERR "nbd: illegal input index %d\n", index);
+ return -EINVAL;
+ }
+ }
if (!info->attrs[NBD_ATTR_SOCKETS]) {
printk(KERN_ERR "nbd: must specify at least one socket\n");
return -EINVAL;
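
The relocated bound check keeps a netlink-supplied index from producing a first_minor that overflows the 20-bit minor space once shifted by part_shift. A worked example, assuming part_shift = 5 purely for illustration:

/*
 * MINORBITS = 20, so MINORMASK = (1 << 20) - 1 = 1048575
 * largest accepted index = MINORMASK >> 5 = 32767
 *   32767 << 5 = 1048544  (still within MINORMASK)
 *   32768 << 5 = 1048576  (no longer fits the minor bits MKDEV() expects)
 */
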
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 4c8b410..033b0f6 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1314,8 +1314,7 @@ static inline void nullb_complete_cmd(struct nullb_cmd *cmd)
case NULL_IRQ_SOFTIRQ:
switch (cmd->nq->dev->queue_mode) {
case NULL_Q_MQ:
- if (likely(!blk_should_fake_timeout(cmd->rq->q)))
- blk_mq_complete_request(cmd->rq);
+ blk_mq_complete_request(cmd->rq);
break;
case NULL_Q_BIO:
/*
@@ -1491,7 +1490,8 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
cmd->rq = bd->rq;
cmd->error = BLK_STS_OK;
cmd->nq = nq;
- cmd->fake_timeout = should_timeout_request(bd->rq);
+ cmd->fake_timeout = should_timeout_request(bd->rq) ||
+ blk_should_fake_timeout(bd->rq->q);
blk_mq_start_request(bd->rq);
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index c4a52f3..f9d298c 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -5296,8 +5296,7 @@ static void rbd_dev_release(struct device *dev)
module_put(THIS_MODULE);
}
-static struct rbd_device *__rbd_dev_create(struct rbd_client *rbdc,
- struct rbd_spec *spec)
+static struct rbd_device *__rbd_dev_create(struct rbd_spec *spec)
{
struct rbd_device *rbd_dev;
@@ -5342,9 +5341,6 @@ static struct rbd_device *__rbd_dev_create(struct rbd_client *rbdc,
rbd_dev->dev.parent = &rbd_root_dev;
device_initialize(&rbd_dev->dev);
- rbd_dev->rbd_client = rbdc;
- rbd_dev->spec = spec;
-
return rbd_dev;
}
@@ -5357,12 +5353,10 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc,
{
struct rbd_device *rbd_dev;
- rbd_dev = __rbd_dev_create(rbdc, spec);
+ rbd_dev = __rbd_dev_create(spec);
if (!rbd_dev)
return NULL;
- rbd_dev->opts = opts;
-
/* get an id and fill in device name */
rbd_dev->dev_id = ida_simple_get(&rbd_dev_id_ida, 0,
minor_to_rbd_dev_id(1 << MINORBITS),
@@ -5379,6 +5373,10 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc,
/* we have a ref from do_rbd_add() */
__module_get(THIS_MODULE);
+ rbd_dev->rbd_client = rbdc;
+ rbd_dev->spec = spec;
+ rbd_dev->opts = opts;
+
dout("%s rbd_dev %p dev_id %d\n", __func__, rbd_dev, rbd_dev->dev_id);
return rbd_dev;
@@ -6739,7 +6737,7 @@ static int rbd_dev_probe_parent(struct rbd_device *rbd_dev, int depth)
goto out_err;
}
- parent = __rbd_dev_create(rbd_dev->rbd_client, rbd_dev->parent_spec);
+ parent = __rbd_dev_create(rbd_dev->parent_spec);
if (!parent) {
ret = -ENOMEM;
goto out_err;
@@ -6749,8 +6747,8 @@ static int rbd_dev_probe_parent(struct rbd_device *rbd_dev, int depth)
* Images related by parent/child relationships always share
* rbd_client and spec/parent_spec, so bump their refcounts.
*/
- __rbd_get_client(rbd_dev->rbd_client);
- rbd_spec_get(rbd_dev->parent_spec);
+ parent->rbd_client = __rbd_get_client(rbd_dev->rbd_client);
+ parent->spec = rbd_spec_get(rbd_dev->parent_spec);
__set_bit(RBD_DEV_FLAG_READONLY, &parent->flags);
diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index 4d4bb81..656d99f 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -964,6 +964,8 @@ static int vdc_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
print_version();
hp = mdesc_grab();
+ if (!hp)
+ return -ENODEV;
err = -ENODEV;
if ((vdev->dev_no << PARTITION_SHIFT) & ~(u64)MINORMASK) {
diff --git a/drivers/bluetooth/btqcomsmd.c b/drivers/bluetooth/btqcomsmd.c
index 2acb719..11c7e04 100644
--- a/drivers/bluetooth/btqcomsmd.c
+++ b/drivers/bluetooth/btqcomsmd.c
@@ -122,6 +122,21 @@ static int btqcomsmd_setup(struct hci_dev *hdev)
return 0;
}
+static int btqcomsmd_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr)
+{
+ int ret;
+
+ ret = qca_set_bdaddr_rome(hdev, bdaddr);
+ if (ret)
+ return ret;
+
+ /* The firmware stops responding for a while after setting the bdaddr,
+ * causing timeouts for subsequent commands. Sleep a bit to avoid this.
+ */
+ usleep_range(1000, 10000);
+ return 0;
+}
+
static int btqcomsmd_probe(struct platform_device *pdev)
{
struct btqcomsmd *btq;
@@ -162,7 +177,7 @@ static int btqcomsmd_probe(struct platform_device *pdev)
hdev->close = btqcomsmd_close;
hdev->send = btqcomsmd_send;
hdev->setup = btqcomsmd_setup;
- hdev->set_bdaddr = qca_set_bdaddr_rome;
+ hdev->set_bdaddr = btqcomsmd_set_bdaddr;
ret = hci_register_dev(hdev);
if (ret < 0)
diff --git a/drivers/bluetooth/btsdio.c b/drivers/bluetooth/btsdio.c
index 199e8f7..7050a16 100644
--- a/drivers/bluetooth/btsdio.c
+++ b/drivers/bluetooth/btsdio.c
@@ -352,6 +352,7 @@ static void btsdio_remove(struct sdio_func *func)
BT_DBG("func %p", func);
+ cancel_work_sync(&data->work);
if (!data)
return;
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 9c32263..9eb2267 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -400,6 +400,10 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_VENDOR_AND_INTERFACE_INFO(0x8087, 0xe0, 0x01, 0x01),
.driver_info = BTUSB_IGNORE },
+ /* Realtek 8821CE Bluetooth devices */
+ { USB_DEVICE(0x13d3, 0x3529), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
+
/* Realtek 8822CE Bluetooth devices */
{ USB_DEVICE(0x0bda, 0xb00c), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },
diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
index 5d0428f..8041155 100644
--- a/drivers/bluetooth/hci_qca.c
+++ b/drivers/bluetooth/hci_qca.c
@@ -1582,10 +1582,11 @@ static bool qca_prevent_wake(struct hci_dev *hdev)
struct hci_uart *hu = hci_get_drvdata(hdev);
bool wakeup;
- /* UART driver handles the interrupt from BT SoC.So we need to use
- * device handle of UART driver to get the status of device may wakeup.
+ /* BT SoC attached through the serial bus is handled by the serdev driver.
+ * So we need to use the device handle of the serdev driver to get the
+ * status of device may wakeup.
*/
- wakeup = device_may_wakeup(hu->serdev->ctrl->dev.parent);
+ wakeup = device_may_wakeup(&hu->serdev->ctrl->dev);
bt_dev_dbg(hu->hdev, "wakeup status : %d", wakeup);
return !wakeup;
@@ -2156,10 +2157,17 @@ static void qca_serdev_shutdown(struct device *dev)
int timeout = msecs_to_jiffies(CMD_TRANS_TIMEOUT_MS);
struct serdev_device *serdev = to_serdev_device(dev);
struct qca_serdev *qcadev = serdev_device_get_drvdata(serdev);
+ struct hci_uart *hu = &qcadev->serdev_hu;
+ struct hci_dev *hdev = hu->hdev;
+ struct qca_data *qca = hu->priv;
const u8 ibs_wake_cmd[] = { 0xFD };
const u8 edl_reset_soc_cmd[] = { 0x01, 0x00, 0xFC, 0x01, 0x05 };
if (qcadev->btsoc_type == QCA_QCA6390) {
+ if (test_bit(QCA_BT_OFF, &qca->flags) ||
+ !test_bit(HCI_RUNNING, &hdev->flags))
+ return;
+
serdev_device_write_flush(serdev);
ret = serdev_device_write_buf(serdev, ibs_wake_cmd,
sizeof(ibs_wake_cmd));
diff --git a/drivers/bus/imx-weim.c b/drivers/bus/imx-weim.c
index 28bb65a..20176782 100644
--- a/drivers/bus/imx-weim.c
+++ b/drivers/bus/imx-weim.c
@@ -192,8 +192,8 @@ static int weim_parse_dt(struct platform_device *pdev, void __iomem *base)
const struct of_device_id *of_id = of_match_device(weim_id_table,
&pdev->dev);
const struct imx_weim_devtype *devtype = of_id->data;
+ int ret = 0, have_child = 0;
struct device_node *child;
- int ret, have_child = 0;
struct cs_timing_state ts = {};
u32 reg;
diff --git a/drivers/bus/sunxi-rsb.c b/drivers/bus/sunxi-rsb.c
index 20ed77f..fac8627 100644
--- a/drivers/bus/sunxi-rsb.c
+++ b/drivers/bus/sunxi-rsb.c
@@ -861,7 +861,13 @@ static int __init sunxi_rsb_init(void)
return ret;
}
- return platform_driver_register(&sunxi_rsb_driver);
+ ret = platform_driver_register(&sunxi_rsb_driver);
+ if (ret) {
+ bus_unregister(&sunxi_rsb_bus);
+ return ret;
+ }
+
+ return 0;
}
module_init(sunxi_rsb_init);
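Note on the sunxi-rsb hunk above: module init now unwinds the earlier bus_register() when platform_driver_register() fails. A minimal sketch of that unwinding pattern, using hypothetical example_bus/example_driver names rather than the sunxi-rsb symbols:

    #include <linux/device.h>
    #include <linux/module.h>
    #include <linux/platform_device.h>

    static struct bus_type example_bus = {
            .name = "example",
    };

    static struct platform_driver example_driver = {
            .driver = { .name = "example" },
    };

    static int __init example_init(void)
    {
            int ret;

            ret = bus_register(&example_bus);
            if (ret)
                    return ret;

            /* If the second step fails, undo the first before returning. */
            ret = platform_driver_register(&example_driver);
            if (ret) {
                    bus_unregister(&example_bus);
                    return ret;
            }

            return 0;
    }
    module_init(example_init);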
diff --git a/drivers/char/applicom.c b/drivers/char/applicom.c
index deb85a3..260573c 100644
--- a/drivers/char/applicom.c
+++ b/drivers/char/applicom.c
@@ -197,8 +197,10 @@ static int __init applicom_init(void)
if (!pci_match_id(applicom_pci_tbl, dev))
continue;
- if (pci_enable_device(dev))
+ if (pci_enable_device(dev)) {
+ pci_dev_put(dev);
return -EIO;
+ }
RamIO = ioremap(pci_resource_start(dev, 0), LEN_RAM_IO);
@@ -207,6 +209,7 @@ static int __init applicom_init(void)
"space at 0x%llx\n",
(unsigned long long)pci_resource_start(dev, 0));
pci_disable_device(dev);
+ pci_dev_put(dev);
return -EIO;
}
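The applicom hunk above plugs a reference leak: for_each_pci_dev() takes a reference on each device it returns, so every early return inside the loop must drop that reference. A minimal, illustrative sketch of the pattern (hypothetical function, not the applicom code):

    #include <linux/pci.h>

    static int __init example_pci_scan(void)
    {
            struct pci_dev *dev = NULL;

            /* The iterator holds a reference on 'dev' for each pass. */
            for_each_pci_dev(dev) {
                    if (pci_enable_device(dev)) {
                            /* Early exit: drop the reference taken by the iterator. */
                            pci_dev_put(dev);
                            return -EIO;
                    }
                    /* ... map and use the device here ... */
            }

            return 0;
    }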
diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
index f366e8e..20dc245 100644
--- a/drivers/char/ipmi/ipmi_ssif.c
+++ b/drivers/char/ipmi/ipmi_ssif.c
@@ -74,7 +74,8 @@
/*
* Timer values
*/
-#define SSIF_MSG_USEC 20000 /* 20ms between message tries. */
+#define SSIF_MSG_USEC 60000 /* 60ms between message tries (T3). */
+#define SSIF_REQ_RETRY_USEC 60000 /* 60ms between send retries (T6). */
#define SSIF_MSG_PART_USEC 5000 /* 5ms for a message part */
/* How many times to we retry sending/receiving the message. */
@@ -82,7 +83,9 @@
#define SSIF_RECV_RETRIES 250
#define SSIF_MSG_MSEC (SSIF_MSG_USEC / 1000)
+#define SSIF_REQ_RETRY_MSEC (SSIF_REQ_RETRY_USEC / 1000)
#define SSIF_MSG_JIFFIES ((SSIF_MSG_USEC * 1000) / TICK_NSEC)
+#define SSIF_REQ_RETRY_JIFFIES ((SSIF_REQ_RETRY_USEC * 1000) / TICK_NSEC)
#define SSIF_MSG_PART_JIFFIES ((SSIF_MSG_PART_USEC * 1000) / TICK_NSEC)
/*
@@ -92,7 +95,7 @@
#define SSIF_WATCH_WATCHDOG_TIMEOUT msecs_to_jiffies(250)
enum ssif_intf_state {
- SSIF_NORMAL,
+ SSIF_IDLE,
SSIF_GETTING_FLAGS,
SSIF_GETTING_EVENTS,
SSIF_CLEARING_FLAGS,
@@ -100,8 +103,8 @@ enum ssif_intf_state {
/* FIXME - add watchdog stuff. */
};
-#define SSIF_IDLE(ssif) ((ssif)->ssif_state == SSIF_NORMAL \
- && (ssif)->curr_msg == NULL)
+#define IS_SSIF_IDLE(ssif) ((ssif)->ssif_state == SSIF_IDLE \
+ && (ssif)->curr_msg == NULL)
/*
* Indexes into stats[] in ssif_info below.
@@ -229,6 +232,9 @@ struct ssif_info {
bool got_alert;
bool waiting_alert;
+ /* Used to inform the timeout that it should do a resend. */
+ bool do_resend;
+
/*
* If set to true, this will request events the next time the
* state machine is idle.
@@ -348,9 +354,9 @@ static void return_hosed_msg(struct ssif_info *ssif_info,
/*
* Must be called with the message lock held. This will release the
- * message lock. Note that the caller will check SSIF_IDLE and start a
- * new operation, so there is no need to check for new messages to
- * start in here.
+ * message lock. Note that the caller will check IS_SSIF_IDLE and
+ * start a new operation, so there is no need to check for new
+ * messages to start in here.
*/
static void start_clear_flags(struct ssif_info *ssif_info, unsigned long *flags)
{
@@ -367,7 +373,7 @@ static void start_clear_flags(struct ssif_info *ssif_info, unsigned long *flags)
if (start_send(ssif_info, msg, 3) != 0) {
/* Error, just go to normal state. */
- ssif_info->ssif_state = SSIF_NORMAL;
+ ssif_info->ssif_state = SSIF_IDLE;
}
}
@@ -382,7 +388,7 @@ static void start_flag_fetch(struct ssif_info *ssif_info, unsigned long *flags)
mb[0] = (IPMI_NETFN_APP_REQUEST << 2);
mb[1] = IPMI_GET_MSG_FLAGS_CMD;
if (start_send(ssif_info, mb, 2) != 0)
- ssif_info->ssif_state = SSIF_NORMAL;
+ ssif_info->ssif_state = SSIF_IDLE;
}
static void check_start_send(struct ssif_info *ssif_info, unsigned long *flags,
@@ -393,7 +399,7 @@ static void check_start_send(struct ssif_info *ssif_info, unsigned long *flags,
flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
ssif_info->curr_msg = NULL;
- ssif_info->ssif_state = SSIF_NORMAL;
+ ssif_info->ssif_state = SSIF_IDLE;
ipmi_ssif_unlock_cond(ssif_info, flags);
ipmi_free_smi_msg(msg);
}
@@ -407,7 +413,7 @@ static void start_event_fetch(struct ssif_info *ssif_info, unsigned long *flags)
msg = ipmi_alloc_smi_msg();
if (!msg) {
- ssif_info->ssif_state = SSIF_NORMAL;
+ ssif_info->ssif_state = SSIF_IDLE;
ipmi_ssif_unlock_cond(ssif_info, flags);
return;
}
@@ -430,7 +436,7 @@ static void start_recv_msg_fetch(struct ssif_info *ssif_info,
msg = ipmi_alloc_smi_msg();
if (!msg) {
- ssif_info->ssif_state = SSIF_NORMAL;
+ ssif_info->ssif_state = SSIF_IDLE;
ipmi_ssif_unlock_cond(ssif_info, flags);
return;
}
@@ -448,9 +454,9 @@ static void start_recv_msg_fetch(struct ssif_info *ssif_info,
/*
* Must be called with the message lock held. This will release the
- * message lock. Note that the caller will check SSIF_IDLE and start a
- * new operation, so there is no need to check for new messages to
- * start in here.
+ * message lock. Note that the caller will check IS_SSIF_IDLE and
+ * start a new operation, so there is no need to check for new
+ * messages to start in here.
*/
static void handle_flags(struct ssif_info *ssif_info, unsigned long *flags)
{
@@ -466,7 +472,7 @@ static void handle_flags(struct ssif_info *ssif_info, unsigned long *flags)
/* Events available. */
start_event_fetch(ssif_info, flags);
else {
- ssif_info->ssif_state = SSIF_NORMAL;
+ ssif_info->ssif_state = SSIF_IDLE;
ipmi_ssif_unlock_cond(ssif_info, flags);
}
}
@@ -538,22 +544,28 @@ static void start_get(struct ssif_info *ssif_info)
ssif_info->recv, I2C_SMBUS_BLOCK_DATA);
}
+static void start_resend(struct ssif_info *ssif_info);
+
static void retry_timeout(struct timer_list *t)
{
struct ssif_info *ssif_info = from_timer(ssif_info, t, retry_timer);
unsigned long oflags, *flags;
- bool waiting;
+ bool waiting, resend;
if (ssif_info->stopping)
return;
flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
+ resend = ssif_info->do_resend;
+ ssif_info->do_resend = false;
waiting = ssif_info->waiting_alert;
ssif_info->waiting_alert = false;
ipmi_ssif_unlock_cond(ssif_info, flags);
if (waiting)
start_get(ssif_info);
+ if (resend)
+ start_resend(ssif_info);
}
static void watch_timeout(struct timer_list *t)
@@ -568,7 +580,7 @@ static void watch_timeout(struct timer_list *t)
if (ssif_info->watch_timeout) {
mod_timer(&ssif_info->watch_timer,
jiffies + ssif_info->watch_timeout);
- if (SSIF_IDLE(ssif_info)) {
+ if (IS_SSIF_IDLE(ssif_info)) {
start_flag_fetch(ssif_info, flags); /* Releases lock */
return;
}
@@ -602,8 +614,6 @@ static void ssif_alert(struct i2c_client *client, enum i2c_alert_protocol type,
start_get(ssif_info);
}
-static int start_resend(struct ssif_info *ssif_info);
-
static void msg_done_handler(struct ssif_info *ssif_info, int result,
unsigned char *data, unsigned int len)
{
@@ -756,7 +766,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
}
switch (ssif_info->ssif_state) {
- case SSIF_NORMAL:
+ case SSIF_IDLE:
ipmi_ssif_unlock_cond(ssif_info, flags);
if (!msg)
break;
@@ -774,7 +784,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
* Error fetching flags, or invalid length,
* just give up for now.
*/
- ssif_info->ssif_state = SSIF_NORMAL;
+ ssif_info->ssif_state = SSIF_IDLE;
ipmi_ssif_unlock_cond(ssif_info, flags);
dev_warn(&ssif_info->client->dev,
"Error getting flags: %d %d, %x\n",
@@ -809,7 +819,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
"Invalid response clearing flags: %x %x\n",
data[0], data[1]);
}
- ssif_info->ssif_state = SSIF_NORMAL;
+ ssif_info->ssif_state = SSIF_IDLE;
ipmi_ssif_unlock_cond(ssif_info, flags);
break;
@@ -887,7 +897,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result,
}
flags = ipmi_ssif_lock_cond(ssif_info, &oflags);
- if (SSIF_IDLE(ssif_info) && !ssif_info->stopping) {
+ if (IS_SSIF_IDLE(ssif_info) && !ssif_info->stopping) {
if (ssif_info->req_events)
start_event_fetch(ssif_info, flags);
else if (ssif_info->req_flags)
@@ -909,31 +919,23 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
if (result < 0) {
ssif_info->retries_left--;
if (ssif_info->retries_left > 0) {
- if (!start_resend(ssif_info)) {
- ssif_inc_stat(ssif_info, send_retries);
- return;
- }
- /* request failed, just return the error. */
- ssif_inc_stat(ssif_info, send_errors);
-
- if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
- dev_dbg(&ssif_info->client->dev,
- "%s: Out of retries\n", __func__);
- msg_done_handler(ssif_info, -EIO, NULL, 0);
+ /*
+ * Wait the retry timeout time per the spec,
+ * then redo the send.
+ */
+ ssif_info->do_resend = true;
+ mod_timer(&ssif_info->retry_timer,
+ jiffies + SSIF_REQ_RETRY_JIFFIES);
return;
}
ssif_inc_stat(ssif_info, send_errors);
- /*
- * Got an error on transmit, let the done routine
- * handle it.
- */
if (ssif_info->ssif_debug & SSIF_DEBUG_MSG)
dev_dbg(&ssif_info->client->dev,
- "%s: Error %d\n", __func__, result);
+ "%s: Out of retries\n", __func__);
- msg_done_handler(ssif_info, result, NULL, 0);
+ msg_done_handler(ssif_info, -EIO, NULL, 0);
return;
}
@@ -996,7 +998,7 @@ static void msg_written_handler(struct ssif_info *ssif_info, int result,
}
}
-static int start_resend(struct ssif_info *ssif_info)
+static void start_resend(struct ssif_info *ssif_info)
{
int command;
@@ -1021,7 +1023,6 @@ static int start_resend(struct ssif_info *ssif_info)
ssif_i2c_send(ssif_info, msg_written_handler, I2C_SMBUS_WRITE,
command, ssif_info->data, I2C_SMBUS_BLOCK_DATA);
- return 0;
}
static int start_send(struct ssif_info *ssif_info,
@@ -1036,7 +1037,8 @@ static int start_send(struct ssif_info *ssif_info,
ssif_info->retries_left = SSIF_SEND_RETRIES;
memcpy(ssif_info->data + 1, data, len);
ssif_info->data_len = len;
- return start_resend(ssif_info);
+ start_resend(ssif_info);
+ return 0;
}
/* Must be called with the message lock held. */
@@ -1046,7 +1048,7 @@ static void start_next_msg(struct ssif_info *ssif_info, unsigned long *flags)
unsigned long oflags;
restart:
- if (!SSIF_IDLE(ssif_info)) {
+ if (!IS_SSIF_IDLE(ssif_info)) {
ipmi_ssif_unlock_cond(ssif_info, flags);
return;
}
@@ -1269,7 +1271,7 @@ static void shutdown_ssif(void *send_info)
dev_set_drvdata(&ssif_info->client->dev, NULL);
/* make sure the driver is not looking for flags any more. */
- while (ssif_info->ssif_state != SSIF_NORMAL)
+ while (ssif_info->ssif_state != SSIF_IDLE)
schedule_timeout(1);
ssif_info->stopping = true;
@@ -1336,8 +1338,10 @@ static int do_cmd(struct i2c_client *client, int len, unsigned char *msg,
ret = i2c_smbus_write_block_data(client, SSIF_IPMI_REQUEST, len, msg);
if (ret) {
retry_cnt--;
- if (retry_cnt > 0)
+ if (retry_cnt > 0) {
+ msleep(SSIF_REQ_RETRY_MSEC);
goto retry1;
+ }
return -ENODEV;
}
@@ -1478,8 +1482,10 @@ static int start_multipart_test(struct i2c_client *client,
32, msg);
if (ret) {
retry_cnt--;
- if (retry_cnt > 0)
+ if (retry_cnt > 0) {
+ msleep(SSIF_REQ_RETRY_MSEC);
goto retry_write;
+ }
dev_err(&client->dev, "Could not write multi-part start, though the BMC said it could handle it. Just limit sends to one part.\n");
return ret;
}
@@ -1841,7 +1847,7 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
}
spin_lock_init(&ssif_info->lock);
- ssif_info->ssif_state = SSIF_NORMAL;
+ ssif_info->ssif_state = SSIF_IDLE;
timer_setup(&ssif_info->retry_timer, retry_timeout, 0);
timer_setup(&ssif_info->watch_timer, watch_timeout, 0);
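The ipmi_ssif changes above stop retrying a failed write inline; instead msg_written_handler() sets do_resend and arms the existing retry timer, so the resend happens only after the spec's T6 retry delay. A minimal sketch of that deferred-retry shape, with hypothetical my_dev/my_* names rather than the driver's own symbols:

    #include <linux/jiffies.h>
    #include <linux/spinlock.h>
    #include <linux/timer.h>

    struct my_dev {
            struct timer_list retry_timer;
            spinlock_t lock;
            bool do_resend;
    };

    static void my_start_resend(struct my_dev *dev)
    {
            /* Re-issue the failed write here; elided in this sketch. */
    }

    /* Timer callback: consume the flag and redo the send outside the error path. */
    static void my_retry_timeout(struct timer_list *t)
    {
            struct my_dev *dev = from_timer(dev, t, retry_timer);
            unsigned long flags;
            bool resend;

            spin_lock_irqsave(&dev->lock, flags);
            resend = dev->do_resend;
            dev->do_resend = false;
            spin_unlock_irqrestore(&dev->lock, flags);

            if (resend)
                    my_start_resend(dev);
    }

    /* Write-failure path: schedule the retry instead of resending immediately. */
    static void my_write_failed(struct my_dev *dev)
    {
            dev->do_resend = true;
            mod_timer(&dev->retry_timer, jiffies + msecs_to_jiffies(60));
    }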
diff --git a/drivers/char/pcmcia/cm4000_cs.c b/drivers/char/pcmcia/cm4000_cs.c
index 8f1bce0..7057b7b 100644
--- a/drivers/char/pcmcia/cm4000_cs.c
+++ b/drivers/char/pcmcia/cm4000_cs.c
@@ -530,7 +530,8 @@ static int set_protocol(struct cm4000_dev *dev, struct ptsreq *ptsreq)
DEBUGP(5, dev, "NumRecBytes is valid\n");
break;
}
- usleep_range(10000, 11000);
+ /* can not sleep as this is in atomic context */
+ mdelay(10);
}
if (i == 100) {
DEBUGP(5, dev, "Timeout waiting for NumRecBytes getting "
@@ -550,7 +551,8 @@ static int set_protocol(struct cm4000_dev *dev, struct ptsreq *ptsreq)
}
break;
}
- usleep_range(10000, 11000);
+ /* can not sleep as this is in atomic context */
+ mdelay(10);
}
/* check whether it is a short PTS reply? */
diff --git a/drivers/char/tpm/eventlog/acpi.c b/drivers/char/tpm/eventlog/acpi.c
index 0913d3eb..cd26602 100644
--- a/drivers/char/tpm/eventlog/acpi.c
+++ b/drivers/char/tpm/eventlog/acpi.c
@@ -143,8 +143,12 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
ret = -EIO;
virt = acpi_os_map_iomem(start, len);
- if (!virt)
+ if (!virt) {
+ dev_warn(&chip->dev, "%s: Failed to map ACPI memory\n", __func__);
+ /* try EFI log next */
+ ret = -ENODEV;
goto err;
+ }
memcpy_fromio(log->bios_event_log, virt, len);
diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
index c5b3dc9..100e474 100644
--- a/drivers/clk/Kconfig
+++ b/drivers/clk/Kconfig
@@ -83,7 +83,7 @@
config COMMON_CLK_HI655X
tristate "Clock driver for Hi655x" if EXPERT
depends on (MFD_HI655X_PMIC || COMPILE_TEST)
- depends on REGMAP
+ select REGMAP
default MFD_HI655X_PMIC
help
This driver supports the hi655x PMIC clock. This
diff --git a/drivers/clk/ingenic/jz4760-cgu.c b/drivers/clk/ingenic/jz4760-cgu.c
index 1448379..1190624 100644
--- a/drivers/clk/ingenic/jz4760-cgu.c
+++ b/drivers/clk/ingenic/jz4760-cgu.c
@@ -58,7 +58,7 @@ jz4760_cgu_calc_m_n_od(const struct ingenic_cgu_pll_info *pll_info,
unsigned long rate, unsigned long parent_rate,
unsigned int *pm, unsigned int *pn, unsigned int *pod)
{
- unsigned int m, n, od, m_max = (1 << pll_info->m_bits) - 2;
+ unsigned int m, n, od, m_max = (1 << pll_info->m_bits) - 1;
/* The frequency after the N divider must be between 1 and 50 MHz. */
n = parent_rate / (1 * MHZ);
@@ -66,19 +66,17 @@ jz4760_cgu_calc_m_n_od(const struct ingenic_cgu_pll_info *pll_info,
/* The N divider must be >= 2. */
n = clamp_val(n, 2, 1 << pll_info->n_bits);
- for (;; n >>= 1) {
- od = (unsigned int)-1;
+ rate /= MHZ;
+ parent_rate /= MHZ;
- do {
- m = (rate / MHZ) * (1 << ++od) * n / (parent_rate / MHZ);
- } while ((m > m_max || m & 1) && (od < 4));
-
- if (od < 4 && m >= 4 && m <= m_max)
- break;
+ for (m = m_max; m >= m_max && n >= 2; n--) {
+ m = rate * n / parent_rate;
+ od = m & 1;
+ m <<= od;
}
*pm = m;
- *pn = n;
+ *pn = n + 1;
*pod = 1 << od;
}
diff --git a/drivers/clk/x86/Kconfig b/drivers/clk/x86/Kconfig
index 69642e1..ced99e0 100644
--- a/drivers/clk/x86/Kconfig
+++ b/drivers/clk/x86/Kconfig
@@ -1,8 +1,9 @@
# SPDX-License-Identifier: GPL-2.0-only
config CLK_LGM_CGU
depends on OF && HAS_IOMEM && (X86 || COMPILE_TEST)
+ select MFD_SYSCON
select OF_EARLY_FLATTREE
bool "Clock driver for Lightning Mountain(LGM) platform"
help
- Clock Generation Unit(CGU) driver for Intel Lightning Mountain(LGM)
- network processor SoC.
+ Clock Generation Unit(CGU) driver for MaxLinear's x86 based
+ Lightning Mountain(LGM) network processor SoC.
diff --git a/drivers/clk/x86/clk-cgu-pll.c b/drivers/clk/x86/clk-cgu-pll.c
index 3179557..409dbf5 100644
--- a/drivers/clk/x86/clk-cgu-pll.c
+++ b/drivers/clk/x86/clk-cgu-pll.c
@@ -1,8 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
+ * Copyright (C) 2020-2022 MaxLinear, Inc.
* Copyright (C) 2020 Intel Corporation.
- * Zhu YiXin <yixin.zhu@intel.com>
- * Rahul Tanwar <rahul.tanwar@intel.com>
+ * Zhu Yixin <yzhu@maxlinear.com>
+ * Rahul Tanwar <rtanwar@maxlinear.com>
*/
#include <linux/clk-provider.h>
@@ -40,13 +41,10 @@ static unsigned long lgm_pll_recalc_rate(struct clk_hw *hw, unsigned long prate)
{
struct lgm_clk_pll *pll = to_lgm_clk_pll(hw);
unsigned int div, mult, frac;
- unsigned long flags;
- spin_lock_irqsave(&pll->lock, flags);
mult = lgm_get_clk_val(pll->membase, PLL_REF_DIV(pll->reg), 0, 12);
div = lgm_get_clk_val(pll->membase, PLL_REF_DIV(pll->reg), 18, 6);
frac = lgm_get_clk_val(pll->membase, pll->reg, 2, 24);
- spin_unlock_irqrestore(&pll->lock, flags);
if (pll->type == TYPE_LJPLL)
div *= 4;
@@ -57,12 +55,9 @@ static unsigned long lgm_pll_recalc_rate(struct clk_hw *hw, unsigned long prate)
static int lgm_pll_is_enabled(struct clk_hw *hw)
{
struct lgm_clk_pll *pll = to_lgm_clk_pll(hw);
- unsigned long flags;
unsigned int ret;
- spin_lock_irqsave(&pll->lock, flags);
ret = lgm_get_clk_val(pll->membase, pll->reg, 0, 1);
- spin_unlock_irqrestore(&pll->lock, flags);
return ret;
}
@@ -70,15 +65,13 @@ static int lgm_pll_is_enabled(struct clk_hw *hw)
static int lgm_pll_enable(struct clk_hw *hw)
{
struct lgm_clk_pll *pll = to_lgm_clk_pll(hw);
- unsigned long flags;
u32 val;
int ret;
- spin_lock_irqsave(&pll->lock, flags);
lgm_set_clk_val(pll->membase, pll->reg, 0, 1, 1);
- ret = readl_poll_timeout_atomic(pll->membase + pll->reg,
- val, (val & 0x1), 1, 100);
- spin_unlock_irqrestore(&pll->lock, flags);
+ ret = regmap_read_poll_timeout_atomic(pll->membase, pll->reg,
+ val, (val & 0x1), 1, 100);
+
return ret;
}
@@ -86,11 +79,8 @@ static int lgm_pll_enable(struct clk_hw *hw)
static void lgm_pll_disable(struct clk_hw *hw)
{
struct lgm_clk_pll *pll = to_lgm_clk_pll(hw);
- unsigned long flags;
- spin_lock_irqsave(&pll->lock, flags);
lgm_set_clk_val(pll->membase, pll->reg, 0, 1, 0);
- spin_unlock_irqrestore(&pll->lock, flags);
}
static const struct clk_ops lgm_pll_ops = {
@@ -121,7 +111,6 @@ lgm_clk_register_pll(struct lgm_clk_provider *ctx,
return ERR_PTR(-ENOMEM);
pll->membase = ctx->membase;
- pll->lock = ctx->lock;
pll->reg = list->reg;
pll->flags = list->flags;
pll->type = list->type;
diff --git a/drivers/clk/x86/clk-cgu.c b/drivers/clk/x86/clk-cgu.c
index 33de600..89b53f2 100644
--- a/drivers/clk/x86/clk-cgu.c
+++ b/drivers/clk/x86/clk-cgu.c
@@ -1,8 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
+ * Copyright (C) 2020-2022 MaxLinear, Inc.
* Copyright (C) 2020 Intel Corporation.
- * Zhu YiXin <yixin.zhu@intel.com>
- * Rahul Tanwar <rahul.tanwar@intel.com>
+ * Zhu Yixin <yzhu@maxlinear.com>
+ * Rahul Tanwar <rtanwar@maxlinear.com>
*/
#include <linux/clk-provider.h>
#include <linux/device.h>
@@ -24,14 +25,10 @@
static struct clk_hw *lgm_clk_register_fixed(struct lgm_clk_provider *ctx,
const struct lgm_clk_branch *list)
{
- unsigned long flags;
- if (list->div_flags & CLOCK_FLAG_VAL_INIT) {
- spin_lock_irqsave(&ctx->lock, flags);
+ if (list->div_flags & CLOCK_FLAG_VAL_INIT)
lgm_set_clk_val(ctx->membase, list->div_off, list->div_shift,
list->div_width, list->div_val);
- spin_unlock_irqrestore(&ctx->lock, flags);
- }
return clk_hw_register_fixed_rate(NULL, list->name,
list->parent_data[0].name,
@@ -41,33 +38,27 @@ static struct clk_hw *lgm_clk_register_fixed(struct lgm_clk_provider *ctx,
static u8 lgm_clk_mux_get_parent(struct clk_hw *hw)
{
struct lgm_clk_mux *mux = to_lgm_clk_mux(hw);
- unsigned long flags;
u32 val;
- spin_lock_irqsave(&mux->lock, flags);
if (mux->flags & MUX_CLK_SW)
val = mux->reg;
else
val = lgm_get_clk_val(mux->membase, mux->reg, mux->shift,
mux->width);
- spin_unlock_irqrestore(&mux->lock, flags);
return clk_mux_val_to_index(hw, NULL, mux->flags, val);
}
static int lgm_clk_mux_set_parent(struct clk_hw *hw, u8 index)
{
struct lgm_clk_mux *mux = to_lgm_clk_mux(hw);
- unsigned long flags;
u32 val;
val = clk_mux_index_to_val(NULL, mux->flags, index);
- spin_lock_irqsave(&mux->lock, flags);
if (mux->flags & MUX_CLK_SW)
mux->reg = val;
else
lgm_set_clk_val(mux->membase, mux->reg, mux->shift,
mux->width, val);
- spin_unlock_irqrestore(&mux->lock, flags);
return 0;
}
@@ -90,7 +81,7 @@ static struct clk_hw *
lgm_clk_register_mux(struct lgm_clk_provider *ctx,
const struct lgm_clk_branch *list)
{
- unsigned long flags, cflags = list->mux_flags;
+ unsigned long cflags = list->mux_flags;
struct device *dev = ctx->dev;
u8 shift = list->mux_shift;
u8 width = list->mux_width;
@@ -111,7 +102,6 @@ lgm_clk_register_mux(struct lgm_clk_provider *ctx,
init.num_parents = list->num_parents;
mux->membase = ctx->membase;
- mux->lock = ctx->lock;
mux->reg = reg;
mux->shift = shift;
mux->width = width;
@@ -123,11 +113,8 @@ lgm_clk_register_mux(struct lgm_clk_provider *ctx,
if (ret)
return ERR_PTR(ret);
- if (cflags & CLOCK_FLAG_VAL_INIT) {
- spin_lock_irqsave(&mux->lock, flags);
+ if (cflags & CLOCK_FLAG_VAL_INIT)
lgm_set_clk_val(mux->membase, reg, shift, width, list->mux_val);
- spin_unlock_irqrestore(&mux->lock, flags);
- }
return hw;
}
@@ -136,13 +123,10 @@ static unsigned long
lgm_clk_divider_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
{
struct lgm_clk_divider *divider = to_lgm_clk_divider(hw);
- unsigned long flags;
unsigned int val;
- spin_lock_irqsave(&divider->lock, flags);
val = lgm_get_clk_val(divider->membase, divider->reg,
divider->shift, divider->width);
- spin_unlock_irqrestore(&divider->lock, flags);
return divider_recalc_rate(hw, parent_rate, val, divider->table,
divider->flags, divider->width);
@@ -163,7 +147,6 @@ lgm_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
unsigned long prate)
{
struct lgm_clk_divider *divider = to_lgm_clk_divider(hw);
- unsigned long flags;
int value;
value = divider_get_val(rate, prate, divider->table,
@@ -171,10 +154,8 @@ lgm_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
if (value < 0)
return value;
- spin_lock_irqsave(&divider->lock, flags);
lgm_set_clk_val(divider->membase, divider->reg,
divider->shift, divider->width, value);
- spin_unlock_irqrestore(&divider->lock, flags);
return 0;
}
@@ -182,12 +163,10 @@ lgm_clk_divider_set_rate(struct clk_hw *hw, unsigned long rate,
static int lgm_clk_divider_enable_disable(struct clk_hw *hw, int enable)
{
struct lgm_clk_divider *div = to_lgm_clk_divider(hw);
- unsigned long flags;
- spin_lock_irqsave(&div->lock, flags);
- lgm_set_clk_val(div->membase, div->reg, div->shift_gate,
- div->width_gate, enable);
- spin_unlock_irqrestore(&div->lock, flags);
+ if (div->flags != DIV_CLK_NO_MASK)
+ lgm_set_clk_val(div->membase, div->reg, div->shift_gate,
+ div->width_gate, enable);
return 0;
}
@@ -213,7 +192,7 @@ static struct clk_hw *
lgm_clk_register_divider(struct lgm_clk_provider *ctx,
const struct lgm_clk_branch *list)
{
- unsigned long flags, cflags = list->div_flags;
+ unsigned long cflags = list->div_flags;
struct device *dev = ctx->dev;
struct lgm_clk_divider *div;
struct clk_init_data init = {};
@@ -236,7 +215,6 @@ lgm_clk_register_divider(struct lgm_clk_provider *ctx,
init.num_parents = 1;
div->membase = ctx->membase;
- div->lock = ctx->lock;
div->reg = reg;
div->shift = shift;
div->width = width;
@@ -251,11 +229,8 @@ lgm_clk_register_divider(struct lgm_clk_provider *ctx,
if (ret)
return ERR_PTR(ret);
- if (cflags & CLOCK_FLAG_VAL_INIT) {
- spin_lock_irqsave(&div->lock, flags);
+ if (cflags & CLOCK_FLAG_VAL_INIT)
lgm_set_clk_val(div->membase, reg, shift, width, list->div_val);
- spin_unlock_irqrestore(&div->lock, flags);
- }
return hw;
}
@@ -264,7 +239,6 @@ static struct clk_hw *
lgm_clk_register_fixed_factor(struct lgm_clk_provider *ctx,
const struct lgm_clk_branch *list)
{
- unsigned long flags;
struct clk_hw *hw;
hw = clk_hw_register_fixed_factor(ctx->dev, list->name,
@@ -273,12 +247,9 @@ lgm_clk_register_fixed_factor(struct lgm_clk_provider *ctx,
if (IS_ERR(hw))
return ERR_CAST(hw);
- if (list->div_flags & CLOCK_FLAG_VAL_INIT) {
- spin_lock_irqsave(&ctx->lock, flags);
+ if (list->div_flags & CLOCK_FLAG_VAL_INIT)
lgm_set_clk_val(ctx->membase, list->div_off, list->div_shift,
list->div_width, list->div_val);
- spin_unlock_irqrestore(&ctx->lock, flags);
- }
return hw;
}
@@ -286,13 +257,10 @@ lgm_clk_register_fixed_factor(struct lgm_clk_provider *ctx,
static int lgm_clk_gate_enable(struct clk_hw *hw)
{
struct lgm_clk_gate *gate = to_lgm_clk_gate(hw);
- unsigned long flags;
unsigned int reg;
- spin_lock_irqsave(&gate->lock, flags);
reg = GATE_HW_REG_EN(gate->reg);
lgm_set_clk_val(gate->membase, reg, gate->shift, 1, 1);
- spin_unlock_irqrestore(&gate->lock, flags);
return 0;
}
@@ -300,25 +268,19 @@ static int lgm_clk_gate_enable(struct clk_hw *hw)
static void lgm_clk_gate_disable(struct clk_hw *hw)
{
struct lgm_clk_gate *gate = to_lgm_clk_gate(hw);
- unsigned long flags;
unsigned int reg;
- spin_lock_irqsave(&gate->lock, flags);
reg = GATE_HW_REG_DIS(gate->reg);
lgm_set_clk_val(gate->membase, reg, gate->shift, 1, 1);
- spin_unlock_irqrestore(&gate->lock, flags);
}
static int lgm_clk_gate_is_enabled(struct clk_hw *hw)
{
struct lgm_clk_gate *gate = to_lgm_clk_gate(hw);
unsigned int reg, ret;
- unsigned long flags;
- spin_lock_irqsave(&gate->lock, flags);
reg = GATE_HW_REG_STAT(gate->reg);
ret = lgm_get_clk_val(gate->membase, reg, gate->shift, 1);
- spin_unlock_irqrestore(&gate->lock, flags);
return ret;
}
@@ -333,7 +295,7 @@ static struct clk_hw *
lgm_clk_register_gate(struct lgm_clk_provider *ctx,
const struct lgm_clk_branch *list)
{
- unsigned long flags, cflags = list->gate_flags;
+ unsigned long cflags = list->gate_flags;
const char *pname = list->parent_data[0].name;
struct device *dev = ctx->dev;
u8 shift = list->gate_shift;
@@ -354,7 +316,6 @@ lgm_clk_register_gate(struct lgm_clk_provider *ctx,
init.num_parents = pname ? 1 : 0;
gate->membase = ctx->membase;
- gate->lock = ctx->lock;
gate->reg = reg;
gate->shift = shift;
gate->flags = cflags;
@@ -366,9 +327,7 @@ lgm_clk_register_gate(struct lgm_clk_provider *ctx,
return ERR_PTR(ret);
if (cflags & CLOCK_FLAG_VAL_INIT) {
- spin_lock_irqsave(&gate->lock, flags);
lgm_set_clk_val(gate->membase, reg, shift, 1, list->gate_val);
- spin_unlock_irqrestore(&gate->lock, flags);
}
return hw;
@@ -396,8 +355,22 @@ int lgm_clk_register_branches(struct lgm_clk_provider *ctx,
hw = lgm_clk_register_fixed_factor(ctx, list);
break;
case CLK_TYPE_GATE:
- hw = lgm_clk_register_gate(ctx, list);
+ if (list->gate_flags & GATE_CLK_HW) {
+ hw = lgm_clk_register_gate(ctx, list);
+ } else {
+ /*
+ * GATE_CLKs can be controlled either from
+ * CGU clk driver i.e. this driver or directly
+ * from power management driver/daemon. It is
+ * dependent on the power policy/profile requirements
+ * of the end product. To override control of gate
+ * clks from this driver, provide NULL for this index
+ * of gate clk provider.
+ */
+ hw = NULL;
+ }
break;
+
default:
dev_err(ctx->dev, "invalid clk type\n");
return -EINVAL;
@@ -443,24 +416,18 @@ lgm_clk_ddiv_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
static int lgm_clk_ddiv_enable(struct clk_hw *hw)
{
struct lgm_clk_ddiv *ddiv = to_lgm_clk_ddiv(hw);
- unsigned long flags;
- spin_lock_irqsave(&ddiv->lock, flags);
lgm_set_clk_val(ddiv->membase, ddiv->reg, ddiv->shift_gate,
ddiv->width_gate, 1);
- spin_unlock_irqrestore(&ddiv->lock, flags);
return 0;
}
static void lgm_clk_ddiv_disable(struct clk_hw *hw)
{
struct lgm_clk_ddiv *ddiv = to_lgm_clk_ddiv(hw);
- unsigned long flags;
- spin_lock_irqsave(&ddiv->lock, flags);
lgm_set_clk_val(ddiv->membase, ddiv->reg, ddiv->shift_gate,
ddiv->width_gate, 0);
- spin_unlock_irqrestore(&ddiv->lock, flags);
}
static int
@@ -497,32 +464,25 @@ lgm_clk_ddiv_set_rate(struct clk_hw *hw, unsigned long rate,
{
struct lgm_clk_ddiv *ddiv = to_lgm_clk_ddiv(hw);
u32 div, ddiv1, ddiv2;
- unsigned long flags;
div = DIV_ROUND_CLOSEST_ULL((u64)prate, rate);
- spin_lock_irqsave(&ddiv->lock, flags);
if (lgm_get_clk_val(ddiv->membase, ddiv->reg, ddiv->shift2, 1)) {
div = DIV_ROUND_CLOSEST_ULL((u64)div, 5);
div = div * 2;
}
- if (div <= 0) {
- spin_unlock_irqrestore(&ddiv->lock, flags);
+ if (div <= 0)
return -EINVAL;
- }
- if (lgm_clk_get_ddiv_val(div, &ddiv1, &ddiv2)) {
- spin_unlock_irqrestore(&ddiv->lock, flags);
+ if (lgm_clk_get_ddiv_val(div, &ddiv1, &ddiv2))
return -EINVAL;
- }
lgm_set_clk_val(ddiv->membase, ddiv->reg, ddiv->shift0, ddiv->width0,
ddiv1 - 1);
lgm_set_clk_val(ddiv->membase, ddiv->reg, ddiv->shift1, ddiv->width1,
ddiv2 - 1);
- spin_unlock_irqrestore(&ddiv->lock, flags);
return 0;
}
@@ -533,18 +493,15 @@ lgm_clk_ddiv_round_rate(struct clk_hw *hw, unsigned long rate,
{
struct lgm_clk_ddiv *ddiv = to_lgm_clk_ddiv(hw);
u32 div, ddiv1, ddiv2;
- unsigned long flags;
u64 rate64;
div = DIV_ROUND_CLOSEST_ULL((u64)*prate, rate);
/* if predivide bit is enabled, modify div by factor of 2.5 */
- spin_lock_irqsave(&ddiv->lock, flags);
if (lgm_get_clk_val(ddiv->membase, ddiv->reg, ddiv->shift2, 1)) {
div = div * 2;
div = DIV_ROUND_CLOSEST_ULL((u64)div, 5);
}
- spin_unlock_irqrestore(&ddiv->lock, flags);
if (div <= 0)
return *prate;
@@ -558,12 +515,10 @@ lgm_clk_ddiv_round_rate(struct clk_hw *hw, unsigned long rate,
do_div(rate64, ddiv2);
/* if predivide bit is enabled, modify rounded rate by factor of 2.5 */
- spin_lock_irqsave(&ddiv->lock, flags);
if (lgm_get_clk_val(ddiv->membase, ddiv->reg, ddiv->shift2, 1)) {
rate64 = rate64 * 2;
rate64 = DIV_ROUND_CLOSEST_ULL(rate64, 5);
}
- spin_unlock_irqrestore(&ddiv->lock, flags);
return rate64;
}
@@ -600,7 +555,6 @@ int lgm_clk_register_ddiv(struct lgm_clk_provider *ctx,
init.num_parents = 1;
ddiv->membase = ctx->membase;
- ddiv->lock = ctx->lock;
ddiv->reg = list->reg;
ddiv->shift0 = list->shift0;
ddiv->width0 = list->width0;
diff --git a/drivers/clk/x86/clk-cgu.h b/drivers/clk/x86/clk-cgu.h
index 4e22bfb..bcaf8ae 100644
--- a/drivers/clk/x86/clk-cgu.h
+++ b/drivers/clk/x86/clk-cgu.h
@@ -1,28 +1,28 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
- * Copyright(c) 2020 Intel Corporation.
- * Zhu YiXin <yixin.zhu@intel.com>
- * Rahul Tanwar <rahul.tanwar@intel.com>
+ * Copyright (C) 2020-2022 MaxLinear, Inc.
+ * Copyright (C) 2020 Intel Corporation.
+ * Zhu Yixin <yzhu@maxlinear.com>
+ * Rahul Tanwar <rtanwar@maxlinear.com>
*/
#ifndef __CLK_CGU_H
#define __CLK_CGU_H
-#include <linux/io.h>
+#include <linux/regmap.h>
struct lgm_clk_mux {
struct clk_hw hw;
- void __iomem *membase;
+ struct regmap *membase;
unsigned int reg;
u8 shift;
u8 width;
unsigned long flags;
- spinlock_t lock;
};
struct lgm_clk_divider {
struct clk_hw hw;
- void __iomem *membase;
+ struct regmap *membase;
unsigned int reg;
u8 shift;
u8 width;
@@ -30,12 +30,11 @@ struct lgm_clk_divider {
u8 width_gate;
unsigned long flags;
const struct clk_div_table *table;
- spinlock_t lock;
};
struct lgm_clk_ddiv {
struct clk_hw hw;
- void __iomem *membase;
+ struct regmap *membase;
unsigned int reg;
u8 shift0;
u8 width0;
@@ -48,16 +47,14 @@ struct lgm_clk_ddiv {
unsigned int mult;
unsigned int div;
unsigned long flags;
- spinlock_t lock;
};
struct lgm_clk_gate {
struct clk_hw hw;
- void __iomem *membase;
+ struct regmap *membase;
unsigned int reg;
u8 shift;
unsigned long flags;
- spinlock_t lock;
};
enum lgm_clk_type {
@@ -77,11 +74,10 @@ enum lgm_clk_type {
* @clk_data: array of hw clocks and clk number.
*/
struct lgm_clk_provider {
- void __iomem *membase;
+ struct regmap *membase;
struct device_node *np;
struct device *dev;
struct clk_hw_onecell_data clk_data;
- spinlock_t lock;
};
enum pll_type {
@@ -92,11 +88,10 @@ enum pll_type {
struct lgm_clk_pll {
struct clk_hw hw;
- void __iomem *membase;
+ struct regmap *membase;
unsigned int reg;
unsigned long flags;
enum pll_type type;
- spinlock_t lock;
};
/**
@@ -202,6 +197,8 @@ struct lgm_clk_branch {
/* clock flags definition */
#define CLOCK_FLAG_VAL_INIT BIT(16)
#define MUX_CLK_SW BIT(17)
+#define GATE_CLK_HW BIT(18)
+#define DIV_CLK_NO_MASK BIT(19)
#define LGM_MUX(_id, _name, _pdata, _f, _reg, \
_shift, _width, _cf, _v) \
@@ -300,29 +297,32 @@ struct lgm_clk_branch {
.div = _d, \
}
-static inline void lgm_set_clk_val(void __iomem *membase, u32 reg,
+static inline void lgm_set_clk_val(struct regmap *membase, u32 reg,
u8 shift, u8 width, u32 set_val)
{
u32 mask = (GENMASK(width - 1, 0) << shift);
- u32 regval;
- regval = readl(membase + reg);
- regval = (regval & ~mask) | ((set_val << shift) & mask);
- writel(regval, membase + reg);
+ regmap_update_bits(membase, reg, mask, set_val << shift);
}
-static inline u32 lgm_get_clk_val(void __iomem *membase, u32 reg,
+static inline u32 lgm_get_clk_val(struct regmap *membase, u32 reg,
u8 shift, u8 width)
{
u32 mask = (GENMASK(width - 1, 0) << shift);
u32 val;
- val = readl(membase + reg);
+ if (regmap_read(membase, reg, &val)) {
+ WARN_ONCE(1, "Failed to read clk reg: 0x%x\n", reg);
+ return 0;
+ }
+
val = (val & mask) >> shift;
return val;
}
+
+
int lgm_clk_register_branches(struct lgm_clk_provider *ctx,
const struct lgm_clk_branch *list,
unsigned int nr_clk);
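The clk-cgu.h hunk above is the heart of the LGM conversion: register fields are now touched through a regmap, which serializes accesses itself, so the per-clock spinlocks around open-coded read/modify/write can go away. A minimal sketch of the two shapes (generic example, not the LGM helpers):

    #include <linux/io.h>
    #include <linux/regmap.h>
    #include <linux/spinlock.h>

    /* Old shape: raw MMIO read/modify/write, caller must provide the locking. */
    static void field_update_mmio(void __iomem *base, u32 reg, u32 mask,
                                  u32 val, spinlock_t *lock)
    {
            unsigned long flags;
            u32 tmp;

            spin_lock_irqsave(lock, flags);
            tmp = readl(base + reg);
            tmp = (tmp & ~mask) | (val & mask);
            writel(tmp, base + reg);
            spin_unlock_irqrestore(lock, flags);
    }

    /* New shape: one call, regmap handles the locking internally. */
    static void field_update_regmap(struct regmap *map, u32 reg, u32 mask, u32 val)
    {
            regmap_update_bits(map, reg, mask, val);
    }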
diff --git a/drivers/clk/x86/clk-lgm.c b/drivers/clk/x86/clk-lgm.c
index 020f4e8..f69455d 100644
--- a/drivers/clk/x86/clk-lgm.c
+++ b/drivers/clk/x86/clk-lgm.c
@@ -1,10 +1,12 @@
// SPDX-License-Identifier: GPL-2.0
/*
+ * Copyright (C) 2020-2022 MaxLinear, Inc.
* Copyright (C) 2020 Intel Corporation.
- * Zhu YiXin <yixin.zhu@intel.com>
- * Rahul Tanwar <rahul.tanwar@intel.com>
+ * Zhu Yixin <yzhu@maxlinear.com>
+ * Rahul Tanwar <rtanwar@maxlinear.com>
*/
#include <linux/clk-provider.h>
+#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <dt-bindings/clock/intel,lgm-clk.h>
@@ -253,8 +255,8 @@ static const struct lgm_clk_branch lgm_branch_clks[] = {
LGM_FIXED(LGM_CLK_SLIC, "slic", NULL, 0, CGU_IF_CLK1,
8, 2, CLOCK_FLAG_VAL_INIT, 8192000, 2),
LGM_FIXED(LGM_CLK_DOCSIS, "v_docsis", NULL, 0, 0, 0, 0, 0, 16000000, 0),
- LGM_DIV(LGM_CLK_DCL, "dcl", "v_ifclk", 0, CGU_PCMCR,
- 25, 3, 0, 0, 0, 0, dcl_div),
+ LGM_DIV(LGM_CLK_DCL, "dcl", "v_ifclk", CLK_SET_RATE_PARENT, CGU_PCMCR,
+ 25, 3, 0, 0, DIV_CLK_NO_MASK, 0, dcl_div),
LGM_MUX(LGM_CLK_PCM, "pcm", pcm_p, 0, CGU_C55_PCMCR,
0, 1, CLK_MUX_ROUND_CLOSEST, 0),
LGM_FIXED_FACTOR(LGM_CLK_DDR_PHY, "ddr_phy", "ddr",
@@ -433,13 +435,15 @@ static int lgm_cgu_probe(struct platform_device *pdev)
ctx->clk_data.num = CLK_NR_CLKS;
- ctx->membase = devm_platform_ioremap_resource(pdev, 0);
- if (IS_ERR(ctx->membase))
+ ctx->membase = syscon_node_to_regmap(np);
+ if (IS_ERR(ctx->membase)) {
+ dev_err(dev, "Failed to get clk CGU iomem\n");
return PTR_ERR(ctx->membase);
+ }
+
ctx->np = np;
ctx->dev = dev;
- spin_lock_init(&ctx->lock);
ret = lgm_clk_register_plls(ctx, lgm_pll_clks,
ARRAY_SIZE(lgm_pll_clks));
diff --git a/drivers/comedi/drivers/adv_pci1760.c b/drivers/comedi/drivers/adv_pci1760.c
index 6de8ab9..d6934b6 100644
--- a/drivers/comedi/drivers/adv_pci1760.c
+++ b/drivers/comedi/drivers/adv_pci1760.c
@@ -59,7 +59,7 @@
#define PCI1760_CMD_CLR_IMB2 0x00 /* Clears IMB2 */
#define PCI1760_CMD_SET_DO 0x01 /* Set output state */
#define PCI1760_CMD_GET_DO 0x02 /* Read output status */
-#define PCI1760_CMD_GET_STATUS 0x03 /* Read current status */
+#define PCI1760_CMD_GET_STATUS 0x07 /* Read current status */
#define PCI1760_CMD_GET_FW_VER 0x0e /* Read firmware version */
#define PCI1760_CMD_GET_HW_VER 0x0f /* Read hardware version */
#define PCI1760_CMD_SET_PWM_HI(x) (0x10 + (x) * 2) /* Set "hi" period */
diff --git a/drivers/cpufreq/armada-37xx-cpufreq.c b/drivers/cpufreq/armada-37xx-cpufreq.c
index c10fc33..b74289a 100644
--- a/drivers/cpufreq/armada-37xx-cpufreq.c
+++ b/drivers/cpufreq/armada-37xx-cpufreq.c
@@ -445,7 +445,7 @@ static int __init armada37xx_cpufreq_driver_init(void)
return -ENODEV;
}
- clk = clk_get(cpu_dev, 0);
+ clk = clk_get(cpu_dev, NULL);
if (IS_ERR(clk)) {
dev_err(cpu_dev, "Cannot get clock for CPU0\n");
return PTR_ERR(clk);
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index ca1d103..e1b5975 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -133,6 +133,7 @@ static const struct of_device_id blocklist[] __initconst = {
{ .compatible = "nvidia,tegra30", },
{ .compatible = "nvidia,tegra124", },
{ .compatible = "nvidia,tegra210", },
+ { .compatible = "nvidia,tegra234", },
{ .compatible = "qcom,apq8096", },
{ .compatible = "qcom,msm8996", },
@@ -143,6 +144,7 @@ static const struct of_device_id blocklist[] __initconst = {
{ .compatible = "qcom,sc8180x", },
{ .compatible = "qcom,sdm845", },
{ .compatible = "qcom,sm6350", },
+ { .compatible = "qcom,sm6375", },
{ .compatible = "qcom,sm8150", },
{ .compatible = "qcom,sm8250", },
{ .compatible = "qcom,sm8350", },
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 63f7c21..55c8031 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -388,6 +388,15 @@ static void free_policy_dbs_info(struct policy_dbs_info *policy_dbs,
gov->free(policy_dbs);
}
+static void cpufreq_dbs_data_release(struct kobject *kobj)
+{
+ struct dbs_data *dbs_data = to_dbs_data(to_gov_attr_set(kobj));
+ struct dbs_governor *gov = dbs_data->gov;
+
+ gov->exit(dbs_data);
+ kfree(dbs_data);
+}
+
int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
{
struct dbs_governor *gov = dbs_governor_of(policy);
@@ -425,6 +434,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
goto free_policy_dbs_info;
}
+ dbs_data->gov = gov;
gov_attr_set_init(&dbs_data->attr_set, &policy_dbs->list);
ret = gov->init(dbs_data);
@@ -447,6 +457,7 @@ int cpufreq_dbs_governor_init(struct cpufreq_policy *policy)
policy->governor_data = policy_dbs;
gov->kobj_type.sysfs_ops = &governor_sysfs_ops;
+ gov->kobj_type.release = cpufreq_dbs_data_release;
ret = kobject_init_and_add(&dbs_data->attr_set.kobj, &gov->kobj_type,
get_governor_parent_kobj(policy),
"%s", gov->gov.name);
@@ -488,13 +499,8 @@ void cpufreq_dbs_governor_exit(struct cpufreq_policy *policy)
policy->governor_data = NULL;
- if (!count) {
- if (!have_governor_per_policy())
- gov->gdbs_data = NULL;
-
- gov->exit(dbs_data);
- kfree(dbs_data);
- }
+ if (!count && !have_governor_per_policy())
+ gov->gdbs_data = NULL;
free_policy_dbs_info(policy_dbs, gov);
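The cpufreq_governor hunk above moves the gov->exit() and kfree() of dbs_data into a kobject release callback, so the tunables object is freed only once the last sysfs reference is gone. A minimal sketch of tying an object's lifetime to its kobject this way (hypothetical my_obj naming, not the governor code):

    #include <linux/kobject.h>
    #include <linux/slab.h>

    struct my_obj {
            struct kobject kobj;
            int value;
    };

    /* Runs when kobject_put() drops the last reference; safe point to free. */
    static void my_obj_release(struct kobject *kobj)
    {
            struct my_obj *obj = container_of(kobj, struct my_obj, kobj);

            kfree(obj);
    }

    static struct kobj_type my_ktype = {
            .release   = my_obj_release,
            .sysfs_ops = &kobj_sysfs_ops,
    };

    static struct my_obj *my_obj_create(struct kobject *parent)
    {
            struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

            if (!obj)
                    return NULL;

            if (kobject_init_and_add(&obj->kobj, &my_ktype, parent, "my_obj")) {
                    kobject_put(&obj->kobj);        /* ->release() frees obj */
                    return NULL;
            }

            return obj;
    }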
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index bab8e61..a6de263 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -37,6 +37,7 @@ enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
/* Governor demand based switching data (per-policy or global). */
struct dbs_data {
struct gov_attr_set attr_set;
+ struct dbs_governor *gov;
void *tuners;
unsigned int ignore_nice_load;
unsigned int sampling_rate;
diff --git a/drivers/cpufreq/cpufreq_governor_attr_set.c b/drivers/cpufreq/cpufreq_governor_attr_set.c
index a6f365b..771770e 100644
--- a/drivers/cpufreq/cpufreq_governor_attr_set.c
+++ b/drivers/cpufreq/cpufreq_governor_attr_set.c
@@ -8,11 +8,6 @@
#include "cpufreq_governor.h"
-static inline struct gov_attr_set *to_gov_attr_set(struct kobject *kobj)
-{
- return container_of(kobj, struct gov_attr_set, kobj);
-}
-
static inline struct governor_attr *to_gov_attr(struct attribute *attr)
{
return container_of(attr, struct governor_attr, attr);
diff --git a/drivers/cpuidle/cpuidle-psci-domain.c b/drivers/cpuidle/cpuidle-psci-domain.c
index ff2c3f8e..ce5c415 100644
--- a/drivers/cpuidle/cpuidle-psci-domain.c
+++ b/drivers/cpuidle/cpuidle-psci-domain.c
@@ -182,7 +182,8 @@ static void psci_pd_remove(void)
struct psci_pd_provider *pd_provider, *it;
struct generic_pm_domain *genpd;
- list_for_each_entry_safe(pd_provider, it, &psci_pd_providers, link) {
+ list_for_each_entry_safe_reverse(pd_provider, it,
+ &psci_pd_providers, link) {
of_genpd_del_provider(pd_provider->node);
genpd = of_genpd_remove_last(pd_provider->node);
diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
index 8278d98..e1556a3 100644
--- a/drivers/crypto/amcc/crypto4xx_core.c
+++ b/drivers/crypto/amcc/crypto4xx_core.c
@@ -522,7 +522,6 @@ static void crypto4xx_cipher_done(struct crypto4xx_device *dev,
{
struct skcipher_request *req;
struct scatterlist *dst;
- dma_addr_t addr;
req = skcipher_request_cast(pd_uinfo->async_req);
@@ -531,8 +530,8 @@ static void crypto4xx_cipher_done(struct crypto4xx_device *dev,
req->cryptlen, req->dst);
} else {
dst = pd_uinfo->dest_va;
- addr = dma_map_page(dev->core_dev->device, sg_page(dst),
- dst->offset, dst->length, DMA_FROM_DEVICE);
+ dma_unmap_page(dev->core_dev->device, pd->dest, dst->length,
+ DMA_FROM_DEVICE);
}
if (pd_uinfo->sa_va->sa_command_0.bf.save_iv == SA_SAVE_IV) {
@@ -557,10 +556,9 @@ static void crypto4xx_ahash_done(struct crypto4xx_device *dev,
struct ahash_request *ahash_req;
ahash_req = ahash_request_cast(pd_uinfo->async_req);
- ctx = crypto_tfm_ctx(ahash_req->base.tfm);
+ ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(ahash_req));
- crypto4xx_copy_digest_to_dst(ahash_req->result, pd_uinfo,
- crypto_tfm_ctx(ahash_req->base.tfm));
+ crypto4xx_copy_digest_to_dst(ahash_req->result, pd_uinfo, ctx);
crypto4xx_ret_sg_desc(dev, pd_uinfo);
if (pd_uinfo->state & PD_ENTRY_BUSY)
diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
index 9f753cb..b386a70 100644
--- a/drivers/crypto/ccp/ccp-dmaengine.c
+++ b/drivers/crypto/ccp/ccp-dmaengine.c
@@ -642,14 +642,26 @@ static void ccp_dma_release(struct ccp_device *ccp)
chan = ccp->ccp_dma_chan + i;
dma_chan = &chan->dma_chan;
- if (dma_chan->client_count)
- dma_release_channel(dma_chan);
-
tasklet_kill(&chan->cleanup_tasklet);
list_del_rcu(&dma_chan->device_node);
}
}
+static void ccp_dma_release_channels(struct ccp_device *ccp)
+{
+ struct ccp_dma_chan *chan;
+ struct dma_chan *dma_chan;
+ unsigned int i;
+
+ for (i = 0; i < ccp->cmd_q_count; i++) {
+ chan = ccp->ccp_dma_chan + i;
+ dma_chan = &chan->dma_chan;
+
+ if (dma_chan->client_count)
+ dma_release_channel(dma_chan);
+ }
+}
+
int ccp_dmaengine_register(struct ccp_device *ccp)
{
struct ccp_dma_chan *chan;
@@ -770,8 +782,9 @@ void ccp_dmaengine_unregister(struct ccp_device *ccp)
if (!dmaengine)
return;
- ccp_dma_release(ccp);
+ ccp_dma_release_channels(ccp);
dma_async_device_unregister(dma_dev);
+ ccp_dma_release(ccp);
kmem_cache_destroy(ccp->dma_desc_cache);
kmem_cache_destroy(ccp->dma_cmd_cache);
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index 900727b..70174a9 100644
--- a/drivers/crypto/ccp/sev-dev.c
+++ b/drivers/crypto/ccp/sev-dev.c
@@ -24,6 +24,7 @@
#include <linux/cpufeature.h>
#include <asm/smp.h>
+#include <asm/cacheflush.h>
#include "psp-dev.h"
#include "sev-dev.h"
@@ -141,6 +142,17 @@ static int sev_cmd_buffer_len(int cmd)
return 0;
}
+static void *sev_fw_alloc(unsigned long len)
+{
+ struct page *page;
+
+ page = alloc_pages(GFP_KERNEL, get_order(len));
+ if (!page)
+ return NULL;
+
+ return page_address(page);
+}
+
static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
{
struct psp_device *psp = psp_master;
@@ -667,7 +679,14 @@ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
input_address = (void __user *)input.address;
if (input.address && input.length) {
- id_blob = kzalloc(input.length, GFP_KERNEL);
+ /*
+ * The length of the ID shouldn't be assumed by software since
+ * it may change in the future. The allocation size is limited
+ * to 1 << (PAGE_SHIFT + MAX_ORDER - 1) by the page allocator.
+ * If the allocation fails, simply return ENOMEM rather than
+ * warning in the kernel log.
+ */
+ id_blob = kzalloc(input.length, GFP_KERNEL | __GFP_NOWARN);
if (!id_blob)
return -ENOMEM;
@@ -1080,7 +1099,6 @@ EXPORT_SYMBOL_GPL(sev_issue_cmd_external_user);
void sev_pci_init(void)
{
struct sev_device *sev = psp_master->sev_data;
- struct page *tmr_page;
int error, rc;
if (!sev)
@@ -1096,14 +1114,13 @@ void sev_pci_init(void)
sev_get_api_version();
/* Obtain the TMR memory area for SEV-ES use */
- tmr_page = alloc_pages(GFP_KERNEL, get_order(SEV_ES_TMR_SIZE));
- if (tmr_page) {
- sev_es_tmr = page_address(tmr_page);
- } else {
- sev_es_tmr = NULL;
+ sev_es_tmr = sev_fw_alloc(SEV_ES_TMR_SIZE);
+ if (sev_es_tmr)
+ /* Must flush the cache before giving it to the firmware */
+ clflush_cache_range(sev_es_tmr, SEV_ES_TMR_SIZE);
+ else
dev_warn(sev->dev,
"SEV: TMR allocation failed, SEV-ES support unavailable\n");
- }
/* Initialize the platform */
rc = sev_platform_init(&error);
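The sev-dev hunk above factors the TMR allocation into a small helper and flushes the buffer from the CPU caches before handing it to the firmware. A minimal sketch of that helper shape (generic fw_buf_alloc name, not the driver's sev_fw_alloc):

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Allocate 'len' bytes of page-backed memory, returned as a virtual address. */
    static void *fw_buf_alloc(unsigned long len)
    {
            struct page *page = alloc_pages(GFP_KERNEL, get_order(len));

            if (!page)
                    return NULL;

            return page_address(page);
    }

On x86 the buffer would then be flushed with clflush_cache_range(buf, len) before the firmware touches it, as the patch does for the TMR.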
diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c
index 0572737..3dbe5405 100644
--- a/drivers/crypto/hisilicon/sgl.c
+++ b/drivers/crypto/hisilicon/sgl.c
@@ -122,9 +122,8 @@ struct hisi_acc_sgl_pool *hisi_acc_create_sgl_pool(struct device *dev,
for (j = 0; j < i; j++) {
dma_free_coherent(dev, block_size, block[j].sgl,
block[j].sgl_dma);
- memset(block + j, 0, sizeof(*block));
}
- kfree(pool);
+ kfree_sensitive(pool);
return ERR_PTR(-ENOMEM);
}
EXPORT_SYMBOL_GPL(hisi_acc_create_sgl_pool);
diff --git a/drivers/crypto/qat/qat_common/qat_algs.c b/drivers/crypto/qat/qat_common/qat_algs.c
index 9abdaf7..906082f 100644
--- a/drivers/crypto/qat/qat_common/qat_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_algs.c
@@ -434,8 +434,8 @@ static void qat_alg_skcipher_init_com(struct qat_alg_skcipher_ctx *ctx,
} else if (aes_v2_capable && mode == ICP_QAT_HW_CIPHER_CTR_MODE) {
ICP_QAT_FW_LA_SLICE_TYPE_SET(header->serv_specif_flags,
ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE);
- keylen = round_up(keylen, 16);
memcpy(cd->ucs_aes.key, key, keylen);
+ keylen = round_up(keylen, 16);
} else {
memcpy(cd->aes.key, key, keylen);
}
diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
index 6cc4da4..e75b9ed 100644
--- a/drivers/dax/bus.c
+++ b/drivers/dax/bus.c
@@ -397,8 +397,8 @@ static void unregister_dev_dax(void *dev)
dev_dbg(dev, "%s\n", __func__);
kill_dev_dax(dev_dax);
- free_dev_dax_ranges(dev_dax);
device_del(dev);
+ free_dev_dax_ranges(dev_dax);
put_device(dev);
}
diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index a376220..04f85f1 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -135,7 +135,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax)
if (rc) {
dev_warn(dev, "mapping%d: %#llx-%#llx memory add failed\n",
i, range.start, range.end);
- release_resource(res);
+ remove_resource(res);
kfree(res);
data->res[i] = NULL;
if (mapped)
@@ -181,7 +181,7 @@ static void dev_dax_kmem_remove(struct dev_dax *dev_dax)
rc = remove_memory(range.start, range_len(&range));
if (rc == 0) {
- release_resource(data->res[i]);
+ remove_resource(data->res[i]);
kfree(data->res[i]);
data->res[i] = NULL;
success++;
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index 80c2c03..95344ae 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -236,7 +236,7 @@
config HISI_DMA
tristate "HiSilicon DMA Engine support"
- depends on ARM64 || COMPILE_TEST
+ depends on ARCH_HISI || COMPILE_TEST
depends on PCI_MSI
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index af3ee28..4ec7bb5 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -451,7 +451,8 @@ static int dma_chan_get(struct dma_chan *chan)
/* The channel is already in use, update client count */
if (chan->client_count) {
__module_get(owner);
- goto out;
+ chan->client_count++;
+ return 0;
}
if (!try_module_get(owner))
@@ -470,11 +471,11 @@ static int dma_chan_get(struct dma_chan *chan)
goto err_out;
}
+ chan->client_count++;
+
if (!dma_has_cap(DMA_PRIVATE, chan->device->cap_mask))
balance_ref_count(chan);
-out:
- chan->client_count++;
return 0;
err_out:
diff --git a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
index 48de8d2..cfc47ef 100644
--- a/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
+++ b/drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
@@ -288,8 +288,6 @@ dma_chan_tx_status(struct dma_chan *dchan, dma_cookie_t cookie,
len = vd_to_axi_desc(vdesc)->hw_desc[0].len;
completed_length = completed_blocks * len;
bytes = length - completed_length;
- } else {
- bytes = vd_to_axi_desc(vdesc)->length;
}
spin_unlock_irqrestore(&chan->vc.lock, flags);
@@ -980,6 +978,11 @@ static noinline void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
/* The bad descriptor currently is in the head of vc list */
vd = vchan_next_desc(&chan->vc);
+ if (!vd) {
+ dev_err(chan2dev(chan), "BUG: %s, IRQ with no descriptors\n",
+ axi_chan_name(chan));
+ goto out;
+ }
/* Remove the completed descriptor from issued list */
list_del(&vd->node);
@@ -994,6 +997,7 @@ static noinline void axi_chan_handle_err(struct axi_dma_chan *chan, u32 status)
/* Try to restart the controller */
axi_chan_start_first_queued(chan);
+out:
spin_unlock_irqrestore(&chan->vc.lock, flags);
}
diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 36b3fe1..97f5e4e 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -438,6 +438,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
* and destination addresses are increased
* by the same portion (data length)
*/
+ } else if (xfer->type == EDMA_XFER_INTERLEAVED) {
+ burst->dar = dst_addr;
}
} else {
burst->dar = dst_addr;
@@ -453,6 +455,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
* and destination addresses are increased
* by the same portion (data length)
*/
+ } else if (xfer->type == EDMA_XFER_INTERLEAVED) {
+ burst->sar = src_addr;
}
}
diff --git a/drivers/dma/dw-edma/dw-edma-v0-core.c b/drivers/dma/dw-edma/dw-edma-v0-core.c
index b5b8f81..043a4f3 100644
--- a/drivers/dma/dw-edma/dw-edma-v0-core.c
+++ b/drivers/dma/dw-edma/dw-edma-v0-core.c
@@ -192,7 +192,7 @@ static inline void writeq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
static inline u64 readq_ch(struct dw_edma *dw, enum dw_edma_dir dir, u16 ch,
const void __iomem *addr)
{
- u32 value;
+ u64 value;
if (dw->mf == EDMA_MF_EDMA_LEGACY) {
u32 viewport_sel;
diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
index 11d3f2a..535f021 100644
--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -702,7 +702,7 @@ static void idxd_groups_clear_state(struct idxd_device *idxd)
group->use_rdbuf_limit = false;
group->rdbufs_allowed = 0;
group->rdbufs_reserved = 0;
- if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override) {
+ if (idxd->hw.version <= DEVICE_VERSION_2 && !tc_override) {
group->tc_a = 1;
group->tc_b = 1;
} else {
@@ -1248,8 +1248,7 @@ int __drv_enable_wq(struct idxd_wq *wq)
return 0;
err_map_portal:
- rc = idxd_wq_disable(wq, false);
- if (rc < 0)
+ if (idxd_wq_disable(wq, false))
dev_dbg(dev, "wq %s disable failed\n", dev_name(wq_confdev(wq)));
err:
return rc;
diff --git a/drivers/dma/idxd/init.c b/drivers/dma/idxd/init.c
index 6263d98..e0e0c7f 100644
--- a/drivers/dma/idxd/init.c
+++ b/drivers/dma/idxd/init.c
@@ -340,7 +340,7 @@ static int idxd_setup_groups(struct idxd_device *idxd)
}
idxd->groups[i] = group;
- if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override) {
+ if (idxd->hw.version <= DEVICE_VERSION_2 && !tc_override) {
group->tc_a = 1;
group->tc_b = 1;
} else {
diff --git a/drivers/dma/idxd/sysfs.c b/drivers/dma/idxd/sysfs.c
index 33d94c6..489a9d8 100644
--- a/drivers/dma/idxd/sysfs.c
+++ b/drivers/dma/idxd/sysfs.c
@@ -327,7 +327,7 @@ static ssize_t group_traffic_class_a_store(struct device *dev,
if (idxd->state == IDXD_DEV_ENABLED)
return -EPERM;
- if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override)
+ if (idxd->hw.version <= DEVICE_VERSION_2 && !tc_override)
return -EPERM;
if (val < 0 || val > 7)
@@ -369,7 +369,7 @@ static ssize_t group_traffic_class_b_store(struct device *dev,
if (idxd->state == IDXD_DEV_ENABLED)
return -EPERM;
- if (idxd->hw.version < DEVICE_VERSION_2 && !tc_override)
+ if (idxd->hw.version <= DEVICE_VERSION_2 && !tc_override)
return -EPERM;
if (val < 0 || val > 7)
diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index 5215a5e..292f4c9 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -1428,10 +1428,12 @@ static struct sdma_desc *sdma_transfer_init(struct sdma_channel *sdmac,
sdma_config_ownership(sdmac, false, true, false);
if (sdma_load_context(sdmac))
- goto err_desc_out;
+ goto err_bd_out;
return desc;
+err_bd_out:
+ sdma_free_bd(desc);
err_desc_out:
kfree(desc);
err_out:
diff --git a/drivers/dma/lgm/lgm-dma.c b/drivers/dma/lgm/lgm-dma.c
index 9b9184f..1709d15 100644
--- a/drivers/dma/lgm/lgm-dma.c
+++ b/drivers/dma/lgm/lgm-dma.c
@@ -914,7 +914,7 @@ static void ldma_dev_init(struct ldma_dev *d)
}
}
-static int ldma_cfg_init(struct ldma_dev *d)
+static int ldma_parse_dt(struct ldma_dev *d)
{
struct fwnode_handle *fwnode = dev_fwnode(d->dev);
struct ldma_port *p;
@@ -1661,10 +1661,6 @@ static int intel_ldma_probe(struct platform_device *pdev)
p->ldev = d;
}
- ret = ldma_cfg_init(d);
- if (ret)
- return ret;
-
dma_dev->dev = &pdev->dev;
ch_mask = (unsigned long)d->channels_mask;
@@ -1675,6 +1671,10 @@ static int intel_ldma_probe(struct platform_device *pdev)
ldma_dma_init_v3X(j, d);
}
+ ret = ldma_parse_dt(d);
+ if (ret)
+ return ret;
+
dma_dev->device_alloc_chan_resources = ldma_alloc_chan_resources;
dma_dev->device_free_chan_resources = ldma_free_chan_resources;
dma_dev->device_terminate_all = ldma_terminate_all;
diff --git a/drivers/dma/ptdma/ptdma-dev.c b/drivers/dma/ptdma/ptdma-dev.c
index daafea5..bca4063 100644
--- a/drivers/dma/ptdma/ptdma-dev.c
+++ b/drivers/dma/ptdma/ptdma-dev.c
@@ -71,12 +71,13 @@ static int pt_core_execute_cmd(struct ptdma_desc *desc, struct pt_cmd_queue *cmd
bool soc = FIELD_GET(DWORD0_SOC, desc->dw0);
u8 *q_desc = (u8 *)&cmd_q->qbase[cmd_q->qidx];
u32 tail;
+ unsigned long flags;
if (soc) {
desc->dw0 |= FIELD_PREP(DWORD0_IOC, desc->dw0);
desc->dw0 &= ~DWORD0_SOC;
}
- mutex_lock(&cmd_q->q_mutex);
+ spin_lock_irqsave(&cmd_q->q_lock, flags);
/* Copy 32-byte command descriptor to hw queue. */
memcpy(q_desc, desc, 32);
@@ -91,7 +92,7 @@ static int pt_core_execute_cmd(struct ptdma_desc *desc, struct pt_cmd_queue *cmd
/* Turn the queue back on using our cached control register */
pt_start_queue(cmd_q);
- mutex_unlock(&cmd_q->q_mutex);
+ spin_unlock_irqrestore(&cmd_q->q_lock, flags);
return 0;
}
@@ -197,7 +198,7 @@ int pt_core_init(struct pt_device *pt)
cmd_q->pt = pt;
cmd_q->dma_pool = dma_pool;
- mutex_init(&cmd_q->q_mutex);
+ spin_lock_init(&cmd_q->q_lock);
/* Page alignment satisfies our needs for N <= 128 */
cmd_q->qsize = Q_SIZE(Q_DESC_SIZE);
diff --git a/drivers/dma/ptdma/ptdma.h b/drivers/dma/ptdma/ptdma.h
index afbf192..0f0b400a 100644
--- a/drivers/dma/ptdma/ptdma.h
+++ b/drivers/dma/ptdma/ptdma.h
@@ -196,7 +196,7 @@ struct pt_cmd_queue {
struct ptdma_desc *qbase;
/* Aligned queue start address (per requirement) */
- struct mutex q_mutex ____cacheline_aligned;
+ spinlock_t q_lock ____cacheline_aligned;
unsigned int qidx;
unsigned int qsize;
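The ptdma hunks above replace the command-queue mutex with a spinlock because descriptor submission can happen from a context that must not sleep; spin_lock_irqsave() is legal there while mutex_lock() is not. A minimal sketch of protecting a short, non-sleeping critical section that way (hypothetical my_queue):

    #include <linux/spinlock.h>

    struct my_queue {
            spinlock_t lock;        /* was a mutex; a spinlock never sleeps */
            unsigned int qidx;
            unsigned int qsize;
    };

    static void my_queue_push(struct my_queue *q)
    {
            unsigned long flags;

            /* Usable from IRQ/atomic context, unlike mutex_lock(). */
            spin_lock_irqsave(&q->lock, flags);
            q->qidx = (q->qidx + 1) % q->qsize;
            spin_unlock_irqrestore(&q->lock, flags);
    }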
diff --git a/drivers/dma/sf-pdma/sf-pdma.c b/drivers/dma/sf-pdma/sf-pdma.c
index ab0ad7a..dcf2b7a 100644
--- a/drivers/dma/sf-pdma/sf-pdma.c
+++ b/drivers/dma/sf-pdma/sf-pdma.c
@@ -96,7 +96,6 @@ sf_pdma_prep_dma_memcpy(struct dma_chan *dchan, dma_addr_t dest, dma_addr_t src,
if (!desc)
return NULL;
- desc->in_use = true;
desc->dirn = DMA_MEM_TO_MEM;
desc->async_tx = vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
@@ -290,7 +289,7 @@ static void sf_pdma_free_desc(struct virt_dma_desc *vdesc)
struct sf_pdma_desc *desc;
desc = to_sf_pdma_desc(vdesc);
- desc->in_use = false;
+ kfree(desc);
}
static void sf_pdma_donebh_tasklet(struct tasklet_struct *t)
diff --git a/drivers/dma/sf-pdma/sf-pdma.h b/drivers/dma/sf-pdma/sf-pdma.h
index 0c20167..02a229a 100644
--- a/drivers/dma/sf-pdma/sf-pdma.h
+++ b/drivers/dma/sf-pdma/sf-pdma.h
@@ -82,7 +82,6 @@ struct sf_pdma_desc {
u64 src_addr;
struct virt_dma_desc vdesc;
struct sf_pdma_chan *chan;
- bool in_use;
enum dma_transfer_direction dirn;
struct dma_async_tx_descriptor *async_tx;
};
diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
index d1dff3a..f4f722e 100644
--- a/drivers/dma/tegra210-adma.c
+++ b/drivers/dma/tegra210-adma.c
@@ -224,7 +224,7 @@ static int tegra_adma_init(struct tegra_adma *tdma)
int ret;
/* Clear any interrupts */
- tdma_write(tdma, tdma->cdata->global_int_clear, 0x1);
+ tdma_write(tdma, tdma->cdata->ch_base_offset + tdma->cdata->global_int_clear, 0x1);
/* Assert soft reset */
tdma_write(tdma, ADMA_GLOBAL_SOFT_RESET, 0x1);
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index 75f2a00..d796e50 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -760,11 +760,12 @@ static void udma_decrement_byte_counters(struct udma_chan *uc, u32 val)
if (uc->desc->dir == DMA_DEV_TO_MEM) {
udma_rchanrt_write(uc, UDMA_CHAN_RT_BCNT_REG, val);
udma_rchanrt_write(uc, UDMA_CHAN_RT_SBCNT_REG, val);
- udma_rchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
+ if (uc->config.ep_type != PSIL_EP_NATIVE)
+ udma_rchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
} else {
udma_tchanrt_write(uc, UDMA_CHAN_RT_BCNT_REG, val);
udma_tchanrt_write(uc, UDMA_CHAN_RT_SBCNT_REG, val);
- if (!uc->bchan)
+ if (!uc->bchan && uc->config.ep_type != PSIL_EP_NATIVE)
udma_tchanrt_write(uc, UDMA_CHAN_RT_PEER_BCNT_REG, val);
}
}
diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
index 4273150..edc2bb8 100644
--- a/drivers/dma/xilinx/xilinx_dma.c
+++ b/drivers/dma/xilinx/xilinx_dma.c
@@ -3138,8 +3138,10 @@ static int xilinx_dma_probe(struct platform_device *pdev)
/* Initialize the channels */
for_each_child_of_node(node, child) {
err = xilinx_dma_child_probe(xdev, child);
- if (err < 0)
+ if (err < 0) {
+ of_node_put(child);
goto error;
+ }
}
if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
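The xilinx_dma hunk above fixes a device-node reference leak: for_each_child_of_node() holds a reference on the current child and only drops it when the loop advances, so breaking out early requires an explicit of_node_put(). A minimal sketch of the pattern (hypothetical helpers, not the driver's code):

    #include <linux/of.h>

    /* Hypothetical per-child setup; always succeeds in this sketch. */
    static int example_child_setup(struct device_node *child)
    {
            return 0;
    }

    static int example_setup_children(struct device_node *node)
    {
            struct device_node *child;
            int err;

            for_each_child_of_node(node, child) {
                    err = example_child_setup(child);
                    if (err < 0) {
                            /* Leaving the loop early: drop the iterator's reference. */
                            of_node_put(child);
                            return err;
                    }
            }

            return 0;
    }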
diff --git a/drivers/edac/edac_device.c b/drivers/edac/edac_device.c
index 8220ce5..85c2299 100644
--- a/drivers/edac/edac_device.c
+++ b/drivers/edac/edac_device.c
@@ -34,6 +34,9 @@
static DEFINE_MUTEX(device_ctls_mutex);
static LIST_HEAD(edac_device_list);
+/* Default workqueue processing interval on this instance, in msecs */
+#define DEFAULT_POLL_INTERVAL 1000
+
#ifdef CONFIG_EDAC_DEBUG
static void edac_device_dump_device(struct edac_device_ctl_info *edac_dev)
{
@@ -366,7 +369,7 @@ static void edac_device_workq_function(struct work_struct *work_req)
* whole one second to save timers firing all over the period
* between integral seconds
*/
- if (edac_dev->poll_msec == 1000)
+ if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
else
edac_queue_work(&edac_dev->work, edac_dev->delay);
@@ -396,7 +399,7 @@ static void edac_device_workq_setup(struct edac_device_ctl_info *edac_dev,
* timers firing on sub-second basis, while they are happy
* to fire together on the 1 second exactly
*/
- if (edac_dev->poll_msec == 1000)
+ if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
edac_queue_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
else
edac_queue_work(&edac_dev->work, edac_dev->delay);
@@ -430,7 +433,7 @@ void edac_device_reset_delay_period(struct edac_device_ctl_info *edac_dev,
edac_dev->delay = msecs_to_jiffies(msec);
/* See comment in edac_device_workq_setup() above */
- if (edac_dev->poll_msec == 1000)
+ if (edac_dev->poll_msec == DEFAULT_POLL_INTERVAL)
edac_mod_work(&edac_dev->work, round_jiffies_relative(edac_dev->delay));
else
edac_mod_work(&edac_dev->work, edac_dev->delay);
@@ -472,11 +475,7 @@ int edac_device_add_device(struct edac_device_ctl_info *edac_dev)
/* This instance is NOW RUNNING */
edac_dev->op_state = OP_RUNNING_POLL;
- /*
- * enable workq processing on this instance,
- * default = 1000 msec
- */
- edac_device_workq_setup(edac_dev, 1000);
+ edac_device_workq_setup(edac_dev, edac_dev->poll_msec ?: DEFAULT_POLL_INTERVAL);
} else {
edac_dev->op_state = OP_RUNNING_INTERRUPT;
}
diff --git a/drivers/edac/highbank_mc_edac.c b/drivers/edac/highbank_mc_edac.c
index 61b76ec..19fba25 100644
--- a/drivers/edac/highbank_mc_edac.c
+++ b/drivers/edac/highbank_mc_edac.c
@@ -174,8 +174,10 @@ static int highbank_mc_probe(struct platform_device *pdev)
drvdata = mci->pvt_info;
platform_set_drvdata(pdev, mci);
- if (!devres_open_group(&pdev->dev, NULL, GFP_KERNEL))
- return -ENOMEM;
+ if (!devres_open_group(&pdev->dev, NULL, GFP_KERNEL)) {
+ res = -ENOMEM;
+ goto free;
+ }
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!r) {
@@ -243,6 +245,7 @@ static int highbank_mc_probe(struct platform_device *pdev)
edac_mc_del_mc(&pdev->dev);
err:
devres_release_group(&pdev->dev, NULL);
+free:
edac_mc_free(mci);
return res;
}
diff --git a/drivers/edac/qcom_edac.c b/drivers/edac/qcom_edac.c
index 97a27e4..c45519f 100644
--- a/drivers/edac/qcom_edac.c
+++ b/drivers/edac/qcom_edac.c
@@ -252,7 +252,7 @@ dump_syn_reg_values(struct llcc_drv_data *drv, u32 bank, int err_type)
static int
dump_syn_reg(struct edac_device_ctl_info *edev_ctl, int err_type, u32 bank)
{
- struct llcc_drv_data *drv = edev_ctl->pvt_info;
+ struct llcc_drv_data *drv = edev_ctl->dev->platform_data;
int ret;
ret = dump_syn_reg_values(drv, bank, err_type);
@@ -289,7 +289,7 @@ static irqreturn_t
llcc_ecc_irq_handler(int irq, void *edev_ctl)
{
struct edac_device_ctl_info *edac_dev_ctl = edev_ctl;
- struct llcc_drv_data *drv = edac_dev_ctl->pvt_info;
+ struct llcc_drv_data *drv = edac_dev_ctl->dev->platform_data;
irqreturn_t irq_rc = IRQ_NONE;
u32 drp_error, trp_error, i;
int ret;
@@ -358,7 +358,6 @@ static int qcom_llcc_edac_probe(struct platform_device *pdev)
edev_ctl->dev_name = dev_name(dev);
edev_ctl->ctl_name = "llcc";
edev_ctl->panic_on_ue = LLCC_ERP_PANIC_ON_UE;
- edev_ctl->pvt_info = llcc_driv_data;
rc = edac_device_add_device(edev_ctl);
if (rc)
diff --git a/drivers/extcon/extcon-usbc-tusb320.c b/drivers/extcon/extcon-usbc-tusb320.c
index 7223c4b..9dfa545 100644
--- a/drivers/extcon/extcon-usbc-tusb320.c
+++ b/drivers/extcon/extcon-usbc-tusb320.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
-/**
+/*
* drivers/extcon/extcon-tusb320.c - TUSB320 extcon driver
*
* Copyright (C) 2020 National Instruments Corporation
diff --git a/drivers/firewire/core-cdev.c b/drivers/firewire/core-cdev.c
index b0cc3f1..16ea847 100644
--- a/drivers/firewire/core-cdev.c
+++ b/drivers/firewire/core-cdev.c
@@ -818,8 +818,10 @@ static int ioctl_send_response(struct client *client, union ioctl_arg *arg)
r = container_of(resource, struct inbound_transaction_resource,
resource);
- if (is_fcp_request(r->request))
+ if (is_fcp_request(r->request)) {
+ kfree(r->data);
goto out;
+ }
if (a->length != fw_get_response_length(r->request)) {
ret = -EINVAL;
diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c
index a8ff4c9..1184249 100644
--- a/drivers/firmware/arm_scmi/driver.c
+++ b/drivers/firmware/arm_scmi/driver.c
@@ -783,6 +783,8 @@ static int do_xfer(const struct scmi_protocol_handle *ph,
xfer->hdr.protocol_id, xfer->hdr.seq,
xfer->hdr.poll_completion);
+ /* Clear any stale status */
+ xfer->hdr.status = SCMI_SUCCESS;
xfer->state = SCMI_XFER_SENT_OK;
/*
* Even though spinlocking is not needed here since no race is possible
diff --git a/drivers/firmware/arm_scmi/mailbox.c b/drivers/firmware/arm_scmi/mailbox.c
index e09eb12..ed9b83a 100644
--- a/drivers/firmware/arm_scmi/mailbox.c
+++ b/drivers/firmware/arm_scmi/mailbox.c
@@ -52,6 +52,39 @@ static bool mailbox_chan_available(struct device *dev, int idx)
"#mbox-cells", idx, NULL);
}
+static int mailbox_chan_validate(struct device *cdev)
+{
+ int num_mb, num_sh, ret = 0;
+ struct device_node *np = cdev->of_node;
+
+ num_mb = of_count_phandle_with_args(np, "mboxes", "#mbox-cells");
+ num_sh = of_count_phandle_with_args(np, "shmem", NULL);
+ /* Bail out if mboxes and shmem descriptors are inconsistent */
+ if (num_mb <= 0 || num_sh > 2 || num_mb != num_sh) {
+ dev_warn(cdev, "Invalid channel descriptor for '%s'\n",
+ of_node_full_name(np));
+ return -EINVAL;
+ }
+
+ if (num_sh > 1) {
+ struct device_node *np_tx, *np_rx;
+
+ np_tx = of_parse_phandle(np, "shmem", 0);
+ np_rx = of_parse_phandle(np, "shmem", 1);
+ /* SCMI Tx and Rx shared mem areas have to be distinct */
+ if (!np_tx || !np_rx || np_tx == np_rx) {
+ dev_warn(cdev, "Invalid shmem descriptor for '%s'\n",
+ of_node_full_name(np));
+ ret = -EINVAL;
+ }
+
+ of_node_put(np_tx);
+ of_node_put(np_rx);
+ }
+
+ return ret;
+}
+
static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
bool tx)
{
@@ -64,6 +97,10 @@ static int mailbox_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
resource_size_t size;
struct resource res;
+ ret = mailbox_chan_validate(cdev);
+ if (ret)
+ return ret;
+
smbox = devm_kzalloc(dev, sizeof(*smbox), GFP_KERNEL);
if (!smbox)
return -ENOMEM;
diff --git a/drivers/firmware/arm_scmi/shmem.c b/drivers/firmware/arm_scmi/shmem.c
index 0e3eaea..56a1f61 100644
--- a/drivers/firmware/arm_scmi/shmem.c
+++ b/drivers/firmware/arm_scmi/shmem.c
@@ -58,10 +58,11 @@ u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem)
void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
struct scmi_xfer *xfer)
{
+ size_t len = ioread32(&shmem->length);
+
xfer->hdr.status = ioread32(shmem->msg_payload);
/* Skip the length of header and status in shmem area i.e 8 bytes */
- xfer->rx.len = min_t(size_t, xfer->rx.len,
- ioread32(&shmem->length) - 8);
+ xfer->rx.len = min_t(size_t, xfer->rx.len, len > 8 ? len - 8 : 0);
/* Take a copy to the rx buffer.. */
memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
@@ -70,8 +71,10 @@ void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
size_t max_len, struct scmi_xfer *xfer)
{
+ size_t len = ioread32(&shmem->length);
+
/* Skip only the length of header in shmem area i.e 4 bytes */
- xfer->rx.len = min_t(size_t, max_len, ioread32(&shmem->length) - 4);
+ xfer->rx.len = min_t(size_t, max_len, len > 4 ? len - 4 : 0);
/* Take a copy to the rx buffer.. */
memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
diff --git a/drivers/firmware/dmi-sysfs.c b/drivers/firmware/dmi-sysfs.c
index 4a93fb4..3d57b08 100644
--- a/drivers/firmware/dmi-sysfs.c
+++ b/drivers/firmware/dmi-sysfs.c
@@ -602,16 +602,16 @@ static void __init dmi_sysfs_register_handle(const struct dmi_header *dh,
*ret = kobject_init_and_add(&entry->kobj, &dmi_sysfs_entry_ktype, NULL,
"%d-%d", dh->type, entry->instance);
- if (*ret) {
- kobject_put(&entry->kobj);
- return;
- }
-
/* Thread on the global list for cleanup */
spin_lock(&entry_list_lock);
list_add_tail(&entry->list, &entry_list);
spin_unlock(&entry_list_lock);
+ if (*ret) {
+ kobject_put(&entry->kobj);
+ return;
+ }
+
/* Handle specializations by type */
switch (dh->type) {
case DMI_ENTRY_SYSTEM_EVENT_LOG:
diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index a2765d6..332739f 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -950,6 +950,8 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size)
/* first try to find a slot in an existing linked list entry */
for (prsv = efi_memreserve_root->next; prsv; ) {
rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB);
+ if (!rsv)
+ return -ENOMEM;
index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size);
if (index < rsv->size) {
rsv->entry[index].base = addr;
diff --git a/drivers/firmware/efi/memattr.c b/drivers/firmware/efi/memattr.c
index 0a9aba5..f178b29 100644
--- a/drivers/firmware/efi/memattr.c
+++ b/drivers/firmware/efi/memattr.c
@@ -33,7 +33,7 @@ int __init efi_memattr_init(void)
return -ENOMEM;
}
- if (tbl->version > 1) {
+ if (tbl->version > 2) {
pr_warn("Unexpected EFI Memory Attributes table version %d\n",
tbl->version);
goto unmap;
diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
index f3e54f6..60075e0 100644
--- a/drivers/firmware/efi/runtime-wrappers.c
+++ b/drivers/firmware/efi/runtime-wrappers.c
@@ -62,6 +62,7 @@ struct efi_runtime_work efi_rts_work;
\
if (!efi_enabled(EFI_RUNTIME_SERVICES)) { \
pr_warn_once("EFI Runtime Services are disabled!\n"); \
+ efi_rts_work.status = EFI_DEVICE_ERROR; \
goto exit; \
} \
\
diff --git a/drivers/firmware/efi/sysfb_efi.c b/drivers/firmware/efi/sysfb_efi.c
index 4c7c9dd..7ac7578 100644
--- a/drivers/firmware/efi/sysfb_efi.c
+++ b/drivers/firmware/efi/sysfb_efi.c
@@ -266,6 +266,14 @@ static const struct dmi_system_id efifb_dmi_swap_width_height[] __initconst = {
"Lenovo ideapad D330-10IGM"),
},
},
+ {
+ /* Lenovo IdeaPad Duet 3 10IGL5 with 1200x1920 portrait screen */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION,
+ "IdeaPad Duet 3 10IGL5"),
+ },
+ },
{},
};
@@ -335,7 +343,7 @@ static const struct fwnode_operations efifb_fwnode_ops = {
#ifdef CONFIG_EFI
static struct fwnode_handle efifb_fwnode;
-__init void sysfb_apply_efi_quirks(struct platform_device *pd)
+__init void sysfb_apply_efi_quirks(void)
{
if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI ||
!(screen_info.capabilities & VIDEO_CAPABILITY_SKIP_QUIRKS))
@@ -349,7 +357,10 @@ __init void sysfb_apply_efi_quirks(struct platform_device *pd)
screen_info.lfb_height = temp;
screen_info.lfb_linelength = 4 * screen_info.lfb_width;
}
+}
+__init void sysfb_set_efifb_fwnode(struct platform_device *pd)
+{
if (screen_info.orig_video_isVGA == VIDEO_TYPE_EFI && IS_ENABLED(CONFIG_PCI)) {
fwnode_init(&efifb_fwnode, &efifb_fwnode_ops);
pd->dev.fwnode = &efifb_fwnode;
diff --git a/drivers/firmware/google/coreboot_table.c b/drivers/firmware/google/coreboot_table.c
index 9ca21fe..f3694d3 100644
--- a/drivers/firmware/google/coreboot_table.c
+++ b/drivers/firmware/google/coreboot_table.c
@@ -93,7 +93,12 @@ static int coreboot_table_populate(struct device *dev, void *ptr)
for (i = 0; i < header->table_entries; i++) {
entry = ptr_entry;
- device = kzalloc(sizeof(struct device) + entry->size, GFP_KERNEL);
+ if (entry->size < sizeof(*entry)) {
+ dev_warn(dev, "coreboot table entry too small!\n");
+ return -EINVAL;
+ }
+
+ device = kzalloc(sizeof(device->dev) + entry->size, GFP_KERNEL);
if (!device)
return -ENOMEM;
@@ -101,7 +106,7 @@ static int coreboot_table_populate(struct device *dev, void *ptr)
device->dev.parent = dev;
device->dev.bus = &coreboot_bus_type;
device->dev.release = coreboot_device_release;
- memcpy(&device->entry, ptr_entry, entry->size);
+ memcpy(device->raw, ptr_entry, entry->size);
ret = device_register(&device->dev);
if (ret) {
diff --git a/drivers/firmware/google/coreboot_table.h b/drivers/firmware/google/coreboot_table.h
index beb7786..4a89277 100644
--- a/drivers/firmware/google/coreboot_table.h
+++ b/drivers/firmware/google/coreboot_table.h
@@ -66,6 +66,7 @@ struct coreboot_device {
struct coreboot_table_entry entry;
struct lb_cbmem_ref cbmem_ref;
struct lb_framebuffer framebuffer;
+ DECLARE_FLEX_ARRAY(u8, raw);
};
};
diff --git a/drivers/firmware/google/framebuffer-coreboot.c b/drivers/firmware/google/framebuffer-coreboot.c
index c6dcc1e..c323a81 100644
--- a/drivers/firmware/google/framebuffer-coreboot.c
+++ b/drivers/firmware/google/framebuffer-coreboot.c
@@ -43,9 +43,7 @@ static int framebuffer_probe(struct coreboot_device *dev)
fb->green_mask_pos == formats[i].green.offset &&
fb->green_mask_size == formats[i].green.length &&
fb->blue_mask_pos == formats[i].blue.offset &&
- fb->blue_mask_size == formats[i].blue.length &&
- fb->reserved_mask_pos == formats[i].transp.offset &&
- fb->reserved_mask_size == formats[i].transp.length)
+ fb->blue_mask_size == formats[i].blue.length)
pdata.format = formats[i].name;
}
if (!pdata.format)
diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
index 4e2575d..871bedf 100644
--- a/drivers/firmware/google/gsmi.c
+++ b/drivers/firmware/google/gsmi.c
@@ -361,9 +361,10 @@ static efi_status_t gsmi_get_variable(efi_char16_t *name,
memcpy(data, gsmi_dev.data_buf->start, *data_size);
/* All variables have the following attributes */
- *attr = EFI_VARIABLE_NON_VOLATILE |
- EFI_VARIABLE_BOOTSERVICE_ACCESS |
- EFI_VARIABLE_RUNTIME_ACCESS;
+ if (attr)
+ *attr = EFI_VARIABLE_NON_VOLATILE |
+ EFI_VARIABLE_BOOTSERVICE_ACCESS |
+ EFI_VARIABLE_RUNTIME_ACCESS;
}
spin_unlock_irqrestore(&gsmi_dev.lock, flags);
diff --git a/drivers/firmware/stratix10-svc.c b/drivers/firmware/stratix10-svc.c
index 7dd0ac1..4fdd75f 100644
--- a/drivers/firmware/stratix10-svc.c
+++ b/drivers/firmware/stratix10-svc.c
@@ -994,13 +994,17 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
/* allocate service controller and supporting channel */
controller = devm_kzalloc(dev, sizeof(*controller), GFP_KERNEL);
- if (!controller)
- return -ENOMEM;
+ if (!controller) {
+ ret = -ENOMEM;
+ goto err_destroy_pool;
+ }
chans = devm_kmalloc_array(dev, SVC_NUM_CHANNEL,
sizeof(*chans), GFP_KERNEL | __GFP_ZERO);
- if (!chans)
- return -ENOMEM;
+ if (!chans) {
+ ret = -ENOMEM;
+ goto err_destroy_pool;
+ }
controller->dev = dev;
controller->num_chans = SVC_NUM_CHANNEL;
@@ -1015,7 +1019,7 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
ret = kfifo_alloc(&controller->svc_fifo, fifo_size, GFP_KERNEL);
if (ret) {
dev_err(dev, "failed to allocate FIFO\n");
- return ret;
+ goto err_destroy_pool;
}
spin_lock_init(&controller->svc_fifo_lock);
@@ -1060,6 +1064,8 @@ static int stratix10_svc_drv_probe(struct platform_device *pdev)
platform_device_put(svc->stratix10_svc_rsu);
err_free_kfifo:
kfifo_free(&controller->svc_fifo);
+err_destroy_pool:
+ gen_pool_destroy(genpool);
return ret;
}
diff --git a/drivers/firmware/sysfb.c b/drivers/firmware/sysfb.c
index 1f276f1..abc3279 100644
--- a/drivers/firmware/sysfb.c
+++ b/drivers/firmware/sysfb.c
@@ -81,6 +81,8 @@ static __init int sysfb_init(void)
if (disabled)
goto unlock_mutex;
+ sysfb_apply_efi_quirks();
+
/* try to create a simple-framebuffer device */
compatible = sysfb_parse_mode(si, &mode);
if (compatible) {
@@ -103,7 +105,7 @@ static __init int sysfb_init(void)
goto unlock_mutex;
}
- sysfb_apply_efi_quirks(pd);
+ sysfb_set_efifb_fwnode(pd);
ret = platform_device_add_data(pd, si, sizeof(*si));
if (ret)
diff --git a/drivers/firmware/sysfb_simplefb.c b/drivers/firmware/sysfb_simplefb.c
index eac51c2..fd4fa92 100644
--- a/drivers/firmware/sysfb_simplefb.c
+++ b/drivers/firmware/sysfb_simplefb.c
@@ -110,7 +110,7 @@ __init struct platform_device *sysfb_create_simplefb(const struct screen_info *s
if (!pd)
return ERR_PTR(-ENOMEM);
- sysfb_apply_efi_quirks(pd);
+ sysfb_set_efifb_fwnode(pd);
ret = platform_device_add_resources(pd, &res, 1);
if (ret)
diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
index a3cadba..0dac354 100644
--- a/drivers/firmware/xilinx/zynqmp.c
+++ b/drivers/firmware/xilinx/zynqmp.c
@@ -171,7 +171,7 @@ static int zynqmp_pm_feature(u32 api_id)
}
/* Add new entry if not present */
- feature_data = kmalloc(sizeof(*feature_data), GFP_KERNEL);
+ feature_data = kmalloc(sizeof(*feature_data), GFP_ATOMIC);
if (!feature_data)
return -ENOMEM;
diff --git a/drivers/fpga/stratix10-soc.c b/drivers/fpga/stratix10-soc.c
index 047fd7f..91212ba 100644
--- a/drivers/fpga/stratix10-soc.c
+++ b/drivers/fpga/stratix10-soc.c
@@ -213,9 +213,9 @@ static int s10_ops_write_init(struct fpga_manager *mgr,
/* Allocate buffers from the service layer's pool. */
for (i = 0; i < NUM_SVC_BUFS; i++) {
kbuf = stratix10_svc_allocate_memory(priv->chan, SVC_BUF_SIZE);
- if (!kbuf) {
+ if (IS_ERR(kbuf)) {
s10_free_buffers(mgr);
- ret = -ENOMEM;
+ ret = PTR_ERR(kbuf);
goto init_done;
}
diff --git a/drivers/fsi/fsi-sbefifo.c b/drivers/fsi/fsi-sbefifo.c
index 84cb965..97045a8 100644
--- a/drivers/fsi/fsi-sbefifo.c
+++ b/drivers/fsi/fsi-sbefifo.c
@@ -640,7 +640,7 @@ static void sbefifo_collect_async_ffdc(struct sbefifo *sbefifo)
}
ffdc_iov.iov_base = ffdc;
ffdc_iov.iov_len = SBEFIFO_MAX_FFDC_SIZE;
- iov_iter_kvec(&ffdc_iter, WRITE, &ffdc_iov, 1, SBEFIFO_MAX_FFDC_SIZE);
+ iov_iter_kvec(&ffdc_iter, READ, &ffdc_iov, 1, SBEFIFO_MAX_FFDC_SIZE);
cmd[0] = cpu_to_be32(2);
cmd[1] = cpu_to_be32(SBEFIFO_CMD_GET_SBE_FFDC);
rc = sbefifo_do_command(sbefifo, cmd, 2, &ffdc_iter);
@@ -737,7 +737,7 @@ int sbefifo_submit(struct device *dev, const __be32 *command, size_t cmd_len,
rbytes = (*resp_len) * sizeof(__be32);
resp_iov.iov_base = response;
resp_iov.iov_len = rbytes;
- iov_iter_kvec(&resp_iter, WRITE, &resp_iov, 1, rbytes);
+ iov_iter_kvec(&resp_iter, READ, &resp_iov, 1, rbytes);
/* Perform the command */
mutex_lock(&sbefifo->lock);
@@ -817,7 +817,7 @@ static ssize_t sbefifo_user_read(struct file *file, char __user *buf,
/* Prepare iov iterator */
resp_iov.iov_base = buf;
resp_iov.iov_len = len;
- iov_iter_init(&resp_iter, WRITE, &resp_iov, 1, len);
+ iov_iter_init(&resp_iter, READ, &resp_iov, 1, len);
/* Perform the command */
mutex_lock(&sbefifo->lock);
diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
index 947474f..7b9def6 100644
--- a/drivers/gpio/Kconfig
+++ b/drivers/gpio/Kconfig
@@ -100,7 +100,7 @@
tristate
config GPIO_REGMAP
- depends on REGMAP
+ select REGMAP
tristate
# put drivers in the right section, in alphabetical order
diff --git a/drivers/gpio/gpio-amdpt.c b/drivers/gpio/gpio-amdpt.c
index 4439899..dba4836 100644
--- a/drivers/gpio/gpio-amdpt.c
+++ b/drivers/gpio/gpio-amdpt.c
@@ -35,19 +35,19 @@ static int pt_gpio_request(struct gpio_chip *gc, unsigned offset)
dev_dbg(gc->parent, "pt_gpio_request offset=%x\n", offset);
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
using_pins = readl(pt_gpio->reg_base + PT_SYNC_REG);
if (using_pins & BIT(offset)) {
dev_warn(gc->parent, "PT GPIO pin %x reconfigured\n",
offset);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return -EINVAL;
}
writel(using_pins | BIT(offset), pt_gpio->reg_base + PT_SYNC_REG);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return 0;
}
@@ -58,13 +58,13 @@ static void pt_gpio_free(struct gpio_chip *gc, unsigned offset)
unsigned long flags;
u32 using_pins;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
using_pins = readl(pt_gpio->reg_base + PT_SYNC_REG);
using_pins &= ~BIT(offset);
writel(using_pins, pt_gpio->reg_base + PT_SYNC_REG);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
dev_dbg(gc->parent, "pt_gpio_free offset=%x\n", offset);
}
diff --git a/drivers/gpio/gpio-brcmstb.c b/drivers/gpio/gpio-brcmstb.c
index 895a799..c5d85e9 100644
--- a/drivers/gpio/gpio-brcmstb.c
+++ b/drivers/gpio/gpio-brcmstb.c
@@ -92,9 +92,9 @@ brcmstb_gpio_get_active_irqs(struct brcmstb_gpio_bank *bank)
unsigned long status;
unsigned long flags;
- spin_lock_irqsave(&bank->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&bank->gc.bgpio_lock, flags);
status = __brcmstb_gpio_get_active_irqs(bank);
- spin_unlock_irqrestore(&bank->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&bank->gc.bgpio_lock, flags);
return status;
}
@@ -114,14 +114,14 @@ static void brcmstb_gpio_set_imask(struct brcmstb_gpio_bank *bank,
u32 imask;
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
imask = gc->read_reg(priv->reg_base + GIO_MASK(bank->id));
if (enable)
imask |= mask;
else
imask &= ~mask;
gc->write_reg(priv->reg_base + GIO_MASK(bank->id), imask);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static int brcmstb_gpio_to_irq(struct gpio_chip *gc, unsigned offset)
@@ -204,7 +204,7 @@ static int brcmstb_gpio_irq_set_type(struct irq_data *d, unsigned int type)
return -EINVAL;
}
- spin_lock_irqsave(&bank->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&bank->gc.bgpio_lock, flags);
iedge_config = bank->gc.read_reg(priv->reg_base +
GIO_EC(bank->id)) & ~mask;
@@ -220,7 +220,7 @@ static int brcmstb_gpio_irq_set_type(struct irq_data *d, unsigned int type)
bank->gc.write_reg(priv->reg_base + GIO_LEVEL(bank->id),
ilevel | level);
- spin_unlock_irqrestore(&bank->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&bank->gc.bgpio_lock, flags);
return 0;
}
diff --git a/drivers/gpio/gpio-cadence.c b/drivers/gpio/gpio-cadence.c
index 562f8f7..137aea4 100644
--- a/drivers/gpio/gpio-cadence.c
+++ b/drivers/gpio/gpio-cadence.c
@@ -41,12 +41,12 @@ static int cdns_gpio_request(struct gpio_chip *chip, unsigned int offset)
struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip);
unsigned long flags;
- spin_lock_irqsave(&chip->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&chip->bgpio_lock, flags);
iowrite32(ioread32(cgpio->regs + CDNS_GPIO_BYPASS_MODE) & ~BIT(offset),
cgpio->regs + CDNS_GPIO_BYPASS_MODE);
- spin_unlock_irqrestore(&chip->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&chip->bgpio_lock, flags);
return 0;
}
@@ -55,13 +55,13 @@ static void cdns_gpio_free(struct gpio_chip *chip, unsigned int offset)
struct cdns_gpio_chip *cgpio = gpiochip_get_data(chip);
unsigned long flags;
- spin_lock_irqsave(&chip->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&chip->bgpio_lock, flags);
iowrite32(ioread32(cgpio->regs + CDNS_GPIO_BYPASS_MODE) |
(BIT(offset) & cgpio->bypass_orig),
cgpio->regs + CDNS_GPIO_BYPASS_MODE);
- spin_unlock_irqrestore(&chip->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&chip->bgpio_lock, flags);
}
static void cdns_gpio_irq_mask(struct irq_data *d)
@@ -90,7 +90,7 @@ static int cdns_gpio_irq_set_type(struct irq_data *d, unsigned int type)
u32 mask = BIT(d->hwirq);
int ret = 0;
- spin_lock_irqsave(&chip->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&chip->bgpio_lock, flags);
int_value = ioread32(cgpio->regs + CDNS_GPIO_IRQ_VALUE) & ~mask;
int_type = ioread32(cgpio->regs + CDNS_GPIO_IRQ_TYPE) & ~mask;
@@ -115,7 +115,7 @@ static int cdns_gpio_irq_set_type(struct irq_data *d, unsigned int type)
iowrite32(int_type, cgpio->regs + CDNS_GPIO_IRQ_TYPE);
err_irq_type:
- spin_unlock_irqrestore(&chip->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&chip->bgpio_lock, flags);
return ret;
}
diff --git a/drivers/gpio/gpio-davinci.c b/drivers/gpio/gpio-davinci.c
index cb5afaa..0214244 100644
--- a/drivers/gpio/gpio-davinci.c
+++ b/drivers/gpio/gpio-davinci.c
@@ -326,7 +326,7 @@ static struct irq_chip gpio_irqchip = {
.irq_enable = gpio_irq_enable,
.irq_disable = gpio_irq_disable,
.irq_set_type = gpio_irq_type,
- .flags = IRQCHIP_SET_TYPE_MASKED,
+ .flags = IRQCHIP_SET_TYPE_MASKED | IRQCHIP_SKIP_SET_WAKE,
};
static void gpio_irq_handler(struct irq_desc *desc)
diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
index e981e7a..a503f37 100644
--- a/drivers/gpio/gpio-dwapb.c
+++ b/drivers/gpio/gpio-dwapb.c
@@ -242,9 +242,9 @@ static void dwapb_irq_ack(struct irq_data *d)
u32 val = BIT(irqd_to_hwirq(d));
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
dwapb_write(gpio, GPIO_PORTA_EOI, val);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void dwapb_irq_mask(struct irq_data *d)
@@ -254,10 +254,10 @@ static void dwapb_irq_mask(struct irq_data *d)
unsigned long flags;
u32 val;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
val = dwapb_read(gpio, GPIO_INTMASK) | BIT(irqd_to_hwirq(d));
dwapb_write(gpio, GPIO_INTMASK, val);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void dwapb_irq_unmask(struct irq_data *d)
@@ -267,10 +267,10 @@ static void dwapb_irq_unmask(struct irq_data *d)
unsigned long flags;
u32 val;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
val = dwapb_read(gpio, GPIO_INTMASK) & ~BIT(irqd_to_hwirq(d));
dwapb_write(gpio, GPIO_INTMASK, val);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void dwapb_irq_enable(struct irq_data *d)
@@ -280,11 +280,11 @@ static void dwapb_irq_enable(struct irq_data *d)
unsigned long flags;
u32 val;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
val = dwapb_read(gpio, GPIO_INTEN);
val |= BIT(irqd_to_hwirq(d));
dwapb_write(gpio, GPIO_INTEN, val);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void dwapb_irq_disable(struct irq_data *d)
@@ -294,11 +294,11 @@ static void dwapb_irq_disable(struct irq_data *d)
unsigned long flags;
u32 val;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
val = dwapb_read(gpio, GPIO_INTEN);
val &= ~BIT(irqd_to_hwirq(d));
dwapb_write(gpio, GPIO_INTEN, val);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static int dwapb_irq_set_type(struct irq_data *d, u32 type)
@@ -308,7 +308,7 @@ static int dwapb_irq_set_type(struct irq_data *d, u32 type)
irq_hw_number_t bit = irqd_to_hwirq(d);
unsigned long level, polarity, flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
level = dwapb_read(gpio, GPIO_INTTYPE_LEVEL);
polarity = dwapb_read(gpio, GPIO_INT_POLARITY);
@@ -343,7 +343,7 @@ static int dwapb_irq_set_type(struct irq_data *d, u32 type)
dwapb_write(gpio, GPIO_INTTYPE_LEVEL, level);
if (type != IRQ_TYPE_EDGE_BOTH)
dwapb_write(gpio, GPIO_INT_POLARITY, polarity);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return 0;
}
@@ -373,7 +373,7 @@ static int dwapb_gpio_set_debounce(struct gpio_chip *gc,
unsigned long flags, val_deb;
unsigned long mask = BIT(offset);
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
val_deb = dwapb_read(gpio, GPIO_PORTA_DEBOUNCE);
if (debounce)
@@ -382,7 +382,7 @@ static int dwapb_gpio_set_debounce(struct gpio_chip *gc,
val_deb &= ~mask;
dwapb_write(gpio, GPIO_PORTA_DEBOUNCE, val_deb);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return 0;
}
@@ -738,7 +738,7 @@ static int dwapb_gpio_suspend(struct device *dev)
unsigned long flags;
int i;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
for (i = 0; i < gpio->nr_ports; i++) {
unsigned int offset;
unsigned int idx = gpio->ports[i].idx;
@@ -765,7 +765,7 @@ static int dwapb_gpio_suspend(struct device *dev)
dwapb_write(gpio, GPIO_INTMASK, ~ctx->wake_en);
}
}
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
clk_bulk_disable_unprepare(DWAPB_NR_CLOCKS, gpio->clks);
@@ -785,7 +785,7 @@ static int dwapb_gpio_resume(struct device *dev)
return err;
}
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
for (i = 0; i < gpio->nr_ports; i++) {
unsigned int offset;
unsigned int idx = gpio->ports[i].idx;
@@ -812,7 +812,7 @@ static int dwapb_gpio_resume(struct device *dev)
dwapb_write(gpio, GPIO_PORTA_EOI, 0xffffffff);
}
}
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return 0;
}
diff --git a/drivers/gpio/gpio-grgpio.c b/drivers/gpio/gpio-grgpio.c
index f954359..21204a5 100644
--- a/drivers/gpio/gpio-grgpio.c
+++ b/drivers/gpio/gpio-grgpio.c
@@ -145,7 +145,7 @@ static int grgpio_irq_set_type(struct irq_data *d, unsigned int type)
return -EINVAL;
}
- spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
ipol = priv->gc.read_reg(priv->regs + GRGPIO_IPOL) & ~mask;
iedge = priv->gc.read_reg(priv->regs + GRGPIO_IEDGE) & ~mask;
@@ -153,7 +153,7 @@ static int grgpio_irq_set_type(struct irq_data *d, unsigned int type)
priv->gc.write_reg(priv->regs + GRGPIO_IPOL, ipol | pol);
priv->gc.write_reg(priv->regs + GRGPIO_IEDGE, iedge | edge);
- spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
return 0;
}
@@ -164,11 +164,11 @@ static void grgpio_irq_mask(struct irq_data *d)
int offset = d->hwirq;
unsigned long flags;
- spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
grgpio_set_imask(priv, offset, 0);
- spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
}
static void grgpio_irq_unmask(struct irq_data *d)
@@ -177,11 +177,11 @@ static void grgpio_irq_unmask(struct irq_data *d)
int offset = d->hwirq;
unsigned long flags;
- spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
grgpio_set_imask(priv, offset, 1);
- spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
}
static struct irq_chip grgpio_irq_chip = {
@@ -199,7 +199,7 @@ static irqreturn_t grgpio_irq_handler(int irq, void *dev)
int i;
int match = 0;
- spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
/*
* For each gpio line, call its interrupt handler if its underlying
@@ -215,7 +215,7 @@ static irqreturn_t grgpio_irq_handler(int irq, void *dev)
}
}
- spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
if (!match)
dev_warn(priv->dev, "No gpio line matched irq %d\n", irq);
@@ -247,13 +247,13 @@ static int grgpio_irq_map(struct irq_domain *d, unsigned int irq,
dev_dbg(priv->dev, "Mapping irq %d for gpio line %d\n",
irq, offset);
- spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
/* Request underlying irq if not already requested */
lirq->irq = irq;
uirq = &priv->uirqs[lirq->index];
if (uirq->refcnt == 0) {
- spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
ret = request_irq(uirq->uirq, grgpio_irq_handler, 0,
dev_name(priv->dev), priv);
if (ret) {
@@ -262,11 +262,11 @@ static int grgpio_irq_map(struct irq_domain *d, unsigned int irq,
uirq->uirq);
return ret;
}
- spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
}
uirq->refcnt++;
- spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
/* Setup irq */
irq_set_chip_data(irq, priv);
@@ -290,7 +290,7 @@ static void grgpio_irq_unmap(struct irq_domain *d, unsigned int irq)
irq_set_chip_and_handler(irq, NULL, NULL);
irq_set_chip_data(irq, NULL);
- spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&priv->gc.bgpio_lock, flags);
/* Free underlying irq if last user unmapped */
index = -1;
@@ -309,13 +309,13 @@ static void grgpio_irq_unmap(struct irq_domain *d, unsigned int irq)
uirq = &priv->uirqs[lirq->index];
uirq->refcnt--;
if (uirq->refcnt == 0) {
- spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
free_irq(uirq->uirq, priv);
return;
}
}
- spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&priv->gc.bgpio_lock, flags);
}
static const struct irq_domain_ops grgpio_irq_domain_ops = {
diff --git a/drivers/gpio/gpio-hlwd.c b/drivers/gpio/gpio-hlwd.c
index 641719a..4e13e93 100644
--- a/drivers/gpio/gpio-hlwd.c
+++ b/drivers/gpio/gpio-hlwd.c
@@ -65,7 +65,7 @@ static void hlwd_gpio_irqhandler(struct irq_desc *desc)
int hwirq;
u32 emulated_pending;
- spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
pending = ioread32be(hlwd->regs + HW_GPIOB_INTFLAG);
pending &= ioread32be(hlwd->regs + HW_GPIOB_INTMASK);
@@ -93,7 +93,7 @@ static void hlwd_gpio_irqhandler(struct irq_desc *desc)
/* Mark emulated interrupts as pending */
pending |= rising | falling;
}
- spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
chained_irq_enter(chip, desc);
@@ -118,11 +118,11 @@ static void hlwd_gpio_irq_mask(struct irq_data *data)
unsigned long flags;
u32 mask;
- spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
mask = ioread32be(hlwd->regs + HW_GPIOB_INTMASK);
mask &= ~BIT(data->hwirq);
iowrite32be(mask, hlwd->regs + HW_GPIOB_INTMASK);
- spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
}
static void hlwd_gpio_irq_unmask(struct irq_data *data)
@@ -132,11 +132,11 @@ static void hlwd_gpio_irq_unmask(struct irq_data *data)
unsigned long flags;
u32 mask;
- spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
mask = ioread32be(hlwd->regs + HW_GPIOB_INTMASK);
mask |= BIT(data->hwirq);
iowrite32be(mask, hlwd->regs + HW_GPIOB_INTMASK);
- spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
}
static void hlwd_gpio_irq_enable(struct irq_data *data)
@@ -173,7 +173,7 @@ static int hlwd_gpio_irq_set_type(struct irq_data *data, unsigned int flow_type)
unsigned long flags;
u32 level;
- spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&hlwd->gpioc.bgpio_lock, flags);
hlwd->edge_emulation &= ~BIT(data->hwirq);
@@ -194,11 +194,11 @@ static int hlwd_gpio_irq_set_type(struct irq_data *data, unsigned int flow_type)
hlwd_gpio_irq_setup_emulation(hlwd, data->hwirq, flow_type);
break;
default:
- spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
return -EINVAL;
}
- spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&hlwd->gpioc.bgpio_lock, flags);
return 0;
}
diff --git a/drivers/gpio/gpio-idt3243x.c b/drivers/gpio/gpio-idt3243x.c
index 52b8b72..1cafdf4 100644
--- a/drivers/gpio/gpio-idt3243x.c
+++ b/drivers/gpio/gpio-idt3243x.c
@@ -57,7 +57,7 @@ static int idt_gpio_irq_set_type(struct irq_data *d, unsigned int flow_type)
if (sense == IRQ_TYPE_NONE || (sense & IRQ_TYPE_EDGE_BOTH))
return -EINVAL;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
ilevel = readl(ctrl->gpio + IDT_GPIO_ILEVEL);
if (sense & IRQ_TYPE_LEVEL_HIGH)
@@ -68,7 +68,7 @@ static int idt_gpio_irq_set_type(struct irq_data *d, unsigned int flow_type)
writel(ilevel, ctrl->gpio + IDT_GPIO_ILEVEL);
irq_set_handler_locked(d, handle_level_irq);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return 0;
}
@@ -86,12 +86,12 @@ static void idt_gpio_mask(struct irq_data *d)
struct idt_gpio_ctrl *ctrl = gpiochip_get_data(gc);
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
ctrl->mask_cache |= BIT(d->hwirq);
writel(ctrl->mask_cache, ctrl->pic + IDT_PIC_IRQ_MASK);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void idt_gpio_unmask(struct irq_data *d)
@@ -100,12 +100,12 @@ static void idt_gpio_unmask(struct irq_data *d)
struct idt_gpio_ctrl *ctrl = gpiochip_get_data(gc);
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
ctrl->mask_cache &= ~BIT(d->hwirq);
writel(ctrl->mask_cache, ctrl->pic + IDT_PIC_IRQ_MASK);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static int idt_gpio_irq_init_hw(struct gpio_chip *gc)
diff --git a/drivers/gpio/gpio-ixp4xx.c b/drivers/gpio/gpio-ixp4xx.c
index b3b0506..6b18450 100644
--- a/drivers/gpio/gpio-ixp4xx.c
+++ b/drivers/gpio/gpio-ixp4xx.c
@@ -128,7 +128,7 @@ static int ixp4xx_gpio_irq_set_type(struct irq_data *d, unsigned int type)
int_reg = IXP4XX_REG_GPIT1;
}
- spin_lock_irqsave(&g->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&g->gc.bgpio_lock, flags);
/* Clear the style for the appropriate pin */
val = __raw_readl(g->base + int_reg);
@@ -147,7 +147,7 @@ static int ixp4xx_gpio_irq_set_type(struct irq_data *d, unsigned int type)
val |= BIT(d->hwirq);
__raw_writel(val, g->base + IXP4XX_REG_GPOE);
- spin_unlock_irqrestore(&g->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&g->gc.bgpio_lock, flags);
/* This parent only accept level high (asserted) */
return irq_chip_set_type_parent(d, IRQ_TYPE_LEVEL_HIGH);
diff --git a/drivers/gpio/gpio-loongson1.c b/drivers/gpio/gpio-loongson1.c
index 1b1ee94..5d90b3b 100644
--- a/drivers/gpio/gpio-loongson1.c
+++ b/drivers/gpio/gpio-loongson1.c
@@ -25,10 +25,10 @@ static int ls1x_gpio_request(struct gpio_chip *gc, unsigned int offset)
{
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
__raw_writel(__raw_readl(gpio_reg_base + GPIO_CFG) | BIT(offset),
gpio_reg_base + GPIO_CFG);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return 0;
}
@@ -37,10 +37,10 @@ static void ls1x_gpio_free(struct gpio_chip *gc, unsigned int offset)
{
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
__raw_writel(__raw_readl(gpio_reg_base + GPIO_CFG) & ~BIT(offset),
gpio_reg_base + GPIO_CFG);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static int ls1x_gpio_probe(struct platform_device *pdev)
diff --git a/drivers/gpio/gpio-menz127.c b/drivers/gpio/gpio-menz127.c
index 1e21c66..a035a9b 100644
--- a/drivers/gpio/gpio-menz127.c
+++ b/drivers/gpio/gpio-menz127.c
@@ -64,7 +64,7 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
debounce /= 50;
}
- spin_lock(&gc->bgpio_lock);
+ raw_spin_lock(&gc->bgpio_lock);
db_en = readl(priv->reg_base + MEN_Z127_DBER);
@@ -79,7 +79,7 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
writel(db_en, priv->reg_base + MEN_Z127_DBER);
writel(db_cnt, priv->reg_base + GPIO_TO_DBCNT_REG(gpio));
- spin_unlock(&gc->bgpio_lock);
+ raw_spin_unlock(&gc->bgpio_lock);
return 0;
}
@@ -91,7 +91,7 @@ static int men_z127_set_single_ended(struct gpio_chip *gc,
struct men_z127_gpio *priv = gpiochip_get_data(gc);
u32 od_en;
- spin_lock(&gc->bgpio_lock);
+ raw_spin_lock(&gc->bgpio_lock);
od_en = readl(priv->reg_base + MEN_Z127_ODER);
if (param == PIN_CONFIG_DRIVE_OPEN_DRAIN)
@@ -101,7 +101,7 @@ static int men_z127_set_single_ended(struct gpio_chip *gc,
od_en &= ~BIT(offset);
writel(od_en, priv->reg_base + MEN_Z127_ODER);
- spin_unlock(&gc->bgpio_lock);
+ raw_spin_unlock(&gc->bgpio_lock);
return 0;
}
diff --git a/drivers/gpio/gpio-mlxbf2.c b/drivers/gpio/gpio-mlxbf2.c
index 40a052b..5a09070e 100644
--- a/drivers/gpio/gpio-mlxbf2.c
+++ b/drivers/gpio/gpio-mlxbf2.c
@@ -120,7 +120,7 @@ static int mlxbf2_gpio_lock_acquire(struct mlxbf2_gpio_context *gs)
u32 arm_gpio_lock_val;
mutex_lock(yu_arm_gpio_lock_param.lock);
- spin_lock(&gs->gc.bgpio_lock);
+ raw_spin_lock(&gs->gc.bgpio_lock);
arm_gpio_lock_val = readl(yu_arm_gpio_lock_param.io);
@@ -128,7 +128,7 @@ static int mlxbf2_gpio_lock_acquire(struct mlxbf2_gpio_context *gs)
* When lock active bit[31] is set, ModeX is write enabled
*/
if (YU_LOCK_ACTIVE_BIT(arm_gpio_lock_val)) {
- spin_unlock(&gs->gc.bgpio_lock);
+ raw_spin_unlock(&gs->gc.bgpio_lock);
mutex_unlock(yu_arm_gpio_lock_param.lock);
return -EINVAL;
}
@@ -146,7 +146,7 @@ static void mlxbf2_gpio_lock_release(struct mlxbf2_gpio_context *gs)
__releases(yu_arm_gpio_lock_param.lock)
{
writel(YU_ARM_GPIO_LOCK_RELEASE, yu_arm_gpio_lock_param.io);
- spin_unlock(&gs->gc.bgpio_lock);
+ raw_spin_unlock(&gs->gc.bgpio_lock);
mutex_unlock(yu_arm_gpio_lock_param.lock);
}
diff --git a/drivers/gpio/gpio-mmio.c b/drivers/gpio/gpio-mmio.c
index c335a03..d9dff3d 100644
--- a/drivers/gpio/gpio-mmio.c
+++ b/drivers/gpio/gpio-mmio.c
@@ -220,7 +220,7 @@ static void bgpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
unsigned long mask = bgpio_line2mask(gc, gpio);
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
if (val)
gc->bgpio_data |= mask;
@@ -229,7 +229,7 @@ static void bgpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
gc->write_reg(gc->reg_dat, gc->bgpio_data);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void bgpio_set_with_clear(struct gpio_chip *gc, unsigned int gpio,
@@ -248,7 +248,7 @@ static void bgpio_set_set(struct gpio_chip *gc, unsigned int gpio, int val)
unsigned long mask = bgpio_line2mask(gc, gpio);
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
if (val)
gc->bgpio_data |= mask;
@@ -257,7 +257,7 @@ static void bgpio_set_set(struct gpio_chip *gc, unsigned int gpio, int val)
gc->write_reg(gc->reg_set, gc->bgpio_data);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void bgpio_multiple_get_masks(struct gpio_chip *gc,
@@ -286,7 +286,7 @@ static void bgpio_set_multiple_single_reg(struct gpio_chip *gc,
unsigned long flags;
unsigned long set_mask, clear_mask;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
bgpio_multiple_get_masks(gc, mask, bits, &set_mask, &clear_mask);
@@ -295,7 +295,7 @@ static void bgpio_set_multiple_single_reg(struct gpio_chip *gc,
gc->write_reg(reg, gc->bgpio_data);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void bgpio_set_multiple(struct gpio_chip *gc, unsigned long *mask,
@@ -347,7 +347,7 @@ static int bgpio_dir_in(struct gpio_chip *gc, unsigned int gpio)
{
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
gc->bgpio_dir &= ~bgpio_line2mask(gc, gpio);
@@ -356,7 +356,7 @@ static int bgpio_dir_in(struct gpio_chip *gc, unsigned int gpio)
if (gc->reg_dir_out)
gc->write_reg(gc->reg_dir_out, gc->bgpio_dir);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return 0;
}
@@ -387,7 +387,7 @@ static void bgpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
{
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
gc->bgpio_dir |= bgpio_line2mask(gc, gpio);
@@ -396,7 +396,7 @@ static void bgpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
if (gc->reg_dir_out)
gc->write_reg(gc->reg_dir_out, gc->bgpio_dir);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static int bgpio_dir_out_dir_first(struct gpio_chip *gc, unsigned int gpio,
@@ -610,7 +610,7 @@ int bgpio_init(struct gpio_chip *gc, struct device *dev,
if (gc->bgpio_bits > BITS_PER_LONG)
return -EINVAL;
- spin_lock_init(&gc->bgpio_lock);
+ raw_spin_lock_init(&gc->bgpio_lock);
gc->parent = dev;
gc->label = dev_name(dev);
gc->base = -1;
diff --git a/drivers/gpio/gpio-mxc.c b/drivers/gpio/gpio-mxc.c
index c871602..853d9aa 100644
--- a/drivers/gpio/gpio-mxc.c
+++ b/drivers/gpio/gpio-mxc.c
@@ -18,6 +18,7 @@
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
+#include <linux/spinlock.h>
#include <linux/syscore_ops.h>
#include <linux/gpio/driver.h>
#include <linux/of.h>
@@ -147,6 +148,7 @@ static int gpio_set_irq_type(struct irq_data *d, u32 type)
{
struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
struct mxc_gpio_port *port = gc->private;
+ unsigned long flags;
u32 bit, val;
u32 gpio_idx = d->hwirq;
int edge;
@@ -185,6 +187,8 @@ static int gpio_set_irq_type(struct irq_data *d, u32 type)
return -EINVAL;
}
+ raw_spin_lock_irqsave(&port->gc.bgpio_lock, flags);
+
if (GPIO_EDGE_SEL >= 0) {
val = readl(port->base + GPIO_EDGE_SEL);
if (edge == GPIO_INT_BOTH_EDGES)
@@ -204,15 +208,20 @@ static int gpio_set_irq_type(struct irq_data *d, u32 type)
writel(1 << gpio_idx, port->base + GPIO_ISR);
- return 0;
+ raw_spin_unlock_irqrestore(&port->gc.bgpio_lock, flags);
+
+ return port->gc.direction_input(&port->gc, gpio_idx);
}
static void mxc_flip_edge(struct mxc_gpio_port *port, u32 gpio)
{
void __iomem *reg = port->base;
+ unsigned long flags;
u32 bit, val;
int edge;
+ raw_spin_lock_irqsave(&port->gc.bgpio_lock, flags);
+
reg += GPIO_ICR1 + ((gpio & 0x10) >> 2); /* lower or upper register */
bit = gpio & 0xf;
val = readl(reg);
@@ -227,9 +236,12 @@ static void mxc_flip_edge(struct mxc_gpio_port *port, u32 gpio)
} else {
pr_err("mxc: invalid configuration for GPIO %d: %x\n",
gpio, edge);
- return;
+ goto unlock;
}
writel(val | (edge << (bit << 1)), reg);
+
+unlock:
+ raw_spin_unlock_irqrestore(&port->gc.bgpio_lock, flags);
}
/* handle 32 interrupts in one status register */
diff --git a/drivers/gpio/gpio-sifive.c b/drivers/gpio/gpio-sifive.c
index f50236e6..f41123d 100644
--- a/drivers/gpio/gpio-sifive.c
+++ b/drivers/gpio/gpio-sifive.c
@@ -44,7 +44,7 @@ static void sifive_gpio_set_ie(struct sifive_gpio *chip, unsigned int offset)
unsigned long flags;
unsigned int trigger;
- spin_lock_irqsave(&chip->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&chip->gc.bgpio_lock, flags);
trigger = (chip->irq_state & BIT(offset)) ? chip->trigger[offset] : 0;
regmap_update_bits(chip->regs, SIFIVE_GPIO_RISE_IE, BIT(offset),
(trigger & IRQ_TYPE_EDGE_RISING) ? BIT(offset) : 0);
@@ -54,7 +54,7 @@ static void sifive_gpio_set_ie(struct sifive_gpio *chip, unsigned int offset)
(trigger & IRQ_TYPE_LEVEL_HIGH) ? BIT(offset) : 0);
regmap_update_bits(chip->regs, SIFIVE_GPIO_LOW_IE, BIT(offset),
(trigger & IRQ_TYPE_LEVEL_LOW) ? BIT(offset) : 0);
- spin_unlock_irqrestore(&chip->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&chip->gc.bgpio_lock, flags);
}
static int sifive_gpio_irq_set_type(struct irq_data *d, unsigned int trigger)
@@ -84,13 +84,13 @@ static void sifive_gpio_irq_enable(struct irq_data *d)
/* Switch to input */
gc->direction_input(gc, offset);
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
/* Clear any sticky pending interrupts */
regmap_write(chip->regs, SIFIVE_GPIO_RISE_IP, bit);
regmap_write(chip->regs, SIFIVE_GPIO_FALL_IP, bit);
regmap_write(chip->regs, SIFIVE_GPIO_HIGH_IP, bit);
regmap_write(chip->regs, SIFIVE_GPIO_LOW_IP, bit);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
/* Enable interrupts */
assign_bit(offset, &chip->irq_state, 1);
@@ -116,13 +116,13 @@ static void sifive_gpio_irq_eoi(struct irq_data *d)
u32 bit = BIT(offset);
unsigned long flags;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
/* Clear all pending interrupts */
regmap_write(chip->regs, SIFIVE_GPIO_RISE_IP, bit);
regmap_write(chip->regs, SIFIVE_GPIO_FALL_IP, bit);
regmap_write(chip->regs, SIFIVE_GPIO_HIGH_IP, bit);
regmap_write(chip->regs, SIFIVE_GPIO_LOW_IP, bit);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
irq_chip_eoi_parent(d);
}
diff --git a/drivers/gpio/gpio-tb10x.c b/drivers/gpio/gpio-tb10x.c
index 718a508..de6afa3 100644
--- a/drivers/gpio/gpio-tb10x.c
+++ b/drivers/gpio/gpio-tb10x.c
@@ -62,14 +62,14 @@ static inline void tb10x_set_bits(struct tb10x_gpio *gpio, unsigned int offs,
u32 r;
unsigned long flags;
- spin_lock_irqsave(&gpio->gc.bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gpio->gc.bgpio_lock, flags);
r = tb10x_reg_read(gpio, offs);
r = (r & ~mask) | (val & mask);
tb10x_reg_write(gpio, offs, r);
- spin_unlock_irqrestore(&gpio->gc.bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gpio->gc.bgpio_lock, flags);
}
static int tb10x_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
index 47e191e..edb28af 100644
--- a/drivers/gpio/gpio-vf610.c
+++ b/drivers/gpio/gpio-vf610.c
@@ -304,7 +304,7 @@ static int vf610_gpio_probe(struct platform_device *pdev)
gc = &port->gc;
gc->of_node = np;
gc->parent = dev;
- gc->label = "vf610-gpio";
+ gc->label = dev_name(dev);
gc->ngpio = VF610_GPIO_PER_PORT;
gc->base = of_alias_get_id(np, "gpio") * VF610_GPIO_PER_PORT;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 4f62f42..d90da38 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1286,6 +1286,7 @@ void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
int amdgpu_device_pci_reset(struct amdgpu_device *adev);
bool amdgpu_device_need_post(struct amdgpu_device *adev);
bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev);
+bool amdgpu_device_aspm_support_quirk(void);
void amdgpu_cs_report_moved_bytes(struct amdgpu_device *adev, u64 num_bytes,
u64 num_vis_bytes);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 0d998bc..2f51789 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -32,6 +32,7 @@
#include <linux/slab.h>
#include <linux/pci.h>
+#include <drm/drm_aperture.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_probe_helper.h>
#include <drm/amdgpu_drm.h>
@@ -74,6 +75,10 @@
#include <drm/drm_drv.h>
+#if IS_ENABLED(CONFIG_X86)
+#include <asm/intel-family.h>
+#endif
+
MODULE_FIRMWARE("amdgpu/vega10_gpu_info.bin");
MODULE_FIRMWARE("amdgpu/vega12_gpu_info.bin");
MODULE_FIRMWARE("amdgpu/raven_gpu_info.bin");
@@ -89,6 +94,8 @@ MODULE_FIRMWARE("amdgpu/yellow_carp_gpu_info.bin");
#define AMDGPU_RESUME_MS 2000
+static const struct drm_driver amdgpu_kms_driver;
+
const char *amdgpu_asic_name[] = {
"TAHITI",
"PITCAIRN",
@@ -1334,6 +1341,17 @@ bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev)
return pcie_aspm_enabled(adev->pdev);
}
+bool amdgpu_device_aspm_support_quirk(void)
+{
+#if IS_ENABLED(CONFIG_X86)
+ struct cpuinfo_x86 *c = &cpu_data(0);
+
+ return !(c->x86 == 6 && c->x86_model == INTEL_FAM6_ALDERLAKE);
+#else
+ return true;
+#endif
+}
+
/* if we get transitioned to only one device, take VGA back */
/**
* amdgpu_device_vga_set_decode - enable/disable vga decode
@@ -3637,6 +3655,11 @@ int amdgpu_device_init(struct amdgpu_device *adev,
if (r)
return r;
+ /* Get rid of things like offb */
+ r = drm_aperture_remove_conflicting_pci_framebuffers(adev->pdev, &amdgpu_kms_driver);
+ if (r)
+ return r;
+
/* doorbell bar mapping and doorbell index init*/
amdgpu_device_doorbell_init(adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 49756b9..67220c6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -23,7 +23,6 @@
*/
#include <drm/amdgpu_drm.h>
-#include <drm/drm_aperture.h>
#include <drm/drm_drv.h>
#include <drm/drm_gem.h>
#include <drm/drm_vblank.h>
@@ -1937,10 +1936,10 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x73FF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_DIMGREY_CAVEFISH},
/* Aldebaran */
- {0x1002, 0x7408, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
- {0x1002, 0x740C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
- {0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
- {0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN|AMD_EXP_HW_SUPPORT},
+ {0x1002, 0x7408, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+ {0x1002, 0x740C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+ {0x1002, 0x740F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
+ {0x1002, 0x7410, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_ALDEBARAN},
/* CYAN_SKILLFISH */
{0x1002, 0x13FE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_CYAN_SKILLFISH|AMD_IS_APU},
@@ -2069,11 +2068,6 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
size = pci_resource_len(pdev, 0);
is_fw_fb = amdgpu_is_fw_framebuffer(base, size);
- /* Get rid of things like offb */
- ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, &amdgpu_kms_driver);
- if (ret)
- return ret;
-
adev = devm_drm_dev_alloc(&pdev->dev, &amdgpu_kms_driver, typeof(*adev), ddev);
if (IS_ERR(adev))
return PTR_ERR(adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 14499f0..bbd6f7a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -579,7 +579,13 @@ void amdgpu_fence_driver_sw_fini(struct amdgpu_device *adev)
if (!ring || !ring->fence_drv.initialized)
continue;
- if (!ring->no_scheduler)
+ /*
+ * Notice we check for sched.ops since there's some
+ * override on the meaning of sched.ready by amdgpu.
+ * The natural check would be sched.ready, which is
+ * set as drm_sched_init() finishes...
+ */
+ if (ring->sched.ops)
drm_sched_fini(&ring->sched);
for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
index e7845df..5e32906 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
@@ -580,10 +580,14 @@ void amdgpu_gfx_off_ctrl(struct amdgpu_device *adev, bool enable)
if (adev->gfx.gfx_off_req_count == 0 &&
!adev->gfx.gfx_off_state) {
/* If going to s2idle, no need to wait */
- if (adev->in_s0ix)
- delay = GFX_OFF_NO_DELAY;
- schedule_delayed_work(&adev->gfx.gfx_off_delay_work,
+ if (adev->in_s0ix) {
+ if (!amdgpu_dpm_set_powergating_by_smu(adev,
+ AMD_IP_BLOCK_TYPE_GFX, true))
+ adev->gfx.gfx_off_state = true;
+ } else {
+ schedule_delayed_work(&adev->gfx.gfx_off_delay_work,
delay);
+ }
}
} else {
if (adev->gfx.gfx_off_req_count == 0) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index 6744427..43e30b9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -43,6 +43,17 @@
#include "amdgpu_display.h"
#include "amdgpu_ras.h"
+static void amdgpu_runtime_pm_quirk(struct amdgpu_device *adev)
+{
+ /*
+ * Add below quirk on several sienna_cichlid cards to disable
+ * runtime pm to fix EMI failures.
+ */
+ if (((adev->pdev->device == 0x73A1) && (adev->pdev->revision == 0x00)) ||
+ ((adev->pdev->device == 0x73BF) && (adev->pdev->revision == 0xCF)))
+ adev->runpm = false;
+}
+
void amdgpu_unregister_gpu_instance(struct amdgpu_device *adev)
{
struct amdgpu_gpu_instance *gpu_instance;
@@ -201,6 +212,9 @@ int amdgpu_driver_load_kms(struct amdgpu_device *adev, unsigned long flags)
*/
if (adev->is_fw_fb)
adev->runpm = false;
+
+ amdgpu_runtime_pm_quirk(adev);
+
if (adev->runpm)
dev_info(adev->dev, "Using BACO for runtime pm\n");
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index b1d0cad..a0b1bf1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -1510,8 +1510,7 @@ u64 amdgpu_bo_gpu_offset_no_check(struct amdgpu_bo *bo)
uint32_t amdgpu_bo_get_preferred_domain(struct amdgpu_device *adev,
uint32_t domain)
{
- if ((domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) &&
- ((adev->asic_type == CHIP_CARRIZO) || (adev->asic_type == CHIP_STONEY))) {
+ if (domain == (AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT)) {
domain = AMDGPU_GEM_DOMAIN_VRAM;
if (adev->gmc.real_vram_size <= AMDGPU_SG_THRESHOLD)
domain = AMDGPU_GEM_DOMAIN_GTT;
diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
index 6e27723..947e8c0 100644
--- a/drivers/gpu/drm/amd/amdgpu/nv.c
+++ b/drivers/gpu/drm/amd/amdgpu/nv.c
@@ -584,7 +584,7 @@ static void nv_pcie_gen3_enable(struct amdgpu_device *adev)
static void nv_program_aspm(struct amdgpu_device *adev)
{
- if (!amdgpu_device_should_use_aspm(adev))
+ if (!amdgpu_device_should_use_aspm(adev) || !amdgpu_device_aspm_support_quirk())
return;
if (!(adev->flags & AMD_IS_APU) &&
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 0cde8fb..529bb6c 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -461,8 +461,9 @@ static int soc15_read_register(struct amdgpu_device *adev, u32 se_num,
*value = 0;
for (i = 0; i < ARRAY_SIZE(soc15_allowed_read_registers); i++) {
en = &soc15_allowed_read_registers[i];
- if (adev->reg_offset[en->hwip][en->inst] &&
- reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
+ if (!adev->reg_offset[en->hwip][en->inst])
+ continue;
+ else if (reg_offset != (adev->reg_offset[en->hwip][en->inst][en->seg]
+ en->reg_offset))
continue;
diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
index 45f0188..bb414e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/vi.c
+++ b/drivers/gpu/drm/amd/amdgpu/vi.c
@@ -81,10 +81,6 @@
#include "mxgpu_vi.h"
#include "amdgpu_dm.h"
-#if IS_ENABLED(CONFIG_X86)
-#include <asm/intel-family.h>
-#endif
-
#define ixPCIE_LC_L1_PM_SUBSTATE 0x100100C6
#define PCIE_LC_L1_PM_SUBSTATE__LC_L1_SUBSTATES_OVERRIDE_EN_MASK 0x00000001L
#define PCIE_LC_L1_PM_SUBSTATE__LC_PCI_PM_L1_2_OVERRIDE_MASK 0x00000002L
@@ -1138,24 +1134,13 @@ static void vi_enable_aspm(struct amdgpu_device *adev)
WREG32_PCIE(ixPCIE_LC_CNTL, data);
}
-static bool aspm_support_quirk_check(void)
-{
-#if IS_ENABLED(CONFIG_X86)
- struct cpuinfo_x86 *c = &cpu_data(0);
-
- return !(c->x86 == 6 && c->x86_model == INTEL_FAM6_ALDERLAKE);
-#else
- return true;
-#endif
-}
-
static void vi_program_aspm(struct amdgpu_device *adev)
{
u32 data, data1, orig;
bool bL1SS = false;
bool bClkReqSupport = true;
- if (!amdgpu_device_should_use_aspm(adev) || !aspm_support_quirk_check())
+ if (!amdgpu_device_should_use_aspm(adev) || !amdgpu_device_aspm_support_quirk())
return;
if (adev->flags & AMD_IS_APU ||
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
index b8bdd79..8b5c82a 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
@@ -528,16 +528,13 @@ static struct kfd_event_waiter *alloc_event_waiters(uint32_t num_events)
struct kfd_event_waiter *event_waiters;
uint32_t i;
- event_waiters = kmalloc_array(num_events,
- sizeof(struct kfd_event_waiter),
- GFP_KERNEL);
+ event_waiters = kcalloc(num_events, sizeof(struct kfd_event_waiter),
+ GFP_KERNEL);
if (!event_waiters)
return NULL;
- for (i = 0; (event_waiters) && (i < num_events) ; i++) {
+ for (i = 0; i < num_events; i++)
init_wait(&event_waiters[i].wait);
- event_waiters[i].activated = false;
- }
return event_waiters;
}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 0ebabcc..68c98e3 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -1688,10 +1688,6 @@ static void amdgpu_dm_fini(struct amdgpu_device *adev)
}
#endif
- for (i = 0; i < adev->dm.display_indexes_num; i++) {
- drm_encoder_cleanup(&adev->dm.mst_encoders[i].base);
- }
-
amdgpu_dm_destroy_drm_device(&adev->dm);
#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
@@ -2691,12 +2687,14 @@ static int dm_resume(void *handle)
drm_for_each_connector_iter(connector, &iter) {
aconnector = to_amdgpu_dm_connector(connector);
+ if (!aconnector->dc_link)
+ continue;
+
/*
* this is the case when traversing through already created
* MST connectors, should be skipped
*/
- if (aconnector->dc_link &&
- aconnector->dc_link->type == dc_connection_mst_branch)
+ if (aconnector->dc_link->type == dc_connection_mst_branch)
continue;
mutex_lock(&aconnector->hpd_lock);
@@ -4432,6 +4430,17 @@ DEVICE_ATTR_WO(s3_debug);
static int dm_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ struct amdgpu_mode_info *mode_info = &adev->mode_info;
+ struct atom_context *ctx = mode_info->atom_context;
+ int index = GetIndexIntoMasterTable(DATA, Object_Header);
+ u16 data_offset;
+
+ /* if there is no object header, skip DM */
+ if (!amdgpu_atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) {
+ adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK;
+ dev_info(adev->dev, "No object header, skipping DM\n");
+ return -ENOENT;
+ }
switch (adev->asic_type) {
#if defined(CONFIG_DRM_AMD_DC_SI)
@@ -5855,8 +5864,6 @@ static void fill_stream_properties_from_drm_display_mode(
timing_out->aspect_ratio = get_aspect_ratio(mode_in);
- stream->output_color_space = get_output_color_space(timing_out);
-
stream->out_transfer_func->type = TF_TYPE_PREDEFINED;
stream->out_transfer_func->tf = TRANSFER_FUNCTION_SRGB;
if (stream->signal == SIGNAL_TYPE_HDMI_TYPE_A) {
@@ -5867,6 +5874,8 @@ static void fill_stream_properties_from_drm_display_mode(
adjust_colour_depth_from_display_info(timing_out, info);
}
}
+
+ stream->output_color_space = get_output_color_space(timing_out);
}
static void fill_audio_info(struct audio_info *audio_info,
@@ -10140,6 +10149,13 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm,
if (!dm_old_crtc_state->stream)
goto skip_modeset;
+ /* Unset freesync video if it was active before */
+ if (dm_old_crtc_state->freesync_config.state == VRR_STATE_ACTIVE_FIXED) {
+ dm_new_crtc_state->freesync_config.state = VRR_STATE_INACTIVE;
+ dm_new_crtc_state->freesync_config.fixed_refresh_in_uhz = 0;
+ }
+
+ /* Now check if we should set freesync video mode */
if (amdgpu_freesync_vid_mode && dm_new_crtc_state->stream &&
is_timing_unchanged_for_freesync(new_crtc_state,
old_crtc_state)) {
@@ -10789,8 +10805,8 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
goto fail;
}
- if (dm_old_con_state->abm_level !=
- dm_new_con_state->abm_level)
+ if (dm_old_con_state->abm_level != dm_new_con_state->abm_level ||
+ dm_old_con_state->scaling != dm_new_con_state->scaling)
new_crtc_state->connectors_changed = true;
}
@@ -10875,7 +10891,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
* `dcn10_can_pipe_disable_cursor`). By now, all modified planes are in
* atomic state, so call drm helper to normalize zpos.
*/
- drm_atomic_normalize_zpos(dev, state);
+ ret = drm_atomic_normalize_zpos(dev, state);
+ if (ret) {
+ drm_dbg(dev, "drm_atomic_normalize_zpos() failed\n");
+ goto fail;
+ }
/* Remove exiting planes if they are modified */
for_each_oldnew_plane_in_state_reverse(state, plane, old_plane_state, new_plane_state, i) {
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 652cf10..7a3fee7 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -184,6 +184,21 @@ static bool needs_dsc_aux_workaround(struct dc_link *link)
return false;
}
+bool is_synaptics_cascaded_panamera(struct dc_link *link, struct drm_dp_mst_port *port)
+{
+ u8 branch_vendor_data[4] = { 0 }; // Vendor data 0x50C ~ 0x50F
+
+ if (drm_dp_dpcd_read(port->mgr->aux, DP_BRANCH_VENDOR_SPECIFIC_START, &branch_vendor_data, 4) == 4) {
+ if (link->dpcd_caps.branch_dev_id == DP_BRANCH_DEVICE_ID_90CC24 &&
+ IS_SYNAPTICS_CASCADED_PANAMERA(link->dpcd_caps.branch_dev_name, branch_vendor_data)) {
+ DRM_INFO("Synaptics Cascaded MST hub\n");
+ return true;
+ }
+ }
+
+ return false;
+}
+
static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnector)
{
struct dc_sink *dc_sink = aconnector->dc_sink;
@@ -207,6 +222,10 @@ static bool validate_dsc_caps_on_connector(struct amdgpu_dm_connector *aconnecto
needs_dsc_aux_workaround(aconnector->dc_link))
aconnector->dsc_aux = &aconnector->mst_port->dm_dp_aux.aux;
+ /* synaptics cascaded MST hub case */
+ if (!aconnector->dsc_aux && is_synaptics_cascaded_panamera(aconnector->dc_link, port))
+ aconnector->dsc_aux = port->mgr->aux;
+
if (!aconnector->dsc_aux)
return false;
@@ -385,7 +404,6 @@ static const struct drm_connector_helper_funcs dm_dp_mst_connector_helper_funcs
static void amdgpu_dm_encoder_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
- kfree(encoder);
}
static const struct drm_encoder_funcs amdgpu_dm_encoder_funcs = {
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
index 900d3f7..f7523fd 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.h
@@ -26,6 +26,18 @@
#ifndef __DAL_AMDGPU_DM_MST_TYPES_H__
#define __DAL_AMDGPU_DM_MST_TYPES_H__
+#define DP_BRANCH_VENDOR_SPECIFIC_START 0x50C
+
+/**
+ * Panamera MST Hub detection
+ * Offset DPCD 050Eh == 0x5A indicates cascaded MST hub case
+ * Check from beginning of branch device vendor specific field (050Ch)
+ */
+#define IS_SYNAPTICS_PANAMERA(branchDevName) (((int)branchDevName[4] & 0xF0) == 0x50 ? 1 : 0)
+#define BRANCH_HW_REVISION_PANAMERA_A2 0x10
+#define SYNAPTICS_CASCADED_HUB_ID 0x5A
+#define IS_SYNAPTICS_CASCADED_PANAMERA(devName, data) ((IS_SYNAPTICS_PANAMERA(devName) && ((int)data[2] == SYNAPTICS_CASCADED_HUB_ID)) ? 1 : 0)
+
struct amdgpu_display_manager;
struct amdgpu_dm_connector;
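As a sanity check of the detection macros above, here is a minimal userspace sketch using made-up branch-device bytes (the DPCD offsets, 0x503..0x508 for the name and 0x50C..0x50F for the vendor data, follow the comment in the hunk; the sample values are hypothetical):

#include <stdio.h>

#define SYNAPTICS_CASCADED_HUB_ID 0x5A
#define IS_SYNAPTICS_PANAMERA(branchDevName) (((int)branchDevName[4] & 0xF0) == 0x50 ? 1 : 0)
#define IS_SYNAPTICS_CASCADED_PANAMERA(devName, data) \
	((IS_SYNAPTICS_PANAMERA(devName) && ((int)data[2] == SYNAPTICS_CASCADED_HUB_ID)) ? 1 : 0)

int main(void)
{
	/* Hypothetical DPCD contents: branch name, then vendor-specific data. */
	unsigned char dev_name[6] = { 'S', 'Y', 'N', 'A', 'P', 'A' };  /* dev_name[4] = 'P' = 0x50 */
	unsigned char vendor_data[4] = { 0x00, 0x00, 0x5A, 0x00 };     /* vendor_data[2] = DPCD 0x50E */

	/* Both checks pass here, so this prints 1. */
	printf("cascaded Panamera: %d\n",
	       IS_SYNAPTICS_CASCADED_PANAMERA(dev_name, vendor_data));
	return 0;
}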
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 6c93782..eca8824 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -771,6 +771,7 @@ static bool dc_construct_ctx(struct dc *dc,
dc_ctx->perf_trace = dc_perf_trace_create();
if (!dc_ctx->perf_trace) {
+ kfree(dc_ctx);
ASSERT_CRITICAL(false);
return false;
}
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
index 9039fb1..f858ae6 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
@@ -92,8 +92,8 @@ static const struct out_csc_color_matrix_type output_csc_matrix[] = {
{ 0xE00, 0xF349, 0xFEB7, 0x1000, 0x6CE, 0x16E3,
0x24F, 0x200, 0xFCCB, 0xF535, 0xE00, 0x1000} },
{ COLOR_SPACE_YCBCR2020_TYPE,
- { 0x1000, 0xF149, 0xFEB7, 0x0000, 0x0868, 0x15B2,
- 0x01E6, 0x0000, 0xFB88, 0xF478, 0x1000, 0x0000} },
+ { 0x1000, 0xF149, 0xFEB7, 0x1004, 0x0868, 0x15B2,
+ 0x01E6, 0x201, 0xFB88, 0xF478, 0x1000, 0x1004} },
{ COLOR_SPACE_YCBCR709_BLACK_TYPE,
{ 0x0000, 0x0000, 0x0000, 0x1000, 0x0000, 0x0000,
0x0000, 0x0200, 0x0000, 0x0000, 0x0000, 0x1000} },
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 3c42052..b727bd7 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -1665,12 +1665,6 @@ struct dc_link *link_create(const struct link_init_data *init_params)
if (false == dc_link_construct(link, init_params))
goto construct_fail;
- /*
- * Must use preferred_link_setting, not reported_link_cap or verified_link_cap,
- * since struct preferred_link_setting won't be reset after S3.
- */
- link->preferred_link_setting.dpcd_source_device_specific_field_support = true;
-
return link;
construct_fail:
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
index a6ff1b1..6777adb 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
@@ -4841,18 +4841,10 @@ void dpcd_set_source_specific_data(struct dc_link *link)
uint8_t hblank_size = (uint8_t)link->dc->caps.min_horizontal_blanking_period;
- if (link->preferred_link_setting.dpcd_source_device_specific_field_support) {
- result_write_min_hblank = core_link_write_dpcd(link,
- DP_SOURCE_MINIMUM_HBLANK_SUPPORTED, (uint8_t *)(&hblank_size),
- sizeof(hblank_size));
-
- if (result_write_min_hblank == DC_ERROR_UNEXPECTED)
- link->preferred_link_setting.dpcd_source_device_specific_field_support = false;
- } else {
- DC_LOG_DC("Sink device does not support 00340h DPCD write. Skipping on purpose.\n");
- }
+ result_write_min_hblank = core_link_write_dpcd(link,
+ DP_SOURCE_MINIMUM_HBLANK_SUPPORTED, (uint8_t *)(&hblank_size),
+ sizeof(hblank_size));
}
-
DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_INFORMATION,
WPP_BIT_FLAG_DC_DETECTION_DP_CAPS,
"result=%u link_index=%u enum dce_version=%d DPCD=0x%04X min_hblank=%u branch_dev_id=0x%x branch_dev_name='%c%c%c%c%c%c'",
diff --git a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
index 4f54bde..1948cd9 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_dp_types.h
@@ -109,7 +109,6 @@ struct dc_link_settings {
enum dc_link_spread link_spread;
bool use_link_rate_set;
uint8_t link_rate_set;
- bool dpcd_source_device_specific_field_support;
};
struct dc_lane_settings {
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
index d3b5b6f..6266b07 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20.c
@@ -3897,14 +3897,14 @@ void dml20_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine = mode_lib->vba.PixelClock[k] / 2
* (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
- locals->ODMCombineEnablePerState[i][k] = false;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithoutODMCombine;
if (mode_lib->vba.ODMCapability) {
if (locals->PlaneRequiredDISPCLKWithoutODMCombine > mode_lib->vba.MaxDispclkRoundedDownToDFSGranularity) {
- locals->ODMCombineEnablePerState[i][k] = true;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
} else if (locals->HActive[k] > DCN20_MAX_420_IMAGE_WIDTH && locals->OutputFormat[k] == dm_420) {
- locals->ODMCombineEnablePerState[i][k] = true;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
}
}
@@ -3957,7 +3957,7 @@ void dml20_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
locals->RequiredDISPCLK[i][j] = 0.0;
locals->DISPCLK_DPPCLK_Support[i][j] = true;
for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
- locals->ODMCombineEnablePerState[i][k] = false;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
if (locals->SwathWidthYSingleDPP[k] <= locals->MaximumSwathWidth[k]) {
locals->NoOfDPP[i][j][k] = 1;
locals->RequiredDPPCLK[i][j][k] = locals->MinDPPCLKUsingSingleDPP[k]
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
index 63bbdf8..0053a6d 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/display_mode_vba_20v2.c
@@ -4008,17 +4008,17 @@ void dml20v2_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode
mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine = mode_lib->vba.PixelClock[k] / 2
* (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
- locals->ODMCombineEnablePerState[i][k] = false;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithoutODMCombine;
if (mode_lib->vba.ODMCapability) {
if (locals->PlaneRequiredDISPCLKWithoutODMCombine > MaxMaxDispclkRoundedDown) {
- locals->ODMCombineEnablePerState[i][k] = true;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
} else if (locals->DSCEnabled[k] && (locals->HActive[k] > DCN20_MAX_DSC_IMAGE_WIDTH)) {
- locals->ODMCombineEnablePerState[i][k] = true;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
} else if (locals->HActive[k] > DCN20_MAX_420_IMAGE_WIDTH && locals->OutputFormat[k] == dm_420) {
- locals->ODMCombineEnablePerState[i][k] = true;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
}
}
@@ -4071,7 +4071,7 @@ void dml20v2_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode
locals->RequiredDISPCLK[i][j] = 0.0;
locals->DISPCLK_DPPCLK_Support[i][j] = true;
for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
- locals->ODMCombineEnablePerState[i][k] = false;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
if (locals->SwathWidthYSingleDPP[k] <= locals->MaximumSwathWidth[k]) {
locals->NoOfDPP[i][j][k] = 1;
locals->RequiredDPPCLK[i][j][k] = locals->MinDPPCLKUsingSingleDPP[k]
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
index 4136eb8..26f839c 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn21/display_mode_vba_21.c
@@ -3979,17 +3979,17 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine = mode_lib->vba.PixelClock[k] / 2
* (1 + mode_lib->vba.DISPCLKDPPCLKDSCCLKDownSpreading / 100.0);
- locals->ODMCombineEnablePerState[i][k] = false;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithoutODMCombine;
if (mode_lib->vba.ODMCapability) {
if (locals->PlaneRequiredDISPCLKWithoutODMCombine > MaxMaxDispclkRoundedDown) {
- locals->ODMCombineEnablePerState[i][k] = true;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
} else if (locals->DSCEnabled[k] && (locals->HActive[k] > DCN21_MAX_DSC_IMAGE_WIDTH)) {
- locals->ODMCombineEnablePerState[i][k] = true;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
} else if (locals->HActive[k] > DCN21_MAX_420_IMAGE_WIDTH && locals->OutputFormat[k] == dm_420) {
- locals->ODMCombineEnablePerState[i][k] = true;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_2to1;
mode_lib->vba.PlaneRequiredDISPCLK = mode_lib->vba.PlaneRequiredDISPCLKWithODMCombine;
}
}
@@ -4042,7 +4042,7 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
locals->RequiredDISPCLK[i][j] = 0.0;
locals->DISPCLK_DPPCLK_Support[i][j] = true;
for (k = 0; k <= mode_lib->vba.NumberOfActivePlanes - 1; k++) {
- locals->ODMCombineEnablePerState[i][k] = false;
+ locals->ODMCombineEnablePerState[i][k] = dm_odm_combine_mode_disabled;
if (locals->SwathWidthYSingleDPP[k] <= locals->MaximumSwathWidth[k]) {
locals->NoOfDPP[i][j][k] = 1;
locals->RequiredDPPCLK[i][j][k] = locals->MinDPPCLKUsingSingleDPP[k]
@@ -5218,7 +5218,7 @@ void dml21_ModeSupportAndSystemConfigurationFull(struct display_mode_lib *mode_l
mode_lib->vba.ODMCombineEnabled[k] =
locals->ODMCombineEnablePerState[mode_lib->vba.VoltageLevel][k];
} else {
- mode_lib->vba.ODMCombineEnabled[k] = false;
+ mode_lib->vba.ODMCombineEnabled[k] = dm_odm_combine_mode_disabled;
}
mode_lib->vba.DSCEnabled[k] =
locals->RequiresDSC[mode_lib->vba.VoltageLevel][k];
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
index 518672a..de0fa87 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
@@ -1868,7 +1868,10 @@ static unsigned int CalculateVMAndRowBytes(
}
if (SurfaceTiling == dm_sw_linear) {
- *dpte_row_height = dml_min(128, 1 << (unsigned int) dml_floor(dml_log2(PTEBufferSizeInRequests * *PixelPTEReqWidth / Pitch), 1));
+ if (PTEBufferSizeInRequests == 0)
+ *dpte_row_height = 1;
+ else
+ *dpte_row_height = dml_min(128, 1 << (unsigned int) dml_floor(dml_log2(PTEBufferSizeInRequests * *PixelPTEReqWidth / Pitch), 1));
*dpte_row_width_ub = (dml_ceil(((double) SwathWidth - 1) / *PixelPTEReqWidth, 1) + 1) * *PixelPTEReqWidth;
*PixelPTEBytesPerRow = *dpte_row_width_ub / *PixelPTEReqWidth * *PTERequestSize;
} else if (ScanDirection != dm_vert) {
diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
index 82a8c18..dbcabae 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
@@ -2013,16 +2013,9 @@ static int sienna_cichlid_set_default_od_settings(struct smu_context *smu)
(OverDriveTable_t *)smu->smu_table.boot_overdrive_table;
OverDriveTable_t *user_od_table =
(OverDriveTable_t *)smu->smu_table.user_overdrive_table;
+ OverDriveTable_t user_od_table_bak;
int ret = 0;
- /*
- * For S3/S4/Runpm resume, no need to setup those overdrive tables again as
- * - either they already have the default OD settings got during cold bootup
- * - or they have some user customized OD settings which cannot be overwritten
- */
- if (smu->adev->in_suspend)
- return 0;
-
ret = smu_cmn_update_table(smu, SMU_TABLE_OVERDRIVE,
0, (void *)boot_od_table, false);
if (ret) {
@@ -2033,7 +2026,23 @@ static int sienna_cichlid_set_default_od_settings(struct smu_context *smu)
sienna_cichlid_dump_od_table(smu, boot_od_table);
memcpy(od_table, boot_od_table, sizeof(OverDriveTable_t));
- memcpy(user_od_table, boot_od_table, sizeof(OverDriveTable_t));
+
+ /*
+ * For S3/S4/Runpm resume, we need to setup those overdrive tables again,
+ * but we have to preserve user defined values in "user_od_table".
+ */
+ if (!smu->adev->in_suspend) {
+ memcpy(user_od_table, boot_od_table, sizeof(OverDriveTable_t));
+ smu->user_dpm_profile.user_od = false;
+ } else if (smu->user_dpm_profile.user_od) {
+ memcpy(&user_od_table_bak, user_od_table, sizeof(OverDriveTable_t));
+ memcpy(user_od_table, boot_od_table, sizeof(OverDriveTable_t));
+ user_od_table->GfxclkFmin = user_od_table_bak.GfxclkFmin;
+ user_od_table->GfxclkFmax = user_od_table_bak.GfxclkFmax;
+ user_od_table->UclkFmin = user_od_table_bak.UclkFmin;
+ user_od_table->UclkFmax = user_od_table_bak.UclkFmax;
+ user_od_table->VddGfxOffset = user_od_table_bak.VddGfxOffset;
+ }
return 0;
}
@@ -2243,6 +2252,20 @@ static int sienna_cichlid_od_edit_dpm_table(struct smu_context *smu,
return ret;
}
+static int sienna_cichlid_restore_user_od_settings(struct smu_context *smu)
+{
+ struct smu_table_context *table_context = &smu->smu_table;
+ OverDriveTable_t *od_table = table_context->overdrive_table;
+ OverDriveTable_t *user_od_table = table_context->user_overdrive_table;
+ int res;
+
+ res = smu_v11_0_restore_user_od_settings(smu);
+ if (res == 0)
+ memcpy(od_table, user_od_table, sizeof(OverDriveTable_t));
+
+ return res;
+}
+
static int sienna_cichlid_run_btc(struct smu_context *smu)
{
return smu_cmn_send_smc_msg(smu, SMU_MSG_RunDcBtc, NULL);
@@ -3980,7 +4003,7 @@ static const struct pptable_funcs sienna_cichlid_ppt_funcs = {
.set_soft_freq_limited_range = smu_v11_0_set_soft_freq_limited_range,
.set_default_od_settings = sienna_cichlid_set_default_od_settings,
.od_edit_dpm_table = sienna_cichlid_od_edit_dpm_table,
- .restore_user_od_settings = smu_v11_0_restore_user_od_settings,
+ .restore_user_od_settings = sienna_cichlid_restore_user_od_settings,
.run_btc = sienna_cichlid_run_btc,
.set_power_source = smu_v11_0_set_power_source,
.get_pp_feature_mask = smu_cmn_get_pp_feature_mask,
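The resume handling above amounts to re-seeding the user table from the boot table and then copying the user-tunable fields back from a backup. A toy sketch of that pattern, with an invented stand-in for OverDriveTable_t (field names are illustrative only):

#include <stdio.h>
#include <string.h>

struct od_table {
	int gfxclk_fmin, gfxclk_fmax;   /* user-tunable fields to preserve */
	int firmware_state;             /* everything else comes from the boot table */
};

int main(void)
{
	struct od_table user = { .gfxclk_fmin = 500, .gfxclk_fmax = 2100, .firmware_state = 7 };
	struct od_table boot = { .gfxclk_fmin = 500, .gfxclk_fmax = 1900, .firmware_state = 42 };
	struct od_table bak;

	/* Resume path with user OD active: back up, re-seed from boot, restore user clocks. */
	memcpy(&bak, &user, sizeof(user));
	memcpy(&user, &boot, sizeof(user));
	user.gfxclk_fmin = bak.gfxclk_fmin;
	user.gfxclk_fmax = bak.gfxclk_fmax;

	printf("fmax=%d (kept from user), state=%d (taken from boot)\n",
	       user.gfxclk_fmax, user.firmware_state);
	return 0;
}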
diff --git a/drivers/gpu/drm/arm/malidp_planes.c b/drivers/gpu/drm/arm/malidp_planes.c
index 8c2ab3d..f67c816 100644
--- a/drivers/gpu/drm/arm/malidp_planes.c
+++ b/drivers/gpu/drm/arm/malidp_planes.c
@@ -348,7 +348,7 @@ static bool malidp_check_pages_threshold(struct malidp_plane_state *ms,
else
sgt = obj->funcs->get_sg_table(obj);
- if (!sgt)
+ if (IS_ERR(sgt))
return false;
sgl = sgt->sgl;
diff --git a/drivers/gpu/drm/bridge/lontium-lt8912b.c b/drivers/gpu/drm/bridge/lontium-lt8912b.c
index 82169b6..9680819 100644
--- a/drivers/gpu/drm/bridge/lontium-lt8912b.c
+++ b/drivers/gpu/drm/bridge/lontium-lt8912b.c
@@ -670,8 +670,8 @@ static int lt8912_parse_dt(struct lt8912 *lt)
lt->hdmi_port = of_drm_find_bridge(port_node);
if (!lt->hdmi_port) {
- dev_err(lt->dev, "%s: Failed to get hdmi port\n", __func__);
- ret = -ENODEV;
+ ret = -EPROBE_DEFER;
+ dev_err_probe(lt->dev, ret, "%s: Failed to get hdmi port\n", __func__);
goto err_free_host_node;
}
diff --git a/drivers/gpu/drm/bridge/lontium-lt9611.c b/drivers/gpu/drm/bridge/lontium-lt9611.c
index 1dcc28a..660e05f 100644
--- a/drivers/gpu/drm/bridge/lontium-lt9611.c
+++ b/drivers/gpu/drm/bridge/lontium-lt9611.c
@@ -185,12 +185,14 @@ static void lt9611_mipi_video_setup(struct lt9611 *lt9611,
regmap_write(lt9611->regmap, 0x8319, (u8)(hfront_porch % 256));
- regmap_write(lt9611->regmap, 0x831a, (u8)(hsync_porch / 256));
+ regmap_write(lt9611->regmap, 0x831a, (u8)(hsync_porch / 256) |
+ ((hfront_porch / 256) << 4));
regmap_write(lt9611->regmap, 0x831b, (u8)(hsync_porch % 256));
}
-static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode)
+static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode, unsigned int postdiv)
{
+ unsigned int pcr_m = mode->clock * 5 * postdiv / 27000;
const struct reg_sequence reg_cfg[] = {
{ 0x830b, 0x01 },
{ 0x830c, 0x10 },
@@ -205,7 +207,6 @@ static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mod
/* stage 2 */
{ 0x834a, 0x40 },
- { 0x831d, 0x10 },
/* MK limit */
{ 0x832d, 0x38 },
@@ -220,30 +221,28 @@ static void lt9611_pcr_setup(struct lt9611 *lt9611, const struct drm_display_mod
{ 0x8325, 0x00 },
{ 0x832a, 0x01 },
{ 0x834a, 0x10 },
- { 0x831d, 0x10 },
- { 0x8326, 0x37 },
};
+ u8 pol = 0x10;
- regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
+ if (mode->flags & DRM_MODE_FLAG_NHSYNC)
+ pol |= 0x2;
+ if (mode->flags & DRM_MODE_FLAG_NVSYNC)
+ pol |= 0x1;
+ regmap_write(lt9611->regmap, 0x831d, pol);
- switch (mode->hdisplay) {
- case 640:
- regmap_write(lt9611->regmap, 0x8326, 0x14);
- break;
- case 1920:
- regmap_write(lt9611->regmap, 0x8326, 0x37);
- break;
- case 3840:
+ if (mode->hdisplay == 3840)
regmap_multi_reg_write(lt9611->regmap, reg_cfg2, ARRAY_SIZE(reg_cfg2));
- break;
- }
+ else
+ regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
+
+ regmap_write(lt9611->regmap, 0x8326, pcr_m);
/* pcr rst */
regmap_write(lt9611->regmap, 0x8011, 0x5a);
regmap_write(lt9611->regmap, 0x8011, 0xfa);
}
-static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode)
+static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode *mode, unsigned int *postdiv)
{
unsigned int pclk = mode->clock;
const struct reg_sequence reg_cfg[] = {
@@ -257,16 +256,21 @@ static int lt9611_pll_setup(struct lt9611 *lt9611, const struct drm_display_mode
{ 0x8126, 0x55 },
{ 0x8127, 0x66 },
{ 0x8128, 0x88 },
+ { 0x812a, 0x20 },
};
regmap_multi_reg_write(lt9611->regmap, reg_cfg, ARRAY_SIZE(reg_cfg));
- if (pclk > 150000)
+ if (pclk > 150000) {
regmap_write(lt9611->regmap, 0x812d, 0x88);
- else if (pclk > 70000)
+ *postdiv = 1;
+ } else if (pclk > 70000) {
regmap_write(lt9611->regmap, 0x812d, 0x99);
- else
+ *postdiv = 2;
+ } else {
regmap_write(lt9611->regmap, 0x812d, 0xaa);
+ *postdiv = 4;
+ }
/*
* first divide pclk by 2 first
@@ -446,12 +450,11 @@ static void lt9611_sleep_setup(struct lt9611 *lt9611)
{ 0x8023, 0x01 },
{ 0x8157, 0x03 }, /* set addr pin as output */
{ 0x8149, 0x0b },
- { 0x8151, 0x30 }, /* disable IRQ */
+
{ 0x8102, 0x48 }, /* MIPI Rx power down */
{ 0x8123, 0x80 },
{ 0x8130, 0x00 },
- { 0x8100, 0x01 }, /* bandgap power down */
- { 0x8101, 0x00 }, /* system clk power down */
+ { 0x8011, 0x0a },
};
regmap_multi_reg_write(lt9611->regmap,
@@ -757,7 +760,7 @@ static const struct drm_connector_funcs lt9611_bridge_connector_funcs = {
static struct mipi_dsi_device *lt9611_attach_dsi(struct lt9611 *lt9611,
struct device_node *dsi_node)
{
- const struct mipi_dsi_device_info info = { "lt9611", 0, NULL };
+ const struct mipi_dsi_device_info info = { "lt9611", 0, lt9611->dev->of_node};
struct mipi_dsi_device *dsi;
struct mipi_dsi_host *host;
int ret;
@@ -881,12 +884,18 @@ static enum drm_mode_status lt9611_bridge_mode_valid(struct drm_bridge *bridge,
static void lt9611_bridge_pre_enable(struct drm_bridge *bridge)
{
struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
+ static const struct reg_sequence reg_cfg[] = {
+ { 0x8102, 0x12 },
+ { 0x8123, 0x40 },
+ { 0x8130, 0xea },
+ { 0x8011, 0xfa },
+ };
if (!lt9611->sleep)
return;
- lt9611_reset(lt9611);
- regmap_write(lt9611->regmap, 0x80ee, 0x01);
+ regmap_multi_reg_write(lt9611->regmap,
+ reg_cfg, ARRAY_SIZE(reg_cfg));
lt9611->sleep = false;
}
@@ -904,14 +913,15 @@ static void lt9611_bridge_mode_set(struct drm_bridge *bridge,
{
struct lt9611 *lt9611 = bridge_to_lt9611(bridge);
struct hdmi_avi_infoframe avi_frame;
+ unsigned int postdiv;
int ret;
lt9611_bridge_pre_enable(bridge);
lt9611_mipi_input_digital(lt9611, mode);
- lt9611_pll_setup(lt9611, mode);
+ lt9611_pll_setup(lt9611, mode, &postdiv);
lt9611_mipi_video_setup(lt9611, mode);
- lt9611_pcr_setup(lt9611, mode);
+ lt9611_pcr_setup(lt9611, mode, postdiv);
ret = drm_hdmi_avi_infoframe_from_display_mode(&avi_frame,
&lt9611->connector,
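For reference, the PCR M value is now derived from the pixel clock rather than hard-coded per resolution. A small userspace sketch of the arithmetic, assuming the postdiv thresholds from lt9611_pll_setup() and a few illustrative mode clocks:

#include <stdio.h>

/* Mirrors the postdiv selection in lt9611_pll_setup(); pclk is in kHz. */
static unsigned int lt9611_postdiv(unsigned int pclk_khz)
{
	if (pclk_khz > 150000)
		return 1;
	else if (pclk_khz > 70000)
		return 2;
	return 4;
}

int main(void)
{
	unsigned int clocks[] = { 594000, 148500, 25175 };  /* e.g. 4k60, 1080p60, 640x480 */

	for (unsigned int i = 0; i < sizeof(clocks) / sizeof(clocks[0]); i++) {
		unsigned int postdiv = lt9611_postdiv(clocks[i]);
		/* Same formula as lt9611_pcr_setup(). */
		unsigned int pcr_m = clocks[i] * 5 * postdiv / 27000;

		printf("pclk=%u kHz postdiv=%u pcr_m=0x%02x\n",
		       clocks[i], postdiv, pcr_m);
	}
	return 0;
}

The 1080p case works out to 0x37, the same value the removed per-hdisplay table hard-coded for 1920-wide modes.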
diff --git a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
index 72248a56..e41afcc 100644
--- a/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
+++ b/drivers/gpu/drm/bridge/megachips-stdpxxxx-ge-b850v3-fw.c
@@ -444,7 +444,11 @@ static int __init stdpxxxx_ge_b850v3_init(void)
if (ret)
return ret;
- return i2c_add_driver(&stdp2690_ge_b850v3_fw_driver);
+ ret = i2c_add_driver(&stdp2690_ge_b850v3_fw_driver);
+ if (ret)
+ i2c_del_driver(&stdp4028_ge_b850v3_fw_driver);
+
+ return ret;
}
module_init(stdpxxxx_ge_b850v3_init);
diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi83.c b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
index c901c0e..b3cb910 100644
--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
@@ -381,6 +381,8 @@ static void sn65dsi83_atomic_enable(struct drm_bridge *bridge,
u16 val;
int ret;
+ usleep_range(10000, 11000);
+
/* Get the LVDS format from the bridge state. */
bridge_state = drm_atomic_get_new_bridge_state(state, bridge);
diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c
index a1e4c790..1fcb5f8 100644
--- a/drivers/gpu/drm/drm_atomic.c
+++ b/drivers/gpu/drm/drm_atomic.c
@@ -1052,6 +1052,7 @@ static void drm_atomic_connector_print_state(struct drm_printer *p,
drm_printf(p, "connector[%u]: %s\n", connector->base.id, connector->name);
drm_printf(p, "\tcrtc=%s\n", state->crtc ? state->crtc->name : "(null)");
drm_printf(p, "\tself_refresh_aware=%d\n", state->self_refresh_aware);
+ drm_printf(p, "\tmax_requested_bpc=%d\n", state->max_requested_bpc);
if (connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK)
if (state->writeback_job && state->writeback_job->fb)
diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
index 0d915fe..d02e323 100644
--- a/drivers/gpu/drm/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/drm_dp_mst_topology.c
@@ -3781,6 +3781,9 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
set_bit(0, &mgr->payload_mask);
mgr->vcpi_mask = 0;
mgr->payload_id_table_cleared = false;
+
+ memset(&mgr->down_rep_recv, 0, sizeof(mgr->down_rep_recv));
+ memset(&mgr->up_req_recv, 0, sizeof(mgr->up_req_recv));
}
out_unlock:
@@ -3994,7 +3997,7 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
struct drm_dp_sideband_msg_rx *msg = &mgr->down_rep_recv;
if (!drm_dp_get_one_sb_msg(mgr, false, &mstb))
- goto out;
+ goto out_clear_reply;
/* Multi-packet message transmission, don't clear the reply */
if (!msg->have_eomt)
diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
index 6ab048b..7209568 100644
--- a/drivers/gpu/drm/drm_edid.c
+++ b/drivers/gpu/drm/drm_edid.c
@@ -4942,7 +4942,8 @@ static void drm_parse_hdmi_forum_vsdb(struct drm_connector *connector,
else if (hf_vsdb[11] & DRM_EDID_DSC_10BPC)
hdmi_dsc->bpc_supported = 10;
else
- hdmi_dsc->bpc_supported = 0;
+ /* Supports min 8 BPC if DSC 1.2 is supported*/
+ hdmi_dsc->bpc_supported = 8;
dsc_max_frl_rate = (hf_vsdb[12] & DRM_EDID_DSC_MAX_FRL_RATE_MASK) >> 4;
drm_get_max_frl_rate(dsc_max_frl_rate, &hdmi_dsc->max_lanes,
@@ -5597,8 +5598,6 @@ static u8 drm_mode_hdmi_vic(const struct drm_connector *connector,
static u8 drm_mode_cea_vic(const struct drm_connector *connector,
const struct drm_display_mode *mode)
{
- u8 vic;
-
/*
* HDMI spec says if a mode is found in HDMI 1.4b 4K modes
* we should send its VIC in vendor infoframes, else send the
@@ -5608,13 +5607,18 @@ static u8 drm_mode_cea_vic(const struct drm_connector *connector,
if (drm_mode_hdmi_vic(connector, mode))
return 0;
- vic = drm_match_cea_mode(mode);
+ return drm_match_cea_mode(mode);
+}
- /*
- * HDMI 1.4 VIC range: 1 <= VIC <= 64 (CEA-861-D) but
- * HDMI 2.0 VIC range: 1 <= VIC <= 107 (CEA-861-F). So we
- * have to make sure we dont break HDMI 1.4 sinks.
- */
+/*
+ * Avoid sending VICs defined in HDMI 2.0 in AVI infoframes to sinks that
+ * conform to HDMI 1.4.
+ *
+ * HDMI 1.4 (CTA-861-D) VIC range: [1..64]
+ * HDMI 2.0 (CTA-861-F) VIC range: [1..107]
+ */
+static u8 vic_for_avi_infoframe(const struct drm_connector *connector, u8 vic)
+{
if (!is_hdmi2_sink(connector) && vic > 64)
return 0;
@@ -5690,7 +5694,7 @@ drm_hdmi_avi_infoframe_from_display_mode(struct hdmi_avi_infoframe *frame,
picture_aspect = HDMI_PICTURE_ASPECT_NONE;
}
- frame->video_code = vic;
+ frame->video_code = vic_for_avi_infoframe(connector, vic);
frame->picture_aspect = picture_aspect;
frame->active_aspect = HDMI_ACTIVE_ASPECT_PICTURE;
frame->scan_mode = HDMI_SCAN_MODE_UNDERSCAN;
diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c
index 32ee023..7940d94 100644
--- a/drivers/gpu/drm/drm_fourcc.c
+++ b/drivers/gpu/drm/drm_fourcc.c
@@ -153,6 +153,10 @@ const struct drm_format_info *__drm_format_info(u32 format)
{ .format = DRM_FORMAT_BGRA5551, .depth = 15, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1, .has_alpha = true },
{ .format = DRM_FORMAT_RGB565, .depth = 16, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
{ .format = DRM_FORMAT_BGR565, .depth = 16, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
+#ifdef __BIG_ENDIAN
+ { .format = DRM_FORMAT_XRGB1555 | DRM_FORMAT_BIG_ENDIAN, .depth = 15, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
+ { .format = DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN, .depth = 16, .num_planes = 1, .cpp = { 2, 0, 0 }, .hsub = 1, .vsub = 1 },
+#endif
{ .format = DRM_FORMAT_RGB888, .depth = 24, .num_planes = 1, .cpp = { 3, 0, 0 }, .hsub = 1, .vsub = 1 },
{ .format = DRM_FORMAT_BGR888, .depth = 24, .num_planes = 1, .cpp = { 3, 0, 0 }, .hsub = 1, .vsub = 1 },
{ .format = DRM_FORMAT_XRGB8888, .depth = 24, .num_planes = 1, .cpp = { 4, 0, 0 }, .hsub = 1, .vsub = 1 },
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 15c3849..0c29680 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -591,11 +591,14 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, struct vm_area_struct
int ret;
if (obj->import_attach) {
- /* Drop the reference drm_gem_mmap_obj() acquired.*/
- drm_gem_object_put(obj);
vma->vm_private_data = NULL;
+ ret = dma_buf_mmap(obj->dma_buf, vma, 0);
- return dma_buf_mmap(obj->dma_buf, vma, 0);
+ /* Drop the reference drm_gem_mmap_obj() acquired.*/
+ if (!ret)
+ drm_gem_object_put(obj);
+
+ return ret;
}
ret = drm_gem_shmem_get_pages(shmem);
@@ -651,23 +654,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object *shmem)
}
EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
-/**
- * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
- * scatter/gather table for a shmem GEM object.
- * @shmem: shmem GEM object
- *
- * This function returns a scatter/gather table suitable for driver usage. If
- * the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
- * table created.
- *
- * This is the main function for drivers to get at backing storage, and it hides
- * and difference between dma-buf imported and natively allocated objects.
- * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
- *
- * Returns:
- * A pointer to the scatter/gather table of pinned pages or errno on failure.
- */
-struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
+static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_object *shmem)
{
struct drm_gem_object *obj = &shmem->base;
int ret;
@@ -678,7 +665,7 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
WARN_ON(obj->import_attach);
- ret = drm_gem_shmem_get_pages(shmem);
+ ret = drm_gem_shmem_get_pages_locked(shmem);
if (ret)
return ERR_PTR(ret);
@@ -700,9 +687,39 @@ struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
sg_free_table(sgt);
kfree(sgt);
err_put_pages:
- drm_gem_shmem_put_pages(shmem);
+ drm_gem_shmem_put_pages_locked(shmem);
return ERR_PTR(ret);
}
+
+/**
+ * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
+ * scatter/gather table for a shmem GEM object.
+ * @shmem: shmem GEM object
+ *
+ * This function returns a scatter/gather table suitable for driver usage. If
+ * the sg table doesn't exist, the pages are pinned, dma-mapped, and a sg
+ * table created.
+ *
+ * This is the main function for drivers to get at backing storage, and it hides
+ * and difference between dma-buf imported and natively allocated objects.
+ * drm_gem_shmem_get_sg_table() should not be directly called by drivers.
+ *
+ * Returns:
+ * A pointer to the scatter/gather table of pinned pages or errno on failure.
+ */
+struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object *shmem)
+{
+ int ret;
+ struct sg_table *sgt;
+
+ ret = mutex_lock_interruptible(&shmem->pages_lock);
+ if (ret)
+ return ERR_PTR(ret);
+ sgt = drm_gem_shmem_get_pages_sgt_locked(shmem);
+ mutex_unlock(&shmem->pages_lock);
+
+ return sgt;
+}
EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt);
/**
diff --git a/drivers/gpu/drm/drm_mipi_dsi.c b/drivers/gpu/drm/drm_mipi_dsi.c
index 2c43d54..19fb1d9 100644
--- a/drivers/gpu/drm/drm_mipi_dsi.c
+++ b/drivers/gpu/drm/drm_mipi_dsi.c
@@ -1143,6 +1143,58 @@ int mipi_dsi_dcs_get_display_brightness(struct mipi_dsi_device *dsi,
}
EXPORT_SYMBOL(mipi_dsi_dcs_get_display_brightness);
+/**
+ * mipi_dsi_dcs_set_display_brightness_large() - sets the 16-bit brightness value
+ * of the display
+ * @dsi: DSI peripheral device
+ * @brightness: brightness value
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int mipi_dsi_dcs_set_display_brightness_large(struct mipi_dsi_device *dsi,
+ u16 brightness)
+{
+ u8 payload[2] = { brightness >> 8, brightness & 0xff };
+ ssize_t err;
+
+ err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_DISPLAY_BRIGHTNESS,
+ payload, sizeof(payload));
+ if (err < 0)
+ return err;
+
+ return 0;
+}
+EXPORT_SYMBOL(mipi_dsi_dcs_set_display_brightness_large);
+
+/**
+ * mipi_dsi_dcs_get_display_brightness_large() - gets the current 16-bit
+ * brightness value of the display
+ * @dsi: DSI peripheral device
+ * @brightness: brightness value
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int mipi_dsi_dcs_get_display_brightness_large(struct mipi_dsi_device *dsi,
+ u16 *brightness)
+{
+ u8 brightness_be[2];
+ ssize_t err;
+
+ err = mipi_dsi_dcs_read(dsi, MIPI_DCS_GET_DISPLAY_BRIGHTNESS,
+ brightness_be, sizeof(brightness_be));
+ if (err <= 0) {
+ if (err == 0)
+ err = -ENODATA;
+
+ return err;
+ }
+
+ *brightness = (brightness_be[0] << 8) | brightness_be[1];
+
+ return 0;
+}
+EXPORT_SYMBOL(mipi_dsi_dcs_get_display_brightness_large);
+
static int mipi_dsi_drv_probe(struct device *dev)
{
struct mipi_dsi_driver *drv = to_mipi_dsi_driver(dev->driver);
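The two brightness helpers added above only marshal a 16-bit value through a big-endian two-byte DCS payload. A standalone round-trip sketch of that packing (plain C, no DSI host involved):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint16_t brightness = 0x1234;

	/* Pack as in mipi_dsi_dcs_set_display_brightness_large(). */
	uint8_t payload[2] = { brightness >> 8, brightness & 0xff };

	/* Unpack as in mipi_dsi_dcs_get_display_brightness_large(). */
	uint16_t readback = (payload[0] << 8) | payload[1];

	/* Prints: payload = 12 34, readback = 0x1234 */
	printf("payload = %02x %02x, readback = 0x%04x\n",
	       payload[0], payload[1], readback);
	return 0;
}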
diff --git a/drivers/gpu/drm/drm_mode_config.c b/drivers/gpu/drm/drm_mode_config.c
index 37b4b9f..1bd4f0b 100644
--- a/drivers/gpu/drm/drm_mode_config.c
+++ b/drivers/gpu/drm/drm_mode_config.c
@@ -398,6 +398,8 @@ static void drm_mode_config_init_release(struct drm_device *dev, void *ptr)
*/
int drmm_mode_config_init(struct drm_device *dev)
{
+ int ret;
+
mutex_init(&dev->mode_config.mutex);
drm_modeset_lock_init(&dev->mode_config.connection_mutex);
mutex_init(&dev->mode_config.idr_mutex);
@@ -419,7 +421,11 @@ int drmm_mode_config_init(struct drm_device *dev)
init_llist_head(&dev->mode_config.connector_free_list);
INIT_WORK(&dev->mode_config.connector_free_work, drm_connector_free_work_fn);
- drm_mode_create_standard_properties(dev);
+ ret = drm_mode_create_standard_properties(dev);
+ if (ret) {
+ drm_mode_config_cleanup(dev);
+ return ret;
+ }
/* Just to be sure */
dev->mode_config.num_fb = 0;
diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
index ca0fefe..8768073 100644
--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
+++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
@@ -272,6 +272,18 @@ static const struct dmi_system_id orientation_data[] = {
DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGM"),
},
.driver_data = (void *)&lcd1200x1920_rightside_up,
+ }, { /* Lenovo Ideapad D330-10IGL (HD) */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad D330-10IGL"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
+ }, { /* Lenovo IdeaPad Duet 3 10IGL5 */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "IdeaPad Duet 3 10IGL5"),
+ },
+ .driver_data = (void *)&lcd1200x1920_rightside_up,
}, { /* Lenovo Yoga Book X90F / X91F / X91L */
.matches = {
/* Non exact match to match all versions */
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 6d8bed9..aa7227b 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -88,7 +88,15 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
static int etnaviv_gem_prime_mmap_obj(struct etnaviv_gem_object *etnaviv_obj,
struct vm_area_struct *vma)
{
- return dma_buf_mmap(etnaviv_obj->base.dma_buf, vma, 0);
+ int ret;
+
+ ret = dma_buf_mmap(etnaviv_obj->base.dma_buf, vma, 0);
+ if (!ret) {
+ /* Drop the reference acquired by drm_gem_mmap_obj(). */
+ drm_gem_object_put(&etnaviv_obj->base);
+ }
+
+ return ret;
}
static const struct etnaviv_gem_ops etnaviv_gem_prime_ops = {
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
index aabb997..2de8061 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
@@ -80,10 +80,10 @@ static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, u32 iova,
return -EINVAL;
for_each_sgtable_dma_sg(sgt, sg, i) {
- u32 pa = sg_dma_address(sg) - sg->offset;
+ phys_addr_t pa = sg_dma_address(sg) - sg->offset;
size_t bytes = sg_dma_len(sg) + sg->offset;
- VERB("map[%d]: %08x %08x(%zx)", i, iova, pa, bytes);
+ VERB("map[%d]: %08x %pap(%zx)", i, iova, &pa, bytes);
ret = etnaviv_context_map(context, da, pa, bytes, prot);
if (ret)
diff --git a/drivers/gpu/drm/exynos/exynos_drm_dsi.c b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
index 8d13785..e0465b60 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_dsi.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_dsi.c
@@ -809,15 +809,15 @@ static int exynos_dsi_init_link(struct exynos_dsi *dsi)
reg |= DSIM_AUTO_MODE;
if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_HSE)
reg |= DSIM_HSE_MODE;
- if (!(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HFP))
+ if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HFP)
reg |= DSIM_HFP_MODE;
- if (!(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HBP))
+ if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HBP)
reg |= DSIM_HBP_MODE;
- if (!(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HSA))
+ if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HSA)
reg |= DSIM_HSA_MODE;
}
- if (!(dsi->mode_flags & MIPI_DSI_MODE_NO_EOT_PACKET))
+ if (dsi->mode_flags & MIPI_DSI_MODE_NO_EOT_PACKET)
reg |= DSIM_EOT_DISABLE;
switch (dsi->format) {
diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
index 32d5a55..aa0b936 100644
--- a/drivers/gpu/drm/i915/display/intel_bios.c
+++ b/drivers/gpu/drm/i915/display/intel_bios.c
@@ -1820,6 +1820,22 @@ static enum port dvo_port_to_port(struct drm_i915_private *i915,
dvo_port);
}
+static enum port
+dsi_dvo_port_to_port(struct drm_i915_private *i915, u8 dvo_port)
+{
+ switch (dvo_port) {
+ case DVO_PORT_MIPIA:
+ return PORT_A;
+ case DVO_PORT_MIPIC:
+ if (DISPLAY_VER(i915) >= 11)
+ return PORT_B;
+ else
+ return PORT_C;
+ default:
+ return PORT_NONE;
+ }
+}
+
static int parse_bdb_230_dp_max_link_rate(const int vbt_max_link_rate)
{
switch (vbt_max_link_rate) {
@@ -2733,19 +2749,16 @@ bool intel_bios_is_dsi_present(struct drm_i915_private *i915,
dvo_port = child->dvo_port;
- if (dvo_port == DVO_PORT_MIPIA ||
- (dvo_port == DVO_PORT_MIPIB && DISPLAY_VER(i915) >= 11) ||
- (dvo_port == DVO_PORT_MIPIC && DISPLAY_VER(i915) < 11)) {
- if (port)
- *port = dvo_port - DVO_PORT_MIPIA;
- return true;
- } else if (dvo_port == DVO_PORT_MIPIB ||
- dvo_port == DVO_PORT_MIPIC ||
- dvo_port == DVO_PORT_MIPID) {
+ if (dsi_dvo_port_to_port(i915, dvo_port) == PORT_NONE) {
drm_dbg_kms(&i915->drm,
"VBT has unsupported DSI port %c\n",
port_name(dvo_port - DVO_PORT_MIPIA));
+ continue;
}
+
+ if (port)
+ *port = dsi_dvo_port_to_port(i915, dvo_port);
+ return true;
}
return false;
@@ -2830,7 +2843,7 @@ bool intel_bios_get_dsc_params(struct intel_encoder *encoder,
if (!(child->device_type & DEVICE_TYPE_MIPI_OUTPUT))
continue;
- if (child->dvo_port - DVO_PORT_MIPIA == encoder->port) {
+ if (dsi_dvo_port_to_port(i915, child->dvo_port) == encoder->port) {
if (!devdata->dsc)
return false;
diff --git a/drivers/gpu/drm/i915/display/intel_cdclk.c b/drivers/gpu/drm/i915/display/intel_cdclk.c
index 34fa413..745ffa7 100644
--- a/drivers/gpu/drm/i915/display/intel_cdclk.c
+++ b/drivers/gpu/drm/i915/display/intel_cdclk.c
@@ -1269,7 +1269,7 @@ static const struct intel_cdclk_vals adlp_cdclk_table[] = {
{ .refclk = 24000, .cdclk = 192000, .divider = 2, .ratio = 16 },
{ .refclk = 24000, .cdclk = 312000, .divider = 2, .ratio = 26 },
{ .refclk = 24000, .cdclk = 552000, .divider = 2, .ratio = 46 },
- { .refclk = 24400, .cdclk = 648000, .divider = 2, .ratio = 54 },
+ { .refclk = 24000, .cdclk = 648000, .divider = 2, .ratio = 54 },
{ .refclk = 38400, .cdclk = 179200, .divider = 3, .ratio = 14 },
{ .refclk = 38400, .cdclk = 192000, .divider = 2, .ratio = 10 },
diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index c9b051a..566b34b 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -7824,6 +7824,7 @@ intel_crtc_prepare_cleared_state(struct intel_atomic_state *state,
* only fields that are know to not cause problems are preserved. */
saved_state->uapi = crtc_state->uapi;
+ saved_state->inherited = crtc_state->inherited;
saved_state->scaler_state = crtc_state->scaler_state;
saved_state->shared_dpll = crtc_state->shared_dpll;
saved_state->dpll_hw_state = crtc_state->dpll_hw_state;
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index b56850d..90e055f 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -1520,6 +1520,8 @@ struct intel_psr {
bool psr2_sel_fetch_enabled;
bool req_psr2_sdp_prior_scanline;
u8 sink_sync_latency;
+ u8 io_wake_lines;
+ u8 fast_wake_lines;
ktime_t last_entry_attempt;
ktime_t last_exit;
bool sink_not_reliable;
diff --git a/drivers/gpu/drm/i915/display/intel_psr.c b/drivers/gpu/drm/i915/display/intel_psr.c
index a3d0c57..cf1e924 100644
--- a/drivers/gpu/drm/i915/display/intel_psr.c
+++ b/drivers/gpu/drm/i915/display/intel_psr.c
@@ -22,6 +22,7 @@
*/
#include <drm/drm_atomic_helper.h>
+#include <drm/drm_damage_helper.h>
#include "display/intel_dp.h"
@@ -548,6 +549,14 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
val |= EDP_PSR2_FRAME_BEFORE_SU(intel_dp->psr.sink_sync_latency + 1);
val |= intel_psr2_get_tp_time(intel_dp);
+ if (DISPLAY_VER(dev_priv) >= 12) {
+ if (intel_dp->psr.io_wake_lines < 9 &&
+ intel_dp->psr.fast_wake_lines < 9)
+ val |= TGL_EDP_PSR2_BLOCK_COUNT_NUM_2;
+ else
+ val |= TGL_EDP_PSR2_BLOCK_COUNT_NUM_3;
+ }
+
/* Wa_22012278275:adl-p */
if (IS_ADLP_DISPLAY_STEP(dev_priv, STEP_A0, STEP_E0)) {
static const u8 map[] = {
@@ -564,31 +573,21 @@ static void hsw_activate_psr2(struct intel_dp *intel_dp)
* Still using the default IO_BUFFER_WAKE and FAST_WAKE, see
* comments bellow for more information
*/
- u32 tmp, lines = 7;
+ u32 tmp;
- val |= TGL_EDP_PSR2_BLOCK_COUNT_NUM_2;
-
- tmp = map[lines - TGL_EDP_PSR2_IO_BUFFER_WAKE_MIN_LINES];
+ tmp = map[intel_dp->psr.io_wake_lines - TGL_EDP_PSR2_IO_BUFFER_WAKE_MIN_LINES];
tmp = tmp << TGL_EDP_PSR2_IO_BUFFER_WAKE_SHIFT;
val |= tmp;
- tmp = map[lines - TGL_EDP_PSR2_FAST_WAKE_MIN_LINES];
+ tmp = map[intel_dp->psr.fast_wake_lines - TGL_EDP_PSR2_FAST_WAKE_MIN_LINES];
tmp = tmp << TGL_EDP_PSR2_FAST_WAKE_MIN_SHIFT;
val |= tmp;
} else if (DISPLAY_VER(dev_priv) >= 12) {
- /*
- * TODO: 7 lines of IO_BUFFER_WAKE and FAST_WAKE are default
- * values from BSpec. In order to setting an optimal power
- * consumption, lower than 4k resoluition mode needs to decrese
- * IO_BUFFER_WAKE and FAST_WAKE. And higher than 4K resolution
- * mode needs to increase IO_BUFFER_WAKE and FAST_WAKE.
- */
- val |= TGL_EDP_PSR2_BLOCK_COUNT_NUM_2;
- val |= TGL_EDP_PSR2_IO_BUFFER_WAKE(7);
- val |= TGL_EDP_PSR2_FAST_WAKE(7);
+ val |= TGL_EDP_PSR2_IO_BUFFER_WAKE(intel_dp->psr.io_wake_lines);
+ val |= TGL_EDP_PSR2_FAST_WAKE(intel_dp->psr.fast_wake_lines);
} else if (DISPLAY_VER(dev_priv) >= 9) {
- val |= EDP_PSR2_IO_BUFFER_WAKE(7);
- val |= EDP_PSR2_FAST_WAKE(7);
+ val |= EDP_PSR2_IO_BUFFER_WAKE(intel_dp->psr.io_wake_lines);
+ val |= EDP_PSR2_FAST_WAKE(intel_dp->psr.fast_wake_lines);
}
if (intel_dp->psr.req_psr2_sdp_prior_scanline)
@@ -755,11 +754,7 @@ tgl_dc3co_exitline_compute_config(struct intel_dp *intel_dp,
static bool intel_psr2_sel_fetch_config_valid(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
- struct intel_atomic_state *state = to_intel_atomic_state(crtc_state->uapi.state);
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
- struct intel_plane_state *plane_state;
- struct intel_plane *plane;
- int i;
if (!dev_priv->params.enable_psr2_sel_fetch &&
intel_dp->psr.debug != I915_PSR_DEBUG_ENABLE_SEL_FETCH) {
@@ -774,14 +769,6 @@ static bool intel_psr2_sel_fetch_config_valid(struct intel_dp *intel_dp,
return false;
}
- for_each_new_intel_plane_in_state(state, plane, plane_state, i) {
- if (plane_state->uapi.rotation != DRM_MODE_ROTATE_0) {
- drm_dbg_kms(&dev_priv->drm,
- "PSR2 sel fetch not enabled, plane rotated\n");
- return false;
- }
- }
-
/* Wa_14010254185 Wa_14010103792 */
if (IS_TGL_DISPLAY_STEP(dev_priv, STEP_A0, STEP_C0)) {
drm_dbg_kms(&dev_priv->drm,
@@ -853,6 +840,46 @@ static bool _compute_psr2_sdp_prior_scanline_indication(struct intel_dp *intel_d
return true;
}
+static bool _compute_psr2_wake_times(struct intel_dp *intel_dp,
+ struct intel_crtc_state *crtc_state)
+{
+ struct drm_i915_private *i915 = dp_to_i915(intel_dp);
+ int io_wake_lines, io_wake_time, fast_wake_lines, fast_wake_time;
+ u8 max_wake_lines;
+
+ if (DISPLAY_VER(i915) >= 12) {
+ io_wake_time = 42;
+ /*
+ * According to Bspec it's 42us, but based on testing
+ * it is not enough -> use 45 us.
+ */
+ fast_wake_time = 45;
+ max_wake_lines = 12;
+ } else {
+ io_wake_time = 50;
+ fast_wake_time = 32;
+ max_wake_lines = 8;
+ }
+
+ io_wake_lines = intel_usecs_to_scanlines(
+ &crtc_state->uapi.adjusted_mode, io_wake_time);
+ fast_wake_lines = intel_usecs_to_scanlines(
+ &crtc_state->uapi.adjusted_mode, fast_wake_time);
+
+ if (io_wake_lines > max_wake_lines ||
+ fast_wake_lines > max_wake_lines)
+ return false;
+
+ if (i915->params.psr_safest_params)
+ io_wake_lines = fast_wake_lines = max_wake_lines;
+
+ /* According to Bspec lower limit should be set as 7 lines. */
+ intel_dp->psr.io_wake_lines = max(io_wake_lines, 7);
+ intel_dp->psr.fast_wake_lines = max(fast_wake_lines, 7);
+
+ return true;
+}
+
static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
@@ -950,6 +977,12 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
return false;
}
+ if (!_compute_psr2_wake_times(intel_dp, crtc_state)) {
+ drm_dbg_kms(&dev_priv->drm,
+ "PSR2 not enabled, Unable to use long enough wake times\n");
+ return false;
+ }
+
if (HAS_PSR2_SEL_FETCH(dev_priv)) {
if (!intel_psr2_sel_fetch_config_valid(intel_dp, crtc_state) &&
!HAS_PSR_HW_TRACKING(dev_priv)) {
@@ -997,7 +1030,7 @@ void intel_psr_compute_config(struct intel_dp *intel_dp,
int psr_setup_time;
/*
- * Current PSR panels dont work reliably with VRR enabled
+ * Current PSR panels don't work reliably with VRR enabled
* So if VRR is enabled, do not enable PSR.
*/
if (crtc_state->vrr.enable)
@@ -1601,6 +1634,63 @@ static void intel_psr2_sel_fetch_pipe_alignment(const struct intel_crtc_state *c
drm_warn(&dev_priv->drm, "Missing PSR2 sel fetch alignment with DSC\n");
}
+/*
+ * FIXME: Not sure why but when moving the cursor fast it causes some artifacts
+ * of the cursor to be left in the cursor path, adding some pixels above the
+ * cursor to the damaged area fixes the issue.
+ */
+static void cursor_area_workaround(const struct intel_plane_state *new_plane_state,
+ struct drm_rect *damaged_area,
+ struct drm_rect *pipe_clip)
+{
+ const struct intel_plane *plane = to_intel_plane(new_plane_state->uapi.plane);
+ int height;
+
+ if (plane->id != PLANE_CURSOR)
+ return;
+
+ height = drm_rect_height(&new_plane_state->uapi.dst) / 2;
+ damaged_area->y1 -= height;
+ damaged_area->y1 = max(damaged_area->y1, 0);
+
+ clip_area_update(pipe_clip, damaged_area);
+}
+
+/*
+ * TODO: Not clear how to handle planes with negative position,
+ * also planes are not updated if they have a negative X
+ * position so for now doing a full update in this cases
+ *
+ * Plane scaling and rotation is not supported by selective fetch and both
+ * properties can change without a modeset, so need to be check at every
+ * atomic commit.
+ */
+static bool psr2_sel_fetch_plane_state_supported(const struct intel_plane_state *plane_state)
+{
+ if (plane_state->uapi.dst.y1 < 0 ||
+ plane_state->uapi.dst.x1 < 0 ||
+ plane_state->scaler_id >= 0 ||
+ plane_state->uapi.rotation != DRM_MODE_ROTATE_0)
+ return false;
+
+ return true;
+}
+
+/*
+ * Check for pipe properties that is not supported by selective fetch.
+ *
+ * TODO: pipe scaling causes a modeset but skl_update_scaler_crtc() is executed
+ * after intel_psr_compute_config(), so for now keeping PSR2 selective fetch
+ * enabled and going to the full update path.
+ */
+static bool psr2_sel_fetch_pipe_state_supported(const struct intel_crtc_state *crtc_state)
+{
+ if (crtc_state->scaler_state.scaler_id >= 0)
+ return false;
+
+ return true;
+}
+
int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
struct intel_crtc *crtc)
{
@@ -1614,9 +1704,10 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
if (!crtc_state->enable_psr2_sel_fetch)
return 0;
- ret = drm_atomic_add_affected_planes(&state->base, &crtc->base);
- if (ret)
- return ret;
+ if (!psr2_sel_fetch_pipe_state_supported(crtc_state)) {
+ full_update = true;
+ goto skip_sel_fetch_set_loop;
+ }
/*
* Calculate minimal selective fetch area of each plane and calculate
@@ -1627,8 +1718,8 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
for_each_oldnew_intel_plane_in_state(state, plane, old_plane_state,
new_plane_state, i) {
struct drm_rect src, damaged_area = { .y1 = -1 };
- struct drm_mode_rect *damaged_clips;
- u32 num_clips, j;
+ struct drm_atomic_helper_damage_iter iter;
+ struct drm_rect clip;
if (new_plane_state->uapi.crtc != crtc_state->uapi.crtc)
continue;
@@ -1637,19 +1728,11 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
!old_plane_state->uapi.visible)
continue;
- /*
- * TODO: Not clear how to handle planes with negative position,
- * also planes are not updated if they have a negative X
- * position so for now doing a full update in this cases
- */
- if (new_plane_state->uapi.dst.y1 < 0 ||
- new_plane_state->uapi.dst.x1 < 0) {
+ if (!psr2_sel_fetch_plane_state_supported(new_plane_state)) {
full_update = true;
break;
}
- num_clips = drm_plane_get_damage_clips_count(&new_plane_state->uapi);
-
/*
* If visibility or plane moved, mark the whole plane area as
* damaged as it needs to be complete redraw in the new and old
@@ -1669,15 +1752,12 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
damaged_area.y2 = new_plane_state->uapi.dst.y2;
clip_area_update(&pipe_clip, &damaged_area);
}
+
+ cursor_area_workaround(new_plane_state, &damaged_area,
+ &pipe_clip);
continue;
- } else if (new_plane_state->uapi.alpha != old_plane_state->uapi.alpha ||
- (!num_clips &&
- new_plane_state->uapi.fb != old_plane_state->uapi.fb)) {
- /*
- * If the plane don't have damaged areas but the
- * framebuffer changed or alpha changed, mark the whole
- * plane area as damaged.
- */
+ } else if (new_plane_state->uapi.alpha != old_plane_state->uapi.alpha) {
+ /* If alpha changed mark the whole plane area as damaged */
damaged_area.y1 = new_plane_state->uapi.dst.y1;
damaged_area.y2 = new_plane_state->uapi.dst.y2;
clip_area_update(&pipe_clip, &damaged_area);
@@ -1685,15 +1765,11 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
}
drm_rect_fp_to_int(&src, &new_plane_state->uapi.src);
- damaged_clips = drm_plane_get_damage_clips(&new_plane_state->uapi);
- for (j = 0; j < num_clips; j++) {
- struct drm_rect clip;
-
- clip.x1 = damaged_clips[j].x1;
- clip.y1 = damaged_clips[j].y1;
- clip.x2 = damaged_clips[j].x2;
- clip.y2 = damaged_clips[j].y2;
+ drm_atomic_helper_damage_iter_init(&iter,
+ &old_plane_state->uapi,
+ &new_plane_state->uapi);
+ drm_atomic_for_each_plane_damage(&iter, &clip) {
if (drm_rect_intersect(&clip, &src))
clip_area_update(&damaged_area, &clip);
}
@@ -1709,6 +1785,10 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
if (full_update)
goto skip_sel_fetch_set_loop;
+ ret = drm_atomic_add_affected_planes(&state->base, &crtc->base);
+ if (ret)
+ return ret;
+
intel_psr2_sel_fetch_pipe_alignment(crtc_state, &pipe_clip);
/*
@@ -1727,6 +1807,11 @@ int intel_psr2_sel_fetch_update(struct intel_atomic_state *state,
if (!drm_rect_intersect(&inter, &new_plane_state->uapi.dst))
continue;
+ if (!psr2_sel_fetch_plane_state_supported(new_plane_state)) {
+ full_update = true;
+ break;
+ }
+
sel_fetch_area = &new_plane_state->psr2_sel_fetch_area;
sel_fetch_area->y1 = inter.y1 - new_plane_state->uapi.dst.y1;
sel_fetch_area->y2 = inter.y2 - new_plane_state->uapi.dst.y1;
@@ -2026,7 +2111,7 @@ static void intel_psr_work(struct work_struct *work)
}
/**
- * intel_psr_invalidate - Invalidade PSR
+ * intel_psr_invalidate - Invalidate PSR
* @dev_priv: i915 device
* @frontbuffer_bits: frontbuffer plane tracking bits
* @origin: which operation caused the invalidate
diff --git a/drivers/gpu/drm/i915/display/intel_quirks.c b/drivers/gpu/drm/i915/display/intel_quirks.c
index 407b096..015e5b8 100644
--- a/drivers/gpu/drm/i915/display/intel_quirks.c
+++ b/drivers/gpu/drm/i915/display/intel_quirks.c
@@ -193,6 +193,8 @@ static struct intel_quirk intel_quirks[] = {
/* ECS Liva Q2 */
{ 0x3185, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
{ 0x3184, 0x1019, 0xa94d, quirk_increase_ddi_disabled_time },
+ /* HP Notebook - 14-r206nv */
+ { 0x0f31, 0x103c, 0x220f, quirk_invert_brightness },
};
void intel_init_quirks(struct drm_i915_private *i915)
diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c
index 0e88544..1b5b4d2 100644
--- a/drivers/gpu/drm/i915/display/intel_tc.c
+++ b/drivers/gpu/drm/i915/display/intel_tc.c
@@ -386,9 +386,9 @@ static bool icl_tc_phy_is_owned(struct intel_digital_port *dig_port)
PORT_TX_DFLEXDPCSSS(dig_port->tc_phy_fia));
if (val == 0xffffffff) {
drm_dbg_kms(&i915->drm,
- "Port %s: PHY in TCCOLD, assume safe mode\n",
+ "Port %s: PHY in TCCOLD, assume not owned\n",
dig_port->tc_port_name);
- return true;
+ return false;
}
return val & DP_PHY_MODE_STATUS_NOT_SAFE(dig_port->tc_phy_fia_idx);
diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c
index 724e7b0..b97b4b3 100644
--- a/drivers/gpu/drm/i915/display/skl_universal_plane.c
+++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c
@@ -1473,7 +1473,7 @@ static int skl_check_main_surface(struct intel_plane_state *plane_state)
u32 offset;
int ret;
- if (w > max_width || w < min_width || h > max_height) {
+ if (w > max_width || w < min_width || h > max_height || h < 1) {
drm_dbg_kms(&dev_priv->drm,
"requested Y/RGB source size %dx%d outside limits (min: %dx1 max: %dx%d)\n",
w, h, min_width, max_width, max_height);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 11f0721..827f2f9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -533,7 +533,7 @@ static int shmem_object_init(struct intel_memory_region *mem,
mapping_set_gfp_mask(mapping, mask);
GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM));
- i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class, 0);
+ i915_gem_object_init(obj, &i915_gem_shmem_ops, &lock_class, flags);
obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
obj->write_domain = I915_GEM_DOMAIN_CPU;
obj->read_domains = I915_GEM_DOMAIN_CPU;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
index ef4d0f7..d4897ce0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
@@ -294,10 +294,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
spin_unlock(&obj->vma.lock);
obj->tiling_and_stride = tiling | stride;
- i915_gem_object_unlock(obj);
-
- /* Force the fence to be reacquired for GTT access */
- i915_gem_object_release_mmap_gtt(obj);
/* Try to preallocate memory required to save swizzling on put-pages */
if (i915_gem_object_needs_bit17_swizzle(obj)) {
@@ -310,6 +306,11 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
obj->bit_17 = NULL;
}
+ i915_gem_object_unlock(obj);
+
+ /* Force the fence to be reacquired for GTT access */
+ i915_gem_object_release_mmap_gtt(obj);
+
return 0;
}
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index 952e717..b2a0031 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -709,12 +709,12 @@ int intel_gt_init(struct intel_gt *gt)
if (err)
goto err_gt;
- intel_uc_init_late(>->uc);
-
err = i915_inject_probe_error(gt->i915, -EIO);
if (err)
goto err_gt;
+ intel_uc_init_late(>->uc);
+
intel_migrate_init(>->migrate, gt);
goto out_fw;
diff --git a/drivers/gpu/drm/i915/gt/intel_ring.c b/drivers/gpu/drm/i915/gt/intel_ring.c
index 7c4d515..7d82545 100644
--- a/drivers/gpu/drm/i915/gt/intel_ring.c
+++ b/drivers/gpu/drm/i915/gt/intel_ring.c
@@ -113,7 +113,7 @@ static struct i915_vma *create_ring_vma(struct i915_ggtt *ggtt, int size)
struct i915_vma *vma;
obj = i915_gem_object_create_lmem(i915, size, I915_BO_ALLOC_VOLATILE);
- if (IS_ERR(obj) && i915_ggtt_has_aperture(ggtt))
+ if (IS_ERR(obj) && i915_ggtt_has_aperture(ggtt) && !HAS_LLC(i915))
obj = i915_gem_object_create_stolen(i915, size);
if (IS_ERR(obj))
obj = i915_gem_object_create_internal(i915, size);
diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
index 6b5ab19..de93a1e 100644
--- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
+++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
@@ -1049,6 +1049,22 @@ icl_gt_workarounds_init(struct drm_i915_private *i915, struct i915_wa_list *wal)
GAMT_CHKN_BIT_REG,
GAMT_CHKN_DISABLE_L3_COH_PIPE);
+ /*
+ * Wa_1408615072:icl,ehl (vsunit)
+ * Wa_1407596294:icl,ehl (hsunit)
+ */
+ wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE,
+ VSUNIT_CLKGATE_DIS | HSUNIT_CLKGATE_DIS);
+
+ /* Wa_1407352427:icl,ehl */
+ wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2,
+ PSDUNIT_CLKGATE_DIS);
+
+ /* Wa_1406680159:icl,ehl */
+ wa_write_or(wal,
+ SUBSLICE_UNIT_LEVEL_CLKGATE,
+ GWUNIT_CLKGATE_DIS);
+
/* Wa_1607087056:icl,ehl,jsl */
if (IS_ICELAKE(i915) ||
IS_JSL_EHL_GT_STEP(i915, STEP_A0, STEP_B0))
@@ -1739,22 +1755,6 @@ rcs_engine_wa_init(struct intel_engine_cs *engine, struct i915_wa_list *wal)
GEN11_ENABLE_32_PLANE_MODE);
/*
- * Wa_1408615072:icl,ehl (vsunit)
- * Wa_1407596294:icl,ehl (hsunit)
- */
- wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE,
- VSUNIT_CLKGATE_DIS | HSUNIT_CLKGATE_DIS);
-
- /* Wa_1407352427:icl,ehl */
- wa_write_or(wal, UNSLICE_UNIT_LEVEL_CLKGATE2,
- PSDUNIT_CLKGATE_DIS);
-
- /* Wa_1406680159:icl,ehl */
- wa_write_or(wal,
- SUBSLICE_UNIT_LEVEL_CLKGATE,
- GWUNIT_CLKGATE_DIS);
-
- /*
* Wa_1408767742:icl[a2..forever],ehl[all]
* Wa_1605460711:icl[a0..c0]
*/
diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index 6e09a1c..97b5ba2 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -2845,6 +2845,8 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)
return;
xa_for_each(&guc->context_lookup, index, ce) {
+ bool found;
+
if (!intel_context_is_pinned(ce))
continue;
@@ -2856,10 +2858,18 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine)
continue;
}
+ found = false;
+ spin_lock(&ce->guc_state.lock);
list_for_each_entry(rq, &ce->guc_active.requests, sched.link) {
if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE)
continue;
+ found = true;
+ break;
+ }
+ spin_unlock(&ce->guc_state.lock);
+
+ if (found) {
intel_engine_set_hung_context(engine, ce);
/* Can only cope with one hang at a time... */
diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
index e5c2fdf..7ea7abe 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.c
+++ b/drivers/gpu/drm/i915/gvt/gtt.c
@@ -1195,10 +1195,8 @@ static int split_2MB_gtt_entry(struct intel_vgpu *vgpu,
for_each_shadow_entry(sub_spt, &sub_se, sub_index) {
ret = intel_gvt_hypervisor_dma_map_guest_page(vgpu,
start_gfn + sub_index, PAGE_SIZE, &dma_addr);
- if (ret) {
- ppgtt_invalidate_spt(spt);
- return ret;
- }
+ if (ret)
+ goto err;
sub_se.val64 = se->val64;
/* Copy the PAT field from PDE. */
@@ -1217,6 +1215,17 @@ static int split_2MB_gtt_entry(struct intel_vgpu *vgpu,
ops->set_pfn(se, sub_spt->shadow_page.mfn);
ppgtt_set_shadow_entry(spt, se, index);
return 0;
+err:
+ /* Cancel the existing address mappings of DMA addr. */
+ for_each_present_shadow_entry(sub_spt, &sub_se, sub_index) {
+ gvt_vdbg_mm("invalidate 4K entry\n");
+ ppgtt_invalidate_pte(sub_spt, &sub_se);
+ }
+ /* Release the new allocated spt. */
+ trace_spt_change(sub_spt->vgpu->id, "release", sub_spt,
+ sub_spt->guest_page.gfn, sub_spt->shadow_page.type);
+ ppgtt_free_spt(sub_spt);
+ return ret;
}
static int split_64KB_gtt_entry(struct intel_vgpu *vgpu,
diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915_active.c
index 3103c1e..2f640b9 100644
--- a/drivers/gpu/drm/i915/i915_active.c
+++ b/drivers/gpu/drm/i915/i915_active.c
@@ -92,8 +92,7 @@ static void debug_active_init(struct i915_active *ref)
static void debug_active_activate(struct i915_active *ref)
{
lockdep_assert_held(&ref->tree_lock);
- if (!atomic_read(&ref->count)) /* before the first inc */
- debug_object_activate(ref, &active_debug_desc);
+ debug_object_activate(ref, &active_debug_desc);
}
static void debug_active_deactivate(struct i915_active *ref)
@@ -422,8 +421,7 @@ replace_barrier(struct i915_active *ref, struct i915_active_fence *active)
* we can use it to substitute for the pending idle-barrer
* request that we want to emit on the kernel_context.
*/
- __active_del_barrier(ref, node_from_active(active));
- return true;
+ return __active_del_barrier(ref, node_from_active(active));
}
int i915_active_ref(struct i915_active *ref, u64 idx, struct dma_fence *fence)
@@ -436,16 +434,19 @@ int i915_active_ref(struct i915_active *ref, u64 idx, struct dma_fence *fence)
if (err)
return err;
- active = active_instance(ref, idx);
- if (!active) {
- err = -ENOMEM;
- goto out;
- }
+ do {
+ active = active_instance(ref, idx);
+ if (!active) {
+ err = -ENOMEM;
+ goto out;
+ }
- if (replace_barrier(ref, active)) {
- RCU_INIT_POINTER(active->fence, NULL);
- atomic_dec(&ref->count);
- }
+ if (replace_barrier(ref, active)) {
+ RCU_INIT_POINTER(active->fence, NULL);
+ atomic_dec(&ref->count);
+ }
+ } while (unlikely(is_barrier(active)));
+
if (!__i915_active_fence_set(active, fence))
__i915_active_acquire(ref);
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 59fb4c7..41094d5 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -986,12 +986,9 @@ static int i915_driver_open(struct drm_device *dev, struct drm_file *file)
*/
static void i915_driver_lastclose(struct drm_device *dev)
{
- struct drm_i915_private *i915 = to_i915(dev);
-
intel_fbdev_restore_mode(dev);
- if (HAS_DISPLAY(i915))
- vga_switcheroo_process_delayed_switch();
+ vga_switcheroo_process_delayed_switch();
}
static void i915_driver_postclose(struct drm_device *dev, struct drm_file *file)
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index 1ad7259..0bba1c5 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -401,7 +401,8 @@ static const struct intel_device_info ilk_m_info = {
.has_coherent_ggtt = true, \
.has_llc = 1, \
.has_rc6 = 1, \
- .has_rc6p = 1, \
+ /* snb does support rc6p, but enabling it causes various issues */ \
+ .has_rc6p = 0, \
.has_rps = true, \
.dma_mask_size = 40, \
.ppgtt_type = INTEL_PPGTT_ALIASING, \
diff --git a/drivers/gpu/drm/i915/i915_switcheroo.c b/drivers/gpu/drm/i915/i915_switcheroo.c
index de0e224b..f1ce9f5 100644
--- a/drivers/gpu/drm/i915/i915_switcheroo.c
+++ b/drivers/gpu/drm/i915/i915_switcheroo.c
@@ -18,6 +18,10 @@ static void i915_switcheroo_set_state(struct pci_dev *pdev,
dev_err(&pdev->dev, "DRM not initialized, aborting switch.\n");
return;
}
+ if (!HAS_DISPLAY(i915)) {
+ dev_err(&pdev->dev, "Device state not initialized, aborting switch.\n");
+ return;
+ }
if (state == VGA_SWITCHEROO_ON) {
drm_info(&i915->drm, "switched on\n");
@@ -43,7 +47,7 @@ static bool i915_switcheroo_can_switch(struct pci_dev *pdev)
* locking inversion with the driver load path. And the access here is
* completely racy anyway. So don't bother with locking for now.
*/
- return i915 && atomic_read(&i915->drm.open_count) == 0;
+ return i915 && HAS_DISPLAY(i915) && atomic_read(&i915->drm.open_count) == 0;
}
static const struct vga_switcheroo_client_ops i915_switcheroo_ops = {
diff --git a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c
index 4b32834..83ffd17 100644
--- a/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c
+++ b/drivers/gpu/drm/i915/selftests/intel_scheduler_helpers.c
@@ -16,8 +16,7 @@
int intel_selftest_modify_policy(struct intel_engine_cs *engine,
struct intel_selftest_saved_policy *saved,
- u32 modify_type)
-
+ enum selftest_scheduler_modify modify_type)
{
int err;
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
index 34bb6c7..6497c9f 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_crtc.c
@@ -889,6 +889,8 @@ int mtk_drm_crtc_create(struct drm_device *drm_dev,
mtk_crtc->planes = devm_kcalloc(dev, num_comp_planes,
sizeof(struct drm_plane), GFP_KERNEL);
+ if (!mtk_crtc->planes)
+ return -ENOMEM;
for (i = 0; i < mtk_crtc->ddp_comp_nr; i++) {
ret = mtk_drm_crtc_init_comp_planes(drm_dev, mtk_crtc, i,
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
index aec3972..8b3928c 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
@@ -376,6 +376,7 @@ static int mtk_drm_bind(struct device *dev)
err_deinit:
mtk_drm_kms_deinit(drm);
err_free:
+ private->drm = NULL;
drm_dev_put(drm);
return ret;
}
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
index d054496..726a34c 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -164,8 +164,6 @@ static int mtk_drm_gem_object_mmap(struct drm_gem_object *obj,
ret = dma_mmap_attrs(priv->dma_dev, vma, mtk_gem->cookie,
mtk_gem->dma_addr, obj->size, mtk_gem->dma_attrs);
- if (ret)
- drm_gem_vm_close(vma);
return ret;
}
@@ -261,6 +259,6 @@ void mtk_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *ma
return;
vunmap(vaddr);
- mtk_gem->kvaddr = 0;
+ mtk_gem->kvaddr = NULL;
kfree(mtk_gem->pages);
}
diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
index a6d2853..98b1204 100644
--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
+++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
@@ -709,7 +709,7 @@ static void mtk_dsi_lane_ready(struct mtk_dsi *dsi)
mtk_dsi_clk_ulp_mode_leave(dsi);
mtk_dsi_lane0_ulp_mode_leave(dsi);
mtk_dsi_clk_hs_mode(dsi, 0);
- msleep(20);
+ usleep_range(1000, 3000);
/* The reaction time after pulling up the mipi signal for dsi_rx */
}
}
diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
index 6e37de4f..207b309 100644
--- a/drivers/gpu/drm/meson/meson_drv.c
+++ b/drivers/gpu/drm/meson/meson_drv.c
@@ -324,23 +324,23 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
ret = meson_encoder_hdmi_init(priv);
if (ret)
- goto exit_afbcd;
+ goto unbind_all;
ret = meson_plane_create(priv);
if (ret)
- goto exit_afbcd;
+ goto unbind_all;
ret = meson_overlay_create(priv);
if (ret)
- goto exit_afbcd;
+ goto unbind_all;
ret = meson_crtc_create(priv);
if (ret)
- goto exit_afbcd;
+ goto unbind_all;
ret = request_irq(priv->vsync_irq, meson_irq, 0, drm->driver->name, drm);
if (ret)
- goto exit_afbcd;
+ goto unbind_all;
drm_mode_config_reset(drm);
@@ -358,6 +358,9 @@ static int meson_drv_bind_master(struct device *dev, bool has_components)
uninstall_irq:
free_irq(priv->vsync_irq, drm);
+unbind_all:
+ if (has_components)
+ component_unbind_all(drm->dev, drm);
exit_afbcd:
if (priv->afbcd.ops)
priv->afbcd.ops->exit(priv);
diff --git a/drivers/gpu/drm/meson/meson_vpp.c b/drivers/gpu/drm/meson/meson_vpp.c
index 1548376..5df1957 100644
--- a/drivers/gpu/drm/meson/meson_vpp.c
+++ b/drivers/gpu/drm/meson/meson_vpp.c
@@ -100,6 +100,8 @@ void meson_vpp_init(struct meson_drm *priv)
priv->io_base + _REG(VPP_DOLBY_CTRL));
writel_relaxed(0x1020080,
priv->io_base + _REG(VPP_DUMMY_DATA1));
+ writel_relaxed(0x42020,
+ priv->io_base + _REG(VPP_DUMMY_DATA));
} else if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A))
writel_relaxed(0xf, priv->io_base + _REG(DOLBY_PATH_CTRL));
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index 5e2750e..b8c49ba 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -153,8 +153,8 @@ static void a5xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
OUT_RING(ring, 1);
/* Enable local preemption for finegrain preemption */
- OUT_PKT7(ring, CP_PREEMPT_ENABLE_GLOBAL, 1);
- OUT_RING(ring, 0x02);
+ OUT_PKT7(ring, CP_PREEMPT_ENABLE_LOCAL, 1);
+ OUT_RING(ring, 0x1);
/* Allow CP_CONTEXT_SWITCH_YIELD packets in the IB2 */
OUT_PKT7(ring, CP_YIELD_ENABLE, 1);
@@ -801,7 +801,7 @@ static int a5xx_hw_init(struct msm_gpu *gpu)
gpu_write(gpu, REG_A5XX_RBBM_AHB_CNTL2, 0x0000003F);
/* Set the highest bank bit */
- if (adreno_is_a540(adreno_gpu))
+ if (adreno_is_a540(adreno_gpu) || adreno_is_a530(adreno_gpu))
regbit = 2;
else
regbit = 1;
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index 8abc9a2..e0eef47 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -63,7 +63,7 @@ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
struct msm_ringbuffer *ring = gpu->rb[i];
spin_lock_irqsave(&ring->preempt_lock, flags);
- empty = (get_wptr(ring) == ring->memptrs->rptr);
+ empty = (get_wptr(ring) == gpu->funcs->get_rptr(gpu, ring));
spin_unlock_irqrestore(&ring->preempt_lock, flags);
if (!empty)
@@ -208,6 +208,7 @@ void a5xx_preempt_hw_init(struct msm_gpu *gpu)
a5xx_gpu->preempt[i]->wptr = 0;
a5xx_gpu->preempt[i]->rptr = 0;
a5xx_gpu->preempt[i]->rbase = gpu->rb[i]->iova;
+ a5xx_gpu->preempt[i]->rptr_addr = shadowptr(a5xx_gpu, gpu->rb[i]);
}
/* Write a 0 to signal that we aren't switching pagetables */
@@ -259,7 +260,6 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu,
ptr->data = 0;
ptr->cntl = MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE;
- ptr->rptr_addr = shadowptr(a5xx_gpu, ring);
ptr->counter = counters_iova;
return 0;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index bba68776..3fa0193 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -952,13 +952,13 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
{
struct msm_gpu *gpu = &adreno_gpu->base;
- struct msm_drm_private *priv = gpu->dev->dev_private;
+ struct msm_drm_private *priv = gpu->dev ? gpu->dev->dev_private : NULL;
unsigned int i;
for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
release_firmware(adreno_gpu->fw[i]);
- if (pm_runtime_enabled(&priv->gpu_pdev->dev))
+ if (priv && pm_runtime_enabled(&priv->gpu_pdev->dev))
pm_runtime_disable(&priv->gpu_pdev->dev);
msm_gpu_cleanup(&adreno_gpu->base);
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
index 2186fc9..4194689 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
@@ -770,7 +770,10 @@ static void dpu_crtc_reset(struct drm_crtc *crtc)
if (crtc->state)
dpu_crtc_destroy_state(crtc, crtc->state);
- __drm_atomic_helper_crtc_reset(crtc, &cstate->base);
+ if (cstate)
+ __drm_atomic_helper_crtc_reset(crtc, &cstate->base);
+ else
+ __drm_atomic_helper_crtc_reset(crtc, NULL);
}
/**
@@ -938,6 +941,8 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
bool needs_dirtyfb = dpu_crtc_needs_dirtyfb(crtc_state);
pstates = kzalloc(sizeof(*pstates) * DPU_STAGE_MAX * 4, GFP_KERNEL);
+ if (!pstates)
+ return -ENOMEM;
if (!crtc_state->enable || !crtc_state->active) {
DRM_DEBUG_ATOMIC("crtc%d -> enable %d, active %d, skip atomic_check\n",
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
index 700d65e..4c65259 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_catalog.c
@@ -375,19 +375,19 @@ static const struct dpu_ctl_cfg sdm845_ctl[] = {
static const struct dpu_ctl_cfg sc7180_ctl[] = {
{
.name = "ctl_0", .id = CTL_0,
- .base = 0x1000, .len = 0xE4,
+ .base = 0x1000, .len = 0x1dc,
.features = BIT(DPU_CTL_ACTIVE_CFG),
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 9),
},
{
.name = "ctl_1", .id = CTL_1,
- .base = 0x1200, .len = 0xE4,
+ .base = 0x1200, .len = 0x1dc,
.features = BIT(DPU_CTL_ACTIVE_CFG),
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 10),
},
{
.name = "ctl_2", .id = CTL_2,
- .base = 0x1400, .len = 0xE4,
+ .base = 0x1400, .len = 0x1dc,
.features = BIT(DPU_CTL_ACTIVE_CFG),
.intr_start = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR2, 11),
},
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
index e32fe89..59390dc 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c
@@ -1089,7 +1089,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
struct dpu_plane_state *pstate = to_dpu_plane_state(state);
struct drm_crtc *crtc = state->crtc;
struct drm_framebuffer *fb = state->fb;
- bool is_rt_pipe, update_qos_remap;
+ bool is_rt_pipe;
const struct dpu_format *fmt =
to_dpu_format(msm_framebuffer_format(fb));
@@ -1100,6 +1100,9 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
pstate->pending = true;
is_rt_pipe = (dpu_crtc_get_client_type(crtc) != NRT_CLIENT);
+ pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe);
+ pdpu->is_rt_pipe = is_rt_pipe;
+
_dpu_plane_set_qos_ctrl(plane, false, DPU_PLANE_QOS_PANIC_CTRL);
DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT
@@ -1205,14 +1208,8 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane)
_dpu_plane_set_ot_limit(plane, crtc);
}
- update_qos_remap = (is_rt_pipe != pdpu->is_rt_pipe) ||
- pstate->needs_qos_remap;
-
- if (update_qos_remap) {
- if (is_rt_pipe != pdpu->is_rt_pipe)
- pdpu->is_rt_pipe = is_rt_pipe;
- else if (pstate->needs_qos_remap)
- pstate->needs_qos_remap = false;
+ if (pstate->needs_qos_remap) {
+ pstate->needs_qos_remap = false;
_dpu_plane_set_qos_remap(plane);
}
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
index 24fbaf5..932275b 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
@@ -663,6 +663,11 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm,
blks_size, enc_id);
break;
}
+ if (!hw_blks[i]) {
+ DPU_ERROR("Allocated resource %d unavailable to assign to enc %d\n",
+ type, enc_id);
+ break;
+ }
blks[num_blks++] = hw_blks[i];
}
diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
index 31447da..2b15f10 100644
--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
+++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
@@ -1138,7 +1138,10 @@ static void mdp5_crtc_reset(struct drm_crtc *crtc)
if (crtc->state)
mdp5_crtc_destroy_state(crtc, crtc->state);
- __drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
+ if (mdp5_cstate)
+ __drm_atomic_helper_crtc_reset(crtc, &mdp5_cstate->base);
+ else
+ __drm_atomic_helper_crtc_reset(crtc, NULL);
}
static const struct drm_crtc_funcs mdp5_crtc_no_lm_cursor_funcs = {
diff --git a/drivers/gpu/drm/msm/dsi/dsi_cfg.c b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
index ce39014..68a3f8f 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_cfg.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_cfg.c
@@ -209,8 +209,8 @@ static const struct msm_dsi_config sc7280_dsi_cfg = {
},
.bus_clk_names = dsi_sc7280_bus_clk_names,
.num_bus_clks = ARRAY_SIZE(dsi_sc7280_bus_clk_names),
- .io_start = { 0xae94000 },
- .num_dsi = 1,
+ .io_start = { 0xae94000, 0xae96000 },
+ .num_dsi = 2,
};
static const struct msm_dsi_host_cfg_ops msm_dsi_v2_host_ops = {
diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
index eb60ce1..d3ec4d6 100644
--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
+++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
@@ -1929,6 +1929,9 @@ int msm_dsi_host_init(struct msm_dsi *msm_dsi)
/* setup workqueue */
msm_host->workqueue = alloc_ordered_workqueue("dsi_drm_work", 0);
+ if (!msm_host->workqueue)
+ return -ENOMEM;
+
INIT_WORK(&msm_host->err_work, dsi_err_worker);
INIT_WORK(&msm_host->hpd_work, dsi_hpd_worker);
diff --git a/drivers/gpu/drm/msm/hdmi/hdmi.c b/drivers/gpu/drm/msm/hdmi/hdmi.c
index e1a9b52..2c944419 100644
--- a/drivers/gpu/drm/msm/hdmi/hdmi.c
+++ b/drivers/gpu/drm/msm/hdmi/hdmi.c
@@ -264,6 +264,10 @@ static struct hdmi *msm_hdmi_init(struct platform_device *pdev)
devm_pm_runtime_enable(&pdev->dev);
hdmi->workq = alloc_ordered_workqueue("msm_hdmi", 0);
+ if (!hdmi->workq) {
+ ret = -ENOMEM;
+ goto fail;
+ }
hdmi->i2c = msm_hdmi_i2c_init(hdmi);
if (IS_ERR(hdmi->i2c)) {
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 916361c..6c4d5194 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -609,7 +609,7 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv)
if (IS_ERR(priv->event_thread[i].worker)) {
ret = PTR_ERR(priv->event_thread[i].worker);
DRM_DEV_ERROR(dev, "failed to create crtc_event kthread\n");
- ret = PTR_ERR(priv->event_thread[i].worker);
+ priv->event_thread[i].worker = NULL;
goto err_msm_uninit;
}
diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c
index f2cece5..7643967 100644
--- a/drivers/gpu/drm/msm/msm_fence.c
+++ b/drivers/gpu/drm/msm/msm_fence.c
@@ -21,7 +21,7 @@ msm_fence_context_alloc(struct drm_device *dev, volatile uint32_t *fenceptr,
return ERR_PTR(-ENOMEM);
fctx->dev = dev;
- strncpy(fctx->name, name, sizeof(fctx->name));
+ strscpy(fctx->name, name, sizeof(fctx->name));
fctx->context = dma_fence_context_alloc(1);
fctx->fenceptr = fenceptr;
spin_lock_init(&fctx->spinlock);
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 1f74bab..fc2fb10 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -220,6 +220,10 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
goto out;
}
submit->cmd[i].relocs = kmalloc(sz, GFP_KERNEL);
+ if (!submit->cmd[i].relocs) {
+ ret = -ENOMEM;
+ goto out;
+ }
ret = copy_from_user(submit->cmd[i].relocs, userptr, sz);
if (ret) {
ret = -EFAULT;
@@ -636,8 +640,8 @@ static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
int ret = 0;
uint32_t i, j;
- post_deps = kmalloc_array(nr_syncobjs, sizeof(*post_deps),
- GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
+ post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps),
+ GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
if (!post_deps)
return ERR_PTR(-ENOMEM);
@@ -652,7 +656,6 @@ static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev,
}
post_deps[i].point = syncobj_desc.point;
- post_deps[i].chain = NULL;
if (syncobj_desc.flags) {
ret = -EINVAL;
diff --git a/drivers/gpu/drm/mxsfb/Kconfig b/drivers/gpu/drm/mxsfb/Kconfig
index ee22cd2..e7201e1 100644
--- a/drivers/gpu/drm/mxsfb/Kconfig
+++ b/drivers/gpu/drm/mxsfb/Kconfig
@@ -8,6 +8,7 @@
tristate "i.MX (e)LCDIF LCD controller"
depends on DRM && OF
depends on COMMON_CLK
+ depends on ARCH_MXS || ARCH_MXC || COMPILE_TEST
select DRM_MXS
select DRM_KMS_HELPER
select DRM_KMS_CMA_HELPER
diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouveau/dispnv50/disp.c
index d7b9f7f..73e24e0 100644
--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
+++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
@@ -411,6 +411,35 @@ nv50_outp_atomic_check_view(struct drm_encoder *encoder,
return 0;
}
+static void
+nv50_outp_atomic_fix_depth(struct drm_encoder *encoder, struct drm_crtc_state *crtc_state)
+{
+ struct nv50_head_atom *asyh = nv50_head_atom(crtc_state);
+ struct nouveau_encoder *nv_encoder = nouveau_encoder(encoder);
+ struct drm_display_mode *mode = &asyh->state.adjusted_mode;
+ unsigned int max_rate, mode_rate;
+
+ switch (nv_encoder->dcb->type) {
+ case DCB_OUTPUT_DP:
+ max_rate = nv_encoder->dp.link_nr * nv_encoder->dp.link_bw;
+
+ /* we don't support more than 10 anyway */
+ asyh->or.bpc = min_t(u8, asyh->or.bpc, 10);
+
+ /* reduce the bpc until it works out */
+ while (asyh->or.bpc > 6) {
+ mode_rate = DIV_ROUND_UP(mode->clock * asyh->or.bpc * 3, 8);
+ if (mode_rate <= max_rate)
+ break;
+
+ asyh->or.bpc -= 2;
+ }
+ break;
+ default:
+ break;
+ }
+}
+
static int
nv50_outp_atomic_check(struct drm_encoder *encoder,
struct drm_crtc_state *crtc_state,
@@ -429,6 +458,9 @@ nv50_outp_atomic_check(struct drm_encoder *encoder,
if (crtc_state->mode_changed || crtc_state->connectors_changed)
asyh->or.bpc = connector->display_info.bpc;
+ /* We might have to reduce the bpc */
+ nv50_outp_atomic_fix_depth(encoder, crtc_state);
+
return 0;
}
@@ -2622,14 +2654,6 @@ nv50_display_fini(struct drm_device *dev, bool runtime, bool suspend)
{
struct nouveau_drm *drm = nouveau_drm(dev);
struct drm_encoder *encoder;
- struct drm_plane *plane;
-
- drm_for_each_plane(plane, dev) {
- struct nv50_wndw *wndw = nv50_wndw(plane);
- if (plane->funcs != &nv50_wndw)
- continue;
- nv50_wndw_fini(wndw);
- }
list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST)
@@ -2645,7 +2669,6 @@ nv50_display_init(struct drm_device *dev, bool resume, bool runtime)
{
struct nv50_core *core = nv50_disp(dev)->core;
struct drm_encoder *encoder;
- struct drm_plane *plane;
if (resume || runtime)
core->func->init(core);
@@ -2658,13 +2681,6 @@ nv50_display_init(struct drm_device *dev, bool resume, bool runtime)
}
}
- drm_for_each_plane(plane, dev) {
- struct nv50_wndw *wndw = nv50_wndw(plane);
- if (plane->funcs != &nv50_wndw)
- continue;
- nv50_wndw_init(wndw);
- }
-
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
index 8d048ba..e1e6267 100644
--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
@@ -694,18 +694,6 @@ nv50_wndw_notify(struct nvif_notify *notify)
return NVIF_NOTIFY_KEEP;
}
-void
-nv50_wndw_fini(struct nv50_wndw *wndw)
-{
- nvif_notify_put(&wndw->notify);
-}
-
-void
-nv50_wndw_init(struct nv50_wndw *wndw)
-{
- nvif_notify_get(&wndw->notify);
-}
-
static const u64 nv50_cursor_format_modifiers[] = {
DRM_FORMAT_MOD_LINEAR,
DRM_FORMAT_MOD_INVALID,
diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.h b/drivers/gpu/drm/nouveau/dispnv50/wndw.h
index f4e0c508..6c64864 100644
--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.h
+++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.h
@@ -38,10 +38,9 @@ struct nv50_wndw {
int nv50_wndw_new_(const struct nv50_wndw_func *, struct drm_device *,
enum drm_plane_type, const char *name, int index,
- const u32 *format, enum nv50_disp_interlock_type,
- u32 interlock_data, u32 heads, struct nv50_wndw **);
-void nv50_wndw_init(struct nv50_wndw *);
-void nv50_wndw_fini(struct nv50_wndw *);
+ const u32 *format, u32 heads,
+ enum nv50_disp_interlock_type, u32 interlock_data,
+ struct nv50_wndw **);
void nv50_wndw_flush_set(struct nv50_wndw *, u32 *interlock,
struct nv50_wndw_atom *);
void nv50_wndw_flush_clr(struct nv50_wndw *, u32 *interlock, bool flush,
diff --git a/drivers/gpu/drm/nouveau/nouveau_dp.c b/drivers/gpu/drm/nouveau/nouveau_dp.c
index 040ed88..447b759 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dp.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dp.c
@@ -220,8 +220,6 @@ void nouveau_dp_irq(struct nouveau_drm *drm,
}
/* TODO:
- * - Use the minimum possible BPC here, once we add support for the max bpc
- * property.
* - Validate against the DP caps advertised by the GPU (we don't check these
* yet)
*/
@@ -233,7 +231,11 @@ nv50_dp_mode_valid(struct drm_connector *connector,
{
const unsigned int min_clock = 25000;
unsigned int max_rate, mode_rate, ds_max_dotclock, clock = mode->clock;
- const u8 bpp = connector->display_info.bpc * 3;
+ /* Check with the minimum bpc always, so we can advertise better modes.
+ * In particular, not doing this causes modes to be dropped on HDR
+ * displays, as we might otherwise check with a bpc as high as 16.
+ */
+ const u8 bpp = 6 * 3;
if (mode->flags & DRM_MODE_FLAG_INTERLACE && !outp->caps.dp_interlace)
return MODE_NO_INTERLACE;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c
index 634f64f..81a1ad2 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c
@@ -65,10 +65,33 @@ tu102_devinit_pll_set(struct nvkm_devinit *init, u32 type, u32 freq)
return ret;
}
+static int
+tu102_devinit_wait(struct nvkm_device *device)
+{
+ unsigned timeout = 50 + 2000;
+
+ do {
+ if (nvkm_rd32(device, 0x118128) & 0x00000001) {
+ if ((nvkm_rd32(device, 0x118234) & 0x000000ff) == 0xff)
+ return 0;
+ }
+
+ usleep_range(1000, 2000);
+ } while (timeout--);
+
+ return -ETIMEDOUT;
+}
+
int
tu102_devinit_post(struct nvkm_devinit *base, bool post)
{
struct nv50_devinit *init = nv50_devinit(base);
+ int ret;
+
+ ret = tu102_devinit_wait(init->base.subdev.device);
+ if (ret)
+ return ret;
+
gm200_devinit_preos(init, post);
return 0;
}
diff --git a/drivers/gpu/drm/omapdrm/dss/dsi.c b/drivers/gpu/drm/omapdrm/dss/dsi.c
index 5f1722b..41da86c 100644
--- a/drivers/gpu/drm/omapdrm/dss/dsi.c
+++ b/drivers/gpu/drm/omapdrm/dss/dsi.c
@@ -1039,22 +1039,26 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
{
struct dsi_data *dsi = s->private;
unsigned long flags;
- struct dsi_irq_stats stats;
+ struct dsi_irq_stats *stats;
+
+ stats = kmalloc(sizeof(*stats), GFP_KERNEL);
+ if (!stats)
+ return -ENOMEM;
spin_lock_irqsave(&dsi->irq_stats_lock, flags);
- stats = dsi->irq_stats;
+ *stats = dsi->irq_stats;
memset(&dsi->irq_stats, 0, sizeof(dsi->irq_stats));
dsi->irq_stats.last_reset = jiffies;
spin_unlock_irqrestore(&dsi->irq_stats_lock, flags);
seq_printf(s, "period %u ms\n",
- jiffies_to_msecs(jiffies - stats.last_reset));
+ jiffies_to_msecs(jiffies - stats->last_reset));
- seq_printf(s, "irqs %d\n", stats.irq_count);
+ seq_printf(s, "irqs %d\n", stats->irq_count);
#define PIS(x) \
- seq_printf(s, "%-20s %10d\n", #x, stats.dsi_irqs[ffs(DSI_IRQ_##x)-1]);
+ seq_printf(s, "%-20s %10d\n", #x, stats->dsi_irqs[ffs(DSI_IRQ_##x)-1]);
seq_printf(s, "-- DSI%d interrupts --\n", dsi->module_id + 1);
PIS(VC0);
@@ -1078,10 +1082,10 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
#define PIS(x) \
seq_printf(s, "%-20s %10d %10d %10d %10d\n", #x, \
- stats.vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
- stats.vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
- stats.vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
- stats.vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
+ stats->vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
+ stats->vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
+ stats->vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
+ stats->vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
seq_printf(s, "-- VC interrupts --\n");
PIS(CS);
@@ -1097,7 +1101,7 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
#define PIS(x) \
seq_printf(s, "%-20s %10d\n", #x, \
- stats.cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
+ stats->cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
seq_printf(s, "-- CIO interrupts --\n");
PIS(ERRSYNCESC1);
@@ -1122,6 +1126,8 @@ static int dsi_dump_dsi_irqs(struct seq_file *s, void *p)
PIS(ULPSACTIVENOT_ALL1);
#undef PIS
+ kfree(stats);
+
return 0;
}
#endif
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c b/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
index 0ab1b7e..166d7d4 100644
--- a/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
+++ b/drivers/gpu/drm/panel/panel-samsung-s6e3ha2.c
@@ -692,7 +692,9 @@ static int s6e3ha2_probe(struct mipi_dsi_device *dsi)
dsi->lanes = 4;
dsi->format = MIPI_DSI_FMT_RGB888;
- dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS;
+ dsi->mode_flags = MIPI_DSI_CLOCK_NON_CONTINUOUS |
+ MIPI_DSI_MODE_VIDEO_NO_HFP | MIPI_DSI_MODE_VIDEO_NO_HBP |
+ MIPI_DSI_MODE_VIDEO_NO_HSA | MIPI_DSI_MODE_NO_EOT_PACKET;
ctx->supplies[0].supply = "vdd3";
ctx->supplies[1].supply = "vci";
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c b/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
index ccc8ed6..2fc46fd 100644
--- a/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
+++ b/drivers/gpu/drm/panel/panel-samsung-s6e63j0x03.c
@@ -446,7 +446,8 @@ static int s6e63j0x03_probe(struct mipi_dsi_device *dsi)
dsi->lanes = 1;
dsi->format = MIPI_DSI_FMT_RGB888;
- dsi->mode_flags = MIPI_DSI_MODE_NO_EOT_PACKET;
+ dsi->mode_flags = MIPI_DSI_MODE_VIDEO_NO_HFP |
+ MIPI_DSI_MODE_VIDEO_NO_HBP | MIPI_DSI_MODE_VIDEO_NO_HSA;
ctx->supplies[0].supply = "vdd3";
ctx->supplies[1].supply = "vci";
diff --git a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
index 9b3599d..737b8ca 100644
--- a/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
+++ b/drivers/gpu/drm/panel/panel-samsung-s6e8aa0.c
@@ -990,8 +990,6 @@ static int s6e8aa0_probe(struct mipi_dsi_device *dsi)
dsi->lanes = 4;
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST
- | MIPI_DSI_MODE_VIDEO_NO_HFP | MIPI_DSI_MODE_VIDEO_NO_HBP
- | MIPI_DSI_MODE_VIDEO_NO_HSA | MIPI_DSI_MODE_NO_EOT_PACKET
| MIPI_DSI_MODE_VSYNC_FLUSH | MIPI_DSI_MODE_VIDEO_AUTO_VERT;
ret = s6e8aa0_parse_dt(ctx);
diff --git a/drivers/gpu/drm/panfrost/Kconfig b/drivers/gpu/drm/panfrost/Kconfig
index 86cdc0c..77f4d32 100644
--- a/drivers/gpu/drm/panfrost/Kconfig
+++ b/drivers/gpu/drm/panfrost/Kconfig
@@ -3,7 +3,8 @@
config DRM_PANFROST
tristate "Panfrost (DRM support for ARM Mali Midgard/Bifrost GPUs)"
depends on DRM
- depends on ARM || ARM64 || (COMPILE_TEST && !GENERIC_ATOMIC64)
+ depends on ARM || ARM64 || COMPILE_TEST
+ depends on !GENERIC_ATOMIC64 # for IOMMU_IO_PGTABLE_LPAE
depends on MMU
select DRM_SCHED
select IOMMU_SUPPORT
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index c3292a6..b5ee076 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -253,7 +253,7 @@ static void panfrost_mmu_flush_range(struct panfrost_device *pfdev,
if (pm_runtime_active(pfdev->dev))
mmu_hw_do_operation(pfdev, mmu, iova, size, AS_COMMAND_FLUSH_PT);
- pm_runtime_put_sync_autosuspend(pfdev->dev);
+ pm_runtime_put_autosuspend(pfdev->dev);
}
static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
@@ -469,6 +469,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
if (IS_ERR(pages[i])) {
mutex_unlock(&bo->base.pages_lock);
ret = PTR_ERR(pages[i]);
+ pages[i] = NULL;
goto err_pages;
}
}
diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
index 70bd84b..2b44911 100644
--- a/drivers/gpu/drm/radeon/atombios_encoders.c
+++ b/drivers/gpu/drm/radeon/atombios_encoders.c
@@ -2188,11 +2188,12 @@ int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx)
/*
* On DCE32 any encoder can drive any block so usually just use crtc id,
- * but Apple thinks different at least on iMac10,1, so there use linkb,
+ * but Apple thinks different at least on iMac10,1 and iMac11,2, so there use linkb,
* otherwise the internal eDP panel will stay dark.
*/
if (ASIC_IS_DCE32(rdev)) {
- if (dmi_match(DMI_PRODUCT_NAME, "iMac10,1"))
+ if (dmi_match(DMI_PRODUCT_NAME, "iMac10,1") ||
+ dmi_match(DMI_PRODUCT_NAME, "iMac11,2"))
enc_idx = (dig->linkb) ? 1 : 0;
else
enc_idx = radeon_crtc->crtc_id;
diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
index 92905eb..1c005e0 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -1022,6 +1022,7 @@ void radeon_atombios_fini(struct radeon_device *rdev)
{
if (rdev->mode_info.atom_context) {
kfree(rdev->mode_info.atom_context->scratch);
+ kfree(rdev->mode_info.atom_context->iio);
}
kfree(rdev->mode_info.atom_context);
rdev->mode_info.atom_context = NULL;
diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c
index 5b7061e..0cc64c9 100644
--- a/drivers/gpu/drm/sun4i/sun4i_drv.c
+++ b/drivers/gpu/drm/sun4i/sun4i_drv.c
@@ -94,12 +94,12 @@ static int sun4i_drv_bind(struct device *dev)
/* drm_vblank_init calls kcalloc, which can fail */
ret = drm_vblank_init(drm, drm->mode_config.num_crtc);
if (ret)
- goto cleanup_mode_config;
+ goto unbind_all;
/* Remove early framebuffers (ie. simplefb) */
ret = drm_aperture_remove_framebuffers(false, &sun4i_drv_driver);
if (ret)
- goto cleanup_mode_config;
+ goto unbind_all;
sun4i_framebuffer_init(drm);
@@ -118,6 +118,8 @@ static int sun4i_drv_bind(struct device *dev)
finish_poll:
drm_kms_helper_poll_fini(drm);
+unbind_all:
+ component_unbind_all(dev, NULL);
cleanup_mode_config:
drm_mode_config_cleanup(drm);
of_reserved_mem_device_release(dev);
diff --git a/drivers/gpu/drm/tegra/firewall.c b/drivers/gpu/drm/tegra/firewall.c
index 1824d2d..d53f890 100644
--- a/drivers/gpu/drm/tegra/firewall.c
+++ b/drivers/gpu/drm/tegra/firewall.c
@@ -97,6 +97,9 @@ static int fw_check_regs_imm(struct tegra_drm_firewall *fw, u32 offset)
{
bool is_addr;
+ if (!fw->client->ops->is_addr_reg)
+ return 0;
+
is_addr = fw->client->ops->is_addr_reg(fw->client->base.dev, fw->class,
offset);
if (is_addr)
diff --git a/drivers/gpu/drm/tidss/tidss_dispc.c b/drivers/gpu/drm/tidss/tidss_dispc.c
index 60b92df..f545176 100644
--- a/drivers/gpu/drm/tidss/tidss_dispc.c
+++ b/drivers/gpu/drm/tidss/tidss_dispc.c
@@ -1855,8 +1855,8 @@ static const struct {
{ DRM_FORMAT_XBGR4444, 0x21, },
{ DRM_FORMAT_RGBX4444, 0x22, },
- { DRM_FORMAT_ARGB1555, 0x25, },
- { DRM_FORMAT_ABGR1555, 0x26, },
+ { DRM_FORMAT_XRGB1555, 0x25, },
+ { DRM_FORMAT_XBGR1555, 0x26, },
{ DRM_FORMAT_XRGB8888, 0x27, },
{ DRM_FORMAT_XBGR8888, 0x28, },
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 4611ec4..2a81311 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -450,7 +450,7 @@ static void cirrus_pipe_update(struct drm_simple_display_pipe *pipe,
if (state->fb && cirrus->cpp != cirrus_cpp(state->fb))
cirrus_mode_set(cirrus, &crtc->mode, state->fb);
- if (drm_atomic_helper_damage_merged(old_state, state, &rect))
+ if (state->fb && drm_atomic_helper_damage_merged(old_state, state, &rect))
cirrus_fb_blit_rect(state->fb, &shadow_plane_state->data[0], &rect);
}
diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
index e9a63f4..e159dfb 100644
--- a/drivers/gpu/drm/tiny/ili9486.c
+++ b/drivers/gpu/drm/tiny/ili9486.c
@@ -43,6 +43,7 @@ static int waveshare_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
size_t num)
{
struct spi_device *spi = mipi->spi;
+ unsigned int bpw = 8;
void *data = par;
u32 speed_hz;
int i, ret;
@@ -56,8 +57,6 @@ static int waveshare_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
* The displays are Raspberry Pi HATs and connected to the 8-bit only
* SPI controller, so 16-bit command and parameters need byte swapping
* before being transferred as 8-bit on the big endian SPI bus.
- * Pixel data bytes have already been swapped before this function is
- * called.
*/
buf[0] = cpu_to_be16(*cmd);
gpiod_set_value_cansleep(mipi->dc, 0);
@@ -71,12 +70,18 @@ static int waveshare_command(struct mipi_dbi *mipi, u8 *cmd, u8 *par,
for (i = 0; i < num; i++)
buf[i] = cpu_to_be16(par[i]);
num *= 2;
- speed_hz = mipi_dbi_spi_cmd_max_speed(spi, num);
data = buf;
}
+ /*
+ * Check whether the pixel data bytes need to be swapped or not
+ */
+ if (*cmd == MIPI_DCS_WRITE_MEMORY_START && !mipi->swap_bytes)
+ bpw = 16;
+
gpiod_set_value_cansleep(mipi->dc, 1);
- ret = mipi_dbi_spi_transfer(spi, speed_hz, 8, data, num);
+ speed_hz = mipi_dbi_spi_cmd_max_speed(spi, num);
+ ret = mipi_dbi_spi_transfer(spi, speed_hz, bpw, data, num);
free:
kfree(buf);
diff --git a/drivers/gpu/drm/vc4/vc4_dpi.c b/drivers/gpu/drm/vc4/vc4_dpi.c
index a90f254..9c8a71d 100644
--- a/drivers/gpu/drm/vc4/vc4_dpi.c
+++ b/drivers/gpu/drm/vc4/vc4_dpi.c
@@ -148,35 +148,45 @@ static void vc4_dpi_encoder_enable(struct drm_encoder *encoder)
}
drm_connector_list_iter_end(&conn_iter);
- if (connector && connector->display_info.num_bus_formats) {
- u32 bus_format = connector->display_info.bus_formats[0];
+ if (connector) {
+ if (connector->display_info.num_bus_formats) {
+ u32 bus_format = connector->display_info.bus_formats[0];
- switch (bus_format) {
- case MEDIA_BUS_FMT_RGB888_1X24:
- dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB,
- DPI_FORMAT);
- break;
- case MEDIA_BUS_FMT_BGR888_1X24:
- dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB,
- DPI_FORMAT);
- dpi_c |= VC4_SET_FIELD(DPI_ORDER_BGR, DPI_ORDER);
- break;
- case MEDIA_BUS_FMT_RGB666_1X24_CPADHI:
- dpi_c |= VC4_SET_FIELD(DPI_FORMAT_18BIT_666_RGB_2,
- DPI_FORMAT);
- break;
- case MEDIA_BUS_FMT_RGB666_1X18:
- dpi_c |= VC4_SET_FIELD(DPI_FORMAT_18BIT_666_RGB_1,
- DPI_FORMAT);
- break;
- case MEDIA_BUS_FMT_RGB565_1X16:
- dpi_c |= VC4_SET_FIELD(DPI_FORMAT_16BIT_565_RGB_3,
- DPI_FORMAT);
- break;
- default:
- DRM_ERROR("Unknown media bus format %d\n", bus_format);
- break;
+ switch (bus_format) {
+ case MEDIA_BUS_FMT_RGB888_1X24:
+ dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB,
+ DPI_FORMAT);
+ break;
+ case MEDIA_BUS_FMT_BGR888_1X24:
+ dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB,
+ DPI_FORMAT);
+ dpi_c |= VC4_SET_FIELD(DPI_ORDER_BGR,
+ DPI_ORDER);
+ break;
+ case MEDIA_BUS_FMT_RGB666_1X24_CPADHI:
+ dpi_c |= VC4_SET_FIELD(DPI_FORMAT_18BIT_666_RGB_2,
+ DPI_FORMAT);
+ break;
+ case MEDIA_BUS_FMT_RGB666_1X18:
+ dpi_c |= VC4_SET_FIELD(DPI_FORMAT_18BIT_666_RGB_1,
+ DPI_FORMAT);
+ break;
+ case MEDIA_BUS_FMT_RGB565_1X16:
+ dpi_c |= VC4_SET_FIELD(DPI_FORMAT_16BIT_565_RGB_1,
+ DPI_FORMAT);
+ break;
+ default:
+ DRM_ERROR("Unknown media bus format %d\n",
+ bus_format);
+ break;
+ }
}
+
+ if (connector->display_info.bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE)
+ dpi_c |= DPI_PIXEL_CLK_INVERT;
+
+ if (connector->display_info.bus_flags & DRM_BUS_FLAG_DE_LOW)
+ dpi_c |= DPI_OUTPUT_ENABLE_INVERT;
} else {
/* Default to 24bit if no connector found. */
dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB, DPI_FORMAT);
diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
index 9b3e642..7a8353d 100644
--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
+++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
@@ -785,11 +785,12 @@ static void vc5_hdmi_set_timings(struct vc4_hdmi *vc4_hdmi,
VC4_SET_FIELD(mode->crtc_vdisplay, VC5_HDMI_VERTA_VAL));
u32 vertb = (VC4_SET_FIELD(mode->htotal >> (2 - pixel_rep),
VC5_HDMI_VERTB_VSPO) |
- VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end,
+ VC4_SET_FIELD(mode->crtc_vtotal - mode->crtc_vsync_end +
+ interlaced,
VC4_HDMI_VERTB_VBP));
u32 vertb_even = (VC4_SET_FIELD(0, VC5_HDMI_VERTB_VSPO) |
VC4_SET_FIELD(mode->crtc_vtotal -
- mode->crtc_vsync_end - interlaced,
+ mode->crtc_vsync_end,
VC4_HDMI_VERTB_VBP));
unsigned char gcp;
bool gcp_en;
@@ -1854,7 +1855,8 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi)
}
vc4_hdmi->cec_adap = cec_allocate_adapter(&vc4_hdmi_cec_adap_ops,
- vc4_hdmi, "vc4",
+ vc4_hdmi,
+ vc4_hdmi->variant->card_name,
CEC_CAP_DEFAULTS |
CEC_CAP_CONNECTOR_INFO, 1);
ret = PTR_ERR_OR_ZERO(vc4_hdmi->cec_adap);
diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
index 9d88bfb..3856ac2 100644
--- a/drivers/gpu/drm/vc4/vc4_hvs.c
+++ b/drivers/gpu/drm/vc4/vc4_hvs.c
@@ -718,6 +718,17 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
SCALER_DISPCTRL_DSPEISLUR(2) |
SCALER_DISPCTRL_SCLEIRQ);
+ /* Set AXI panic mode.
+ * VC4 panics when < 2 lines in FIFO.
+ * VC5 panics when less than 1 line in the FIFO.
+ */
+ dispctrl &= ~(SCALER_DISPCTRL_PANIC0_MASK |
+ SCALER_DISPCTRL_PANIC1_MASK |
+ SCALER_DISPCTRL_PANIC2_MASK);
+ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC0);
+ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC1);
+ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC2);
+
HVS_WRITE(SCALER_DISPCTRL, dispctrl);
ret = devm_request_irq(dev, platform_get_irq(pdev, 0),
diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
index 8574ace..4404059 100644
--- a/drivers/gpu/drm/vc4/vc4_plane.c
+++ b/drivers/gpu/drm/vc4/vc4_plane.c
@@ -72,11 +72,13 @@ static const struct hvs_format {
.drm = DRM_FORMAT_ARGB1555,
.hvs = HVS_PIXEL_FORMAT_RGBA5551,
.pixel_order = HVS_PIXEL_ORDER_ABGR,
+ .pixel_order_hvs5 = HVS_PIXEL_ORDER_ARGB,
},
{
.drm = DRM_FORMAT_XRGB1555,
.hvs = HVS_PIXEL_FORMAT_RGBA5551,
.pixel_order = HVS_PIXEL_ORDER_ABGR,
+ .pixel_order_hvs5 = HVS_PIXEL_ORDER_ARGB,
},
{
.drm = DRM_FORMAT_RGB888,
diff --git a/drivers/gpu/drm/vc4/vc4_regs.h b/drivers/gpu/drm/vc4/vc4_regs.h
index 8ac2f08..fe6d0e2 100644
--- a/drivers/gpu/drm/vc4/vc4_regs.h
+++ b/drivers/gpu/drm/vc4/vc4_regs.h
@@ -220,6 +220,12 @@
#define SCALER_DISPCTRL 0x00000000
/* Global register for clock gating the HVS */
# define SCALER_DISPCTRL_ENABLE BIT(31)
+# define SCALER_DISPCTRL_PANIC0_MASK VC4_MASK(25, 24)
+# define SCALER_DISPCTRL_PANIC0_SHIFT 24
+# define SCALER_DISPCTRL_PANIC1_MASK VC4_MASK(27, 26)
+# define SCALER_DISPCTRL_PANIC1_SHIFT 26
+# define SCALER_DISPCTRL_PANIC2_MASK VC4_MASK(29, 28)
+# define SCALER_DISPCTRL_PANIC2_SHIFT 28
# define SCALER_DISPCTRL_DSP3_MUX_MASK VC4_MASK(19, 18)
# define SCALER_DISPCTRL_DSP3_MUX_SHIFT 18
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 7e75fb0f..25d399b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -169,8 +169,9 @@ static int virtio_gpu_object_shmem_init(struct virtio_gpu_device *vgdev,
shmem->pages = drm_gem_shmem_get_sg_table(&bo->base);
if (IS_ERR(shmem->pages)) {
drm_gem_shmem_unpin(&bo->base);
+ ret = PTR_ERR(shmem->pages);
shmem->pages = NULL;
- return PTR_ERR(shmem->pages);
+ return ret;
}
if (use_dma_api) {
diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c
index 0ffe5f0..f716c57 100644
--- a/drivers/gpu/drm/vkms/vkms_drv.c
+++ b/drivers/gpu/drm/vkms/vkms_drv.c
@@ -57,7 +57,8 @@ static void vkms_release(struct drm_device *dev)
{
struct vkms_device *vkms = drm_device_to_vkms_device(dev);
- destroy_workqueue(vkms->output.composer_workq);
+ if (vkms->output.composer_workq)
+ destroy_workqueue(vkms->output.composer_workq);
}
static void vkms_atomic_commit_tail(struct drm_atomic_state *old_state)
@@ -218,6 +219,7 @@ static int vkms_create(struct vkms_config *config)
static int __init vkms_init(void)
{
+ int ret;
struct vkms_config *config;
config = kmalloc(sizeof(*config), GFP_KERNEL);
@@ -230,7 +232,11 @@ static int __init vkms_init(void)
config->writeback = enable_writeback;
config->overlay = enable_overlay;
- return vkms_create(config);
+ ret = vkms_create(config);
+ if (ret)
+ kfree(config);
+
+ return ret;
}
static void vkms_destroy(struct vkms_config *config)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg_arm64.h b/drivers/gpu/drm/vmwgfx/vmwgfx_msg_arm64.h
old mode 100755
new mode 100644
diff --git a/drivers/gpu/host1x/hw/syncpt_hw.c b/drivers/gpu/host1x/hw/syncpt_hw.c
index dd39d67..8cf35b2 100644
--- a/drivers/gpu/host1x/hw/syncpt_hw.c
+++ b/drivers/gpu/host1x/hw/syncpt_hw.c
@@ -106,9 +106,6 @@ static void syncpt_assign_to_channel(struct host1x_syncpt *sp,
#if HOST1X_HW >= 6
struct host1x *host = sp->host;
- if (!host->hv_regs)
- return;
-
host1x_sync_writel(host,
HOST1X_SYNC_SYNCPT_CH_APP_CH(ch ? ch->id : 0xff),
HOST1X_SYNC_SYNCPT_CH_APP(sp->id));
diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
index 1183185..c35eac1 100644
--- a/drivers/gpu/ipu-v3/ipu-common.c
+++ b/drivers/gpu/ipu-v3/ipu-common.c
@@ -1165,6 +1165,7 @@ static int ipu_add_client_devices(struct ipu_soc *ipu, unsigned long ipu_base)
pdev = platform_device_alloc(reg->name, id++);
if (!pdev) {
ret = -ENOMEM;
+ of_node_put(of_node);
goto err_register;
}
diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
index b59c3da..16832e7 100644
--- a/drivers/hid/hid-asus.c
+++ b/drivers/hid/hid-asus.c
@@ -98,6 +98,7 @@ struct asus_kbd_leds {
struct hid_device *hdev;
struct work_struct work;
unsigned int brightness;
+ spinlock_t lock;
bool removed;
};
@@ -492,21 +493,42 @@ static int rog_nkey_led_init(struct hid_device *hdev)
return ret;
}
+static void asus_schedule_work(struct asus_kbd_leds *led)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&led->lock, flags);
+ if (!led->removed)
+ schedule_work(&led->work);
+ spin_unlock_irqrestore(&led->lock, flags);
+}
+
static void asus_kbd_backlight_set(struct led_classdev *led_cdev,
enum led_brightness brightness)
{
struct asus_kbd_leds *led = container_of(led_cdev, struct asus_kbd_leds,
cdev);
+ unsigned long flags;
+
+ spin_lock_irqsave(&led->lock, flags);
led->brightness = brightness;
- schedule_work(&led->work);
+ spin_unlock_irqrestore(&led->lock, flags);
+
+ asus_schedule_work(led);
}
static enum led_brightness asus_kbd_backlight_get(struct led_classdev *led_cdev)
{
struct asus_kbd_leds *led = container_of(led_cdev, struct asus_kbd_leds,
cdev);
+ enum led_brightness brightness;
+ unsigned long flags;
- return led->brightness;
+ spin_lock_irqsave(&led->lock, flags);
+ brightness = led->brightness;
+ spin_unlock_irqrestore(&led->lock, flags);
+
+ return brightness;
}
static void asus_kbd_backlight_work(struct work_struct *work)
@@ -514,11 +536,11 @@ static void asus_kbd_backlight_work(struct work_struct *work)
struct asus_kbd_leds *led = container_of(work, struct asus_kbd_leds, work);
u8 buf[] = { FEATURE_KBD_REPORT_ID, 0xba, 0xc5, 0xc4, 0x00 };
int ret;
+ unsigned long flags;
- if (led->removed)
- return;
-
+ spin_lock_irqsave(&led->lock, flags);
buf[4] = led->brightness;
+ spin_unlock_irqrestore(&led->lock, flags);
ret = asus_kbd_set_report(led->hdev, buf, sizeof(buf));
if (ret < 0)
@@ -586,6 +608,7 @@ static int asus_kbd_register_leds(struct hid_device *hdev)
drvdata->kbd_backlight->cdev.brightness_set = asus_kbd_backlight_set;
drvdata->kbd_backlight->cdev.brightness_get = asus_kbd_backlight_get;
INIT_WORK(&drvdata->kbd_backlight->work, asus_kbd_backlight_work);
+ spin_lock_init(&drvdata->kbd_backlight->lock);
ret = devm_led_classdev_register(&hdev->dev, &drvdata->kbd_backlight->cdev);
if (ret < 0) {
@@ -1121,9 +1144,13 @@ static int asus_probe(struct hid_device *hdev, const struct hid_device_id *id)
static void asus_remove(struct hid_device *hdev)
{
struct asus_drvdata *drvdata = hid_get_drvdata(hdev);
+ unsigned long flags;
if (drvdata->kbd_backlight) {
+ spin_lock_irqsave(&drvdata->kbd_backlight->lock, flags);
drvdata->kbd_backlight->removed = true;
+ spin_unlock_irqrestore(&drvdata->kbd_backlight->lock, flags);
+
cancel_work_sync(&drvdata->kbd_backlight->work);
}
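The hid-asus changes close a race between the keyboard-backlight work item and device removal: brightness and the removed flag are only touched under the new spinlock, work is queued only while removed is false, and asus_remove() flips the flag under the lock before flushing the work. A condensed sketch of the pattern with hypothetical foo_* names (not part of the patch):

	struct foo_leds {
		struct work_struct work;
		unsigned int brightness;
		spinlock_t lock;
		bool removed;
	};

	static void foo_schedule_work(struct foo_leds *led)
	{
		unsigned long flags;

		spin_lock_irqsave(&led->lock, flags);
		if (!led->removed)		/* never queue new work once removal has started */
			schedule_work(&led->work);
		spin_unlock_irqrestore(&led->lock, flags);
	}

	static void foo_remove(struct foo_leds *led)
	{
		unsigned long flags;

		spin_lock_irqsave(&led->lock, flags);
		led->removed = true;		/* blocks any further foo_schedule_work() */
		spin_unlock_irqrestore(&led->lock, flags);

		cancel_work_sync(&led->work);	/* wait out anything already running */
	}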
diff --git a/drivers/hid/hid-betopff.c b/drivers/hid/hid-betopff.c
index 467d789..25ed7b9 100644
--- a/drivers/hid/hid-betopff.c
+++ b/drivers/hid/hid-betopff.c
@@ -60,7 +60,6 @@ static int betopff_init(struct hid_device *hid)
struct list_head *report_list =
&hid->report_enum[HID_OUTPUT_REPORT].report_list;
struct input_dev *dev;
- int field_count = 0;
int error;
int i, j;
@@ -86,19 +85,21 @@ static int betopff_init(struct hid_device *hid)
* -----------------------------------------
* Do init them with default value.
*/
+ if (report->maxfield < 4) {
+ hid_err(hid, "not enough fields in the report: %d\n",
+ report->maxfield);
+ return -ENODEV;
+ }
for (i = 0; i < report->maxfield; i++) {
+ if (report->field[i]->report_count < 1) {
+ hid_err(hid, "no values in the field\n");
+ return -ENODEV;
+ }
for (j = 0; j < report->field[i]->report_count; j++) {
report->field[i]->value[j] = 0x00;
- field_count++;
}
}
- if (field_count < 4) {
- hid_err(hid, "not enough fields in the report: %d\n",
- field_count);
- return -ENODEV;
- }
-
betopff = kzalloc(sizeof(*betopff), GFP_KERNEL);
if (!betopff)
return -ENOMEM;
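The betopff hunk replaces a late and imprecise check with an early shape check: the old code zeroed every value, counted the total number of values across all fields, and only then required at least four, which does not guarantee the shape the force-feedback path relies on (at least four fields, each holding at least one value). A short sketch of the validate-before-use ordering:

	if (report->maxfield < 4)		/* reject malformed descriptors up front */
		return -ENODEV;

	for (i = 0; i < report->maxfield; i++) {
		if (report->field[i]->report_count < 1)
			return -ENODEV;		/* a field without values cannot be initialized */
		for (j = 0; j < report->field[i]->report_count; j++)
			report->field[i]->value[j] = 0x00;
	}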
diff --git a/drivers/hid/hid-bigbenff.c b/drivers/hid/hid-bigbenff.c
index e8c5e3a..a02cb51 100644
--- a/drivers/hid/hid-bigbenff.c
+++ b/drivers/hid/hid-bigbenff.c
@@ -174,6 +174,7 @@ static __u8 pid0902_rdesc_fixed[] = {
struct bigben_device {
struct hid_device *hid;
struct hid_report *report;
+ spinlock_t lock;
bool removed;
u8 led_state; /* LED1 = 1 .. LED4 = 8 */
u8 right_motor_on; /* right motor off/on 0/1 */
@@ -184,18 +185,39 @@ struct bigben_device {
struct work_struct worker;
};
+static inline void bigben_schedule_work(struct bigben_device *bigben)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&bigben->lock, flags);
+ if (!bigben->removed)
+ schedule_work(&bigben->worker);
+ spin_unlock_irqrestore(&bigben->lock, flags);
+}
static void bigben_worker(struct work_struct *work)
{
struct bigben_device *bigben = container_of(work,
struct bigben_device, worker);
struct hid_field *report_field = bigben->report->field[0];
+ bool do_work_led = false;
+ bool do_work_ff = false;
+ u8 *buf;
+ u32 len;
+ unsigned long flags;
- if (bigben->removed || !report_field)
+ buf = hid_alloc_report_buf(bigben->report, GFP_KERNEL);
+ if (!buf)
return;
+ len = hid_report_len(bigben->report);
+
+ /* LED work */
+ spin_lock_irqsave(&bigben->lock, flags);
+
if (bigben->work_led) {
bigben->work_led = false;
+ do_work_led = true;
report_field->value[0] = 0x01; /* 1 = led message */
report_field->value[1] = 0x08; /* reserved value, always 8 */
report_field->value[2] = bigben->led_state;
@@ -204,11 +226,22 @@ static void bigben_worker(struct work_struct *work)
report_field->value[5] = 0x00; /* padding */
report_field->value[6] = 0x00; /* padding */
report_field->value[7] = 0x00; /* padding */
- hid_hw_request(bigben->hid, bigben->report, HID_REQ_SET_REPORT);
+ hid_output_report(bigben->report, buf);
}
+ spin_unlock_irqrestore(&bigben->lock, flags);
+
+ if (do_work_led) {
+ hid_hw_raw_request(bigben->hid, bigben->report->id, buf, len,
+ bigben->report->type, HID_REQ_SET_REPORT);
+ }
+
+ /* FF work */
+ spin_lock_irqsave(&bigben->lock, flags);
+
if (bigben->work_ff) {
bigben->work_ff = false;
+ do_work_ff = true;
report_field->value[0] = 0x02; /* 2 = rumble effect message */
report_field->value[1] = 0x08; /* reserved value, always 8 */
report_field->value[2] = bigben->right_motor_on;
@@ -217,8 +250,17 @@ static void bigben_worker(struct work_struct *work)
report_field->value[5] = 0x00; /* padding */
report_field->value[6] = 0x00; /* padding */
report_field->value[7] = 0x00; /* padding */
- hid_hw_request(bigben->hid, bigben->report, HID_REQ_SET_REPORT);
+ hid_output_report(bigben->report, buf);
}
+
+ spin_unlock_irqrestore(&bigben->lock, flags);
+
+ if (do_work_ff) {
+ hid_hw_raw_request(bigben->hid, bigben->report->id, buf, len,
+ bigben->report->type, HID_REQ_SET_REPORT);
+ }
+
+ kfree(buf);
}
static int hid_bigben_play_effect(struct input_dev *dev, void *data,
@@ -228,6 +270,7 @@ static int hid_bigben_play_effect(struct input_dev *dev, void *data,
struct bigben_device *bigben = hid_get_drvdata(hid);
u8 right_motor_on;
u8 left_motor_force;
+ unsigned long flags;
if (!bigben) {
hid_err(hid, "no device data\n");
@@ -242,10 +285,13 @@ static int hid_bigben_play_effect(struct input_dev *dev, void *data,
if (right_motor_on != bigben->right_motor_on ||
left_motor_force != bigben->left_motor_force) {
+ spin_lock_irqsave(&bigben->lock, flags);
bigben->right_motor_on = right_motor_on;
bigben->left_motor_force = left_motor_force;
bigben->work_ff = true;
- schedule_work(&bigben->worker);
+ spin_unlock_irqrestore(&bigben->lock, flags);
+
+ bigben_schedule_work(bigben);
}
return 0;
@@ -259,6 +305,7 @@ static void bigben_set_led(struct led_classdev *led,
struct bigben_device *bigben = hid_get_drvdata(hid);
int n;
bool work;
+ unsigned long flags;
if (!bigben) {
hid_err(hid, "no device data\n");
@@ -267,6 +314,7 @@ static void bigben_set_led(struct led_classdev *led,
for (n = 0; n < NUM_LEDS; n++) {
if (led == bigben->leds[n]) {
+ spin_lock_irqsave(&bigben->lock, flags);
if (value == LED_OFF) {
work = (bigben->led_state & BIT(n));
bigben->led_state &= ~BIT(n);
@@ -274,10 +322,11 @@ static void bigben_set_led(struct led_classdev *led,
work = !(bigben->led_state & BIT(n));
bigben->led_state |= BIT(n);
}
+ spin_unlock_irqrestore(&bigben->lock, flags);
if (work) {
bigben->work_led = true;
- schedule_work(&bigben->worker);
+ bigben_schedule_work(bigben);
}
return;
}
@@ -307,8 +356,12 @@ static enum led_brightness bigben_get_led(struct led_classdev *led)
static void bigben_remove(struct hid_device *hid)
{
struct bigben_device *bigben = hid_get_drvdata(hid);
+ unsigned long flags;
+ spin_lock_irqsave(&bigben->lock, flags);
bigben->removed = true;
+ spin_unlock_irqrestore(&bigben->lock, flags);
+
cancel_work_sync(&bigben->worker);
hid_hw_stop(hid);
}
@@ -318,7 +371,6 @@ static int bigben_probe(struct hid_device *hid,
{
struct bigben_device *bigben;
struct hid_input *hidinput;
- struct list_head *report_list;
struct led_classdev *led;
char *name;
size_t name_sz;
@@ -343,9 +395,12 @@ static int bigben_probe(struct hid_device *hid,
return error;
}
- report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list;
- bigben->report = list_entry(report_list->next,
- struct hid_report, list);
+ bigben->report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 8);
+ if (!bigben->report) {
+ hid_err(hid, "no output report found\n");
+ error = -ENODEV;
+ goto error_hw_stop;
+ }
if (list_empty(&hid->inputs)) {
hid_err(hid, "no inputs found\n");
@@ -357,6 +412,7 @@ static int bigben_probe(struct hid_device *hid,
set_bit(FF_RUMBLE, hidinput->input->ffbit);
INIT_WORK(&bigben->worker, bigben_worker);
+ spin_lock_init(&bigben->lock);
error = input_ff_create_memless(hidinput->input, NULL,
hid_bigben_play_effect);
@@ -397,7 +453,7 @@ static int bigben_probe(struct hid_device *hid,
bigben->left_motor_force = 0;
bigben->work_led = true;
bigben->work_ff = true;
- schedule_work(&bigben->worker);
+ bigben_schedule_work(bigben);
hid_info(hid, "LED and force feedback support for BigBen gamepad\n");
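The bigben rework does two things: shared state (led_state, motor values, work_* flags, removed) is now protected by a spinlock, and the worker serializes the report into a local buffer while holding the lock but performs the transfer only after dropping it, because hid_hw_raw_request() can sleep. A reduced sketch of that split, assuming the standard HID helpers hid_alloc_report_buf(), hid_report_len(), hid_output_report() and hid_hw_raw_request(); the foo_* names are illustrative:

	static void foo_worker(struct work_struct *work)
	{
		struct foo_device *foo = container_of(work, struct foo_device, worker);
		u8 *buf = hid_alloc_report_buf(foo->report, GFP_KERNEL);
		u32 len = hid_report_len(foo->report);
		unsigned long flags;
		bool send = false;

		if (!buf)
			return;

		spin_lock_irqsave(&foo->lock, flags);	/* atomic section: fill values only */
		if (foo->work_pending) {
			foo->work_pending = false;
			foo->report->field[0]->value[0] = foo->state;
			hid_output_report(foo->report, buf);	/* snapshot into the local buffer */
			send = true;
		}
		spin_unlock_irqrestore(&foo->lock, flags);

		if (send)				/* may sleep: must run outside the lock */
			hid_hw_raw_request(foo->hid, foo->report->id, buf, len,
					   foo->report->type, HID_REQ_SET_REPORT);

		kfree(buf);
	}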
diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
index ef9c799..d941023 100644
--- a/drivers/hid/hid-core.c
+++ b/drivers/hid/hid-core.c
@@ -258,6 +258,7 @@ static int hid_add_field(struct hid_parser *parser, unsigned report_type, unsign
{
struct hid_report *report;
struct hid_field *field;
+ unsigned int max_buffer_size = HID_MAX_BUFFER_SIZE;
unsigned int usages;
unsigned int offset;
unsigned int i;
@@ -288,8 +289,11 @@ static int hid_add_field(struct hid_parser *parser, unsigned report_type, unsign
offset = report->size;
report->size += parser->global.report_size * parser->global.report_count;
+ if (parser->device->ll_driver->max_buffer_size)
+ max_buffer_size = parser->device->ll_driver->max_buffer_size;
+
/* Total size check: Allow for possible report index byte */
- if (report->size > (HID_MAX_BUFFER_SIZE - 1) << 3) {
+ if (report->size > (max_buffer_size - 1) << 3) {
hid_err(parser->device, "report is too long\n");
return -1;
}
@@ -988,8 +992,8 @@ struct hid_report *hid_validate_values(struct hid_device *hid,
* Validating on id 0 means we should examine the first
* report in the list.
*/
- report = list_entry(
- hid->report_enum[type].report_list.next,
+ report = list_first_entry_or_null(
+ &hid->report_enum[type].report_list,
struct hid_report, list);
} else {
report = hid->report_enum[type].report_id_hash[id];
@@ -1197,6 +1201,7 @@ int hid_open_report(struct hid_device *device)
__u8 *end;
__u8 *next;
int ret;
+ int i;
static int (*dispatch_type[])(struct hid_parser *parser,
struct hid_item *item) = {
hid_parser_main,
@@ -1247,6 +1252,8 @@ int hid_open_report(struct hid_device *device)
goto err;
}
device->collection_size = HID_DEFAULT_NUM_COLLECTIONS;
+ for (i = 0; i < HID_DEFAULT_NUM_COLLECTIONS; i++)
+ device->collection[i].parent_idx = -1;
ret = -EINVAL;
while ((next = fetch_item(start, end, &item)) != NULL) {
@@ -1749,6 +1756,7 @@ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
struct hid_report_enum *report_enum = hid->report_enum + type;
struct hid_report *report;
struct hid_driver *hdrv;
+ int max_buffer_size = HID_MAX_BUFFER_SIZE;
unsigned int a;
u32 rsize, csize = size;
u8 *cdata = data;
@@ -1765,10 +1773,13 @@ int hid_report_raw_event(struct hid_device *hid, int type, u8 *data, u32 size,
rsize = hid_compute_report_size(report);
- if (report_enum->numbered && rsize >= HID_MAX_BUFFER_SIZE)
- rsize = HID_MAX_BUFFER_SIZE - 1;
- else if (rsize > HID_MAX_BUFFER_SIZE)
- rsize = HID_MAX_BUFFER_SIZE;
+ if (hid->ll_driver->max_buffer_size)
+ max_buffer_size = hid->ll_driver->max_buffer_size;
+
+ if (report_enum->numbered && rsize >= max_buffer_size)
+ rsize = max_buffer_size - 1;
+ else if (rsize > max_buffer_size)
+ rsize = max_buffer_size;
if (csize < rsize) {
dbg_hid("report %d is too short, (%d < %d)\n", report->id,
diff --git a/drivers/hid/hid-cp2112.c b/drivers/hid/hid-cp2112.c
index 172f20e..d902fe4 100644
--- a/drivers/hid/hid-cp2112.c
+++ b/drivers/hid/hid-cp2112.c
@@ -1352,6 +1352,7 @@ static int cp2112_probe(struct hid_device *hdev, const struct hid_device_id *id)
girq->parents = NULL;
girq->default_type = IRQ_TYPE_NONE;
girq->handler = handle_simple_irq;
+ girq->threaded = true;
ret = gpiochip_add_data(&dev->gc, dev);
if (ret < 0) {
diff --git a/drivers/hid/hid-debug.c b/drivers/hid/hid-debug.c
index f48d353..03da865 100644
--- a/drivers/hid/hid-debug.c
+++ b/drivers/hid/hid-debug.c
@@ -937,6 +937,7 @@ static const char *keys[KEY_MAX + 1] = {
[KEY_KBD_LAYOUT_NEXT] = "KbdLayoutNext",
[KEY_EMOJI_PICKER] = "EmojiPicker",
[KEY_DICTATE] = "Dictate",
+ [KEY_MICMUTE] = "MicrophoneMute",
[KEY_BRIGHTNESS_MIN] = "BrightnessMin",
[KEY_BRIGHTNESS_MAX] = "BrightnessMax",
[KEY_BRIGHTNESS_AUTO] = "BrightnessAuto",
diff --git a/drivers/hid/hid-elecom.c b/drivers/hid/hid-elecom.c
index e59e991..4fa45ee 100644
--- a/drivers/hid/hid-elecom.c
+++ b/drivers/hid/hid-elecom.c
@@ -12,6 +12,7 @@
* Copyright (c) 2017 Alex Manoussakis <amanou@gnu.org>
* Copyright (c) 2017 Tomasz Kramkowski <tk@the-tk.com>
* Copyright (c) 2020 YOSHIOKA Takuma <lo48576@hard-wi.red>
+ * Copyright (c) 2022 Takahiro Fujii <fujii@xaxxi.net>
*/
/*
@@ -89,7 +90,7 @@ static __u8 *elecom_report_fixup(struct hid_device *hdev, __u8 *rdesc,
case USB_DEVICE_ID_ELECOM_M_DT1URBK:
case USB_DEVICE_ID_ELECOM_M_DT1DRBK:
case USB_DEVICE_ID_ELECOM_M_HT1URBK:
- case USB_DEVICE_ID_ELECOM_M_HT1DRBK:
+ case USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D:
/*
* Report descriptor format:
* 12: button bit count
@@ -99,6 +100,16 @@ static __u8 *elecom_report_fixup(struct hid_device *hdev, __u8 *rdesc,
*/
mouse_button_fixup(hdev, rdesc, *rsize, 12, 30, 14, 20, 8);
break;
+ case USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C:
+ /*
+ * Report descriptor format:
+ * 22: button bit count
+ * 30: padding bit count
+ * 24: button report size
+ * 16: button usage maximum
+ */
+ mouse_button_fixup(hdev, rdesc, *rsize, 22, 30, 24, 16, 8);
+ break;
}
return rdesc;
}
@@ -112,7 +123,8 @@ static const struct hid_device_id elecom_devices[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) },
- { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) },
{ }
};
MODULE_DEVICE_TABLE(hid, elecom_devices);
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
index 8698d49..b153ddc 100644
--- a/drivers/hid/hid-ids.h
+++ b/drivers/hid/hid-ids.h
@@ -261,7 +261,6 @@
#define USB_DEVICE_ID_CH_AXIS_295 0x001c
#define USB_VENDOR_ID_CHERRY 0x046a
-#define USB_DEVICE_ID_CHERRY_MOUSE_000C 0x000c
#define USB_DEVICE_ID_CHERRY_CYMOTION 0x0023
#define USB_DEVICE_ID_CHERRY_CYMOTION_SOLAR 0x0027
@@ -411,7 +410,8 @@
#define USB_DEVICE_ID_ELECOM_M_DT1URBK 0x00fe
#define USB_DEVICE_ID_ELECOM_M_DT1DRBK 0x00ff
#define USB_DEVICE_ID_ELECOM_M_HT1URBK 0x010c
-#define USB_DEVICE_ID_ELECOM_M_HT1DRBK 0x010d
+#define USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D 0x010d
+#define USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C 0x011c
#define USB_VENDOR_ID_DREAM_CHEEKY 0x1d34
#define USB_DEVICE_ID_DREAM_CHEEKY_WN 0x0004
diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
index f197aed..0ae959e 100644
--- a/drivers/hid/hid-input.c
+++ b/drivers/hid/hid-input.c
@@ -709,6 +709,14 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
break;
}
+ if ((usage->hid & 0xf0) == 0xa0) { /* SystemControl */
+ switch (usage->hid & 0xf) {
+ case 0x9: map_key_clear(KEY_MICMUTE); break;
+ default: goto ignore;
+ }
+ break;
+ }
+
if ((usage->hid & 0xf0) == 0xb0) { /* SC - Display */
switch (usage->hid & 0xf) {
case 0x05: map_key_clear(KEY_SWITCHVIDEOMODE); break;
diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
index 81de88a..601ab67 100644
--- a/drivers/hid/hid-logitech-hidpp.c
+++ b/drivers/hid/hid-logitech-hidpp.c
@@ -4049,6 +4049,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
bool connected;
unsigned int connect_mask = HID_CONNECT_DEFAULT;
struct hidpp_ff_private_data data;
+ bool will_restart = false;
/* report_fixup needs drvdata to be set before we call hid_parse */
hidpp = devm_kzalloc(&hdev->dev, sizeof(*hidpp), GFP_KERNEL);
@@ -4104,6 +4105,10 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
return ret;
}
+ if (hidpp->quirks & HIDPP_QUIRK_DELAYED_INIT ||
+ hidpp->quirks & HIDPP_QUIRK_UNIFYING)
+ will_restart = true;
+
INIT_WORK(&hidpp->work, delayed_work_cb);
mutex_init(&hidpp->send_mutex);
init_waitqueue_head(&hidpp->wait);
@@ -4118,7 +4123,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
* Plain USB connections need to actually call start and open
* on the transport driver to allow incoming data.
*/
- ret = hid_hw_start(hdev, 0);
+ ret = hid_hw_start(hdev, will_restart ? 0 : connect_mask);
if (ret) {
hid_err(hdev, "hw start failed\n");
goto hid_hw_start_fail;
@@ -4155,6 +4160,7 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
hidpp->wireless_feature_index = 0;
else if (ret)
goto hid_hw_init_fail;
+ ret = 0;
}
if (connected && (hidpp->quirks & HIDPP_QUIRK_CLASS_WTP)) {
@@ -4169,19 +4175,21 @@ static int hidpp_probe(struct hid_device *hdev, const struct hid_device_id *id)
hidpp_connect_event(hidpp);
- /* Reset the HID node state */
- hid_device_io_stop(hdev);
- hid_hw_close(hdev);
- hid_hw_stop(hdev);
+ if (will_restart) {
+ /* Reset the HID node state */
+ hid_device_io_stop(hdev);
+ hid_hw_close(hdev);
+ hid_hw_stop(hdev);
- if (hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT)
- connect_mask &= ~HID_CONNECT_HIDINPUT;
+ if (hidpp->quirks & HIDPP_QUIRK_NO_HIDINPUT)
+ connect_mask &= ~HID_CONNECT_HIDINPUT;
- /* Now export the actual inputs and hidraw nodes to the world */
- ret = hid_hw_start(hdev, connect_mask);
- if (ret) {
- hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
- goto hid_hw_start_fail;
+ /* Now export the actual inputs and hidraw nodes to the world */
+ ret = hid_hw_start(hdev, connect_mask);
+ if (ret) {
+ hid_err(hdev, "%s:hid_hw_start returned error\n", __func__);
+ goto hid_hw_start_fail;
+ }
}
if (hidpp->quirks & HIDPP_QUIRK_CLASS_G920) {
diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
index 6b86d36..592ffdd 100644
--- a/drivers/hid/hid-multitouch.c
+++ b/drivers/hid/hid-multitouch.c
@@ -71,6 +71,7 @@ MODULE_LICENSE("GPL");
#define MT_QUIRK_SEPARATE_APP_REPORT BIT(19)
#define MT_QUIRK_FORCE_MULTI_INPUT BIT(20)
#define MT_QUIRK_DISABLE_WAKEUP BIT(21)
+#define MT_QUIRK_ORIENTATION_INVERT BIT(22)
#define MT_INPUTMODE_TOUCHSCREEN 0x02
#define MT_INPUTMODE_TOUCHPAD 0x03
@@ -1009,6 +1010,7 @@ static int mt_process_slot(struct mt_device *td, struct input_dev *input,
struct mt_usages *slot)
{
struct input_mt *mt = input->mt;
+ struct hid_device *hdev = td->hdev;
__s32 quirks = app->quirks;
bool valid = true;
bool confidence_state = true;
@@ -1086,6 +1088,10 @@ static int mt_process_slot(struct mt_device *td, struct input_dev *input,
int orientation = wide;
int max_azimuth;
int azimuth;
+ int x;
+ int y;
+ int cx;
+ int cy;
if (slot->a != DEFAULT_ZERO) {
/*
@@ -1104,6 +1110,9 @@ static int mt_process_slot(struct mt_device *td, struct input_dev *input,
if (azimuth > max_azimuth * 2)
azimuth -= max_azimuth * 4;
orientation = -azimuth;
+ if (quirks & MT_QUIRK_ORIENTATION_INVERT)
+ orientation = -orientation;
+
}
if (quirks & MT_QUIRK_TOUCH_SIZE_SCALING) {
@@ -1115,10 +1124,23 @@ static int mt_process_slot(struct mt_device *td, struct input_dev *input,
minor = minor >> 1;
}
- input_event(input, EV_ABS, ABS_MT_POSITION_X, *slot->x);
- input_event(input, EV_ABS, ABS_MT_POSITION_Y, *slot->y);
- input_event(input, EV_ABS, ABS_MT_TOOL_X, *slot->cx);
- input_event(input, EV_ABS, ABS_MT_TOOL_Y, *slot->cy);
+ x = hdev->quirks & HID_QUIRK_X_INVERT ?
+ input_abs_get_max(input, ABS_MT_POSITION_X) - *slot->x :
+ *slot->x;
+ y = hdev->quirks & HID_QUIRK_Y_INVERT ?
+ input_abs_get_max(input, ABS_MT_POSITION_Y) - *slot->y :
+ *slot->y;
+ cx = hdev->quirks & HID_QUIRK_X_INVERT ?
+ input_abs_get_max(input, ABS_MT_POSITION_X) - *slot->cx :
+ *slot->cx;
+ cy = hdev->quirks & HID_QUIRK_Y_INVERT ?
+ input_abs_get_max(input, ABS_MT_POSITION_Y) - *slot->cy :
+ *slot->cy;
+
+ input_event(input, EV_ABS, ABS_MT_POSITION_X, x);
+ input_event(input, EV_ABS, ABS_MT_POSITION_Y, y);
+ input_event(input, EV_ABS, ABS_MT_TOOL_X, cx);
+ input_event(input, EV_ABS, ABS_MT_TOOL_Y, cy);
input_event(input, EV_ABS, ABS_MT_DISTANCE, !*slot->tip_state);
input_event(input, EV_ABS, ABS_MT_ORIENTATION, orientation);
input_event(input, EV_ABS, ABS_MT_PRESSURE, *slot->p);
@@ -1738,6 +1760,15 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
if (id->vendor == HID_ANY_ID && id->product == HID_ANY_ID)
td->serial_maybe = true;
+
+ /* Orientation is inverted if the X or Y axes are
+ * flipped, but normalized if both are inverted.
+ */
+ if (hdev->quirks & (HID_QUIRK_X_INVERT | HID_QUIRK_Y_INVERT) &&
+ !((hdev->quirks & HID_QUIRK_X_INVERT)
+ && (hdev->quirks & HID_QUIRK_Y_INVERT)))
+ td->mtclass.quirks = MT_QUIRK_ORIENTATION_INVERT;
+
/* This allows the driver to correctly support devices
* that emit events over several HID messages.
*/
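The multitouch changes implement HID_QUIRK_X_INVERT/HID_QUIRK_Y_INVERT by mirroring reported coordinates around the axis maximum, and set MT_QUIRK_ORIENTATION_INVERT only when exactly one axis is flipped, since flipping both leaves the touch orientation unchanged. A small sketch of both ideas, assuming input_abs_get_max() for the axis maximum:

	int x = *slot->x;

	if (hdev->quirks & HID_QUIRK_X_INVERT)
		x = input_abs_get_max(input, ABS_MT_POSITION_X) - x;	/* mirror around max */

	/* Orientation flips when exactly one of the two axes is mirrored. */
	bool invert_orientation =
		!!(hdev->quirks & HID_QUIRK_X_INVERT) ^
		!!(hdev->quirks & HID_QUIRK_Y_INVERT);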
diff --git a/drivers/hid/hid-playstation.c b/drivers/hid/hid-playstation.c
index bd0e0fe..944e5e5 100644
--- a/drivers/hid/hid-playstation.c
+++ b/drivers/hid/hid-playstation.c
@@ -626,6 +626,7 @@ static const struct attribute_group ps_device_attribute_group = {
static int dualsense_get_calibration_data(struct dualsense *ds)
{
+ struct hid_device *hdev = ds->base.hdev;
short gyro_pitch_bias, gyro_pitch_plus, gyro_pitch_minus;
short gyro_yaw_bias, gyro_yaw_plus, gyro_yaw_minus;
short gyro_roll_bias, gyro_roll_plus, gyro_roll_minus;
@@ -636,6 +637,7 @@ static int dualsense_get_calibration_data(struct dualsense *ds)
int speed_2x;
int range_2g;
int ret = 0;
+ int i;
uint8_t *buf;
buf = kzalloc(DS_FEATURE_REPORT_CALIBRATION_SIZE, GFP_KERNEL);
@@ -688,6 +690,21 @@ static int dualsense_get_calibration_data(struct dualsense *ds)
ds->gyro_calib_data[2].sens_denom = gyro_roll_plus - gyro_roll_minus;
/*
+ * Sanity check gyro calibration data. This is needed to prevent crashes
+ * during report handling of virtual, clone or broken devices not implementing
+ * calibration data properly.
+ */
+ for (i = 0; i < ARRAY_SIZE(ds->gyro_calib_data); i++) {
+ if (ds->gyro_calib_data[i].sens_denom == 0) {
+ hid_warn(hdev, "Invalid gyro calibration data for axis (%d), disabling calibration.",
+ ds->gyro_calib_data[i].abs_code);
+ ds->gyro_calib_data[i].bias = 0;
+ ds->gyro_calib_data[i].sens_numer = DS_GYRO_RANGE;
+ ds->gyro_calib_data[i].sens_denom = S16_MAX;
+ }
+ }
+
+ /*
* Set accelerometer calibration and normalization parameters.
* Data values will be normalized to 1/DS_ACC_RES_PER_G g.
*/
@@ -709,6 +726,21 @@ static int dualsense_get_calibration_data(struct dualsense *ds)
ds->accel_calib_data[2].sens_numer = 2*DS_ACC_RES_PER_G;
ds->accel_calib_data[2].sens_denom = range_2g;
+ /*
+ * Sanity check accelerometer calibration data. This is needed to prevent crashes
+ * during report handling of virtual, clone or broken devices not implementing calibration
+ * data properly.
+ */
+ for (i = 0; i < ARRAY_SIZE(ds->accel_calib_data); i++) {
+ if (ds->accel_calib_data[i].sens_denom == 0) {
+ hid_warn(hdev, "Invalid accelerometer calibration data for axis (%d), disabling calibration.",
+ ds->accel_calib_data[i].abs_code);
+ ds->accel_calib_data[i].bias = 0;
+ ds->accel_calib_data[i].sens_numer = DS_ACC_RANGE;
+ ds->accel_calib_data[i].sens_denom = S16_MAX;
+ }
+ }
+
err_free:
kfree(buf);
return ret;
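The DualSense calibration hunks guard against a zero sens_denom from virtual, cloned or broken controllers, which would otherwise divide by zero once input reports start flowing; the fallback keeps a plain raw-range mapping. A hedged sketch of the scaling the guard protects, simplified from the report path (field names as in the driver, surrounding code omitted):

	/* Bias-correct and rescale one raw sample; sens_denom must never be zero. */
	int calibrated = mult_frac(calib->sens_numer,
				   raw - calib->bias,
				   calib->sens_denom);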
diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
index fc1e061..c7c06aa 100644
--- a/drivers/hid/hid-quirks.c
+++ b/drivers/hid/hid-quirks.c
@@ -54,7 +54,6 @@ static const struct hid_device_id hid_quirks[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FLIGHT_SIM_YOKE), HID_QUIRK_NOGET },
{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_PEDALS), HID_QUIRK_NOGET },
{ HID_USB_DEVICE(USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_PRO_THROTTLE), HID_QUIRK_NOGET },
- { HID_USB_DEVICE(USB_VENDOR_ID_CHERRY, USB_DEVICE_ID_CHERRY_MOUSE_000C), HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB), HID_QUIRK_NO_INIT_REPORTS },
{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB_RAPIDFIRE), HID_QUIRK_NO_INIT_REPORTS | HID_QUIRK_ALWAYS_POLL },
{ HID_USB_DEVICE(USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K70RGB), HID_QUIRK_NO_INIT_REPORTS },
@@ -382,7 +381,8 @@ static const struct hid_device_id hid_have_special_driver[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) },
- { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) },
#endif
#if IS_ENABLED(CONFIG_HID_ELO)
{ HID_USB_DEVICE(USB_VENDOR_ID_ELO, 0x0009) },
@@ -1217,7 +1217,7 @@ EXPORT_SYMBOL_GPL(hid_quirks_exit);
static unsigned long hid_gets_squirk(const struct hid_device *hdev)
{
const struct hid_device_id *bl_entry;
- unsigned long quirks = 0;
+ unsigned long quirks = hdev->initial_quirks;
if (hid_match_id(hdev, hid_ignore_list))
quirks |= HID_QUIRK_IGNORE;
diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
index 65c1f20..7c61bb9 100644
--- a/drivers/hid/i2c-hid/i2c-hid-core.c
+++ b/drivers/hid/i2c-hid/i2c-hid-core.c
@@ -1012,6 +1012,10 @@ int i2c_hid_core_probe(struct i2c_client *client, struct i2chid_ops *ops,
hid->vendor = le16_to_cpu(ihid->hdesc.wVendorID);
hid->product = le16_to_cpu(ihid->hdesc.wProductID);
+ hid->initial_quirks = quirks;
+ hid->initial_quirks |= i2c_hid_get_dmi_quirks(hid->vendor,
+ hid->product);
+
snprintf(hid->name, sizeof(hid->name), "%s %04X:%04X",
client->name, (u16)hid->vendor, (u16)hid->product);
strlcpy(hid->phys, dev_name(&client->dev), sizeof(hid->phys));
@@ -1025,8 +1029,6 @@ int i2c_hid_core_probe(struct i2c_client *client, struct i2chid_ops *ops,
goto err_mem_free;
}
- hid->quirks |= quirks;
-
return 0;
err_mem_free:
diff --git a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
index 8e0f674..210f17c 100644
--- a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+++ b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
@@ -10,8 +10,10 @@
#include <linux/types.h>
#include <linux/dmi.h>
#include <linux/mod_devicetable.h>
+#include <linux/hid.h>
#include "i2c-hid.h"
+#include "../hid-ids.h"
struct i2c_hid_desc_override {
@@ -416,6 +418,28 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
{ } /* Terminate list */
};
+static const struct hid_device_id i2c_hid_elan_flipped_quirks = {
+ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, USB_VENDOR_ID_ELAN, 0x2dcd),
+ HID_QUIRK_X_INVERT | HID_QUIRK_Y_INVERT
+};
+
+/*
+ * This list contains devices which have specific issues based on the system
+ * they're on and not just the device itself. The driver_data will have a
+ * specific hid device to match against.
+ */
+static const struct dmi_system_id i2c_hid_dmi_quirk_table[] = {
+ {
+ .ident = "DynaBook K50/FR",
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Dynabook Inc."),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "dynabook K50/FR"),
+ },
+ .driver_data = (void *)&i2c_hid_elan_flipped_quirks,
+ },
+ { } /* Terminate list */
+};
+
struct i2c_hid_desc *i2c_hid_get_dmi_i2c_hid_desc_override(uint8_t *i2c_name)
{
@@ -450,3 +474,21 @@ char *i2c_hid_get_dmi_hid_report_desc_override(uint8_t *i2c_name,
*size = override->hid_report_desc_size;
return override->hid_report_desc;
}
+
+u32 i2c_hid_get_dmi_quirks(const u16 vendor, const u16 product)
+{
+ u32 quirks = 0;
+ const struct dmi_system_id *system_id =
+ dmi_first_match(i2c_hid_dmi_quirk_table);
+
+ if (system_id) {
+ const struct hid_device_id *device_id =
+ (struct hid_device_id *)(system_id->driver_data);
+
+ if (device_id && device_id->vendor == vendor &&
+ device_id->product == product)
+ quirks = device_id->driver_data;
+ }
+
+ return quirks;
+}
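Together with the i2c-hid-core change that seeds hid->initial_quirks (and the hid-quirks.c change that starts hid_gets_squirk() from it), the new i2c_hid_get_dmi_quirks() helper lets a quirk such as the flipped axes on the DynaBook K50/FR be keyed off the DMI identity of the machine rather than the touch controller's ID alone. The lookup pattern, sketched:

	u32 quirks = 0;
	const struct dmi_system_id *sysid = dmi_first_match(i2c_hid_dmi_quirk_table);

	if (sysid) {
		const struct hid_device_id *devid = sysid->driver_data;

		/* DMI only identifies the machine; still match the specific device. */
		if (devid && devid->vendor == vendor && devid->product == product)
			quirks = devid->driver_data;
	}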
diff --git a/drivers/hid/i2c-hid/i2c-hid.h b/drivers/hid/i2c-hid/i2c-hid.h
index 236cc06..7b93b6c2 100644
--- a/drivers/hid/i2c-hid/i2c-hid.h
+++ b/drivers/hid/i2c-hid/i2c-hid.h
@@ -9,6 +9,7 @@
struct i2c_hid_desc *i2c_hid_get_dmi_i2c_hid_desc_override(uint8_t *i2c_name);
char *i2c_hid_get_dmi_hid_report_desc_override(uint8_t *i2c_name,
unsigned int *size);
+u32 i2c_hid_get_dmi_quirks(const u16 vendor, const u16 product);
#else
static inline struct i2c_hid_desc
*i2c_hid_get_dmi_i2c_hid_desc_override(uint8_t *i2c_name)
@@ -16,6 +17,8 @@ static inline struct i2c_hid_desc
static inline char *i2c_hid_get_dmi_hid_report_desc_override(uint8_t *i2c_name,
unsigned int *size)
{ return NULL; }
+static inline u32 i2c_hid_get_dmi_quirks(const u16 vendor, const u16 product)
+{ return 0; }
#endif
/**
diff --git a/drivers/hid/intel-ish-hid/ipc/ipc.c b/drivers/hid/intel-ish-hid/ipc/ipc.c
index 45e0c7b..6c942dd 100644
--- a/drivers/hid/intel-ish-hid/ipc/ipc.c
+++ b/drivers/hid/intel-ish-hid/ipc/ipc.c
@@ -5,6 +5,7 @@
* Copyright (c) 2014-2016, Intel Corporation.
*/
+#include <linux/devm-helpers.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/delay.h>
@@ -621,7 +622,6 @@ static void recv_ipc(struct ishtp_device *dev, uint32_t doorbell_val)
case MNG_RESET_NOTIFY:
if (!ishtp_dev) {
ishtp_dev = dev;
- INIT_WORK(&fw_reset_work, fw_reset_work_fn);
}
schedule_work(&fw_reset_work);
break;
@@ -936,6 +936,7 @@ struct ishtp_device *ish_dev_init(struct pci_dev *pdev)
{
struct ishtp_device *dev;
int i;
+ int ret;
dev = devm_kzalloc(&pdev->dev,
sizeof(struct ishtp_device) + sizeof(struct ish_hw),
@@ -971,6 +972,12 @@ struct ishtp_device *ish_dev_init(struct pci_dev *pdev)
list_add_tail(&tx_buf->link, &dev->wr_free_list);
}
+ ret = devm_work_autocancel(&pdev->dev, &fw_reset_work, fw_reset_work_fn);
+ if (ret) {
+ dev_err(dev->devc, "Failed to initialise FW reset work\n");
+ return NULL;
+ }
+
dev->ops = &ish_hw_ops;
dev->devc = &pdev->dev;
dev->mtu = IPC_PAYLOAD_SIZE - sizeof(struct ishtp_msg_hdr);
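The ISH IPC change replaces the lazy INIT_WORK() in the interrupt path with devm_work_autocancel() at device-init time, so the firmware-reset work is initialized before it can ever be scheduled and is cancelled automatically when the device goes away. devm_work_autocancel() (from linux/devm-helpers.h) pairs INIT_WORK() with a devm action that runs cancel_work_sync(); a sketch with a hypothetical work handler:

	#include <linux/devm-helpers.h>

	static int example_setup(struct device *dev, struct work_struct *work)
	{
		/* INIT_WORK plus automatic cancel_work_sync() when 'dev' is unbound. */
		return devm_work_autocancel(dev, work, example_work_fn);
	}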
diff --git a/drivers/hid/intel-ish-hid/ishtp/dma-if.c b/drivers/hid/intel-ish-hid/ishtp/dma-if.c
index 40554c8..00046cb 100644
--- a/drivers/hid/intel-ish-hid/ishtp/dma-if.c
+++ b/drivers/hid/intel-ish-hid/ishtp/dma-if.c
@@ -104,6 +104,11 @@ void *ishtp_cl_get_dma_send_buf(struct ishtp_device *dev,
int required_slots = (size / DMA_SLOT_SIZE)
+ 1 * (size % DMA_SLOT_SIZE != 0);
+ if (!dev->ishtp_dma_tx_map) {
+ dev_err(dev->devc, "Fail to allocate Tx map\n");
+ return NULL;
+ }
+
spin_lock_irqsave(&dev->ishtp_dma_tx_lock, flags);
for (i = 0; i <= (dev->ishtp_dma_num_slots - required_slots); i++) {
free = 1;
@@ -150,6 +155,11 @@ void ishtp_cl_release_dma_acked_mem(struct ishtp_device *dev,
return;
}
+ if (!dev->ishtp_dma_tx_map) {
+ dev_err(dev->devc, "Fail to allocate Tx map\n");
+ return;
+ }
+
i = (msg_addr - dev->ishtp_host_dma_tx_buf) / DMA_SLOT_SIZE;
spin_lock_irqsave(&dev->ishtp_dma_tx_lock, flags);
for (j = 0; j < acked_slots; j++) {
diff --git a/drivers/hid/uhid.c b/drivers/hid/uhid.c
index fc06d8b..ba0ca652 100644
--- a/drivers/hid/uhid.c
+++ b/drivers/hid/uhid.c
@@ -395,6 +395,7 @@ struct hid_ll_driver uhid_hid_driver = {
.parse = uhid_hid_parse,
.raw_request = uhid_hid_raw_request,
.output_report = uhid_hid_output_report,
+ .max_buffer_size = UHID_DATA_MAX,
};
EXPORT_SYMBOL_GPL(uhid_hid_driver);
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index 5e479d5..47fb412 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -315,6 +315,10 @@ void vmbus_disconnect(void)
*/
struct vmbus_channel *relid2channel(u32 relid)
{
+ if (vmbus_connection.channels == NULL) {
+ pr_warn_once("relid2channel: relid=%d: No channels mapped!\n", relid);
+ return NULL;
+ }
if (WARN_ON(relid >= MAX_CHANNEL_RELIDS))
return NULL;
return READ_ONCE(vmbus_connection.channels[relid]);
diff --git a/drivers/hwmon/adt7475.c b/drivers/hwmon/adt7475.c
index 9d5b019..6b84822 100644
--- a/drivers/hwmon/adt7475.c
+++ b/drivers/hwmon/adt7475.c
@@ -486,10 +486,10 @@ static ssize_t temp_store(struct device *dev, struct device_attribute *attr,
val = (temp - val) / 1000;
if (sattr->index != 1) {
- data->temp[HYSTERSIS][sattr->index] &= 0xF0;
+ data->temp[HYSTERSIS][sattr->index] &= 0x0F;
data->temp[HYSTERSIS][sattr->index] |= (val & 0xF) << 4;
} else {
- data->temp[HYSTERSIS][sattr->index] &= 0x0F;
+ data->temp[HYSTERSIS][sattr->index] &= 0xF0;
data->temp[HYSTERSIS][sattr->index] |= (val & 0xF);
}
@@ -554,11 +554,11 @@ static ssize_t temp_st_show(struct device *dev, struct device_attribute *attr,
val = data->enh_acoustics[0] & 0xf;
break;
case 1:
- val = (data->enh_acoustics[1] >> 4) & 0xf;
+ val = data->enh_acoustics[1] & 0xf;
break;
case 2:
default:
- val = data->enh_acoustics[1] & 0xf;
+ val = (data->enh_acoustics[1] >> 4) & 0xf;
break;
}
diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
index 42b84eb..eaae5de 100644
--- a/drivers/hwmon/coretemp.c
+++ b/drivers/hwmon/coretemp.c
@@ -550,66 +550,49 @@ static void coretemp_remove_core(struct platform_data *pdata, int indx)
ida_free(&pdata->ida, indx - BASE_SYSFS_ATTR_NO);
}
-static int coretemp_probe(struct platform_device *pdev)
+static int coretemp_device_add(int zoneid)
{
- struct device *dev = &pdev->dev;
+ struct platform_device *pdev;
struct platform_data *pdata;
+ int err;
/* Initialize the per-zone data structures */
- pdata = devm_kzalloc(dev, sizeof(struct platform_data), GFP_KERNEL);
+ pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
if (!pdata)
return -ENOMEM;
- pdata->pkg_id = pdev->id;
+ pdata->pkg_id = zoneid;
ida_init(&pdata->ida);
- platform_set_drvdata(pdev, pdata);
-
- pdata->hwmon_dev = devm_hwmon_device_register_with_groups(dev, DRVNAME,
- pdata, NULL);
- return PTR_ERR_OR_ZERO(pdata->hwmon_dev);
-}
-
-static int coretemp_remove(struct platform_device *pdev)
-{
- struct platform_data *pdata = platform_get_drvdata(pdev);
- int i;
-
- for (i = MAX_CORE_DATA - 1; i >= 0; --i)
- if (pdata->core_data[i])
- coretemp_remove_core(pdata, i);
-
- ida_destroy(&pdata->ida);
- return 0;
-}
-
-static struct platform_driver coretemp_driver = {
- .driver = {
- .name = DRVNAME,
- },
- .probe = coretemp_probe,
- .remove = coretemp_remove,
-};
-
-static struct platform_device *coretemp_device_add(unsigned int cpu)
-{
- int err, zoneid = topology_logical_die_id(cpu);
- struct platform_device *pdev;
-
- if (zoneid < 0)
- return ERR_PTR(-ENOMEM);
pdev = platform_device_alloc(DRVNAME, zoneid);
- if (!pdev)
- return ERR_PTR(-ENOMEM);
-
- err = platform_device_add(pdev);
- if (err) {
- platform_device_put(pdev);
- return ERR_PTR(err);
+ if (!pdev) {
+ err = -ENOMEM;
+ goto err_free_pdata;
}
+ err = platform_device_add(pdev);
+ if (err)
+ goto err_put_dev;
+
+ platform_set_drvdata(pdev, pdata);
zone_devices[zoneid] = pdev;
- return pdev;
+ return 0;
+
+err_put_dev:
+ platform_device_put(pdev);
+err_free_pdata:
+ kfree(pdata);
+ return err;
+}
+
+static void coretemp_device_remove(int zoneid)
+{
+ struct platform_device *pdev = zone_devices[zoneid];
+ struct platform_data *pdata = platform_get_drvdata(pdev);
+
+ ida_destroy(&pdata->ida);
+ kfree(pdata);
+ platform_device_unregister(pdev);
}
static int coretemp_cpu_online(unsigned int cpu)
@@ -633,7 +616,10 @@ static int coretemp_cpu_online(unsigned int cpu)
if (!cpu_has(c, X86_FEATURE_DTHERM))
return -ENODEV;
- if (!pdev) {
+ pdata = platform_get_drvdata(pdev);
+ if (!pdata->hwmon_dev) {
+ struct device *hwmon;
+
/* Check the microcode version of the CPU */
if (chk_ucode_version(cpu))
return -EINVAL;
@@ -644,9 +630,11 @@ static int coretemp_cpu_online(unsigned int cpu)
* online. So, initialize per-pkg data structures and
* then bring this core online.
*/
- pdev = coretemp_device_add(cpu);
- if (IS_ERR(pdev))
- return PTR_ERR(pdev);
+ hwmon = hwmon_device_register_with_groups(&pdev->dev, DRVNAME,
+ pdata, NULL);
+ if (IS_ERR(hwmon))
+ return PTR_ERR(hwmon);
+ pdata->hwmon_dev = hwmon;
/*
* Check whether pkgtemp support is available.
@@ -656,7 +644,6 @@ static int coretemp_cpu_online(unsigned int cpu)
coretemp_add_core(pdev, cpu, 1);
}
- pdata = platform_get_drvdata(pdev);
/*
* Check whether a thread sibling is already online. If not add the
* interface for this CPU core.
@@ -675,18 +662,14 @@ static int coretemp_cpu_offline(unsigned int cpu)
struct temp_data *tdata;
int i, indx = -1, target;
- /*
- * Don't execute this on suspend as the device remove locks
- * up the machine.
- */
+ /* No need to tear down any interfaces for suspend */
if (cpuhp_tasks_frozen)
return 0;
/* If the physical CPU device does not exist, just return */
- if (!pdev)
- return 0;
-
pd = platform_get_drvdata(pdev);
+ if (!pd->hwmon_dev)
+ return 0;
for (i = 0; i < NUM_REAL_CORES; i++) {
if (pd->cpu_map[i] == topology_core_id(cpu)) {
@@ -718,13 +701,14 @@ static int coretemp_cpu_offline(unsigned int cpu)
}
/*
- * If all cores in this pkg are offline, remove the device. This
- * will invoke the platform driver remove function, which cleans up
- * the rest.
+ * If all cores in this pkg are offline, remove the interface.
*/
+ tdata = pd->core_data[PKG_SYSFS_ATTR_NO];
if (cpumask_empty(&pd->cpumask)) {
- zone_devices[topology_logical_die_id(cpu)] = NULL;
- platform_device_unregister(pdev);
+ if (tdata)
+ coretemp_remove_core(pd, PKG_SYSFS_ATTR_NO);
+ hwmon_device_unregister(pd->hwmon_dev);
+ pd->hwmon_dev = NULL;
return 0;
}
@@ -732,7 +716,6 @@ static int coretemp_cpu_offline(unsigned int cpu)
* Check whether this core is the target for the package
* interface. We need to assign it to some other cpu.
*/
- tdata = pd->core_data[PKG_SYSFS_ATTR_NO];
if (tdata && tdata->cpu == cpu) {
target = cpumask_first(&pd->cpumask);
mutex_lock(&tdata->update_lock);
@@ -751,7 +734,7 @@ static enum cpuhp_state coretemp_hp_online;
static int __init coretemp_init(void)
{
- int err;
+ int i, err;
/*
* CPUID.06H.EAX[0] indicates whether the CPU has thermal
@@ -767,20 +750,22 @@ static int __init coretemp_init(void)
if (!zone_devices)
return -ENOMEM;
- err = platform_driver_register(&coretemp_driver);
- if (err)
- goto outzone;
+ for (i = 0; i < max_zones; i++) {
+ err = coretemp_device_add(i);
+ if (err)
+ goto outzone;
+ }
err = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hwmon/coretemp:online",
coretemp_cpu_online, coretemp_cpu_offline);
if (err < 0)
- goto outdrv;
+ goto outzone;
coretemp_hp_online = err;
return 0;
-outdrv:
- platform_driver_unregister(&coretemp_driver);
outzone:
+ while (i--)
+ coretemp_device_remove(i);
kfree(zone_devices);
return err;
}
@@ -788,8 +773,11 @@ module_init(coretemp_init)
static void __exit coretemp_exit(void)
{
+ int i;
+
cpuhp_remove_state(coretemp_hp_online);
- platform_driver_unregister(&coretemp_driver);
+ for (i = 0; i < max_zones; i++)
+ coretemp_device_remove(i);
kfree(zone_devices);
}
module_exit(coretemp_exit)
diff --git a/drivers/hwmon/ftsteutates.c b/drivers/hwmon/ftsteutates.c
index ceffc76..2998d8c 100644
--- a/drivers/hwmon/ftsteutates.c
+++ b/drivers/hwmon/ftsteutates.c
@@ -12,6 +12,7 @@
#include <linux/i2c.h>
#include <linux/init.h>
#include <linux/jiffies.h>
+#include <linux/math.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
@@ -347,13 +348,15 @@ static ssize_t in_value_show(struct device *dev,
{
struct fts_data *data = dev_get_drvdata(dev);
int index = to_sensor_dev_attr(devattr)->index;
- int err;
+ int value, err;
err = fts_update_device(data);
if (err < 0)
return err;
- return sprintf(buf, "%u\n", data->volt[index]);
+ value = DIV_ROUND_CLOSEST(data->volt[index] * 3300, 255);
+
+ return sprintf(buf, "%d\n", value);
}
static ssize_t temp_value_show(struct device *dev,
@@ -361,13 +364,15 @@ static ssize_t temp_value_show(struct device *dev,
{
struct fts_data *data = dev_get_drvdata(dev);
int index = to_sensor_dev_attr(devattr)->index;
- int err;
+ int value, err;
err = fts_update_device(data);
if (err < 0)
return err;
- return sprintf(buf, "%u\n", data->temp_input[index]);
+ value = (data->temp_input[index] - 64) * 1000;
+
+ return sprintf(buf, "%d\n", value);
}
static ssize_t temp_fault_show(struct device *dev,
@@ -436,13 +441,15 @@ static ssize_t fan_value_show(struct device *dev,
{
struct fts_data *data = dev_get_drvdata(dev);
int index = to_sensor_dev_attr(devattr)->index;
- int err;
+ int value, err;
err = fts_update_device(data);
if (err < 0)
return err;
- return sprintf(buf, "%u\n", data->fan_input[index]);
+ value = data->fan_input[index] * 60;
+
+ return sprintf(buf, "%d\n", value);
}
static ssize_t fan_source_show(struct device *dev,
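The ftsteutates hunks convert raw register values into the units the hwmon sysfs ABI expects instead of exporting raw bytes: the 8-bit voltage reading is scaled to millivolts against a 3.3 V full scale, the temperature register (offset by 64) becomes millidegrees Celsius, and the fan count (apparently per second) is multiplied by 60 to give RPM. A compact sketch of the conversions, with illustrative raw_* names:

	int mv  = DIV_ROUND_CLOSEST(raw_volt * 3300, 255);	/* 0..255 maps to 0..3300 mV */
	int mc  = (raw_temp - 64) * 1000;			/* 64-count offset, millidegrees C */
	int rpm = raw_fan * 60;					/* register counts per second */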
diff --git a/drivers/hwmon/hwmon.c b/drivers/hwmon/hwmon.c
index 3ae9619..fd3b277 100644
--- a/drivers/hwmon/hwmon.c
+++ b/drivers/hwmon/hwmon.c
@@ -736,6 +736,7 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
{
struct hwmon_device *hwdev;
struct device *hdev;
+ struct device *tdev = dev;
int i, err, id;
/* Complain about invalid characters in hwmon name attribute */
@@ -793,7 +794,9 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
hwdev->name = name;
hdev->class = &hwmon_class;
hdev->parent = dev;
- hdev->of_node = dev ? dev->of_node : NULL;
+ while (tdev && !tdev->of_node)
+ tdev = tdev->parent;
+ hdev->of_node = tdev ? tdev->of_node : NULL;
hwdev->chip = chip;
dev_set_drvdata(hdev, drvdata);
dev_set_name(hdev, HWMON_ID_FORMAT, id);
@@ -805,7 +808,7 @@ __hwmon_device_register(struct device *dev, const char *name, void *drvdata,
INIT_LIST_HEAD(&hwdev->tzdata);
- if (dev && dev->of_node && chip && chip->ops->read &&
+ if (hdev->of_node && chip && chip->ops->read &&
chip->info[0]->type == hwmon_chip &&
(chip->info[0]->config[0] & HWMON_C_REGISTER_TZ)) {
err = hwmon_thermal_register_sensors(hdev);
diff --git a/drivers/hwmon/ina3221.c b/drivers/hwmon/ina3221.c
index 14586b2..bc90631 100644
--- a/drivers/hwmon/ina3221.c
+++ b/drivers/hwmon/ina3221.c
@@ -772,7 +772,7 @@ static int ina3221_probe_child_from_dt(struct device *dev,
return ret;
} else if (val > INA3221_CHANNEL3) {
dev_err(dev, "invalid reg %d of %pOFn\n", val, child);
- return ret;
+ return -EINVAL;
}
input = &ina->inputs[val];
diff --git a/drivers/hwmon/it87.c b/drivers/hwmon/it87.c
index 1f93134..485d68a 100644
--- a/drivers/hwmon/it87.c
+++ b/drivers/hwmon/it87.c
@@ -486,6 +486,8 @@ static const struct it87_devices it87_devices[] = {
#define has_pwm_freq2(data) ((data)->features & FEAT_PWM_FREQ2)
#define has_six_temp(data) ((data)->features & FEAT_SIX_TEMP)
#define has_vin3_5v(data) ((data)->features & FEAT_VIN3_5V)
+#define has_scaling(data) ((data)->features & (FEAT_12MV_ADC | \
+ FEAT_10_9MV_ADC))
struct it87_sio_data {
int sioaddr;
@@ -3098,7 +3100,7 @@ static int it87_probe(struct platform_device *pdev)
"Detected broken BIOS defaults, disabling PWM interface\n");
/* Starting with IT8721F, we handle scaling of internal voltages */
- if (has_12mv_adc(data)) {
+ if (has_scaling(data)) {
if (sio_data->internal & BIT(0))
data->in_scaled |= BIT(3); /* in3 is AVCC */
if (sio_data->internal & BIT(1))
diff --git a/drivers/hwmon/ltc2945.c b/drivers/hwmon/ltc2945.c
index 9adebb5..c06ab73 100644
--- a/drivers/hwmon/ltc2945.c
+++ b/drivers/hwmon/ltc2945.c
@@ -248,6 +248,8 @@ static ssize_t ltc2945_value_store(struct device *dev,
/* convert to register value, then clamp and write result */
regval = ltc2945_val_to_reg(dev, reg, val);
+ if (regval < 0)
+ return regval;
if (is_power_reg(reg)) {
regval = clamp_val(regval, 0, 0xffffff);
regbuf[0] = regval >> 16;
diff --git a/drivers/hwmon/ltc2992.c b/drivers/hwmon/ltc2992.c
index 2a4bed0a..009a0a5 100644
--- a/drivers/hwmon/ltc2992.c
+++ b/drivers/hwmon/ltc2992.c
@@ -324,6 +324,7 @@ static int ltc2992_config_gpio(struct ltc2992_state *st)
st->gc.label = name;
st->gc.parent = &st->client->dev;
st->gc.owner = THIS_MODULE;
+ st->gc.can_sleep = true;
st->gc.base = -1;
st->gc.names = st->gpio_names;
st->gc.ngpio = ARRAY_SIZE(st->gpio_names);
diff --git a/drivers/hwmon/mlxreg-fan.c b/drivers/hwmon/mlxreg-fan.c
index 89fe7b9..6ecc45c 100644
--- a/drivers/hwmon/mlxreg-fan.c
+++ b/drivers/hwmon/mlxreg-fan.c
@@ -151,6 +151,12 @@ mlxreg_fan_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
if (err)
return err;
+ if (MLXREG_FAN_GET_FAULT(regval, tacho->mask)) {
+ /* FAN is broken - return zero for FAN speed. */
+ *val = 0;
+ return 0;
+ }
+
*val = MLXREG_FAN_GET_RPM(regval, fan->divider,
fan->samples);
break;
diff --git a/drivers/hwmon/pmbus/adm1266.c b/drivers/hwmon/pmbus/adm1266.c
index ec5f932..1ac2b2f 100644
--- a/drivers/hwmon/pmbus/adm1266.c
+++ b/drivers/hwmon/pmbus/adm1266.c
@@ -301,6 +301,7 @@ static int adm1266_config_gpio(struct adm1266_data *data)
data->gc.label = name;
data->gc.parent = &data->client->dev;
data->gc.owner = THIS_MODULE;
+ data->gc.can_sleep = true;
data->gc.base = -1;
data->gc.names = data->gpio_names;
data->gc.ngpio = ARRAY_SIZE(data->gpio_names);
diff --git a/drivers/hwmon/pmbus/ucd9000.c b/drivers/hwmon/pmbus/ucd9000.c
index 75fc770..3daaf22 100644
--- a/drivers/hwmon/pmbus/ucd9000.c
+++ b/drivers/hwmon/pmbus/ucd9000.c
@@ -7,6 +7,7 @@
*/
#include <linux/debugfs.h>
+#include <linux/delay.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_device.h>
@@ -16,6 +17,7 @@
#include <linux/i2c.h>
#include <linux/pmbus.h>
#include <linux/gpio/driver.h>
+#include <linux/timekeeping.h>
#include "pmbus.h"
enum chips { ucd9000, ucd90120, ucd90124, ucd90160, ucd90320, ucd9090,
@@ -65,6 +67,7 @@ struct ucd9000_data {
struct gpio_chip gpio;
#endif
struct dentry *debugfs;
+ ktime_t write_time;
};
#define to_ucd9000_data(_info) container_of(_info, struct ucd9000_data, info)
@@ -73,6 +76,73 @@ struct ucd9000_debugfs_entry {
u8 index;
};
+/*
+ * It has been observed that the UCD90320 randomly fails register access when
+ * doing another access right on the back of a register write. To mitigate this
+ * make sure that there is a minimum delay between a write access and the
+ * following access. The 250us is based on experimental data. At a delay of
+ * 200us the issue seems to go away. Add a bit of extra margin to allow for
+ * system to system differences.
+ */
+#define UCD90320_WAIT_DELAY_US 250
+
+static inline void ucd90320_wait(const struct ucd9000_data *data)
+{
+ s64 delta = ktime_us_delta(ktime_get(), data->write_time);
+
+ if (delta < UCD90320_WAIT_DELAY_US)
+ udelay(UCD90320_WAIT_DELAY_US - delta);
+}
+
+static int ucd90320_read_word_data(struct i2c_client *client, int page,
+ int phase, int reg)
+{
+ const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
+ struct ucd9000_data *data = to_ucd9000_data(info);
+
+ if (reg >= PMBUS_VIRT_BASE)
+ return -ENXIO;
+
+ ucd90320_wait(data);
+ return pmbus_read_word_data(client, page, phase, reg);
+}
+
+static int ucd90320_read_byte_data(struct i2c_client *client, int page, int reg)
+{
+ const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
+ struct ucd9000_data *data = to_ucd9000_data(info);
+
+ ucd90320_wait(data);
+ return pmbus_read_byte_data(client, page, reg);
+}
+
+static int ucd90320_write_word_data(struct i2c_client *client, int page,
+ int reg, u16 word)
+{
+ const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
+ struct ucd9000_data *data = to_ucd9000_data(info);
+ int ret;
+
+ ucd90320_wait(data);
+ ret = pmbus_write_word_data(client, page, reg, word);
+ data->write_time = ktime_get();
+
+ return ret;
+}
+
+static int ucd90320_write_byte(struct i2c_client *client, int page, u8 value)
+{
+ const struct pmbus_driver_info *info = pmbus_get_driver_info(client);
+ struct ucd9000_data *data = to_ucd9000_data(info);
+ int ret;
+
+ ucd90320_wait(data);
+ ret = pmbus_write_byte(client, page, value);
+ data->write_time = ktime_get();
+
+ return ret;
+}
+
static int ucd9000_get_fan_config(struct i2c_client *client, int fan)
{
int fan_config = 0;
@@ -598,6 +668,11 @@ static int ucd9000_probe(struct i2c_client *client)
info->read_byte_data = ucd9000_read_byte_data;
info->func[0] |= PMBUS_HAVE_FAN12 | PMBUS_HAVE_STATUS_FAN12
| PMBUS_HAVE_FAN34 | PMBUS_HAVE_STATUS_FAN34;
+ } else if (mid->driver_data == ucd90320) {
+ info->read_byte_data = ucd90320_read_byte_data;
+ info->read_word_data = ucd90320_read_word_data;
+ info->write_byte = ucd90320_write_byte;
+ info->write_word_data = ucd90320_write_word_data;
}
ucd9000_probe_gpio(client, mid, data);
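The ucd9000 hunks add a UCD90320-specific access throttle: every read or write helper first waits until at least UCD90320_WAIT_DELAY_US (250 us, chosen with margin over the 200 us at which the failures stop) has passed since the previous write, and the write helpers record a fresh timestamp afterwards. A reduced sketch of the ktime-based rate limiting (example_* names and do_bus_write() are hypothetical):

	static void example_wait(const struct example_data *data)
	{
		s64 elapsed = ktime_us_delta(ktime_get(), data->write_time);

		if (elapsed < EXAMPLE_WAIT_DELAY_US)
			udelay(EXAMPLE_WAIT_DELAY_US - elapsed);	/* top up to the minimum gap */
	}

	static int example_write(struct example_data *data, u16 word)
	{
		int ret;

		example_wait(data);			/* honour the gap after the previous write */
		ret = do_bus_write(data, word);
		data->write_time = ktime_get();		/* restart the clock for the next access */

		return ret;
	}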
diff --git a/drivers/hwmon/tmp513.c b/drivers/hwmon/tmp513.c
index 47bbe47..7d5f744 100644
--- a/drivers/hwmon/tmp513.c
+++ b/drivers/hwmon/tmp513.c
@@ -758,7 +758,7 @@ static int tmp51x_probe(struct i2c_client *client)
static struct i2c_driver tmp51x_driver = {
.driver = {
.name = "tmp51x",
- .of_match_table = of_match_ptr(tmp51x_of_match),
+ .of_match_table = tmp51x_of_match,
},
.probe_new = tmp51x_probe,
.id_table = tmp51x_id,
diff --git a/drivers/hwmon/xgene-hwmon.c b/drivers/hwmon/xgene-hwmon.c
index 382ef03..a64f768 100644
--- a/drivers/hwmon/xgene-hwmon.c
+++ b/drivers/hwmon/xgene-hwmon.c
@@ -768,6 +768,7 @@ static int xgene_hwmon_remove(struct platform_device *pdev)
{
struct xgene_hwmon_dev *ctx = platform_get_drvdata(pdev);
+ cancel_work_sync(&ctx->workq);
hwmon_device_unregister(ctx->hwmon_dev);
kfifo_free(&ctx->async_msg_fifo);
if (acpi_disabled)
diff --git a/drivers/hwtracing/coresight/coresight-cti-core.c b/drivers/hwtracing/coresight/coresight-cti-core.c
index dcd607a..932e17f 100644
--- a/drivers/hwtracing/coresight/coresight-cti-core.c
+++ b/drivers/hwtracing/coresight/coresight-cti-core.c
@@ -151,9 +151,16 @@ static int cti_disable_hw(struct cti_drvdata *drvdata)
{
struct cti_config *config = &drvdata->config;
struct coresight_device *csdev = drvdata->csdev;
+ int ret = 0;
spin_lock(&drvdata->spinlock);
+ /* don't allow negative refcounts, return an error */
+ if (!atomic_read(&drvdata->config.enable_req_count)) {
+ ret = -EINVAL;
+ goto cti_not_disabled;
+ }
+
/* check refcount - disable on 0 */
if (atomic_dec_return(&drvdata->config.enable_req_count) > 0)
goto cti_not_disabled;
@@ -171,12 +178,12 @@ static int cti_disable_hw(struct cti_drvdata *drvdata)
coresight_disclaim_device_unlocked(csdev);
CS_LOCK(drvdata->base);
spin_unlock(&drvdata->spinlock);
- return 0;
+ return ret;
/* not disabled this call */
cti_not_disabled:
spin_unlock(&drvdata->spinlock);
- return 0;
+ return ret;
}
void cti_write_single_reg(struct cti_drvdata *drvdata, int offset, u32 value)
diff --git a/drivers/hwtracing/coresight/coresight-cti-sysfs.c b/drivers/hwtracing/coresight/coresight-cti-sysfs.c
index 7ff7e77..92fc300 100644
--- a/drivers/hwtracing/coresight/coresight-cti-sysfs.c
+++ b/drivers/hwtracing/coresight/coresight-cti-sysfs.c
@@ -108,10 +108,19 @@ static ssize_t enable_store(struct device *dev,
if (ret)
return ret;
- if (val)
+ if (val) {
+ ret = pm_runtime_resume_and_get(dev->parent);
+ if (ret)
+ return ret;
ret = cti_enable(drvdata->csdev);
- else
+ if (ret)
+ pm_runtime_put(dev->parent);
+ } else {
ret = cti_disable(drvdata->csdev);
+ if (!ret)
+ pm_runtime_put(dev->parent);
+ }
+
if (ret)
return ret;
return size;
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
index e24252e..2b22343 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
@@ -384,8 +384,10 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
etm4x_relaxed_write32(csa, config->vipcssctlr, TRCVIPCSSCTLR);
for (i = 0; i < drvdata->nrseqstate - 1; i++)
etm4x_relaxed_write32(csa, config->seq_ctrl[i], TRCSEQEVRn(i));
- etm4x_relaxed_write32(csa, config->seq_rst, TRCSEQRSTEVR);
- etm4x_relaxed_write32(csa, config->seq_state, TRCSEQSTR);
+ if (drvdata->nrseqstate) {
+ etm4x_relaxed_write32(csa, config->seq_rst, TRCSEQRSTEVR);
+ etm4x_relaxed_write32(csa, config->seq_state, TRCSEQSTR);
+ }
etm4x_relaxed_write32(csa, config->ext_inp, TRCEXTINSELR);
for (i = 0; i < drvdata->nr_cntr; i++) {
etm4x_relaxed_write32(csa, config->cntrldvr[i], TRCCNTRLDVRn(i));
@@ -409,7 +411,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
if (etm4x_sspcicrn_present(drvdata, i))
etm4x_relaxed_write32(csa, config->ss_pe_cmp[i], TRCSSPCICRn(i));
}
- for (i = 0; i < drvdata->nr_addr_cmp; i++) {
+ for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
etm4x_relaxed_write64(csa, config->addr_val[i], TRCACVRn(i));
etm4x_relaxed_write64(csa, config->addr_acc[i], TRCACATRn(i));
}
@@ -949,25 +951,21 @@ static bool etm4_init_iomem_access(struct etmv4_drvdata *drvdata,
struct csdev_access *csa)
{
u32 devarch = readl_relaxed(drvdata->base + TRCDEVARCH);
- u32 idr1 = readl_relaxed(drvdata->base + TRCIDR1);
/*
* All ETMs must implement TRCDEVARCH to indicate that
- * the component is an ETMv4. To support any broken
- * implementations we fall back to TRCIDR1 check, which
- * is not really reliable.
+ * the component is an ETMv4. Even though TRCIDR1 also
+ * contains the information, it is part of the "Trace"
+ * register and must be accessed with the OSLK cleared,
+ * with MMIO. But we cannot touch the OSLK until we are
+ * sure this is an ETM. So rely only on the TRCDEVARCH.
*/
- if ((devarch & ETM_DEVARCH_ID_MASK) == ETM_DEVARCH_ETMv4x_ARCH) {
- drvdata->arch = etm_devarch_to_arch(devarch);
- } else {
- pr_warn("CPU%d: ETM4x incompatible TRCDEVARCH: %x, falling back to TRCIDR1\n",
- smp_processor_id(), devarch);
-
- if (ETM_TRCIDR1_ARCH_MAJOR(idr1) != ETM_TRCIDR1_ARCH_ETMv4)
- return false;
- drvdata->arch = etm_trcidr_to_arch(idr1);
+ if ((devarch & ETM_DEVARCH_ID_MASK) != ETM_DEVARCH_ETMv4x_ARCH) {
+ pr_warn_once("TRCDEVARCH doesn't match ETMv4 architecture\n");
+ return false;
}
+ drvdata->arch = etm_devarch_to_arch(devarch);
*csa = CSDEV_ACCESS_IOMEM(drvdata->base);
return true;
}
@@ -1618,8 +1616,10 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata)
for (i = 0; i < drvdata->nrseqstate - 1; i++)
state->trcseqevr[i] = etm4x_read32(csa, TRCSEQEVRn(i));
- state->trcseqrstevr = etm4x_read32(csa, TRCSEQRSTEVR);
- state->trcseqstr = etm4x_read32(csa, TRCSEQSTR);
+ if (drvdata->nrseqstate) {
+ state->trcseqrstevr = etm4x_read32(csa, TRCSEQRSTEVR);
+ state->trcseqstr = etm4x_read32(csa, TRCSEQSTR);
+ }
state->trcextinselr = etm4x_read32(csa, TRCEXTINSELR);
for (i = 0; i < drvdata->nr_cntr; i++) {
@@ -1731,8 +1731,10 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata)
for (i = 0; i < drvdata->nrseqstate - 1; i++)
etm4x_relaxed_write32(csa, state->trcseqevr[i], TRCSEQEVRn(i));
- etm4x_relaxed_write32(csa, state->trcseqrstevr, TRCSEQRSTEVR);
- etm4x_relaxed_write32(csa, state->trcseqstr, TRCSEQSTR);
+ if (drvdata->nrseqstate) {
+ etm4x_relaxed_write32(csa, state->trcseqrstevr, TRCSEQRSTEVR);
+ etm4x_relaxed_write32(csa, state->trcseqstr, TRCSEQSTR);
+ }
etm4x_relaxed_write32(csa, state->trcextinselr, TRCEXTINSELR);
for (i = 0; i < drvdata->nr_cntr; i++) {
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index 794b296..2305f32 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -669,14 +669,12 @@
* TRCDEVARCH - CoreSight architected register
* - Bits[15:12] - Major version
* - Bits[19:16] - Minor version
- * TRCIDR1 - ETM architected register
- * - Bits[11:8] - Major version
- * - Bits[7:4] - Minor version
- * We must rely on TRCDEVARCH for the version information,
- * however we don't want to break the support for potential
- * old implementations which might not implement it. Thus
- * we fall back to TRCIDR1 if TRCDEVARCH is not implemented
- * for memory mapped components.
+ *
+ * We must rely only on TRCDEVARCH for the version information. Even though
+ * TRCIDR1 also provides the architecture version, it is a "Trace" register
+ * and as such must be accessed only with Trace power domain ON. This may
+ * not be available at probe time.
+ *
* Now to make certain decisions easier based on the version
* we use an internal representation of the version in the
* driver, as follows :
@@ -702,12 +700,6 @@ static inline u8 etm_devarch_to_arch(u32 devarch)
ETM_DEVARCH_REVISION(devarch));
}
-static inline u8 etm_trcidr_to_arch(u32 trcidr1)
-{
- return ETM_ARCH_VERSION(ETM_TRCIDR1_ARCH_MAJOR(trcidr1),
- ETM_TRCIDR1_ARCH_MINOR(trcidr1));
-}
-
enum etm_impdef_type {
ETM4_IMPDEF_HISI_CORE_COMMIT,
ETM4_IMPDEF_FEATURE_MAX,
diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
index a1100e3..4e75232 100644
--- a/drivers/i2c/busses/i2c-designware-common.c
+++ b/drivers/i2c/busses/i2c-designware-common.c
@@ -351,7 +351,8 @@ u32 i2c_dw_scl_hcnt(u32 ic_clk, u32 tSYMBOL, u32 tf, int cond, int offset)
*
* If your hardware is free from tHD;STA issue, try this one.
*/
- return DIV_ROUND_CLOSEST(ic_clk * tSYMBOL, MICRO) - 8 + offset;
+ return DIV_ROUND_CLOSEST_ULL((u64)ic_clk * tSYMBOL, MICRO) -
+ 8 + offset;
else
/*
* Conditional expression:
@@ -367,7 +368,8 @@ u32 i2c_dw_scl_hcnt(u32 ic_clk, u32 tSYMBOL, u32 tf, int cond, int offset)
* The reason why we need to take into account "tf" here,
* is the same as described in i2c_dw_scl_lcnt().
*/
- return DIV_ROUND_CLOSEST(ic_clk * (tSYMBOL + tf), MICRO) - 3 + offset;
+ return DIV_ROUND_CLOSEST_ULL((u64)ic_clk * (tSYMBOL + tf), MICRO) -
+ 3 + offset;
}
u32 i2c_dw_scl_lcnt(u32 ic_clk, u32 tLOW, u32 tf, int offset)
@@ -383,7 +385,8 @@ u32 i2c_dw_scl_lcnt(u32 ic_clk, u32 tLOW, u32 tf, int offset)
* account the fall time of SCL signal (tf). Default tf value
* should be 0.3 us, for safety.
*/
- return DIV_ROUND_CLOSEST(ic_clk * (tLOW + tf), MICRO) - 1 + offset;
+ return DIV_ROUND_CLOSEST_ULL((u64)ic_clk * (tLOW + tf), MICRO) -
+ 1 + offset;
}
int i2c_dw_set_sda_hold(struct dw_i2c_dev *dev)
@@ -462,7 +465,7 @@ void __i2c_dw_disable(struct dw_i2c_dev *dev)
dev_warn(dev->dev, "timeout in disabling adapter\n");
}
-unsigned long i2c_dw_clk_rate(struct dw_i2c_dev *dev)
+u32 i2c_dw_clk_rate(struct dw_i2c_dev *dev)
{
/*
* Clock is not necessary if we got LCNT/HCNT values directly from
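The three hunks above widen the hcnt/lcnt intermediate product to 64 bits before dividing by MICRO. A standalone sketch of why the 32-bit product can wrap; the clock and timing values below are illustrative, not taken from any board:

#include <stdint.h>
#include <stdio.h>

#define MICRO 1000000UL

/* 32-bit rounding divide, as DIV_ROUND_CLOSEST() on u32 operands */
static uint32_t cnt32(uint32_t ic_clk_khz, uint32_t t_ns)
{
	uint32_t prod = ic_clk_khz * t_ns;	/* wraps past 2^32 - 1 */

	return (prod + MICRO / 2) / MICRO;
}

/* 64-bit version, matching DIV_ROUND_CLOSEST_ULL((u64)ic_clk * t, MICRO) */
static uint32_t cnt64(uint32_t ic_clk_khz, uint32_t t_ns)
{
	uint64_t prod = (uint64_t)ic_clk_khz * t_ns;

	return (uint32_t)((prod + MICRO / 2) / MICRO);
}

int main(void)
{
	uint32_t ic_clk = 1000000;	/* 1 GHz input clock, in kHz */
	uint32_t t_low = 4700;		/* standard-mode tLOW in ns */

	/* 1000000 * 4700 = 4.7e9 > 2^32 - 1, so the 32-bit product wraps */
	printf("32-bit math: %u\n", cnt32(ic_clk, t_low));	/* prints 405 */
	printf("64-bit math: %u\n", cnt64(ic_clk, t_low));	/* prints 4700 */
	return 0;
}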
diff --git a/drivers/i2c/busses/i2c-designware-core.h b/drivers/i2c/busses/i2c-designware-core.h
index 9be81db..59b36e0 100644
--- a/drivers/i2c/busses/i2c-designware-core.h
+++ b/drivers/i2c/busses/i2c-designware-core.h
@@ -310,7 +310,7 @@ int i2c_dw_init_regmap(struct dw_i2c_dev *dev);
u32 i2c_dw_scl_hcnt(u32 ic_clk, u32 tSYMBOL, u32 tf, int cond, int offset);
u32 i2c_dw_scl_lcnt(u32 ic_clk, u32 tLOW, u32 tf, int offset);
int i2c_dw_set_sda_hold(struct dw_i2c_dev *dev);
-unsigned long i2c_dw_clk_rate(struct dw_i2c_dev *dev);
+u32 i2c_dw_clk_rate(struct dw_i2c_dev *dev);
int i2c_dw_prepare_clk(struct dw_i2c_dev *dev, bool prepare);
int i2c_dw_acquire_lock(struct dw_i2c_dev *dev);
void i2c_dw_release_lock(struct dw_i2c_dev *dev);
diff --git a/drivers/i2c/busses/i2c-designware-pcidrv.c b/drivers/i2c/busses/i2c-designware-pcidrv.c
index 5b45941..de8dd3e 100644
--- a/drivers/i2c/busses/i2c-designware-pcidrv.c
+++ b/drivers/i2c/busses/i2c-designware-pcidrv.c
@@ -398,6 +398,8 @@ static const struct pci_device_id i2_designware_pci_ids[] = {
{ PCI_VDEVICE(ATI, 0x73a4), navi_amd },
{ PCI_VDEVICE(ATI, 0x73e4), navi_amd },
{ PCI_VDEVICE(ATI, 0x73c4), navi_amd },
+ { PCI_VDEVICE(ATI, 0x7444), navi_amd },
+ { PCI_VDEVICE(ATI, 0x7464), navi_amd },
{ 0,}
};
MODULE_DEVICE_TABLE(pci, i2_designware_pci_ids);
diff --git a/drivers/i2c/busses/i2c-hisi.c b/drivers/i2c/busses/i2c-hisi.c
index acf3948..72e43ec 100644
--- a/drivers/i2c/busses/i2c-hisi.c
+++ b/drivers/i2c/busses/i2c-hisi.c
@@ -340,7 +340,11 @@ static irqreturn_t hisi_i2c_irq(int irq, void *context)
hisi_i2c_read_rx_fifo(ctlr);
out:
- if (int_stat & HISI_I2C_INT_TRANS_CPLT || ctlr->xfer_err) {
+ /*
+ * Only use TRANS_CPLT to indicate completion. In error cases we'll
+ * get two interrupts: INT_ERR first, then TRANS_CPLT.
+ */
+ if (int_stat & HISI_I2C_INT_TRANS_CPLT) {
hisi_i2c_disable_int(ctlr, HISI_I2C_INT_ALL);
hisi_i2c_clear_int(ctlr, HISI_I2C_INT_ALL);
complete(ctlr->completion);
diff --git a/drivers/i2c/busses/i2c-imx-lpi2c.c b/drivers/i2c/busses/i2c-imx-lpi2c.c
index 8b9ba05..2018dbc 100644
--- a/drivers/i2c/busses/i2c-imx-lpi2c.c
+++ b/drivers/i2c/busses/i2c-imx-lpi2c.c
@@ -502,10 +502,14 @@ static int lpi2c_imx_xfer(struct i2c_adapter *adapter,
static irqreturn_t lpi2c_imx_isr(int irq, void *dev_id)
{
struct lpi2c_imx_struct *lpi2c_imx = dev_id;
+ unsigned int enabled;
unsigned int temp;
+ enabled = readl(lpi2c_imx->base + LPI2C_MIER);
+
lpi2c_imx_intctrl(lpi2c_imx, 0);
temp = readl(lpi2c_imx->base + LPI2C_MSR);
+ temp &= enabled;
if (temp & MSR_RDF)
lpi2c_imx_read_rxfifo(lpi2c_imx);
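The ISR change above reads LPI2C_MIER and masks the status word with it, so status bits whose interrupts are not enabled are ignored. A minimal sketch of that idiom; the bit positions are assumptions made for the example, not the real MSR layout:

#include <stdint.h>

#define MSR_RDF_EX	(1u << 1)	/* assumed bit positions for the sketch */
#define MSR_TDF_EX	(1u << 0)

/* Act only on events that are both pending and currently enabled, so a
 * stale flag left over from a previous transfer cannot be serviced. */
static uint32_t pending_events(uint32_t status, uint32_t enabled)
{
	return status & enabled;
}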
diff --git a/drivers/i2c/busses/i2c-mv64xxx.c b/drivers/i2c/busses/i2c-mv64xxx.c
index 5c8e94b..103a05e 100644
--- a/drivers/i2c/busses/i2c-mv64xxx.c
+++ b/drivers/i2c/busses/i2c-mv64xxx.c
@@ -150,6 +150,7 @@ struct mv64xxx_i2c_data {
/* Clk div is 2 to the power n, not 2 to the power n + 1 */
bool clk_n_base_0;
struct i2c_bus_recovery_info rinfo;
+ bool atomic;
};
static struct mv64xxx_i2c_regs mv64xxx_i2c_regs_mv64xxx = {
@@ -179,7 +180,10 @@ mv64xxx_i2c_prepare_for_io(struct mv64xxx_i2c_data *drv_data,
u32 dir = 0;
drv_data->cntl_bits = MV64XXX_I2C_REG_CONTROL_ACK |
- MV64XXX_I2C_REG_CONTROL_INTEN | MV64XXX_I2C_REG_CONTROL_TWSIEN;
+ MV64XXX_I2C_REG_CONTROL_TWSIEN;
+
+ if (!drv_data->atomic)
+ drv_data->cntl_bits |= MV64XXX_I2C_REG_CONTROL_INTEN;
if (msg->flags & I2C_M_RD)
dir = 1;
@@ -409,7 +413,8 @@ mv64xxx_i2c_do_action(struct mv64xxx_i2c_data *drv_data)
case MV64XXX_I2C_ACTION_RCV_DATA_STOP:
drv_data->msg->buf[drv_data->byte_posn++] =
readl(drv_data->reg_base + drv_data->reg_offsets.data);
- drv_data->cntl_bits &= ~MV64XXX_I2C_REG_CONTROL_INTEN;
+ if (!drv_data->atomic)
+ drv_data->cntl_bits &= ~MV64XXX_I2C_REG_CONTROL_INTEN;
writel(drv_data->cntl_bits | MV64XXX_I2C_REG_CONTROL_STOP,
drv_data->reg_base + drv_data->reg_offsets.control);
drv_data->block = 0;
@@ -427,7 +432,8 @@ mv64xxx_i2c_do_action(struct mv64xxx_i2c_data *drv_data)
drv_data->rc = -EIO;
fallthrough;
case MV64XXX_I2C_ACTION_SEND_STOP:
- drv_data->cntl_bits &= ~MV64XXX_I2C_REG_CONTROL_INTEN;
+ if (!drv_data->atomic)
+ drv_data->cntl_bits &= ~MV64XXX_I2C_REG_CONTROL_INTEN;
writel(drv_data->cntl_bits | MV64XXX_I2C_REG_CONTROL_STOP,
drv_data->reg_base + drv_data->reg_offsets.control);
drv_data->block = 0;
@@ -575,6 +581,17 @@ mv64xxx_i2c_wait_for_completion(struct mv64xxx_i2c_data *drv_data)
spin_unlock_irqrestore(&drv_data->lock, flags);
}
+static void mv64xxx_i2c_wait_polling(struct mv64xxx_i2c_data *drv_data)
+{
+ ktime_t timeout = ktime_add_ms(ktime_get(), drv_data->adapter.timeout);
+
+ while (READ_ONCE(drv_data->block) &&
+ ktime_compare(ktime_get(), timeout) < 0) {
+ udelay(5);
+ mv64xxx_i2c_intr(0, drv_data);
+ }
+}
+
static int
mv64xxx_i2c_execute_msg(struct mv64xxx_i2c_data *drv_data, struct i2c_msg *msg,
int is_last)
@@ -590,7 +607,11 @@ mv64xxx_i2c_execute_msg(struct mv64xxx_i2c_data *drv_data, struct i2c_msg *msg,
mv64xxx_i2c_send_start(drv_data);
spin_unlock_irqrestore(&drv_data->lock, flags);
- mv64xxx_i2c_wait_for_completion(drv_data);
+ if (!drv_data->atomic)
+ mv64xxx_i2c_wait_for_completion(drv_data);
+ else
+ mv64xxx_i2c_wait_polling(drv_data);
+
return drv_data->rc;
}
@@ -717,7 +738,7 @@ mv64xxx_i2c_functionality(struct i2c_adapter *adap)
}
static int
-mv64xxx_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
+mv64xxx_i2c_xfer_core(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
{
struct mv64xxx_i2c_data *drv_data = i2c_get_adapdata(adap);
int rc, ret = num;
@@ -730,7 +751,7 @@ mv64xxx_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
drv_data->msgs = msgs;
drv_data->num_msgs = num;
- if (mv64xxx_i2c_can_offload(drv_data))
+ if (mv64xxx_i2c_can_offload(drv_data) && !drv_data->atomic)
rc = mv64xxx_i2c_offload_xfer(drv_data);
else
rc = mv64xxx_i2c_execute_msg(drv_data, &msgs[0], num == 1);
@@ -747,8 +768,27 @@ mv64xxx_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
return ret;
}
+static int
+mv64xxx_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg msgs[], int num)
+{
+ struct mv64xxx_i2c_data *drv_data = i2c_get_adapdata(adap);
+
+ drv_data->atomic = 0;
+ return mv64xxx_i2c_xfer_core(adap, msgs, num);
+}
+
+static int mv64xxx_i2c_xfer_atomic(struct i2c_adapter *adap,
+ struct i2c_msg msgs[], int num)
+{
+ struct mv64xxx_i2c_data *drv_data = i2c_get_adapdata(adap);
+
+ drv_data->atomic = 1;
+ return mv64xxx_i2c_xfer_core(adap, msgs, num);
+}
+
static const struct i2c_algorithm mv64xxx_i2c_algo = {
.master_xfer = mv64xxx_i2c_xfer,
+ .master_xfer_atomic = mv64xxx_i2c_xfer_atomic,
.functionality = mv64xxx_i2c_functionality,
};
@@ -1047,14 +1087,6 @@ mv64xxx_i2c_remove(struct platform_device *pd)
return 0;
}
-static void
-mv64xxx_i2c_shutdown(struct platform_device *pd)
-{
- pm_runtime_disable(&pd->dev);
- if (!pm_runtime_status_suspended(&pd->dev))
- mv64xxx_i2c_runtime_suspend(&pd->dev);
-}
-
static const struct dev_pm_ops mv64xxx_i2c_pm_ops = {
SET_RUNTIME_PM_OPS(mv64xxx_i2c_runtime_suspend,
mv64xxx_i2c_runtime_resume, NULL)
@@ -1065,7 +1097,6 @@ static const struct dev_pm_ops mv64xxx_i2c_pm_ops = {
static struct platform_driver mv64xxx_i2c_driver = {
.probe = mv64xxx_i2c_probe,
.remove = mv64xxx_i2c_remove,
- .shutdown = mv64xxx_i2c_shutdown,
.driver = {
.name = MV64XXX_I2C_CTLR_NAME,
.pm = &mv64xxx_i2c_pm_ops,
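The new atomic path polls the interrupt handler by hand instead of sleeping on a completion, since atomic transfers may run with interrupts disabled (for example, late shutdown writes to a PMIC). A compilable userspace model of the bounded polling wait, with hypothetical names:

#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* Hypothetical controller state: 'block' stays set until the handler
 * observes transfer completion and clears it. */
struct ctlr {
	volatile int block;
	void (*handle_irq)(struct ctlr *c);	/* same body as the real ISR */
};

static uint64_t now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Call the IRQ handler directly until it reports completion or the
 * adapter timeout expires, mirroring the shape of mv64xxx_i2c_wait_polling(). */
static bool wait_polling(struct ctlr *c, unsigned int timeout_ms)
{
	uint64_t deadline = now_ms() + timeout_ms;

	while (c->block && now_ms() < deadline)
		c->handle_irq(c);

	return !c->block;
}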
diff --git a/drivers/i2c/busses/i2c-mxs.c b/drivers/i2c/busses/i2c-mxs.c
index 68f67d0..b353732 100644
--- a/drivers/i2c/busses/i2c-mxs.c
+++ b/drivers/i2c/busses/i2c-mxs.c
@@ -826,8 +826,8 @@ static int mxs_i2c_probe(struct platform_device *pdev)
/* Setup the DMA */
i2c->dmach = dma_request_chan(dev, "rx-tx");
if (IS_ERR(i2c->dmach)) {
- dev_err(dev, "Failed to request dma\n");
- return PTR_ERR(i2c->dmach);
+ return dev_err_probe(dev, PTR_ERR(i2c->dmach),
+ "Failed to request dma\n");
}
platform_set_drvdata(pdev, i2c);
diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
index 02ddb23..13c14eb1 100644
--- a/drivers/i2c/busses/i2c-rk3x.c
+++ b/drivers/i2c/busses/i2c-rk3x.c
@@ -80,7 +80,7 @@ enum {
#define DEFAULT_SCL_RATE (100 * 1000) /* Hz */
/**
- * struct i2c_spec_values:
+ * struct i2c_spec_values - I2C specification values for various modes
* @min_hold_start_ns: min hold time (repeated) START condition
* @min_low_ns: min LOW period of the SCL clock
* @min_high_ns: min HIGH period of the SCL clock
@@ -136,7 +136,7 @@ static const struct i2c_spec_values fast_mode_plus_spec = {
};
/**
- * struct rk3x_i2c_calced_timings:
+ * struct rk3x_i2c_calced_timings - calculated V1 timings
* @div_low: Divider output for low
* @div_high: Divider output for high
* @tuning: Used to adjust setup/hold data time,
@@ -159,7 +159,7 @@ enum rk3x_i2c_state {
};
/**
- * struct rk3x_i2c_soc_data:
+ * struct rk3x_i2c_soc_data - SOC-specific data
* @grf_offset: offset inside the grf regmap for setting the i2c type
* @calc_timings: Callback function for i2c timing information calculated
*/
@@ -239,7 +239,8 @@ static inline void rk3x_i2c_clean_ipd(struct rk3x_i2c *i2c)
}
/**
- * Generate a START condition, which triggers a REG_INT_START interrupt.
+ * rk3x_i2c_start - Generate a START condition, which triggers a REG_INT_START interrupt.
+ * @i2c: target controller data
*/
static void rk3x_i2c_start(struct rk3x_i2c *i2c)
{
@@ -258,8 +259,8 @@ static void rk3x_i2c_start(struct rk3x_i2c *i2c)
}
/**
- * Generate a STOP condition, which triggers a REG_INT_STOP interrupt.
- *
+ * rk3x_i2c_stop - Generate a STOP condition, which triggers a REG_INT_STOP interrupt.
+ * @i2c: target controller data
* @error: Error code to return in rk3x_i2c_xfer
*/
static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error)
@@ -298,7 +299,8 @@ static void rk3x_i2c_stop(struct rk3x_i2c *i2c, int error)
}
/**
- * Setup a read according to i2c->msg
+ * rk3x_i2c_prepare_read - Setup a read according to i2c->msg
+ * @i2c: target controller data
*/
static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c)
{
@@ -329,7 +331,8 @@ static void rk3x_i2c_prepare_read(struct rk3x_i2c *i2c)
}
/**
- * Fill the transmit buffer with data from i2c->msg
+ * rk3x_i2c_fill_transmit_buf - Fill the transmit buffer with data from i2c->msg
+ * @i2c: target controller data
*/
static void rk3x_i2c_fill_transmit_buf(struct rk3x_i2c *i2c)
{
@@ -532,11 +535,10 @@ static irqreturn_t rk3x_i2c_irq(int irqno, void *dev_id)
}
/**
- * Get timing values of I2C specification
- *
+ * rk3x_i2c_get_spec - Get timing values of I2C specification
* @speed: Desired SCL frequency
*
- * Returns: Matched i2c spec values.
+ * Return: Matched i2c_spec_values.
*/
static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed)
{
@@ -549,13 +551,12 @@ static const struct i2c_spec_values *rk3x_i2c_get_spec(unsigned int speed)
}
/**
- * Calculate divider values for desired SCL frequency
- *
+ * rk3x_i2c_v0_calc_timings - Calculate divider values for desired SCL frequency
* @clk_rate: I2C input clock rate
* @t: Known I2C timing information
* @t_calc: Calculated rk3x private timings that would be written into regs
*
- * Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case
+ * Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case
* a best-effort divider value is returned in divs. If the target rate is
* too high, we silently use the highest possible rate.
*/
@@ -710,13 +711,12 @@ static int rk3x_i2c_v0_calc_timings(unsigned long clk_rate,
}
/**
- * Calculate timing values for desired SCL frequency
- *
+ * rk3x_i2c_v1_calc_timings - Calculate timing values for desired SCL frequency
* @clk_rate: I2C input clock rate
* @t: Known I2C timing information
* @t_calc: Calculated rk3x private timings that would be written into regs
*
- * Returns: 0 on success, -EINVAL if the goal SCL rate is too slow. In that case
+ * Return: %0 on success, -%EINVAL if the goal SCL rate is too slow. In that case
* a best-effort divider value is returned in divs. If the target rate is
* too high, we silently use the highest possible rate.
* The following formulas are v1's method to calculate timings.
@@ -960,14 +960,14 @@ static int rk3x_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long
}
/**
- * Setup I2C registers for an I2C operation specified by msgs, num.
- *
- * Must be called with i2c->lock held.
- *
+ * rk3x_i2c_setup - Setup I2C registers for an I2C operation specified by msgs, num.
+ * @i2c: target controller data
* @msgs: I2C msgs to process
* @num: Number of msgs
*
- * returns: Number of I2C msgs processed or negative in case of error
+ * Must be called with i2c->lock held.
+ *
+ * Return: Number of I2C msgs processed or negative in case of error
*/
static int rk3x_i2c_setup(struct rk3x_i2c *i2c, struct i2c_msg *msgs, int num)
{
diff --git a/drivers/i2c/busses/i2c-xgene-slimpro.c b/drivers/i2c/busses/i2c-xgene-slimpro.c
index bba08cb..6c39881 100644
--- a/drivers/i2c/busses/i2c-xgene-slimpro.c
+++ b/drivers/i2c/busses/i2c-xgene-slimpro.c
@@ -307,6 +307,9 @@ static int slimpro_i2c_blkwr(struct slimpro_i2c_dev *ctx, u32 chip,
u32 msg[3];
int rc;
+ if (writelen > I2C_SMBUS_BLOCK_MAX)
+ return -EINVAL;
+
memcpy(ctx->dma_buffer, data, writelen);
paddr = dma_map_single(ctx->dev, ctx->dma_buffer, writelen,
DMA_TO_DEVICE);
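The added check rejects block writes longer than an SMBus block before the data is copied into the preallocated DMA bounce buffer. A minimal sketch of the same guard; the buffer type and size here are assumptions for illustration:

#include <errno.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_MAX 32u	/* value of I2C_SMBUS_BLOCK_MAX */

struct bounce {
	uint8_t buf[BLOCK_MAX];
};

static int blkwr(struct bounce *b, const void *data, unsigned int writelen)
{
	if (writelen > sizeof(b->buf))
		return -EINVAL;	/* reject before the copy could overflow */

	memcpy(b->buf, data, writelen);
	return 0;
}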
diff --git a/drivers/iio/accel/hid-sensor-accel-3d.c b/drivers/iio/accel/hid-sensor-accel-3d.c
index a2def6f..5eac7ea 100644
--- a/drivers/iio/accel/hid-sensor-accel-3d.c
+++ b/drivers/iio/accel/hid-sensor-accel-3d.c
@@ -280,6 +280,7 @@ static int accel_3d_capture_sample(struct hid_sensor_hub_device *hsdev,
hid_sensor_convert_timestamp(
&accel_state->common_attributes,
*(int64_t *)raw_data);
+ ret = 0;
break;
default:
break;
diff --git a/drivers/iio/accel/mma9551_core.c b/drivers/iio/accel/mma9551_core.c
index fbf2e2c..9023c07 100644
--- a/drivers/iio/accel/mma9551_core.c
+++ b/drivers/iio/accel/mma9551_core.c
@@ -296,9 +296,12 @@ int mma9551_read_config_word(struct i2c_client *client, u8 app_id,
ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_CONFIG,
reg, NULL, 0, (u8 *)&v, 2);
+ if (ret < 0)
+ return ret;
+
*val = be16_to_cpu(v);
- return ret;
+ return 0;
}
EXPORT_SYMBOL(mma9551_read_config_word);
@@ -354,9 +357,12 @@ int mma9551_read_status_word(struct i2c_client *client, u8 app_id,
ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_STATUS,
reg, NULL, 0, (u8 *)&v, 2);
+ if (ret < 0)
+ return ret;
+
*val = be16_to_cpu(v);
- return ret;
+ return 0;
}
EXPORT_SYMBOL(mma9551_read_status_word);
diff --git a/drivers/iio/adc/ad7791.c b/drivers/iio/adc/ad7791.c
index cb579aa..f7d7bc1 100644
--- a/drivers/iio/adc/ad7791.c
+++ b/drivers/iio/adc/ad7791.c
@@ -253,7 +253,7 @@ static const struct ad_sigma_delta_info ad7791_sigma_delta_info = {
.has_registers = true,
.addr_shift = 4,
.read_mask = BIT(3),
- .irq_flags = IRQF_TRIGGER_LOW,
+ .irq_flags = IRQF_TRIGGER_FALLING,
};
static int ad7791_read_raw(struct iio_dev *indio_dev,
diff --git a/drivers/iio/adc/berlin2-adc.c b/drivers/iio/adc/berlin2-adc.c
index 8b04b95..fa2c879 100644
--- a/drivers/iio/adc/berlin2-adc.c
+++ b/drivers/iio/adc/berlin2-adc.c
@@ -289,8 +289,10 @@ static int berlin2_adc_probe(struct platform_device *pdev)
int ret;
indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv));
- if (!indio_dev)
+ if (!indio_dev) {
+ of_node_put(parent_np);
return -ENOMEM;
+ }
priv = iio_priv(indio_dev);
platform_set_drvdata(pdev, indio_dev);
diff --git a/drivers/iio/adc/stm32-dfsdm-adc.c b/drivers/iio/adc/stm32-dfsdm-adc.c
index 1cfefb3..6592221 100644
--- a/drivers/iio/adc/stm32-dfsdm-adc.c
+++ b/drivers/iio/adc/stm32-dfsdm-adc.c
@@ -1521,6 +1521,7 @@ static const struct of_device_id stm32_dfsdm_adc_match[] = {
},
{}
};
+MODULE_DEVICE_TABLE(of, stm32_dfsdm_adc_match);
static int stm32_dfsdm_adc_probe(struct platform_device *pdev)
{
diff --git a/drivers/iio/adc/ti-ads7950.c b/drivers/iio/adc/ti-ads7950.c
index a2b83f0..d4583b7 100644
--- a/drivers/iio/adc/ti-ads7950.c
+++ b/drivers/iio/adc/ti-ads7950.c
@@ -634,6 +634,7 @@ static int ti_ads7950_probe(struct spi_device *spi)
st->chip.label = dev_name(&st->spi->dev);
st->chip.parent = &st->spi->dev;
st->chip.owner = THIS_MODULE;
+ st->chip.can_sleep = true;
st->chip.base = -1;
st->chip.ngpio = TI_ADS7950_NUM_GPIOS;
st->chip.get_direction = ti_ads7950_get_direction;
diff --git a/drivers/iio/adc/twl6030-gpadc.c b/drivers/iio/adc/twl6030-gpadc.c
index 256177b..024bdc1 100644
--- a/drivers/iio/adc/twl6030-gpadc.c
+++ b/drivers/iio/adc/twl6030-gpadc.c
@@ -57,6 +57,18 @@
#define TWL6030_GPADCS BIT(1)
#define TWL6030_GPADCR BIT(0)
+#define USB_VBUS_CTRL_SET 0x04
+#define USB_ID_CTRL_SET 0x06
+
+#define TWL6030_MISC1 0xE4
+#define VBUS_MEAS 0x01
+#define ID_MEAS 0x01
+
+#define VAC_MEAS 0x04
+#define VBAT_MEAS 0x02
+#define BB_MEAS 0x01
+
+
/**
* struct twl6030_chnl_calib - channel calibration
* @gain: slope coefficient for ideal curve
@@ -927,6 +939,26 @@ static int twl6030_gpadc_probe(struct platform_device *pdev)
return ret;
}
+ ret = twl_i2c_write_u8(TWL_MODULE_USB, VBUS_MEAS, USB_VBUS_CTRL_SET);
+ if (ret < 0) {
+ dev_err(dev, "failed to wire up inputs\n");
+ return ret;
+ }
+
+ ret = twl_i2c_write_u8(TWL_MODULE_USB, ID_MEAS, USB_ID_CTRL_SET);
+ if (ret < 0) {
+ dev_err(dev, "failed to wire up inputs\n");
+ return ret;
+ }
+
+ ret = twl_i2c_write_u8(TWL6030_MODULE_ID0,
+ VBAT_MEAS | BB_MEAS | VAC_MEAS,
+ TWL6030_MISC1);
+ if (ret < 0) {
+ dev_err(dev, "failed to wire up inputs\n");
+ return ret;
+ }
+
indio_dev->name = DRIVER_NAME;
indio_dev->info = &twl6030_gpadc_iio_info;
indio_dev->modes = INDIO_DIRECT_MODE;
diff --git a/drivers/iio/dac/cio-dac.c b/drivers/iio/dac/cio-dac.c
index 9581356..77a6916 100644
--- a/drivers/iio/dac/cio-dac.c
+++ b/drivers/iio/dac/cio-dac.c
@@ -66,8 +66,8 @@ static int cio_dac_write_raw(struct iio_dev *indio_dev,
if (mask != IIO_CHAN_INFO_RAW)
return -EINVAL;
- /* DAC can only accept up to a 16-bit value */
- if ((unsigned int)val > 65535)
+ /* DAC can only accept up to a 12-bit value */
+ if ((unsigned int)val > 4095)
return -EINVAL;
priv->chan_out_states[chan->channel] = val;
diff --git a/drivers/iio/gyro/hid-sensor-gyro-3d.c b/drivers/iio/gyro/hid-sensor-gyro-3d.c
index 8f0ad02..698c50d 100644
--- a/drivers/iio/gyro/hid-sensor-gyro-3d.c
+++ b/drivers/iio/gyro/hid-sensor-gyro-3d.c
@@ -231,6 +231,7 @@ static int gyro_3d_capture_sample(struct hid_sensor_hub_device *hsdev,
gyro_state->timestamp =
hid_sensor_convert_timestamp(&gyro_state->common_attributes,
*(s64 *)raw_data);
+ ret = 0;
break;
default:
break;
diff --git a/drivers/iio/imu/Kconfig b/drivers/iio/imu/Kconfig
index 001ca2c..1f8ed7b 100644
--- a/drivers/iio/imu/Kconfig
+++ b/drivers/iio/imu/Kconfig
@@ -47,6 +47,7 @@
depends on SPI
select IIO_ADIS_LIB
select IIO_ADIS_LIB_BUFFER if IIO_BUFFER
+ select CRC32
help
Say yes here to build support for Analog Devices ADIS16375, ADIS16480,
ADIS16485, ADIS16488 inertial sensors.
diff --git a/drivers/iio/imu/fxos8700_core.c b/drivers/iio/imu/fxos8700_core.c
index ab28818..04d3778 100644
--- a/drivers/iio/imu/fxos8700_core.c
+++ b/drivers/iio/imu/fxos8700_core.c
@@ -10,6 +10,7 @@
#include <linux/regmap.h>
#include <linux/acpi.h>
#include <linux/bitops.h>
+#include <linux/bitfield.h>
#include <linux/iio/iio.h>
#include <linux/iio/sysfs.h>
@@ -144,9 +145,8 @@
#define FXOS8700_NVM_DATA_BNK0 0xa7
/* Bit definitions for FXOS8700_CTRL_REG1 */
-#define FXOS8700_CTRL_ODR_MSK 0x38
#define FXOS8700_CTRL_ODR_MAX 0x00
-#define FXOS8700_CTRL_ODR_MIN GENMASK(4, 3)
+#define FXOS8700_CTRL_ODR_MSK GENMASK(5, 3)
/* Bit definitions for FXOS8700_M_CTRL_REG1 */
#define FXOS8700_HMS_MASK GENMASK(1, 0)
@@ -320,7 +320,7 @@ static enum fxos8700_sensor fxos8700_to_sensor(enum iio_chan_type iio_type)
switch (iio_type) {
case IIO_ACCEL:
return FXOS8700_ACCEL;
- case IIO_ANGL_VEL:
+ case IIO_MAGN:
return FXOS8700_MAGN;
default:
return -EINVAL;
@@ -345,15 +345,35 @@ static int fxos8700_set_active_mode(struct fxos8700_data *data,
static int fxos8700_set_scale(struct fxos8700_data *data,
enum fxos8700_sensor t, int uscale)
{
- int i;
+ int i, ret, val;
+ bool active_mode;
static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale);
struct device *dev = regmap_get_device(data->regmap);
if (t == FXOS8700_MAGN) {
- dev_err(dev, "Magnetometer scale is locked at 1200uT\n");
+ dev_err(dev, "Magnetometer scale is locked at 0.001Gs\n");
return -EINVAL;
}
+ /*
+ * When the device is in active mode, setting the ACCEL full-scale
+ * range (2g/4g/8g) in FXOS8700_XYZ_DATA_CFG fails. This does not
+ * align with the datasheet, but it is how the fxos8700 chip behaves.
+ * Put the device in standby mode before setting the ACCEL full-scale
+ * range.
+ */
+ ret = regmap_read(data->regmap, FXOS8700_CTRL_REG1, &val);
+ if (ret)
+ return ret;
+
+ active_mode = val & FXOS8700_ACTIVE;
+ if (active_mode) {
+ ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1,
+ val & ~FXOS8700_ACTIVE);
+ if (ret)
+ return ret;
+ }
+
for (i = 0; i < scale_num; i++)
if (fxos8700_accel_scale[i].uscale == uscale)
break;
@@ -361,8 +381,12 @@ static int fxos8700_set_scale(struct fxos8700_data *data,
if (i == scale_num)
return -EINVAL;
- return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG,
+ ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG,
fxos8700_accel_scale[i].bits);
+ if (ret)
+ return ret;
+ return regmap_write(data->regmap, FXOS8700_CTRL_REG1,
+ active_mode);
}
static int fxos8700_get_scale(struct fxos8700_data *data,
@@ -372,7 +396,7 @@ static int fxos8700_get_scale(struct fxos8700_data *data,
static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale);
if (t == FXOS8700_MAGN) {
- *uscale = 1200; /* Magnetometer is locked at 1200uT */
+ *uscale = 1000; /* Magnetometer is locked at 0.001Gs */
return 0;
}
@@ -394,22 +418,61 @@ static int fxos8700_get_data(struct fxos8700_data *data, int chan_type,
int axis, int *val)
{
u8 base, reg;
+ s16 tmp;
int ret;
- enum fxos8700_sensor type = fxos8700_to_sensor(chan_type);
- base = type ? FXOS8700_OUT_X_MSB : FXOS8700_M_OUT_X_MSB;
+ /*
+ * The register base address varies with the channel type. This bug
+ * went unnoticed before because the enum-based selection was hard to
+ * read. Use a switch statement instead.
+ */
+ switch (chan_type) {
+ case IIO_ACCEL:
+ base = FXOS8700_OUT_X_MSB;
+ break;
+ case IIO_MAGN:
+ base = FXOS8700_M_OUT_X_MSB;
+ break;
+ default:
+ return -EINVAL;
+ }
/* Block read 6 bytes of device output registers to avoid data loss */
ret = regmap_bulk_read(data->regmap, base, data->buf,
- FXOS8700_DATA_BUF_SIZE);
+ sizeof(data->buf));
if (ret)
return ret;
/* Convert axis to buffer index */
reg = axis - IIO_MOD_X;
+ /*
+ * Convert to native endianness. The accel data and magn data
+ * are signed, so a forced type conversion is needed.
+ */
+ tmp = be16_to_cpu(data->buf[reg]);
+
+ /*
+ * ACCEL output data registers contain the X-axis, Y-axis, and Z-axis
+ * 14-bit left-justified sample data and MAGN output data registers
+ * contain the X-axis, Y-axis, and Z-axis 16-bit sample data. Apply
+ * a signed 2-bit right shift to the raw data read back from the ACCEL
+ * output data registers and keep the MAGN data unchanged. The value
+ * is then sign-extended to 32 bits.
+ */
+ switch (chan_type) {
+ case IIO_ACCEL:
+ tmp = tmp >> 2;
+ break;
+ case IIO_MAGN:
+ /* Nothing to do */
+ break;
+ default:
+ return -EINVAL;
+ }
+
/* Convert to native endianness */
- *val = sign_extend32(be16_to_cpu(data->buf[reg]), 15);
+ *val = sign_extend32(tmp, 15);
return 0;
}
@@ -445,10 +508,9 @@ static int fxos8700_set_odr(struct fxos8700_data *data, enum fxos8700_sensor t,
if (i >= odr_num)
return -EINVAL;
- return regmap_update_bits(data->regmap,
- FXOS8700_CTRL_REG1,
- FXOS8700_CTRL_ODR_MSK + FXOS8700_ACTIVE,
- fxos8700_odr[i].bits << 3 | active_mode);
+ val &= ~FXOS8700_CTRL_ODR_MSK;
+ val |= FIELD_PREP(FXOS8700_CTRL_ODR_MSK, fxos8700_odr[i].bits) | FXOS8700_ACTIVE;
+ return regmap_write(data->regmap, FXOS8700_CTRL_REG1, val);
}
static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t,
@@ -461,7 +523,7 @@ static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t,
if (ret)
return ret;
- val &= FXOS8700_CTRL_ODR_MSK;
+ val = FIELD_GET(FXOS8700_CTRL_ODR_MSK, val);
for (i = 0; i < odr_num; i++)
if (val == fxos8700_odr[i].bits)
@@ -526,7 +588,7 @@ static IIO_CONST_ATTR(in_accel_sampling_frequency_available,
static IIO_CONST_ATTR(in_magn_sampling_frequency_available,
"1.5625 6.25 12.5 50 100 200 400 800");
static IIO_CONST_ATTR(in_accel_scale_available, "0.000244 0.000488 0.000976");
-static IIO_CONST_ATTR(in_magn_scale_available, "0.000001200");
+static IIO_CONST_ATTR(in_magn_scale_available, "0.001000");
static struct attribute *fxos8700_attrs[] = {
&iio_const_attr_in_accel_sampling_frequency_available.dev_attr.attr,
@@ -592,14 +654,19 @@ static int fxos8700_chip_init(struct fxos8700_data *data, bool use_spi)
if (ret)
return ret;
- /* Max ODR (800Hz individual or 400Hz hybrid), active mode */
- ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1,
- FXOS8700_CTRL_ODR_MAX | FXOS8700_ACTIVE);
+ /*
+ * Set max full-scale range (+/-8G) for ACCEL sensor in chip
+ * initialization then activate the device.
+ */
+ ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G);
if (ret)
return ret;
- /* Set for max full-scale range (+/-8G) */
- return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G);
+ /* Max ODR (800Hz individual or 400Hz hybrid), active mode */
+ return regmap_update_bits(data->regmap, FXOS8700_CTRL_REG1,
+ FXOS8700_CTRL_ODR_MSK | FXOS8700_ACTIVE,
+ FIELD_PREP(FXOS8700_CTRL_ODR_MSK, FXOS8700_CTRL_ODR_MAX) |
+ FXOS8700_ACTIVE);
}
static void fxos8700_chip_uninit(void *data)
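The rework above byte-swaps the raw sample, arithmetic-shifts ACCEL data right by two (the sample is 14-bit, left-justified), and sign-extends the result. A standalone model of that conversion on a little-endian CPU; like the driver it assumes that right-shifting a negative value is arithmetic:

#include <stdint.h>
#include <stdio.h>

/* The raw big-endian 16-bit word from the ACCEL output registers carries a
 * 14-bit left-justified two's-complement sample. */
static int32_t accel_raw(uint16_t le_loaded)
{
	/* byte-swap, as be16_to_cpu() would do on a little-endian CPU */
	int16_t tmp = (int16_t)((le_loaded >> 8) | (le_loaded << 8));

	tmp >>= 2;		/* arithmetic shift keeps the sign */
	return tmp;		/* implicitly sign-extended to 32 bits */
}

int main(void)
{
	/* sample -1: 14-bit 0x3FFF, left-justified 0xFFFC; the device sends
	 * 0xFF then 0xFC, which loads as 0xFCFF on a little-endian CPU */
	printf("%d\n", accel_raw(0xFCFF));	/* prints -1 */
	return 0;
}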
diff --git a/drivers/iio/light/cm32181.c b/drivers/iio/light/cm32181.c
index 9764994..c14a630 100644
--- a/drivers/iio/light/cm32181.c
+++ b/drivers/iio/light/cm32181.c
@@ -429,6 +429,14 @@ static const struct iio_info cm32181_info = {
.attrs = &cm32181_attribute_group,
};
+static void cm32181_unregister_dummy_client(void *data)
+{
+ struct i2c_client *client = data;
+
+ /* Unregister the dummy client */
+ i2c_unregister_device(client);
+}
+
static int cm32181_probe(struct i2c_client *client)
{
struct device *dev = &client->dev;
@@ -458,6 +466,10 @@ static int cm32181_probe(struct i2c_client *client)
client = i2c_acpi_new_device(dev, 1, &board_info);
if (IS_ERR(client))
return PTR_ERR(client);
+
+ ret = devm_add_action_or_reset(dev, cm32181_unregister_dummy_client, client);
+ if (ret)
+ return ret;
}
cm32181 = iio_priv(indio_dev);
diff --git a/drivers/iio/light/tsl2563.c b/drivers/iio/light/tsl2563.c
index 5bf2bfb..af61635 100644
--- a/drivers/iio/light/tsl2563.c
+++ b/drivers/iio/light/tsl2563.c
@@ -705,6 +705,7 @@ static int tsl2563_probe(struct i2c_client *client,
struct iio_dev *indio_dev;
struct tsl2563_chip *chip;
struct tsl2563_platform_data *pdata = client->dev.platform_data;
+ unsigned long irq_flags;
int err = 0;
u8 id = 0;
@@ -760,10 +761,15 @@ static int tsl2563_probe(struct i2c_client *client,
indio_dev->info = &tsl2563_info_no_irq;
if (client->irq) {
+ irq_flags = irq_get_trigger_type(client->irq);
+ if (irq_flags == IRQF_TRIGGER_NONE)
+ irq_flags = IRQF_TRIGGER_RISING;
+ irq_flags |= IRQF_ONESHOT;
+
err = devm_request_threaded_irq(&client->dev, client->irq,
NULL,
&tsl2563_event_handler,
- IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+ irq_flags,
"tsl2563_event",
indio_dev);
if (err) {
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index b721085..f0c07e4 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -2965,15 +2965,18 @@ EXPORT_SYMBOL(__rdma_block_iter_start);
bool __rdma_block_iter_next(struct ib_block_iter *biter)
{
unsigned int block_offset;
+ unsigned int sg_delta;
if (!biter->__sg_nents || !biter->__sg)
return false;
biter->__dma_addr = sg_dma_address(biter->__sg) + biter->__sg_advance;
block_offset = biter->__dma_addr & (BIT_ULL(biter->__pg_bit) - 1);
- biter->__sg_advance += BIT_ULL(biter->__pg_bit) - block_offset;
+ sg_delta = BIT_ULL(biter->__pg_bit) - block_offset;
- if (biter->__sg_advance >= sg_dma_len(biter->__sg)) {
+ if (sg_dma_len(biter->__sg) - biter->__sg_advance > sg_delta) {
+ biter->__sg_advance += sg_delta;
+ } else {
biter->__sg_advance = 0;
biter->__sg = sg_next(biter->__sg);
biter->__sg_nents--;
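The fixed iterator advances by the distance to the next block boundary and only moves to the following scatterlist entry when the remaining bytes no longer exceed that step, instead of letting the accumulated advance run past the entry length. A compilable model of the corrected walk over one simplified entry:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified model of one DMA-mapped scatterlist entry. */
struct sg {
	uint64_t dma_addr;
	unsigned int dma_len;
};

struct blk_iter {
	const struct sg *sg;
	unsigned int nents;
	unsigned int advance;	/* offset into the current entry */
	unsigned int pg_bit;	/* log2 of the block size */
	uint64_t blk_addr;	/* output: current block address */
};

/* Step by the distance to the next block boundary; move to the next entry
 * only once the remaining bytes in this one no longer exceed that step. */
static bool blk_iter_next(struct blk_iter *it)
{
	uint64_t blk = 1ull << it->pg_bit;
	unsigned int off, delta;

	if (!it->nents)
		return false;

	it->blk_addr = it->sg->dma_addr + it->advance;
	off = it->blk_addr & (blk - 1);
	delta = blk - off;

	if (it->sg->dma_len - it->advance > delta) {
		it->advance += delta;
	} else {
		it->advance = 0;
		it->sg++;
		it->nents--;
	}
	return true;
}

int main(void)
{
	struct sg entries[] = { { 0x10000, 0x3000 } };	/* 12 KiB at 64 KiB */
	struct blk_iter it = { entries, 1, 0, 12, 0 };	/* 4 KiB blocks */

	while (blk_iter_next(&it))
		printf("block at 0x%llx\n", (unsigned long long)it.blk_addr);
	return 0;
}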
diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
index 291471d1..a3e4913 100644
--- a/drivers/infiniband/hw/cxgb4/cm.c
+++ b/drivers/infiniband/hw/cxgb4/cm.c
@@ -2682,6 +2682,9 @@ static int pass_establish(struct c4iw_dev *dev, struct sk_buff *skb)
u16 tcp_opt = ntohs(req->tcp_opt);
ep = get_ep_from_tid(dev, tid);
+ if (!ep)
+ return 0;
+
pr_debug("ep %p tid %u\n", ep, ep->hwtid);
ep->snd_seq = be32_to_cpu(req->snd_isn);
ep->rcv_seq = be32_to_cpu(req->rcv_isn);
@@ -4150,6 +4153,10 @@ static int rx_pkt(struct c4iw_dev *dev, struct sk_buff *skb)
if (neigh->dev->flags & IFF_LOOPBACK) {
pdev = ip_dev_find(&init_net, iph->daddr);
+ if (!pdev) {
+ pr_err("%s - failed to find device!\n", __func__);
+ goto free_dst;
+ }
e = cxgb4_l2t_get(dev->rdev.lldi.l2t, neigh,
pdev, 0);
pi = (struct port_info *)netdev_priv(pdev);
diff --git a/drivers/infiniband/hw/cxgb4/restrack.c b/drivers/infiniband/hw/cxgb4/restrack.c
index ff645b9..fd22c85 100644
--- a/drivers/infiniband/hw/cxgb4/restrack.c
+++ b/drivers/infiniband/hw/cxgb4/restrack.c
@@ -238,7 +238,7 @@ int c4iw_fill_res_cm_id_entry(struct sk_buff *msg,
if (rdma_nl_put_driver_u64_hex(msg, "history", epcp->history))
goto err_cancel_table;
- if (epcp->state == LISTEN) {
+ if (listen_ep) {
if (rdma_nl_put_driver_u32(msg, "stid", listen_ep->stid))
goto err_cancel_table;
if (rdma_nl_put_driver_u32(msg, "backlog", listen_ep->backlog))
diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
index b0d5872..689921d 100644
--- a/drivers/infiniband/hw/hfi1/chip.c
+++ b/drivers/infiniband/hw/hfi1/chip.c
@@ -1055,7 +1055,7 @@ static void read_link_down_reason(struct hfi1_devdata *dd, u8 *ldr);
static void handle_temp_err(struct hfi1_devdata *dd);
static void dc_shutdown(struct hfi1_devdata *dd);
static void dc_start(struct hfi1_devdata *dd);
-static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
+static int qos_rmt_entries(unsigned int n_krcv_queues, unsigned int *mp,
unsigned int *np);
static void clear_full_mgmt_pkey(struct hfi1_pportdata *ppd);
static int wait_link_transfer_active(struct hfi1_devdata *dd, int wait_ms);
@@ -13361,7 +13361,6 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
int ret;
unsigned ngroups;
int rmt_count;
- int user_rmt_reduced;
u32 n_usr_ctxts;
u32 send_contexts = chip_send_contexts(dd);
u32 rcv_contexts = chip_rcv_contexts(dd);
@@ -13420,28 +13419,34 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
(num_kernel_contexts + n_usr_ctxts),
&node_affinity.real_cpu_mask);
/*
- * The RMT entries are currently allocated as shown below:
- * 1. QOS (0 to 128 entries);
- * 2. FECN (num_kernel_context - 1 + num_user_contexts +
- * num_netdev_contexts);
- * 3. netdev (num_netdev_contexts).
- * It should be noted that FECN oversubscribe num_netdev_contexts
- * entries of RMT because both netdev and PSM could allocate any receive
- * context between dd->first_dyn_alloc_text and dd->num_rcv_contexts,
- * and PSM FECN must reserve an RMT entry for each possible PSM receive
- * context.
+ * RMT entries are allocated as follows:
+ * 1. QOS (0 to 128 entries)
+ * 2. FECN (num_kernel_context - 1 [a] + num_user_contexts +
+ * num_netdev_contexts [b])
+ * 3. netdev (NUM_NETDEV_MAP_ENTRIES)
+ *
+ * Notes:
+ * [a] Kernel contexts (except control) are included in FECN if kernel
+ * TID_RDMA is active.
+ * [b] Netdev and user contexts are randomly allocated from the same
+ * context pool, so FECN must cover all contexts in the pool.
*/
- rmt_count = qos_rmt_entries(dd, NULL, NULL) + (num_netdev_contexts * 2);
- if (HFI1_CAP_IS_KSET(TID_RDMA))
- rmt_count += num_kernel_contexts - 1;
- if (rmt_count + n_usr_ctxts > NUM_MAP_ENTRIES) {
- user_rmt_reduced = NUM_MAP_ENTRIES - rmt_count;
- dd_dev_err(dd,
- "RMT size is reducing the number of user receive contexts from %u to %d\n",
- n_usr_ctxts,
- user_rmt_reduced);
- /* recalculate */
- n_usr_ctxts = user_rmt_reduced;
+ rmt_count = qos_rmt_entries(num_kernel_contexts - 1, NULL, NULL)
+ + (HFI1_CAP_IS_KSET(TID_RDMA) ? num_kernel_contexts - 1
+ : 0)
+ + n_usr_ctxts
+ + num_netdev_contexts
+ + NUM_NETDEV_MAP_ENTRIES;
+ if (rmt_count > NUM_MAP_ENTRIES) {
+ int over = rmt_count - NUM_MAP_ENTRIES;
+ /* try to squish user contexts, minimum of 1 */
+ if (over >= n_usr_ctxts) {
+ dd_dev_err(dd, "RMT overflow: reduce the requested number of contexts\n");
+ return -EINVAL;
+ }
+ dd_dev_err(dd, "RMT overflow: reducing # user contexts from %u to %u\n",
+ n_usr_ctxts, n_usr_ctxts - over);
+ n_usr_ctxts -= over;
}
/* the first N are kernel contexts, the rest are user/netdev contexts */
@@ -14298,15 +14303,15 @@ static void clear_rsm_rule(struct hfi1_devdata *dd, u8 rule_index)
}
/* return the number of RSM map table entries that will be used for QOS */
-static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
+static int qos_rmt_entries(unsigned int n_krcv_queues, unsigned int *mp,
unsigned int *np)
{
int i;
unsigned int m, n;
- u8 max_by_vl = 0;
+ uint max_by_vl = 0;
/* is QOS active at all? */
- if (dd->n_krcv_queues <= MIN_KERNEL_KCTXTS ||
+ if (n_krcv_queues < MIN_KERNEL_KCTXTS ||
num_vls == 1 ||
krcvqsset <= 1)
goto no_qos;
@@ -14364,7 +14369,7 @@ static void init_qos(struct hfi1_devdata *dd, struct rsm_map_table *rmt)
if (!rmt)
goto bail;
- rmt_entries = qos_rmt_entries(dd, &m, &n);
+ rmt_entries = qos_rmt_entries(dd->n_krcv_queues - 1, &m, &n);
if (rmt_entries == 0)
goto bail;
qpns_per_vl = 1 << m;
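The rewritten sizing adds every RMT consumer up front and shrinks only the user-context share when the total exceeds the map, failing outright if even that is not enough. A standalone sketch of the budget arithmetic; the map size and the counts in main() are assumptions for the example, not values read from hardware:

#include <stdio.h>

#define NUM_MAP_ENTRIES 256u	/* assumed map size for the example */

/* Shrink only the user-context share, and fail if even zero user
 * contexts would not fit. */
static int fit_rmt(unsigned int qos, unsigned int fecn_kernel,
		   unsigned int *n_usr, unsigned int netdev,
		   unsigned int netdev_map)
{
	unsigned int count = qos + fecn_kernel + *n_usr + netdev + netdev_map;

	if (count > NUM_MAP_ENTRIES) {
		unsigned int over = count - NUM_MAP_ENTRIES;

		if (over >= *n_usr)
			return -1;	/* cannot squish enough */
		*n_usr -= over;
	}
	return 0;
}

int main(void)
{
	unsigned int n_usr = 96;

	/* 128 QOS + 15 kernel FECN + 96 user + 16 netdev + 8 map = 263 */
	if (!fit_rmt(128, 15, &n_usr, 16, 8))
		printf("user contexts reduced to %u\n", n_usr);	/* 89 */
	return 0;
}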
diff --git a/drivers/infiniband/hw/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index 686d170..1c1172a 100644
--- a/drivers/infiniband/hw/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -1318,12 +1318,15 @@ static int user_exp_rcv_setup(struct hfi1_filedata *fd, unsigned long arg,
addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
sizeof(tinfo.tidcnt)))
- return -EFAULT;
+ ret = -EFAULT;
addr = arg + offsetof(struct hfi1_tid_info, length);
- if (copy_to_user((void __user *)addr, &tinfo.length,
+ if (!ret && copy_to_user((void __user *)addr, &tinfo.length,
sizeof(tinfo.length)))
ret = -EFAULT;
+
+ if (ret)
+ hfi1_user_exp_rcv_invalid(fd, &tinfo);
}
return ret;
diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index a95b654..8ed2039 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -3160,8 +3160,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
{
int rval = 0;
- tx->num_desc++;
- if ((unlikely(tx->num_desc == tx->desc_limit))) {
+ if ((unlikely(tx->num_desc + 1 == tx->desc_limit))) {
rval = _extend_sdma_tx_descs(dd, tx);
if (rval) {
__sdma_txclean(dd, tx);
@@ -3174,6 +3173,7 @@ int _pad_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx)
SDMA_MAP_NONE,
dd->sdma_pad_phys,
sizeof(u32) - (tx->packet_len & (sizeof(u32) - 1)));
+ tx->num_desc++;
_sdma_close_tx(dd, tx);
return rval;
}
diff --git a/drivers/infiniband/hw/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
index d8170fc..b023fc4 100644
--- a/drivers/infiniband/hw/hfi1/sdma.h
+++ b/drivers/infiniband/hw/hfi1/sdma.h
@@ -631,14 +631,13 @@ static inline void sdma_txclean(struct hfi1_devdata *dd, struct sdma_txreq *tx)
static inline void _sdma_close_tx(struct hfi1_devdata *dd,
struct sdma_txreq *tx)
{
- tx->descp[tx->num_desc].qw[0] |=
- SDMA_DESC0_LAST_DESC_FLAG;
- tx->descp[tx->num_desc].qw[1] |=
- dd->default_desc1;
+ u16 last_desc = tx->num_desc - 1;
+
+ tx->descp[last_desc].qw[0] |= SDMA_DESC0_LAST_DESC_FLAG;
+ tx->descp[last_desc].qw[1] |= dd->default_desc1;
if (tx->flags & SDMA_TXREQ_F_URGENT)
- tx->descp[tx->num_desc].qw[1] |=
- (SDMA_DESC1_HEAD_TO_HOST_FLAG |
- SDMA_DESC1_INT_REQ_FLAG);
+ tx->descp[last_desc].qw[1] |= (SDMA_DESC1_HEAD_TO_HOST_FLAG |
+ SDMA_DESC1_INT_REQ_FLAG);
}
static inline int _sdma_txadd_daddr(
@@ -655,6 +654,7 @@ static inline int _sdma_txadd_daddr(
type,
addr, len);
WARN_ON(len > tx->tlen);
+ tx->num_desc++;
tx->tlen -= len;
/* special cases for last */
if (!tx->tlen) {
@@ -666,7 +666,6 @@ static inline int _sdma_txadd_daddr(
_sdma_close_tx(dd, tx);
}
}
- tx->num_desc++;
return rval;
}
diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.c b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
index 0c86e9d..1d2020c 100644
--- a/drivers/infiniband/hw/hfi1/user_exp_rcv.c
+++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
@@ -23,18 +23,25 @@ static void cacheless_tid_rb_remove(struct hfi1_filedata *fdata,
static bool tid_rb_invalidate(struct mmu_interval_notifier *mni,
const struct mmu_notifier_range *range,
unsigned long cur_seq);
+static bool tid_cover_invalidate(struct mmu_interval_notifier *mni,
+ const struct mmu_notifier_range *range,
+ unsigned long cur_seq);
static int program_rcvarray(struct hfi1_filedata *fd, struct tid_user_buf *,
struct tid_group *grp,
unsigned int start, u16 count,
u32 *tidlist, unsigned int *tididx,
unsigned int *pmapped);
-static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
- struct tid_group **grp);
+static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo);
+static void __clear_tid_node(struct hfi1_filedata *fd,
+ struct tid_rb_node *node);
static void clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node);
static const struct mmu_interval_notifier_ops tid_mn_ops = {
.invalidate = tid_rb_invalidate,
};
+static const struct mmu_interval_notifier_ops tid_cover_ops = {
+ .invalidate = tid_cover_invalidate,
+};
/*
* Initialize context and file private data needed for Expected
@@ -153,16 +160,11 @@ static void unpin_rcv_pages(struct hfi1_filedata *fd,
static int pin_rcv_pages(struct hfi1_filedata *fd, struct tid_user_buf *tidbuf)
{
int pinned;
- unsigned int npages;
+ unsigned int npages = tidbuf->npages;
unsigned long vaddr = tidbuf->vaddr;
struct page **pages = NULL;
struct hfi1_devdata *dd = fd->uctxt->dd;
- /* Get the number of pages the user buffer spans */
- npages = num_user_pages(vaddr, tidbuf->length);
- if (!npages)
- return -EINVAL;
-
if (npages > fd->uctxt->expected_count) {
dd_dev_err(dd, "Expected buffer too big\n");
return -EINVAL;
@@ -189,7 +191,6 @@ static int pin_rcv_pages(struct hfi1_filedata *fd, struct tid_user_buf *tidbuf)
return pinned;
}
tidbuf->pages = pages;
- tidbuf->npages = npages;
fd->tid_n_pinned += pinned;
return pinned;
}
@@ -253,53 +254,66 @@ int hfi1_user_exp_rcv_setup(struct hfi1_filedata *fd,
tididx = 0, mapped, mapped_pages = 0;
u32 *tidlist = NULL;
struct tid_user_buf *tidbuf;
+ unsigned long mmu_seq = 0;
if (!PAGE_ALIGNED(tinfo->vaddr))
return -EINVAL;
+ if (tinfo->length == 0)
+ return -EINVAL;
tidbuf = kzalloc(sizeof(*tidbuf), GFP_KERNEL);
if (!tidbuf)
return -ENOMEM;
+ mutex_init(&tidbuf->cover_mutex);
tidbuf->vaddr = tinfo->vaddr;
tidbuf->length = tinfo->length;
+ tidbuf->npages = num_user_pages(tidbuf->vaddr, tidbuf->length);
tidbuf->psets = kcalloc(uctxt->expected_count, sizeof(*tidbuf->psets),
GFP_KERNEL);
if (!tidbuf->psets) {
- kfree(tidbuf);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto fail_release_mem;
+ }
+
+ if (fd->use_mn) {
+ ret = mmu_interval_notifier_insert(
+ &tidbuf->notifier, current->mm,
+ tidbuf->vaddr, tidbuf->npages * PAGE_SIZE,
+ &tid_cover_ops);
+ if (ret)
+ goto fail_release_mem;
+ mmu_seq = mmu_interval_read_begin(&tidbuf->notifier);
}
pinned = pin_rcv_pages(fd, tidbuf);
if (pinned <= 0) {
- kfree(tidbuf->psets);
- kfree(tidbuf);
- return pinned;
+ ret = (pinned < 0) ? pinned : -ENOSPC;
+ goto fail_unpin;
}
/* Find sets of physically contiguous pages */
tidbuf->n_psets = find_phys_blocks(tidbuf, pinned);
- /*
- * We don't need to access this under a lock since tid_used is per
- * process and the same process cannot be in hfi1_user_exp_rcv_clear()
- * and hfi1_user_exp_rcv_setup() at the same time.
- */
+ /* Reserve the number of expected tids to be used. */
spin_lock(&fd->tid_lock);
if (fd->tid_used + tidbuf->n_psets > fd->tid_limit)
pageset_count = fd->tid_limit - fd->tid_used;
else
pageset_count = tidbuf->n_psets;
+ fd->tid_used += pageset_count;
spin_unlock(&fd->tid_lock);
- if (!pageset_count)
- goto bail;
+ if (!pageset_count) {
+ ret = -ENOSPC;
+ goto fail_unreserve;
+ }
ngroups = pageset_count / dd->rcv_entries.group_size;
tidlist = kcalloc(pageset_count, sizeof(*tidlist), GFP_KERNEL);
if (!tidlist) {
ret = -ENOMEM;
- goto nomem;
+ goto fail_unreserve;
}
tididx = 0;
@@ -395,43 +409,78 @@ int hfi1_user_exp_rcv_setup(struct hfi1_filedata *fd,
}
unlock:
mutex_unlock(&uctxt->exp_mutex);
-nomem:
hfi1_cdbg(TID, "total mapped: tidpairs:%u pages:%u (%d)", tididx,
mapped_pages, ret);
- if (tididx) {
- spin_lock(&fd->tid_lock);
- fd->tid_used += tididx;
- spin_unlock(&fd->tid_lock);
- tinfo->tidcnt = tididx;
- tinfo->length = mapped_pages * PAGE_SIZE;
- if (copy_to_user(u64_to_user_ptr(tinfo->tidlist),
- tidlist, sizeof(tidlist[0]) * tididx)) {
- /*
- * On failure to copy to the user level, we need to undo
- * everything done so far so we don't leak resources.
- */
- tinfo->tidlist = (unsigned long)&tidlist;
- hfi1_user_exp_rcv_clear(fd, tinfo);
- tinfo->tidlist = 0;
- ret = -EFAULT;
- goto bail;
+ /* fail if nothing was programmed, set error if none provided */
+ if (tididx == 0) {
+ if (ret >= 0)
+ ret = -ENOSPC;
+ goto fail_unreserve;
+ }
+
+ /* adjust reserved tid_used to actual count */
+ spin_lock(&fd->tid_lock);
+ fd->tid_used -= pageset_count - tididx;
+ spin_unlock(&fd->tid_lock);
+
+ /* unpin all pages not covered by a TID */
+ unpin_rcv_pages(fd, tidbuf, NULL, mapped_pages, pinned - mapped_pages,
+ false);
+
+ if (fd->use_mn) {
+ /* check for an invalidate during setup */
+ bool fail = false;
+
+ mutex_lock(&tidbuf->cover_mutex);
+ fail = mmu_interval_read_retry(&tidbuf->notifier, mmu_seq);
+ mutex_unlock(&tidbuf->cover_mutex);
+
+ if (fail) {
+ ret = -EBUSY;
+ goto fail_unprogram;
}
}
- /*
- * If not everything was mapped (due to insufficient RcvArray entries,
- * for example), unpin all unmapped pages so we can pin them nex time.
- */
- if (mapped_pages != pinned)
- unpin_rcv_pages(fd, tidbuf, NULL, mapped_pages,
- (pinned - mapped_pages), false);
-bail:
- kfree(tidbuf->psets);
- kfree(tidlist);
+ tinfo->tidcnt = tididx;
+ tinfo->length = mapped_pages * PAGE_SIZE;
+
+ if (copy_to_user(u64_to_user_ptr(tinfo->tidlist),
+ tidlist, sizeof(tidlist[0]) * tididx)) {
+ ret = -EFAULT;
+ goto fail_unprogram;
+ }
+
+ if (fd->use_mn)
+ mmu_interval_notifier_remove(&tidbuf->notifier);
kfree(tidbuf->pages);
+ kfree(tidbuf->psets);
kfree(tidbuf);
- return ret > 0 ? 0 : ret;
+ kfree(tidlist);
+ return 0;
+
+fail_unprogram:
+ /* unprogram, unmap, and unpin all allocated TIDs */
+ tinfo->tidlist = (unsigned long)tidlist;
+ hfi1_user_exp_rcv_clear(fd, tinfo);
+ tinfo->tidlist = 0;
+ pinned = 0; /* nothing left to unpin */
+ pageset_count = 0; /* nothing left reserved */
+fail_unreserve:
+ spin_lock(&fd->tid_lock);
+ fd->tid_used -= pageset_count;
+ spin_unlock(&fd->tid_lock);
+fail_unpin:
+ if (fd->use_mn)
+ mmu_interval_notifier_remove(&tidbuf->notifier);
+ if (pinned > 0)
+ unpin_rcv_pages(fd, tidbuf, NULL, 0, pinned, false);
+fail_release_mem:
+ kfree(tidbuf->pages);
+ kfree(tidbuf->psets);
+ kfree(tidbuf);
+ kfree(tidlist);
+ return ret;
}
int hfi1_user_exp_rcv_clear(struct hfi1_filedata *fd,
@@ -452,7 +501,7 @@ int hfi1_user_exp_rcv_clear(struct hfi1_filedata *fd,
mutex_lock(&uctxt->exp_mutex);
for (tididx = 0; tididx < tinfo->tidcnt; tididx++) {
- ret = unprogram_rcvarray(fd, tidinfo[tididx], NULL);
+ ret = unprogram_rcvarray(fd, tidinfo[tididx]);
if (ret) {
hfi1_cdbg(TID, "Failed to unprogram rcv array %d",
ret);
@@ -707,6 +756,7 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd,
}
node->fdata = fd;
+ mutex_init(&node->invalidate_mutex);
node->phys = page_to_phys(pages[0]);
node->npages = npages;
node->rcventry = rcventry;
@@ -722,11 +772,6 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd,
&tid_mn_ops);
if (ret)
goto out_unmap;
- /*
- * FIXME: This is in the wrong order, the notifier should be
- * established before the pages are pinned by pin_rcv_pages.
- */
- mmu_interval_read_begin(&node->notifier);
}
fd->entry_to_rb[node->rcventry - uctxt->expected_base] = node;
@@ -746,8 +791,7 @@ static int set_rcvarray_entry(struct hfi1_filedata *fd,
return -EFAULT;
}
-static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
- struct tid_group **grp)
+static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo)
{
struct hfi1_ctxtdata *uctxt = fd->uctxt;
struct hfi1_devdata *dd = uctxt->dd;
@@ -770,9 +814,6 @@ static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
if (!node || node->rcventry != (uctxt->expected_base + rcventry))
return -EBADF;
- if (grp)
- *grp = node->grp;
-
if (fd->use_mn)
mmu_interval_notifier_remove(&node->notifier);
cacheless_tid_rb_remove(fd, node);
@@ -780,23 +821,34 @@ static int unprogram_rcvarray(struct hfi1_filedata *fd, u32 tidinfo,
return 0;
}
-static void clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node)
+static void __clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node)
{
struct hfi1_ctxtdata *uctxt = fd->uctxt;
struct hfi1_devdata *dd = uctxt->dd;
+ mutex_lock(&node->invalidate_mutex);
+ if (node->freed)
+ goto done;
+ node->freed = true;
+
trace_hfi1_exp_tid_unreg(uctxt->ctxt, fd->subctxt, node->rcventry,
node->npages,
node->notifier.interval_tree.start, node->phys,
node->dma_addr);
- /*
- * Make sure device has seen the write before we unpin the
- * pages.
- */
+ /* Make sure device has seen the write before pages are unpinned */
hfi1_put_tid(dd, node->rcventry, PT_INVALID_FLUSH, 0, 0);
unpin_rcv_pages(fd, NULL, node, 0, node->npages, true);
+done:
+ mutex_unlock(&node->invalidate_mutex);
+}
+
+static void clear_tid_node(struct hfi1_filedata *fd, struct tid_rb_node *node)
+{
+ struct hfi1_ctxtdata *uctxt = fd->uctxt;
+
+ __clear_tid_node(fd, node);
node->grp->used--;
node->grp->map &= ~(1 << (node->rcventry - node->grp->base));
@@ -855,10 +907,16 @@ static bool tid_rb_invalidate(struct mmu_interval_notifier *mni,
if (node->freed)
return true;
+ /* take action only if unmapping */
+ if (range->event != MMU_NOTIFY_UNMAP)
+ return true;
+
trace_hfi1_exp_tid_inval(uctxt->ctxt, fdata->subctxt,
node->notifier.interval_tree.start,
node->rcventry, node->npages, node->dma_addr);
- node->freed = true;
+
+ /* clear the hardware rcvarray entry */
+ __clear_tid_node(fdata, node);
spin_lock(&fdata->invalid_lock);
if (fdata->invalid_tid_idx < uctxt->expected_count) {
@@ -888,6 +946,23 @@ static bool tid_rb_invalidate(struct mmu_interval_notifier *mni,
return true;
}
+static bool tid_cover_invalidate(struct mmu_interval_notifier *mni,
+ const struct mmu_notifier_range *range,
+ unsigned long cur_seq)
+{
+ struct tid_user_buf *tidbuf =
+ container_of(mni, struct tid_user_buf, notifier);
+
+ /* take action only if unmapping */
+ if (range->event == MMU_NOTIFY_UNMAP) {
+ mutex_lock(&tidbuf->cover_mutex);
+ mmu_interval_set_seq(mni, cur_seq);
+ mutex_unlock(&tidbuf->cover_mutex);
+ }
+
+ return true;
+}
+
static void cacheless_tid_rb_remove(struct hfi1_filedata *fdata,
struct tid_rb_node *tnode)
{
diff --git a/drivers/infiniband/hw/hfi1/user_exp_rcv.h b/drivers/infiniband/hw/hfi1/user_exp_rcv.h
index 8c53e41..f8ee997 100644
--- a/drivers/infiniband/hw/hfi1/user_exp_rcv.h
+++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.h
@@ -16,6 +16,8 @@ struct tid_pageset {
};
struct tid_user_buf {
+ struct mmu_interval_notifier notifier;
+ struct mutex cover_mutex;
unsigned long vaddr;
unsigned long length;
unsigned int npages;
@@ -27,6 +29,7 @@ struct tid_user_buf {
struct tid_rb_node {
struct mmu_interval_notifier notifier;
struct hfi1_filedata *fdata;
+ struct mutex invalidate_mutex; /* covers hw removal */
unsigned long phys;
struct tid_group *grp;
u32 rcventry;
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index 7bce963..36aaedc 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -29,33 +29,52 @@ MODULE_PARM_DESC(cache_size, "Send and receive side cache size limit (in MB)");
bool hfi1_can_pin_pages(struct hfi1_devdata *dd, struct mm_struct *mm,
u32 nlocked, u32 npages)
{
- unsigned long ulimit = rlimit(RLIMIT_MEMLOCK), pinned, cache_limit,
- size = (cache_size * (1UL << 20)); /* convert to bytes */
- unsigned int usr_ctxts =
- dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt;
- bool can_lock = capable(CAP_IPC_LOCK);
+ unsigned long ulimit_pages;
+ unsigned long cache_limit_pages;
+ unsigned int usr_ctxts;
/*
- * Calculate per-cache size. The calculation below uses only a quarter
- * of the available per-context limit. This leaves space for other
- * pinning. Should we worry about shared ctxts?
+ * Perform RLIMIT_MEMLOCK based checks unless CAP_IPC_LOCK is present.
*/
- cache_limit = (ulimit / usr_ctxts) / 4;
+ if (!capable(CAP_IPC_LOCK)) {
+ ulimit_pages =
+ DIV_ROUND_DOWN_ULL(rlimit(RLIMIT_MEMLOCK), PAGE_SIZE);
- /* If ulimit isn't set to "unlimited" and is smaller than cache_size. */
- if (ulimit != (-1UL) && size > cache_limit)
- size = cache_limit;
+ /*
+ * Pinning these pages would exceed this process's locked memory
+ * limit.
+ */
+ if (atomic64_read(&mm->pinned_vm) + npages > ulimit_pages)
+ return false;
- /* Convert to number of pages */
- size = DIV_ROUND_UP(size, PAGE_SIZE);
+ /*
+ * Only allow 1/4 of the user's RLIMIT_MEMLOCK to be used for HFI
+ * caches. This fraction is then equally distributed among all
+ * existing user contexts. Note that if RLIMIT_MEMLOCK is
+ * 'unlimited' (-1), the value of this limit will be > 2^42 pages
+ * (2^64 / 2^12 / 2^8 / 2^2).
+ *
+ * The effectiveness of this check may be reduced if I/O occurs on
+ * some user contexts before all user contexts are created. This
+ * check assumes that this process is the only one using this
+ * context (e.g., the corresponding fd was not passed to another
+ * process for concurrent access) as there is no per-context,
+ * per-process tracking of pinned pages. It also assumes that each
+ * user context has only one cache to limit.
+ */
+ usr_ctxts = dd->num_rcv_contexts - dd->first_dyn_alloc_ctxt;
+ if (nlocked + npages > (ulimit_pages / usr_ctxts / 4))
+ return false;
+ }
- pinned = atomic64_read(&mm->pinned_vm);
-
- /* First, check the absolute limit against all pinned pages. */
- if (pinned + npages >= ulimit && !can_lock)
+ /*
+ * Pinning these pages would exceed the size limit for this cache.
+ */
+ cache_limit_pages = cache_size * (1024 * 1024) / PAGE_SIZE;
+ if (nlocked + npages > cache_limit_pages)
return false;
- return ((nlocked + npages) <= size) || can_lock;
+ return true;
}
int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t npages,
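The rewritten check works entirely in pages: the per-process RLIMIT_MEMLOCK test, the quarter-of-the-limit-per-context cap, and the cache_size module-parameter cap, with the first two skipped under CAP_IPC_LOCK. A compilable model of the same decisions; the 4 KiB page size and the parameter names are assumptions of the sketch:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE_EX 4096ull	/* assumed page size */

/* All quantities are in pages except memlock_bytes and cache_size_mb.
 * An 'unlimited' RLIMIT_MEMLOCK (all-ones bytes) still yields a huge
 * page budget after the divisions, as the comment in the patch notes. */
static bool can_pin(uint64_t memlock_bytes, bool cap_ipc_lock,
		    uint64_t pinned_vm, unsigned int usr_ctxts,
		    uint64_t cache_size_mb, uint64_t nlocked, uint64_t npages)
{
	uint64_t ulimit_pages = memlock_bytes / PAGE_SIZE_EX;
	uint64_t cache_limit_pages = cache_size_mb * 1024 * 1024 / PAGE_SIZE_EX;

	if (!cap_ipc_lock) {
		if (pinned_vm + npages > ulimit_pages)
			return false;		/* process-wide limit */
		if (nlocked + npages > ulimit_pages / usr_ctxts / 4)
			return false;		/* per-context cache share */
	}

	return nlocked + npages <= cache_limit_pages;	/* cache_size cap */
}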
diff --git a/drivers/infiniband/hw/irdma/cm.c b/drivers/infiniband/hw/irdma/cm.c
index b08c67b..a8ec3d8 100644
--- a/drivers/infiniband/hw/irdma/cm.c
+++ b/drivers/infiniband/hw/irdma/cm.c
@@ -1723,6 +1723,9 @@ irdma_add_mqh_4(struct irdma_device *iwdev, struct irdma_cm_info *cm_info,
continue;
idev = in_dev_get(ip_dev);
+ if (!idev)
+ continue;
+
in_dev_for_each_ifa_rtnl(ifa, idev) {
ibdev_dbg(&iwdev->ibdev,
"CM: Allocating child CM Listener forIP=%pI4, vlan_id=%d, MAC=%pM\n",
diff --git a/drivers/infiniband/hw/irdma/hw.c b/drivers/infiniband/hw/irdma/hw.c
index c14f19c..b918f80d 100644
--- a/drivers/infiniband/hw/irdma/hw.c
+++ b/drivers/infiniband/hw/irdma/hw.c
@@ -483,6 +483,8 @@ static enum irdma_status_code irdma_save_msix_info(struct irdma_pci_f *rf)
iw_qvlist->num_vectors = rf->msix_count;
if (rf->msix_count <= num_online_cpus())
rf->msix_shared = true;
+ else if (rf->msix_count > num_online_cpus() + 1)
+ rf->msix_count = num_online_cpus() + 1;
pmsix = rf->msix_entries;
for (i = 0, ceq_idx = 0; i < rf->msix_count; i++, iw_qvinfo++) {
diff --git a/drivers/infiniband/hw/irdma/verbs.c b/drivers/infiniband/hw/irdma/verbs.c
index c5971a8..27f22d5 100644
--- a/drivers/infiniband/hw/irdma/verbs.c
+++ b/drivers/infiniband/hw/irdma/verbs.c
@@ -2272,9 +2272,10 @@ static bool irdma_check_mr_contiguous(struct irdma_pble_alloc *palloc,
* @rf: RDMA PCI function
* @iwmr: mr pointer for this memory registration
* @use_pbles: flag if to use pble's
+ * @lvl_1_only: request only level 1 pble if true
*/
static int irdma_setup_pbles(struct irdma_pci_f *rf, struct irdma_mr *iwmr,
- bool use_pbles)
+ bool use_pbles, bool lvl_1_only)
{
struct irdma_pbl *iwpbl = &iwmr->iwpbl;
struct irdma_pble_alloc *palloc = &iwpbl->pble_alloc;
@@ -2285,7 +2286,7 @@ static int irdma_setup_pbles(struct irdma_pci_f *rf, struct irdma_mr *iwmr,
if (use_pbles) {
status = irdma_get_pble(rf->pble_rsrc, palloc, iwmr->page_cnt,
- false);
+ lvl_1_only);
if (status)
return -ENOMEM;
@@ -2328,16 +2329,10 @@ static int irdma_handle_q_mem(struct irdma_device *iwdev,
bool ret = true;
pg_size = iwmr->page_size;
- err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles);
+ err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles, true);
if (err)
return err;
- if (use_pbles && palloc->level != PBLE_LEVEL_1) {
- irdma_free_pble(iwdev->rf->pble_rsrc, palloc);
- iwpbl->pbl_allocated = false;
- return -ENOMEM;
- }
-
if (use_pbles)
arr = palloc->level1.addr;
@@ -2808,7 +2803,7 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
case IRDMA_MEMREG_TYPE_MEM:
use_pbles = (iwmr->page_cnt != 1);
- err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles);
+ err = irdma_setup_pbles(iwdev->rf, iwmr, use_pbles, false);
if (err)
goto error;
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 760b254..48a5756 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -281,8 +281,8 @@ static int usnic_uiom_map_sorted_intervals(struct list_head *intervals,
size = pa_end - pa_start + PAGE_SIZE;
usnic_dbg("va 0x%lx pa %pa size 0x%zx flags 0x%x",
va_start, &pa_start, size, flags);
- err = iommu_map(pd->domain, va_start, pa_start,
- size, flags);
+ err = iommu_map_atomic(pd->domain, va_start,
+ pa_start, size, flags);
if (err) {
usnic_err("Failed to map va 0x%lx pa %pa size 0x%zx with err %d\n",
va_start, &pa_start, size, err);
@@ -298,8 +298,8 @@ static int usnic_uiom_map_sorted_intervals(struct list_head *intervals,
size = pa - pa_start + PAGE_SIZE;
usnic_dbg("va 0x%lx pa %pa size 0x%zx flags 0x%x\n",
va_start, &pa_start, size, flags);
- err = iommu_map(pd->domain, va_start, pa_start,
- size, flags);
+ err = iommu_map_atomic(pd->domain, va_start,
+ pa_start, size, flags);
if (err) {
usnic_err("Failed to map va 0x%lx pa %pa size 0x%zx with err %d\n",
va_start, &pa_start, size, err);
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index 61c17db..bf69566 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -398,7 +398,7 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
mlock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
- if (num_pages + atomic64_read(&mm_s->pinned_vm) > mlock_limit) {
+ if (atomic64_add_return(num_pages, &mm_s->pinned_vm) > mlock_limit) {
rv = -ENOMEM;
goto out_sem_up;
}
@@ -411,18 +411,16 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
goto out_sem_up;
}
for (i = 0; num_pages; i++) {
- int got, nents = min_t(int, num_pages, PAGES_PER_CHUNK);
-
- umem->page_chunk[i].plist =
+ int nents = min_t(int, num_pages, PAGES_PER_CHUNK);
+ struct page **plist =
kcalloc(nents, sizeof(struct page *), GFP_KERNEL);
- if (!umem->page_chunk[i].plist) {
+
+ if (!plist) {
rv = -ENOMEM;
goto out_sem_up;
}
- got = 0;
+ umem->page_chunk[i].plist = plist;
while (nents) {
- struct page **plist = &umem->page_chunk[i].plist[got];
-
rv = pin_user_pages(first_page_va, nents,
foll_flags | FOLL_LONGTERM,
plist, NULL);
@@ -430,12 +428,11 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
goto out_sem_up;
umem->num_pages += rv;
- atomic64_add(rv, &mm_s->pinned_vm);
first_page_va += rv * PAGE_SIZE;
+ plist += rv;
nents -= rv;
- got += rv;
+ num_pages -= rv;
}
- num_pages -= got;
}
out_sem_up:
mmap_read_unlock(mm_s);
@@ -443,6 +440,10 @@ struct siw_umem *siw_umem_get(u64 start, u64 len, bool writable)
if (rv > 0)
return umem;
+ /* Adjust accounting for pages not pinned */
+ if (num_pages)
+ atomic64_sub(num_pages, &mm_s->pinned_vm);
+
siw_umem_release(umem, false);
return ERR_PTR(rv);
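Aside on the siw_umem_get() change above: the hunk replaces a racy read-then-add on mm_s->pinned_vm with atomic64_add_return(), reserving the whole request up front and subtracting whatever was not actually pinned on the error path. Below is a minimal userspace sketch of the same reserve-first pattern, with C11 atomics standing in for the kernel's atomic64_* helpers; the function name and limit are made up for illustration.

/*
 * reserve_pages(): add the full request before checking the limit, so two
 * concurrent callers cannot both pass the test against a stale counter,
 * then back the reservation out if nothing ends up pinned.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_long pinned_vm;

static int reserve_pages(long want, long limit)
{
	if (atomic_fetch_add(&pinned_vm, want) + want > limit) {
		atomic_fetch_sub(&pinned_vm, want);	/* nothing was pinned */
		return -1;
	}
	return 0;	/* actual pinning would happen here */
}

int main(void)
{
	atomic_init(&pinned_vm, 0);
	printf("first:  %d\n", reserve_pages(100, 128));
	printf("second: %d\n", reserve_pages(100, 128));	/* over the limit */
	return 0;
}

The point of add-then-check is that the limit test and the accounting update can no longer be interleaved by a second caller.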
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index 0aa8629..1ea95f80 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -2197,6 +2197,14 @@ int ipoib_intf_init(struct ib_device *hca, u32 port, const char *name,
rn->attach_mcast = ipoib_mcast_attach;
rn->detach_mcast = ipoib_mcast_detach;
rn->hca = hca;
+
+ rc = netif_set_real_num_tx_queues(dev, 1);
+ if (rc)
+ goto out;
+
+ rc = netif_set_real_num_rx_queues(dev, 1);
+ if (rc)
+ goto out;
}
priv->rn_ops = dev->netdev_ops;
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index c644617..54eb655 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -967,7 +967,7 @@ static void rtrs_clt_init_req(struct rtrs_clt_io_req *req,
refcount_set(&req->ref, 1);
req->mp_policy = clt_path->clt->mp_policy;
- iov_iter_kvec(&iter, READ, vec, 1, usr_len);
+ iov_iter_kvec(&iter, WRITE, vec, 1, usr_len);
len = _copy_from_iter(req->iu->buf, usr_len, &iter);
WARN_ON(len != usr_len);
diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
index abccdde..152242e 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.h
+++ b/drivers/infiniband/ulp/srp/ib_srp.h
@@ -62,9 +62,6 @@ enum {
SRP_DEFAULT_CMD_SQ_SIZE = SRP_DEFAULT_QUEUE_SIZE - SRP_RSP_SQ_SIZE -
SRP_TSK_MGMT_SQ_SIZE,
- SRP_TAG_NO_REQ = ~0U,
- SRP_TAG_TSK_MGMT = 1U << 31,
-
SRP_MAX_PAGES_PER_MR = 512,
SRP_MAX_ADD_CDB_LEN = 16,
@@ -79,6 +76,11 @@ enum {
sizeof(struct srp_imm_buf),
};
+enum {
+ SRP_TAG_NO_REQ = ~0U,
+ SRP_TAG_TSK_MGMT = BIT(31),
+};
+
enum srp_target_state {
SRP_TARGET_SCANNING,
SRP_TARGET_LIVE,
diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
index 4a6b33b..dd08ce9 100644
--- a/drivers/input/mouse/alps.c
+++ b/drivers/input/mouse/alps.c
@@ -852,8 +852,8 @@ static void alps_process_packet_v6(struct psmouse *psmouse)
x = y = z = 0;
/* Divide 4 since trackpoint's speed is too fast */
- input_report_rel(dev2, REL_X, (char)x / 4);
- input_report_rel(dev2, REL_Y, -((char)y / 4));
+ input_report_rel(dev2, REL_X, (s8)x / 4);
+ input_report_rel(dev2, REL_Y, -((s8)y / 4));
psmouse_report_standard_buttons(dev2, packet[3]);
@@ -1104,8 +1104,8 @@ static void alps_process_trackstick_packet_v7(struct psmouse *psmouse)
((packet[3] & 0x20) << 1);
z = (packet[5] & 0x3f) | ((packet[3] & 0x80) >> 1);
- input_report_rel(dev2, REL_X, (char)x);
- input_report_rel(dev2, REL_Y, -((char)y));
+ input_report_rel(dev2, REL_X, (s8)x);
+ input_report_rel(dev2, REL_Y, -((s8)y));
input_report_abs(dev2, ABS_PRESSURE, z);
psmouse_report_standard_buttons(dev2, packet[1]);
@@ -2294,20 +2294,20 @@ static int alps_get_v3_v7_resolution(struct psmouse *psmouse, int reg_pitch)
if (reg < 0)
return reg;
- x_pitch = (char)(reg << 4) >> 4; /* sign extend lower 4 bits */
+ x_pitch = (s8)(reg << 4) >> 4; /* sign extend lower 4 bits */
x_pitch = 50 + 2 * x_pitch; /* In 0.1 mm units */
- y_pitch = (char)reg >> 4; /* sign extend upper 4 bits */
+ y_pitch = (s8)reg >> 4; /* sign extend upper 4 bits */
y_pitch = 36 + 2 * y_pitch; /* In 0.1 mm units */
reg = alps_command_mode_read_reg(psmouse, reg_pitch + 1);
if (reg < 0)
return reg;
- x_electrode = (char)(reg << 4) >> 4; /* sign extend lower 4 bits */
+ x_electrode = (s8)(reg << 4) >> 4; /* sign extend lower 4 bits */
x_electrode = 17 + x_electrode;
- y_electrode = (char)reg >> 4; /* sign extend upper 4 bits */
+ y_electrode = (s8)reg >> 4; /* sign extend upper 4 bits */
y_electrode = 13 + y_electrode;
x_phys = x_pitch * (x_electrode - 1); /* In 0.1 mm units */
diff --git a/drivers/input/mouse/focaltech.c b/drivers/input/mouse/focaltech.c
index 6fd5fff..c74b990 100644
--- a/drivers/input/mouse/focaltech.c
+++ b/drivers/input/mouse/focaltech.c
@@ -202,8 +202,8 @@ static void focaltech_process_rel_packet(struct psmouse *psmouse,
state->pressed = packet[0] >> 7;
finger1 = ((packet[0] >> 4) & 0x7) - 1;
if (finger1 < FOC_MAX_FINGERS) {
- state->fingers[finger1].x += (char)packet[1];
- state->fingers[finger1].y += (char)packet[2];
+ state->fingers[finger1].x += (s8)packet[1];
+ state->fingers[finger1].y += (s8)packet[2];
} else {
psmouse_err(psmouse, "First finger in rel packet invalid: %d\n",
finger1);
@@ -218,8 +218,8 @@ static void focaltech_process_rel_packet(struct psmouse *psmouse,
*/
finger2 = ((packet[3] >> 4) & 0x7) - 1;
if (finger2 < FOC_MAX_FINGERS) {
- state->fingers[finger2].x += (char)packet[4];
- state->fingers[finger2].y += (char)packet[5];
+ state->fingers[finger2].x += (s8)packet[4];
+ state->fingers[finger2].y += (s8)packet[5];
}
}
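Aside on the (char) to (s8) conversions in alps.c and focaltech.c above: plain char is unsigned on several architectures the kernel supports (arm, powerpc, s390), so casting a raw byte to char does not reliably recover a negative relative movement, while s8 always does. A small standalone C illustration follows; build it with gcc -funsigned-char to reproduce the unsigned-char behaviour on x86.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	unsigned char packet = 0xfb;	/* device reports a delta of -5 */

	int via_char = (char)packet;	/* 251 where char is unsigned */
	int via_s8   = (int8_t)packet;	/* always -5; int8_t is the kernel's s8 */

	printf("char: %d, int8_t: %d\n", via_char, via_s8);
	return 0;
}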
diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
index 973a4c1..ffad142 100644
--- a/drivers/input/mouse/synaptics.c
+++ b/drivers/input/mouse/synaptics.c
@@ -191,7 +191,6 @@ static const char * const smbus_pnp_ids[] = {
"SYN3221", /* HP 15-ay000 */
"SYN323d", /* HP Spectre X360 13-w013dx */
"SYN3257", /* HP Envy 13-ad105ng */
- "SYN3286", /* HP Laptop 15-da3001TU */
NULL
};
diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
index 3a41ac9..239c777 100644
--- a/drivers/input/serio/i8042-x86ia64io.h
+++ b/drivers/input/serio/i8042-x86ia64io.h
@@ -1231,6 +1231,13 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
},
{
.matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "PCX0DX"),
+ },
+ .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+ SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
+ },
+ {
+ .matches = {
DMI_MATCH(DMI_BOARD_NAME, "X170SM"),
},
.driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
index 0b513fcd..166d36b 100644
--- a/drivers/input/touchscreen/goodix.c
+++ b/drivers/input/touchscreen/goodix.c
@@ -123,10 +123,18 @@ static const unsigned long goodix_irq_flags[] = {
static const struct dmi_system_id nine_bytes_report[] = {
#if defined(CONFIG_DMI) && defined(CONFIG_X86)
{
- .ident = "Lenovo YogaBook",
- /* YB1-X91L/F and YB1-X90L/F */
+ /* Lenovo Yoga Book X90F / X90L */
.matches = {
- DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X9")
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
+ }
+ },
+ {
+ /* Lenovo Yoga Book X91F / X91L */
+ .matches = {
+ /* Non exact match to match F + L versions */
+ DMI_MATCH(DMI_PRODUCT_NAME, "Lenovo YB1-X91"),
}
},
#endif
diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c
index 808f6e7..14d785e 100644
--- a/drivers/interconnect/core.c
+++ b/drivers/interconnect/core.c
@@ -850,6 +850,10 @@ void icc_node_destroy(int id)
mutex_unlock(&icc_lock);
+ if (!node)
+ return;
+
+ kfree(node->links);
kfree(node);
}
EXPORT_SYMBOL_GPL(icc_node_destroy);
diff --git a/drivers/interconnect/qcom/osm-l3.c b/drivers/interconnect/qcom/osm-l3.c
index c7af143..87edab1 100644
--- a/drivers/interconnect/qcom/osm-l3.c
+++ b/drivers/interconnect/qcom/osm-l3.c
@@ -275,7 +275,7 @@ static int qcom_osm_l3_probe(struct platform_device *pdev)
qnodes = desc->nodes;
num_nodes = desc->num_nodes;
- data = devm_kcalloc(&pdev->dev, num_nodes, sizeof(*node), GFP_KERNEL);
+ data = devm_kzalloc(&pdev->dev, struct_size(data, nodes, num_nodes), GFP_KERNEL);
if (!data)
return -ENOMEM;
diff --git a/drivers/interconnect/samsung/exynos.c b/drivers/interconnect/samsung/exynos.c
index 6559d8c..e706658 100644
--- a/drivers/interconnect/samsung/exynos.c
+++ b/drivers/interconnect/samsung/exynos.c
@@ -149,6 +149,9 @@ static int exynos_generic_icc_probe(struct platform_device *pdev)
&priv->bus_clk_ratio))
priv->bus_clk_ratio = EXYNOS_ICC_DEFAULT_BUS_CLK_RATIO;
+ icc_node->data = priv;
+ icc_node_add(icc_node, provider);
+
/*
* Register a PM QoS request for the parent (devfreq) device.
*/
@@ -157,9 +160,6 @@ static int exynos_generic_icc_probe(struct platform_device *pdev)
if (ret < 0)
goto err_node_del;
- icc_node->data = priv;
- icc_node_add(icc_node, provider);
-
icc_parent_node = exynos_icc_get_parent(bus_dev->of_node);
if (IS_ERR(icc_parent_node)) {
ret = PTR_ERR(icc_parent_node);
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index d6ce3f1..e4ee86b 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -85,6 +85,10 @@
#define ACPI_DEVFLAG_ATSDIS 0x10000000
#define LOOP_TIMEOUT 2000000
+
+#define IVRS_GET_SBDF_ID(seg, bus, dev, fd) (((seg & 0xffff) << 16) | ((bus & 0xff) << 8) \
+ | ((dev & 0x1f) << 3) | (fn & 0x7))
+
/*
* ACPI table definitions
*
@@ -3147,24 +3151,32 @@ static int __init parse_amd_iommu_options(char *str)
static int __init parse_ivrs_ioapic(char *str)
{
- unsigned int bus, dev, fn;
- int ret, id, i;
- u16 devid;
+ u32 seg = 0, bus, dev, fn;
+ int id, i;
+ u32 devid;
- ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn);
+ if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+ sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
+ goto found;
- if (ret != 4) {
- pr_err("Invalid command line: ivrs_ioapic%s\n", str);
- return 1;
+ if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+ sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) {
+ pr_warn("ivrs_ioapic%s option format deprecated; use ivrs_ioapic=%d@%04x:%02x:%02x.%d instead\n",
+ str, id, seg, bus, dev, fn);
+ goto found;
}
+ pr_err("Invalid command line: ivrs_ioapic%s\n", str);
+ return 1;
+
+found:
if (early_ioapic_map_size == EARLY_MAP_SIZE) {
pr_err("Early IOAPIC map overflow - ignoring ivrs_ioapic%s\n",
str);
return 1;
}
- devid = ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7);
+ devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn);
cmdline_maps = true;
i = early_ioapic_map_size++;
@@ -3177,24 +3189,32 @@ static int __init parse_ivrs_ioapic(char *str)
static int __init parse_ivrs_hpet(char *str)
{
- unsigned int bus, dev, fn;
- int ret, id, i;
- u16 devid;
+ u32 seg = 0, bus, dev, fn;
+ int id, i;
+ u32 devid;
- ret = sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn);
+ if (sscanf(str, "=%d@%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+ sscanf(str, "=%d@%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5)
+ goto found;
- if (ret != 4) {
- pr_err("Invalid command line: ivrs_hpet%s\n", str);
- return 1;
+ if (sscanf(str, "[%d]=%x:%x.%x", &id, &bus, &dev, &fn) == 4 ||
+ sscanf(str, "[%d]=%x:%x:%x.%x", &id, &seg, &bus, &dev, &fn) == 5) {
+ pr_warn("ivrs_hpet%s option format deprecated; use ivrs_hpet=%d@%04x:%02x:%02x.%d instead\n",
+ str, id, seg, bus, dev, fn);
+ goto found;
}
+ pr_err("Invalid command line: ivrs_hpet%s\n", str);
+ return 1;
+
+found:
if (early_hpet_map_size == EARLY_MAP_SIZE) {
pr_err("Early HPET map overflow - ignoring ivrs_hpet%s\n",
str);
return 1;
}
- devid = ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7);
+ devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn);
cmdline_maps = true;
i = early_hpet_map_size++;
@@ -3205,19 +3225,53 @@ static int __init parse_ivrs_hpet(char *str)
return 1;
}
+#define ACPIID_LEN (ACPIHID_UID_LEN + ACPIHID_HID_LEN)
+
static int __init parse_ivrs_acpihid(char *str)
{
- u32 bus, dev, fn;
- char *hid, *uid, *p;
- char acpiid[ACPIHID_UID_LEN + ACPIHID_HID_LEN] = {0};
- int ret, i;
+ u32 seg = 0, bus, dev, fn;
+ char *hid, *uid, *p, *addr;
+ char acpiid[ACPIID_LEN] = {0};
+ int i;
- ret = sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid);
- if (ret != 4) {
- pr_err("Invalid command line: ivrs_acpihid(%s)\n", str);
- return 1;
+ addr = strchr(str, '@');
+ if (!addr) {
+ addr = strchr(str, '=');
+ if (!addr)
+ goto not_found;
+
+ ++addr;
+
+ if (strlen(addr) > ACPIID_LEN)
+ goto not_found;
+
+ if (sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid) == 4 ||
+ sscanf(str, "[%x:%x:%x.%x]=%s", &seg, &bus, &dev, &fn, acpiid) == 5) {
+ pr_warn("ivrs_acpihid%s option format deprecated; use ivrs_acpihid=%s@%04x:%02x:%02x.%d instead\n",
+ str, acpiid, seg, bus, dev, fn);
+ goto found;
+ }
+ goto not_found;
}
+ /* We have the '@', make it the terminator to get just the acpiid */
+ *addr++ = 0;
+
+ if (strlen(str) > ACPIID_LEN + 1)
+ goto not_found;
+
+ if (sscanf(str, "=%s", acpiid) != 1)
+ goto not_found;
+
+ if (sscanf(addr, "%x:%x.%x", &bus, &dev, &fn) == 3 ||
+ sscanf(addr, "%x:%x:%x.%x", &seg, &bus, &dev, &fn) == 4)
+ goto found;
+
+not_found:
+ pr_err("Invalid command line: ivrs_acpihid%s\n", str);
+ return 1;
+
+found:
p = acpiid;
hid = strsep(&p, ":");
uid = p;
@@ -3237,8 +3291,7 @@ static int __init parse_ivrs_acpihid(char *str)
i = early_acpihid_map_size++;
memcpy(early_acpihid_map[i].hid, hid, strlen(hid));
memcpy(early_acpihid_map[i].uid, uid, strlen(uid));
- early_acpihid_map[i].devid =
- ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7);
+ early_acpihid_map[i].devid = IVRS_GET_SBDF_ID(seg, bus, dev, fn);
early_acpihid_map[i].cmd_line = true;
return 1;
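Aside on the ivrs_ioapic/ivrs_hpet/ivrs_acpihid parsing above: the options gain a PCI segment and a new id@seg:bus:dev.fn format, and IVRS_GET_SBDF_ID() packs the result as seg[31:16] bus[15:8] dev[7:3] fn[2:0]; the old bracketed form is still accepted, with seg defaulting to 0. A userspace sketch of the parse-and-pack step, where the helper name and sample string are illustrative rather than kernel code:

#include <stdio.h>
#include <stdint.h>

/* Same bit layout as the IVRS_GET_SBDF_ID() macro in the hunk above. */
static uint32_t sbdf_id(unsigned int seg, unsigned int bus,
			unsigned int dev, unsigned int fn)
{
	return ((seg & 0xffff) << 16) | ((bus & 0xff) << 8) |
	       ((dev & 0x1f) << 3) | (fn & 0x7);
}

int main(void)
{
	int id;
	unsigned int seg = 0, bus, dev, fn;

	/* new format: ivrs_ioapic=<id>@<seg>:<bus>:<dev>.<fn> */
	if (sscanf("=32@0000:00:14.0", "=%d@%x:%x:%x.%x",
		   &id, &seg, &bus, &dev, &fn) == 5)
		printf("ioapic %d -> devid 0x%08x\n",
		       id, (unsigned int)sbdf_id(seg, bus, dev, fn));
	return 0;
}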
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 6946724..9e243dd 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1587,27 +1587,29 @@ static int pdev_iommuv2_enable(struct pci_dev *pdev)
/* Only allow access to user-accessible pages */
ret = pci_enable_pasid(pdev, 0);
if (ret)
- goto out_err;
+ return ret;
/* First reset the PRI state of the device */
ret = pci_reset_pri(pdev);
if (ret)
- goto out_err;
+ goto out_err_pasid;
/* Enable PRI */
/* FIXME: Hardcode number of outstanding requests for now */
ret = pci_enable_pri(pdev, 32);
if (ret)
- goto out_err;
+ goto out_err_pasid;
ret = pci_enable_ats(pdev, PAGE_SHIFT);
if (ret)
- goto out_err;
+ goto out_err_pri;
return 0;
-out_err:
+out_err_pri:
pci_disable_pri(pdev);
+
+out_err_pasid:
pci_disable_pasid(pdev);
return ret;
diff --git a/drivers/iommu/intel/cap_audit.c b/drivers/iommu/intel/cap_audit.c
index b12e421..71596fc 100644
--- a/drivers/iommu/intel/cap_audit.c
+++ b/drivers/iommu/intel/cap_audit.c
@@ -144,6 +144,7 @@ static int cap_audit_static(struct intel_iommu *iommu, enum cap_audit_type type)
{
struct dmar_drhd_unit *d;
struct intel_iommu *i;
+ int rc = 0;
rcu_read_lock();
if (list_empty(&dmar_drhd_units))
@@ -163,9 +164,17 @@ static int cap_audit_static(struct intel_iommu *iommu, enum cap_audit_type type)
check_irq_capabilities(iommu, i);
}
+ /*
+ * If the system is sane to support scalable mode, either SL or FL
+ * should be sane.
+ */
+ if (intel_cap_smts_sanity() &&
+ !intel_cap_flts_sanity() && !intel_cap_slts_sanity())
+ rc = -EOPNOTSUPP;
+
out:
rcu_read_unlock();
- return 0;
+ return rc;
}
int intel_cap_audit(enum cap_audit_type type, struct intel_iommu *iommu)
@@ -203,3 +212,8 @@ bool intel_cap_flts_sanity(void)
{
return ecap_flts(intel_iommu_ecap_sanity);
}
+
+bool intel_cap_slts_sanity(void)
+{
+ return ecap_slts(intel_iommu_ecap_sanity);
+}
diff --git a/drivers/iommu/intel/cap_audit.h b/drivers/iommu/intel/cap_audit.h
index 74cfcca..d07b759 100644
--- a/drivers/iommu/intel/cap_audit.h
+++ b/drivers/iommu/intel/cap_audit.h
@@ -111,6 +111,7 @@ bool intel_cap_smts_sanity(void);
bool intel_cap_pasid_sanity(void);
bool intel_cap_nest_sanity(void);
bool intel_cap_flts_sanity(void);
+bool intel_cap_slts_sanity(void);
static inline bool scalable_mode_support(void)
{
diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index bff2420..7c20083 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1080,7 +1080,8 @@ static int alloc_iommu(struct dmar_drhd_unit *drhd)
}
err = -EINVAL;
- if (cap_sagaw(iommu->cap) == 0) {
+ if (!cap_sagaw(iommu->cap) &&
+ (!ecap_smts(iommu->ecap) || ecap_slts(iommu->ecap))) {
pr_info("%s: No supported address widths. Not attempting DMA translation.\n",
iommu->name);
drhd->ignored = 1;
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 751ff91..2953847 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -514,7 +514,7 @@ static inline void free_devinfo_mem(void *vaddr)
static inline int domain_type_is_si(struct dmar_domain *domain)
{
- return domain->flags & DOMAIN_FLAG_STATIC_IDENTITY;
+ return domain->domain.type == IOMMU_DOMAIN_IDENTITY;
}
static inline bool domain_use_first_level(struct dmar_domain *domain)
@@ -1917,12 +1917,21 @@ static void free_dmar_iommu(struct intel_iommu *iommu)
* Check and return whether first level is used by default for
* DMA translation.
*/
-static bool first_level_by_default(void)
+static bool first_level_by_default(unsigned int type)
{
- return scalable_mode_support() && intel_cap_flts_sanity();
+ /* Only SL is available in legacy mode */
+ if (!scalable_mode_support())
+ return false;
+
+ /* Only level (either FL or SL) is available, just use it */
+ if (intel_cap_flts_sanity() ^ intel_cap_slts_sanity())
+ return intel_cap_flts_sanity();
+
+ /* Both levels are available, decide it based on domain type */
+ return type != IOMMU_DOMAIN_UNMANAGED;
}
-static struct dmar_domain *alloc_domain(int flags)
+static struct dmar_domain *alloc_domain(unsigned int type)
{
struct dmar_domain *domain;
@@ -1932,8 +1941,7 @@ static struct dmar_domain *alloc_domain(int flags)
memset(domain, 0, sizeof(*domain));
domain->nid = NUMA_NO_NODE;
- domain->flags = flags;
- if (first_level_by_default())
+ if (first_level_by_default(type))
domain->flags |= DOMAIN_FLAG_USE_FIRST_LEVEL;
domain->has_iotlb_device = false;
INIT_LIST_HEAD(&domain->devices);
@@ -2753,7 +2761,7 @@ static int __init si_domain_init(int hw)
struct device *dev;
int i, nid, ret;
- si_domain = alloc_domain(DOMAIN_FLAG_STATIC_IDENTITY);
+ si_domain = alloc_domain(IOMMU_DOMAIN_IDENTITY);
if (!si_domain)
return -EFAULT;
@@ -4415,7 +4423,8 @@ int __init intel_iommu_init(void)
* is likely to be much lower than the overhead of synchronizing
* the virtual and physical IOMMU page-tables.
*/
- if (cap_caching_mode(iommu->cap)) {
+ if (cap_caching_mode(iommu->cap) &&
+ !first_level_by_default(IOMMU_DOMAIN_DMA)) {
pr_info_once("IOMMU batching disallowed due to virtualization\n");
iommu_set_dma_strict();
}
@@ -4555,7 +4564,7 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
case IOMMU_DOMAIN_DMA:
case IOMMU_DOMAIN_DMA_FQ:
case IOMMU_DOMAIN_UNMANAGED:
- dmar_domain = alloc_domain(0);
+ dmar_domain = alloc_domain(type);
if (!dmar_domain) {
pr_err("Can't allocate dmar_domain\n");
return NULL;
@@ -5131,7 +5140,12 @@ static size_t intel_iommu_unmap(struct iommu_domain *domain,
if (dmar_domain->max_addr == iova + size)
dmar_domain->max_addr = iova;
- iommu_iotlb_gather_add_page(domain, gather, iova, size);
+ /*
+ * We do not use page-selective IOTLB invalidation in flush queue,
+ * so there is no need to track page and sync iotlb.
+ */
+ if (!iommu_iotlb_gather_queued(gather))
+ iommu_iotlb_gather_add_page(domain, gather, iova, size);
return size;
}
@@ -5405,8 +5419,12 @@ static int intel_iommu_enable_sva(struct device *dev)
return -EINVAL;
ret = iopf_queue_add_device(iommu->iopf_queue, dev);
- if (!ret)
- ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
+ if (ret)
+ return ret;
+
+ ret = iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
+ if (ret)
+ iopf_queue_remove_device(iommu->iopf_queue, dev);
return ret;
}
@@ -5418,8 +5436,12 @@ static int intel_iommu_disable_sva(struct device *dev)
int ret;
ret = iommu_unregister_device_fault_handler(dev);
- if (!ret)
- ret = iopf_queue_remove_device(iommu->iopf_queue, dev);
+ if (ret)
+ return ret;
+
+ ret = iopf_queue_remove_device(iommu->iopf_queue, dev);
+ if (ret)
+ iommu_register_device_fault_handler(dev, iommu_queue_iopf, dev);
return ret;
}
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 0060bd0..6dbc43b 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -186,6 +186,9 @@ int intel_pasid_alloc_table(struct device *dev)
attach_out:
device_attach_pasid_table(info, pasid_table);
+ if (!ecap_coherent(info->iommu->ecap))
+ clflush_cache_range(pasid_table->table, size);
+
return 0;
}
@@ -276,6 +279,10 @@ static struct pasid_entry *intel_pasid_get_entry(struct device *dev, u32 pasid)
free_pgtable_page(entries);
goto retry;
}
+ if (!ecap_coherent(info->iommu->ecap)) {
+ clflush_cache_range(entries, VTD_PAGE_SIZE);
+ clflush_cache_range(&dir[dir_index].val, sizeof(*dir));
+ }
}
return &entries[index];
@@ -426,6 +433,16 @@ static inline void pasid_set_page_snoop(struct pasid_entry *pe, bool value)
}
/*
+ * Setup No Execute Enable bit (Bit 133) of a scalable mode PASID
+ * entry. It is required when XD bit of the first level page table
+ * entry is about to be set.
+ */
+static inline void pasid_set_nxe(struct pasid_entry *pe)
+{
+ pasid_set_bits(&pe->val[2], 1 << 5, 1 << 5);
+}
+
+/*
* Setup the Page Snoop (PGSNP) field (Bit 88) of a scalable mode
* PASID entry.
*/
@@ -631,6 +648,7 @@ int intel_pasid_setup_first_level(struct intel_iommu *iommu,
pasid_set_domain_id(pte, did);
pasid_set_address_width(pte, iommu->agaw);
pasid_set_page_snoop(pte, !!ecap_smpwc(iommu->ecap));
+ pasid_set_nxe(pte);
/* Setup Present and PASID Granular Transfer Type: */
pasid_set_translation_type(pte, PASID_ENTRY_PGTT_FL_ONLY);
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index c10d35e..d672620 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -657,12 +657,16 @@ struct iommu_group *iommu_group_alloc(void)
ret = iommu_group_create_file(group,
&iommu_group_attr_reserved_regions);
- if (ret)
+ if (ret) {
+ kobject_put(group->devices_kobj);
return ERR_PTR(ret);
+ }
ret = iommu_group_create_file(group, &iommu_group_attr_type);
- if (ret)
+ if (ret) {
+ kobject_put(group->devices_kobj);
return ERR_PTR(ret);
+ }
pr_debug("Allocated group %d\n", group->id);
diff --git a/drivers/irqchip/irq-alpine-msi.c b/drivers/irqchip/irq-alpine-msi.c
index 5ddb8e5..fc1ef7d 100644
--- a/drivers/irqchip/irq-alpine-msi.c
+++ b/drivers/irqchip/irq-alpine-msi.c
@@ -199,6 +199,7 @@ static int alpine_msix_init_domains(struct alpine_msix_data *priv,
}
gic_domain = irq_find_host(gic_node);
+ of_node_put(gic_node);
if (!gic_domain) {
pr_err("Failed to find the GIC domain\n");
return -ENXIO;
diff --git a/drivers/irqchip/irq-bcm6345-l1.c b/drivers/irqchip/irq-bcm6345-l1.c
index 1bd0621..ebc3a25 100644
--- a/drivers/irqchip/irq-bcm6345-l1.c
+++ b/drivers/irqchip/irq-bcm6345-l1.c
@@ -220,11 +220,11 @@ static int bcm6345_l1_set_affinity(struct irq_data *d,
enabled = intc->cpus[old_cpu]->enable_cache[word] & mask;
if (enabled)
__bcm6345_l1_mask(d);
- cpumask_copy(irq_data_get_affinity_mask(d), dest);
+ irq_data_update_affinity(d, dest);
if (enabled)
__bcm6345_l1_unmask(d);
} else {
- cpumask_copy(irq_data_get_affinity_mask(d), dest);
+ irq_data_update_affinity(d, dest);
}
raw_spin_unlock_irqrestore(&intc->lock, flags);
diff --git a/drivers/irqchip/irq-bcm7120-l2.c b/drivers/irqchip/irq-bcm7120-l2.c
index f23d765..e91b38a 100644
--- a/drivers/irqchip/irq-bcm7120-l2.c
+++ b/drivers/irqchip/irq-bcm7120-l2.c
@@ -271,7 +271,8 @@ static int __init bcm7120_l2_intc_probe(struct device_node *dn,
flags |= IRQ_GC_BE_IO;
ret = irq_alloc_domain_generic_chips(data->domain, IRQS_PER_WORD, 1,
- dn->full_name, handle_level_irq, clr, 0, flags);
+ dn->full_name, handle_level_irq, clr,
+ IRQ_LEVEL, flags);
if (ret) {
pr_err("failed to allocate generic irq chip\n");
goto out_free_domain;
diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c
index 8e09115..fddea72 100644
--- a/drivers/irqchip/irq-brcmstb-l2.c
+++ b/drivers/irqchip/irq-brcmstb-l2.c
@@ -161,6 +161,7 @@ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
*init_params)
{
unsigned int clr = IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN;
+ unsigned int set = 0;
struct brcmstb_l2_intc_data *data;
struct irq_chip_type *ct;
int ret;
@@ -208,9 +209,12 @@ static int __init brcmstb_l2_intc_of_init(struct device_node *np,
if (IS_ENABLED(CONFIG_MIPS) && IS_ENABLED(CONFIG_CPU_BIG_ENDIAN))
flags |= IRQ_GC_BE_IO;
+ if (init_params->handler == handle_level_irq)
+ set |= IRQ_LEVEL;
+
/* Allocate a single Generic IRQ chip for this node */
ret = irq_alloc_domain_generic_chips(data->domain, 32, 1,
- np->full_name, init_params->handler, clr, 0, flags);
+ np->full_name, init_params->handler, clr, set, flags);
if (ret) {
pr_err("failed to allocate generic irq chip\n");
goto out_free_domain;
diff --git a/drivers/irqchip/irq-mvebu-gicp.c b/drivers/irqchip/irq-mvebu-gicp.c
index fe88a782..c43a345 100644
--- a/drivers/irqchip/irq-mvebu-gicp.c
+++ b/drivers/irqchip/irq-mvebu-gicp.c
@@ -221,6 +221,7 @@ static int mvebu_gicp_probe(struct platform_device *pdev)
}
parent_domain = irq_find_host(irq_parent_dn);
+ of_node_put(irq_parent_dn);
if (!parent_domain) {
dev_err(&pdev->dev, "failed to find parent IRQ domain\n");
return -ENODEV;
diff --git a/drivers/irqchip/irq-ti-sci-intr.c b/drivers/irqchip/irq-ti-sci-intr.c
index fe8fad2..020ddf2 100644
--- a/drivers/irqchip/irq-ti-sci-intr.c
+++ b/drivers/irqchip/irq-ti-sci-intr.c
@@ -236,6 +236,7 @@ static int ti_sci_intr_irq_domain_probe(struct platform_device *pdev)
}
parent_domain = irq_find_host(parent_node);
+ of_node_put(parent_node);
if (!parent_domain) {
dev_err(dev, "Failed to find IRQ parent domain\n");
return -ENODEV;
diff --git a/drivers/irqchip/irqchip.c b/drivers/irqchip/irqchip.c
index 3570f0a..7899607 100644
--- a/drivers/irqchip/irqchip.c
+++ b/drivers/irqchip/irqchip.c
@@ -38,8 +38,10 @@ int platform_irqchip_probe(struct platform_device *pdev)
struct device_node *par_np = of_irq_find_parent(np);
of_irq_init_cb_t irq_init_cb = of_device_get_match_data(&pdev->dev);
- if (!irq_init_cb)
+ if (!irq_init_cb) {
+ of_node_put(par_np);
return -EINVAL;
+ }
if (par_np == np)
par_np = NULL;
@@ -52,8 +54,10 @@ int platform_irqchip_probe(struct platform_device *pdev)
* interrupt controller. The actual initialization callback of this
* interrupt controller can check for specific domains as necessary.
*/
- if (par_np && !irq_find_matching_host(par_np, DOMAIN_BUS_ANY))
+ if (par_np && !irq_find_matching_host(par_np, DOMAIN_BUS_ANY)) {
+ of_node_put(par_np);
return -EPROBE_DEFER;
+ }
return irq_init_cb(np, par_np);
}
diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
index f4bb02f..1024b15 100644
--- a/drivers/leds/led-class.c
+++ b/drivers/leds/led-class.c
@@ -235,14 +235,17 @@ struct led_classdev *of_led_get(struct device_node *np, int index)
led_dev = class_find_device_by_of_node(leds_class, led_node);
of_node_put(led_node);
+ put_device(led_dev);
if (!led_dev)
return ERR_PTR(-EPROBE_DEFER);
led_cdev = dev_get_drvdata(led_dev);
- if (!try_module_get(led_cdev->dev->parent->driver->owner))
+ if (!try_module_get(led_cdev->dev->parent->driver->owner)) {
+ put_device(led_cdev->dev);
return ERR_PTR(-ENODEV);
+ }
return led_cdev;
}
@@ -255,6 +258,7 @@ EXPORT_SYMBOL_GPL(of_led_get);
void led_put(struct led_classdev *led_cdev)
{
module_put(led_cdev->dev->parent->driver->owner);
+ put_device(led_cdev->dev);
}
EXPORT_SYMBOL_GPL(led_put);
diff --git a/drivers/macintosh/windfarm_lm75_sensor.c b/drivers/macintosh/windfarm_lm75_sensor.c
index 29f48c2..e90ad1b 100644
--- a/drivers/macintosh/windfarm_lm75_sensor.c
+++ b/drivers/macintosh/windfarm_lm75_sensor.c
@@ -34,8 +34,8 @@
#endif
struct wf_lm75_sensor {
- int ds1775 : 1;
- int inited : 1;
+ unsigned int ds1775 : 1;
+ unsigned int inited : 1;
struct i2c_client *i2c;
struct wf_sensor sens;
};
diff --git a/drivers/macintosh/windfarm_smu_sensors.c b/drivers/macintosh/windfarm_smu_sensors.c
index c8706cf..714c1e1 100644
--- a/drivers/macintosh/windfarm_smu_sensors.c
+++ b/drivers/macintosh/windfarm_smu_sensors.c
@@ -273,8 +273,8 @@ struct smu_cpu_power_sensor {
struct list_head link;
struct wf_sensor *volts;
struct wf_sensor *amps;
- int fake_volts : 1;
- int quadratic : 1;
+ unsigned int fake_volts : 1;
+ unsigned int quadratic : 1;
struct wf_sensor sens;
};
#define to_smu_cpu_power(c) container_of(c, struct smu_cpu_power_sensor, sens)
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index abfe7e3..24cd28e 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -1813,6 +1813,7 @@ static void process_deferred_bios(struct work_struct *ws)
else
commit_needed = process_bio(cache, bio) || commit_needed;
+ cond_resched();
}
if (commit_needed)
@@ -1835,6 +1836,7 @@ static void requeue_deferred_bios(struct cache *cache)
while ((bio = bio_list_pop(&bios))) {
bio->bi_status = BLK_STS_DM_REQUEUE;
bio_endio(bio);
+ cond_resched();
}
}
@@ -1875,6 +1877,8 @@ static void check_migrations(struct work_struct *ws)
r = mg_start(cache, op, NULL);
if (r)
break;
+
+ cond_resched();
}
}
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index f30fd38..a428770 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -68,7 +68,9 @@ struct dm_crypt_io {
struct crypt_config *cc;
struct bio *base_bio;
u8 *integrity_metadata;
- bool integrity_metadata_from_pool;
+ bool integrity_metadata_from_pool:1;
+ bool in_tasklet:1;
+
struct work_struct work;
struct tasklet_struct tasklet;
@@ -1723,6 +1725,7 @@ static void crypt_io_init(struct dm_crypt_io *io, struct crypt_config *cc,
io->ctx.r.req = NULL;
io->integrity_metadata = NULL;
io->integrity_metadata_from_pool = false;
+ io->in_tasklet = false;
atomic_set(&io->io_pending, 0);
}
@@ -1768,14 +1771,13 @@ static void crypt_dec_pending(struct dm_crypt_io *io)
* our tasklet. In this case we need to delay bio_endio()
* execution to after the tasklet is done and dequeued.
*/
- if (tasklet_trylock(&io->tasklet)) {
- tasklet_unlock(&io->tasklet);
- bio_endio(base_bio);
+ if (io->in_tasklet) {
+ INIT_WORK(&io->work, kcryptd_io_bio_endio);
+ queue_work(cc->io_queue, &io->work);
return;
}
- INIT_WORK(&io->work, kcryptd_io_bio_endio);
- queue_work(cc->io_queue, &io->work);
+ bio_endio(base_bio);
}
/*
@@ -1935,6 +1937,7 @@ static int dmcrypt_write(void *data)
io = crypt_io_from_node(rb_first(&write_tree));
rb_erase(&io->rb_node, &write_tree);
kcryptd_io_write(io);
+ cond_resched();
} while (!RB_EMPTY_ROOT(&write_tree));
blk_finish_plug(&plug);
}
@@ -2228,6 +2231,7 @@ static void kcryptd_queue_crypt(struct dm_crypt_io *io)
* it is being executed with irqs disabled.
*/
if (in_hardirq() || irqs_disabled()) {
+ io->in_tasklet = true;
tasklet_init(&io->tasklet, kcryptd_crypt_tasklet, (unsigned long)&io->work);
tasklet_schedule(&io->tasklet);
return;
diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
index 4b94ffe..bf7f205 100644
--- a/drivers/md/dm-flakey.c
+++ b/drivers/md/dm-flakey.c
@@ -301,9 +301,13 @@ static void corrupt_bio_data(struct bio *bio, struct flakey_c *fc)
*/
bio_for_each_segment(bvec, bio, iter) {
if (bio_iter_len(bio, iter) > corrupt_bio_byte) {
- char *segment = (page_address(bio_iter_page(bio, iter))
- + bio_iter_offset(bio, iter));
+ char *segment;
+ struct page *page = bio_iter_page(bio, iter);
+ if (unlikely(page == ZERO_PAGE(0)))
+ break;
+ segment = bvec_kmap_local(&bvec);
segment[corrupt_bio_byte] = fc->corrupt_bio_value;
+ kunmap_local(segment);
DMDEBUG("Corrupting data bio=%p by writing %u to byte %u "
"(rw=%c bi_opf=%u bi_sector=%llu size=%u)\n",
bio, fc->corrupt_bio_value, fc->corrupt_bio_byte,
@@ -359,9 +363,11 @@ static int flakey_map(struct dm_target *ti, struct bio *bio)
/*
* Corrupt matching writes.
*/
- if (fc->corrupt_bio_byte && (fc->corrupt_bio_rw == WRITE)) {
- if (all_corrupt_bio_flags_match(bio, fc))
- corrupt_bio_data(bio, fc);
+ if (fc->corrupt_bio_byte) {
+ if (fc->corrupt_bio_rw == WRITE) {
+ if (all_corrupt_bio_flags_match(bio, fc))
+ corrupt_bio_data(bio, fc);
+ }
goto map_bio;
}
@@ -387,13 +393,14 @@ static int flakey_end_io(struct dm_target *ti, struct bio *bio,
return DM_ENDIO_DONE;
if (!*error && pb->bio_submitted && (bio_data_dir(bio) == READ)) {
- if (fc->corrupt_bio_byte && (fc->corrupt_bio_rw == READ) &&
- all_corrupt_bio_flags_match(bio, fc)) {
- /*
- * Corrupt successful matching READs while in down state.
- */
- corrupt_bio_data(bio, fc);
-
+ if (fc->corrupt_bio_byte) {
+ if ((fc->corrupt_bio_rw == READ) &&
+ all_corrupt_bio_flags_match(bio, fc)) {
+ /*
+ * Corrupt successful matching READs while in down state.
+ */
+ corrupt_bio_data(bio, fc);
+ }
} else if (!test_bit(DROP_WRITES, &fc->flags) &&
!test_bit(ERROR_WRITES, &fc->flags)) {
/*
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index dcaca4a..e277feb 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -482,7 +482,7 @@ static struct mapped_device *dm_hash_rename(struct dm_ioctl *param,
dm_table_event(table);
dm_put_live_table(hc->md, srcu_idx);
- if (!dm_kobject_uevent(hc->md, KOBJ_CHANGE, param->event_nr))
+ if (!dm_kobject_uevent(hc->md, KOBJ_CHANGE, param->event_nr, false))
param->flags |= DM_UEVENT_GENERATED_FLAG;
md = hc->md;
@@ -989,7 +989,7 @@ static int dev_remove(struct file *filp, struct dm_ioctl *param, size_t param_si
dm_ima_measure_on_device_remove(md, false);
- if (!dm_kobject_uevent(md, KOBJ_REMOVE, param->event_nr))
+ if (!dm_kobject_uevent(md, KOBJ_REMOVE, param->event_nr, false))
param->flags |= DM_UEVENT_GENERATED_FLAG;
dm_put(md);
@@ -1123,6 +1123,7 @@ static int do_resume(struct dm_ioctl *param)
struct hash_cell *hc;
struct mapped_device *md;
struct dm_table *new_map, *old_map = NULL;
+ bool need_resize_uevent = false;
down_write(&_hash_lock);
@@ -1143,6 +1144,8 @@ static int do_resume(struct dm_ioctl *param)
/* Do we need to load a new map ? */
if (new_map) {
+ sector_t old_size, new_size;
+
/* Suspend if it isn't already suspended */
if (param->flags & DM_SKIP_LOCKFS_FLAG)
suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
@@ -1151,6 +1154,7 @@ static int do_resume(struct dm_ioctl *param)
if (!dm_suspended_md(md))
dm_suspend(md, suspend_flags);
+ old_size = dm_get_size(md);
old_map = dm_swap_table(md, new_map);
if (IS_ERR(old_map)) {
dm_sync_table(md);
@@ -1158,6 +1162,9 @@ static int do_resume(struct dm_ioctl *param)
dm_put(md);
return PTR_ERR(old_map);
}
+ new_size = dm_get_size(md);
+ if (old_size && new_size && old_size != new_size)
+ need_resize_uevent = true;
if (dm_table_get_mode(new_map) & FMODE_WRITE)
set_disk_ro(dm_disk(md), 0);
@@ -1170,7 +1177,7 @@ static int do_resume(struct dm_ioctl *param)
if (!r) {
dm_ima_measure_on_device_resume(md, new_map ? true : false);
- if (!dm_kobject_uevent(md, KOBJ_CHANGE, param->event_nr))
+ if (!dm_kobject_uevent(md, KOBJ_CHANGE, param->event_nr, need_resize_uevent))
param->flags |= DM_UEVENT_GENERATED_FLAG;
}
}
diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
index a3f2050..9f71d16 100644
--- a/drivers/md/dm-stats.c
+++ b/drivers/md/dm-stats.c
@@ -188,7 +188,7 @@ static int dm_stat_in_flight(struct dm_stat_shared *shared)
atomic_read(&shared->in_flight[WRITE]);
}
-void dm_stats_init(struct dm_stats *stats)
+int dm_stats_init(struct dm_stats *stats)
{
int cpu;
struct dm_stats_last_position *last;
@@ -197,11 +197,16 @@ void dm_stats_init(struct dm_stats *stats)
INIT_LIST_HEAD(&stats->list);
stats->precise_timestamps = false;
stats->last = alloc_percpu(struct dm_stats_last_position);
+ if (!stats->last)
+ return -ENOMEM;
+
for_each_possible_cpu(cpu) {
last = per_cpu_ptr(stats->last, cpu);
last->last_sector = (sector_t)ULLONG_MAX;
last->last_rw = UINT_MAX;
}
+
+ return 0;
}
void dm_stats_cleanup(struct dm_stats *stats)
diff --git a/drivers/md/dm-stats.h b/drivers/md/dm-stats.h
index 09c81a1..ee32b09 100644
--- a/drivers/md/dm-stats.h
+++ b/drivers/md/dm-stats.h
@@ -21,7 +21,7 @@ struct dm_stats_aux {
unsigned long long duration_ns;
};
-void dm_stats_init(struct dm_stats *st);
+int dm_stats_init(struct dm_stats *st);
void dm_stats_cleanup(struct dm_stats *st);
struct mapped_device;
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index cce26f4..1cf6526 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -2217,6 +2217,7 @@ static void process_thin_deferred_bios(struct thin_c *tc)
throttle_work_update(&pool->throttle);
dm_pool_issue_prefetches(pool->pmd);
}
+ cond_resched();
}
blk_finish_plug(&plug);
}
@@ -2299,6 +2300,7 @@ static void process_thin_deferred_cells(struct thin_c *tc)
else
pool->process_cell(tc, cell);
}
+ cond_resched();
} while (!list_empty(&cells));
}
@@ -3381,6 +3383,7 @@ static int pool_ctr(struct dm_target *ti, unsigned argc, char **argv)
pt->low_water_blocks = low_water_blocks;
pt->adjusted_pf = pt->requested_pf = pf;
ti->num_flush_bios = 1;
+ ti->limit_swap_bios = true;
/*
* Only need to enable discards if the pool should pass
@@ -4261,6 +4264,7 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
goto bad;
ti->num_flush_bios = 1;
+ ti->limit_swap_bios = true;
ti->flush_supported = true;
ti->per_io_data_size = sizeof(struct dm_thin_endio_hook);
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 9dd2c2d..8bc121d 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -226,7 +226,6 @@ static int __init local_init(void)
static void local_exit(void)
{
- flush_scheduled_work();
destroy_workqueue(deferred_remove_workqueue);
unregister_blkdev(_major, _name);
@@ -1819,7 +1818,9 @@ static struct mapped_device *alloc_dev(int minor)
if (!md->pending_io)
goto bad;
- dm_stats_init(&md->stats);
+ r = dm_stats_init(&md->stats);
+ if (r < 0)
+ goto bad;
/* Populate the mapping, nobody knows we exist yet */
spin_lock(&_minor_lock);
@@ -1943,10 +1944,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
if (size != dm_get_size(md))
memset(&md->geometry, 0, sizeof(md->geometry));
- if (!get_capacity(md->disk))
- set_capacity(md->disk, size);
- else
- set_capacity_and_notify(md->disk, size);
+ set_capacity(md->disk, size);
dm_table_event_callback(t, event_callback, md);
@@ -2309,6 +2307,7 @@ static void dm_wq_work(struct work_struct *work)
break;
submit_bio_noacct(bio);
+ cond_resched();
}
}
@@ -2708,23 +2707,25 @@ EXPORT_SYMBOL_GPL(dm_internal_resume_fast);
* Event notification.
*---------------------------------------------------------------*/
int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
- unsigned cookie)
+ unsigned cookie, bool need_resize_uevent)
{
int r;
unsigned noio_flag;
char udev_cookie[DM_COOKIE_LENGTH];
- char *envp[] = { udev_cookie, NULL };
+ char *envp[3] = { NULL, NULL, NULL };
+ char **envpp = envp;
+ if (cookie) {
+ snprintf(udev_cookie, DM_COOKIE_LENGTH, "%s=%u",
+ DM_COOKIE_ENV_VAR_NAME, cookie);
+ *envpp++ = udev_cookie;
+ }
+ if (need_resize_uevent) {
+ *envpp++ = "RESIZE=1";
+ }
noio_flag = memalloc_noio_save();
- if (!cookie)
- r = kobject_uevent(&disk_to_dev(md->disk)->kobj, action);
- else {
- snprintf(udev_cookie, DM_COOKIE_LENGTH, "%s=%u",
- DM_COOKIE_ENV_VAR_NAME, cookie);
- r = kobject_uevent_env(&disk_to_dev(md->disk)->kobj,
- action, envp);
- }
+ r = kobject_uevent_env(&disk_to_dev(md->disk)->kobj, action, envp);
memalloc_noio_restore(noio_flag);
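Aside on the dm_kobject_uevent() rework above: instead of two separate kobject_uevent()/kobject_uevent_env() branches, the optional cookie and the new RESIZE=1 hint are collected into one NULL-terminated envp array and sent through a single kobject_uevent_env() call. A userspace sketch of that envp construction, printing instead of emitting a real uevent; the buffer size here is illustrative:

#include <stdio.h>

static void emit_uevent(unsigned int cookie, int need_resize_uevent)
{
	char udev_cookie[32];
	char *envp[3] = { NULL, NULL, NULL };	/* at most cookie + RESIZE + NULL */
	char **envpp = envp;

	if (cookie) {
		snprintf(udev_cookie, sizeof(udev_cookie), "DM_COOKIE=%u", cookie);
		*envpp++ = udev_cookie;
	}
	if (need_resize_uevent)
		*envpp++ = "RESIZE=1";

	for (envpp = envp; *envpp; envpp++)	/* stands in for kobject_uevent_env() */
		printf("%s\n", *envpp);
}

int main(void)
{
	emit_uevent(42, 1);
	return 0;
}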
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 742d9c8..10e4a34 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -210,7 +210,7 @@ int dm_get_table_device(struct mapped_device *md, dev_t dev, fmode_t mode,
void dm_put_table_device(struct mapped_device *md, struct dm_dev *d);
int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
- unsigned cookie);
+ unsigned cookie, bool need_resize_uevent);
void dm_internal_suspend(struct mapped_device *md);
void dm_internal_resume(struct mapped_device *md);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 9e54b86..bd0c9df 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -3189,6 +3189,9 @@ slot_store(struct md_rdev *rdev, const char *buf, size_t len)
err = kstrtouint(buf, 10, (unsigned int *)&slot);
if (err < 0)
return err;
+ if (slot < 0)
+ /* overflow */
+ return -ENOSPC;
}
if (rdev->mddev->pers && slot == -1) {
/* Setting 'slot' on an active array requires also
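Aside on the slot_store() check above: kstrtouint() writes through an (unsigned int *) cast of the signed slot variable, so any value above INT_MAX parses successfully but reappears as a negative slot; the added slot < 0 test rejects that overflow. A small userspace illustration of the reinterpretation, assuming the usual two's-complement layout:

#include <stdio.h>
#include <limits.h>

int main(void)
{
	int slot;
	unsigned int parsed = (unsigned int)INT_MAX + 1u;	/* e.g. "2147483648" */

	*(unsigned int *)&slot = parsed;	/* what the cast in slot_store() amounts to */
	if (slot < 0)
		printf("rejected: slot overflowed to %d\n", slot);
	return 0;
}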
diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
index e10af3f..de1f0aa 100644
--- a/drivers/media/i2c/imx219.c
+++ b/drivers/media/i2c/imx219.c
@@ -89,6 +89,12 @@
#define IMX219_REG_ORIENTATION 0x0172
+/* Binning Mode */
+#define IMX219_REG_BINNING_MODE 0x0174
+#define IMX219_BINNING_NONE 0x0000
+#define IMX219_BINNING_2X2 0x0101
+#define IMX219_BINNING_2X2_ANALOG 0x0303
+
/* Test Pattern Control */
#define IMX219_REG_TEST_PATTERN 0x0600
#define IMX219_TEST_PATTERN_DISABLE 0
@@ -143,6 +149,58 @@ struct imx219_mode {
/* Default register values */
struct imx219_reg_list reg_list;
+
+ /* 2x2 binning is used */
+ bool binning;
+};
+
+static const struct imx219_reg imx219_common_regs[] = {
+ {0x0100, 0x00}, /* Mode Select */
+
+ /* To Access Addresses 3000-5fff, send the following commands */
+ {0x30eb, 0x0c},
+ {0x30eb, 0x05},
+ {0x300a, 0xff},
+ {0x300b, 0xff},
+ {0x30eb, 0x05},
+ {0x30eb, 0x09},
+
+ /* PLL Clock Table */
+ {0x0301, 0x05}, /* VTPXCK_DIV */
+ {0x0303, 0x01}, /* VTSYSCK_DIV */
+ {0x0304, 0x03}, /* PREPLLCK_VT_DIV 0x03 = AUTO set */
+ {0x0305, 0x03}, /* PREPLLCK_OP_DIV 0x03 = AUTO set */
+ {0x0306, 0x00}, /* PLL_VT_MPY */
+ {0x0307, 0x39},
+ {0x030b, 0x01}, /* OP_SYS_CLK_DIV */
+ {0x030c, 0x00}, /* PLL_OP_MPY */
+ {0x030d, 0x72},
+
+ /* Undocumented registers */
+ {0x455e, 0x00},
+ {0x471e, 0x4b},
+ {0x4767, 0x0f},
+ {0x4750, 0x14},
+ {0x4540, 0x00},
+ {0x47b4, 0x14},
+ {0x4713, 0x30},
+ {0x478b, 0x10},
+ {0x478f, 0x10},
+ {0x4793, 0x10},
+ {0x4797, 0x0e},
+ {0x479b, 0x0e},
+
+ /* Frame Bank Register Group "A" */
+ {0x0162, 0x0d}, /* Line_Length_A */
+ {0x0163, 0x78},
+ {0x0170, 0x01}, /* X_ODD_INC_A */
+ {0x0171, 0x01}, /* Y_ODD_INC_A */
+
+ /* Output setup registers */
+ {0x0114, 0x01}, /* CSI 2-Lane Mode */
+ {0x0128, 0x00}, /* DPHY Auto Mode */
+ {0x012a, 0x18}, /* EXCK_Freq */
+ {0x012b, 0x00},
};
/*
@@ -151,17 +209,6 @@ struct imx219_mode {
* 3280x2464 = mode 2, 1920x1080 = mode 1, 1640x1232 = mode 4, 640x480 = mode 7.
*/
static const struct imx219_reg mode_3280x2464_regs[] = {
- {0x0100, 0x00},
- {0x30eb, 0x0c},
- {0x30eb, 0x05},
- {0x300a, 0xff},
- {0x300b, 0xff},
- {0x30eb, 0x05},
- {0x30eb, 0x09},
- {0x0114, 0x01},
- {0x0128, 0x00},
- {0x012a, 0x18},
- {0x012b, 0x00},
{0x0164, 0x00},
{0x0165, 0x00},
{0x0166, 0x0c},
@@ -174,53 +221,13 @@ static const struct imx219_reg mode_3280x2464_regs[] = {
{0x016d, 0xd0},
{0x016e, 0x09},
{0x016f, 0xa0},
- {0x0170, 0x01},
- {0x0171, 0x01},
- {0x0174, 0x00},
- {0x0175, 0x00},
- {0x0301, 0x05},
- {0x0303, 0x01},
- {0x0304, 0x03},
- {0x0305, 0x03},
- {0x0306, 0x00},
- {0x0307, 0x39},
- {0x030b, 0x01},
- {0x030c, 0x00},
- {0x030d, 0x72},
{0x0624, 0x0c},
{0x0625, 0xd0},
{0x0626, 0x09},
{0x0627, 0xa0},
- {0x455e, 0x00},
- {0x471e, 0x4b},
- {0x4767, 0x0f},
- {0x4750, 0x14},
- {0x4540, 0x00},
- {0x47b4, 0x14},
- {0x4713, 0x30},
- {0x478b, 0x10},
- {0x478f, 0x10},
- {0x4793, 0x10},
- {0x4797, 0x0e},
- {0x479b, 0x0e},
- {0x0162, 0x0d},
- {0x0163, 0x78},
};
static const struct imx219_reg mode_1920_1080_regs[] = {
- {0x0100, 0x00},
- {0x30eb, 0x05},
- {0x30eb, 0x0c},
- {0x300a, 0xff},
- {0x300b, 0xff},
- {0x30eb, 0x05},
- {0x30eb, 0x09},
- {0x0114, 0x01},
- {0x0128, 0x00},
- {0x012a, 0x18},
- {0x012b, 0x00},
- {0x0162, 0x0d},
- {0x0163, 0x78},
{0x0164, 0x02},
{0x0165, 0xa8},
{0x0166, 0x0a},
@@ -233,49 +240,13 @@ static const struct imx219_reg mode_1920_1080_regs[] = {
{0x016d, 0x80},
{0x016e, 0x04},
{0x016f, 0x38},
- {0x0170, 0x01},
- {0x0171, 0x01},
- {0x0174, 0x00},
- {0x0175, 0x00},
- {0x0301, 0x05},
- {0x0303, 0x01},
- {0x0304, 0x03},
- {0x0305, 0x03},
- {0x0306, 0x00},
- {0x0307, 0x39},
- {0x030b, 0x01},
- {0x030c, 0x00},
- {0x030d, 0x72},
{0x0624, 0x07},
{0x0625, 0x80},
{0x0626, 0x04},
{0x0627, 0x38},
- {0x455e, 0x00},
- {0x471e, 0x4b},
- {0x4767, 0x0f},
- {0x4750, 0x14},
- {0x4540, 0x00},
- {0x47b4, 0x14},
- {0x4713, 0x30},
- {0x478b, 0x10},
- {0x478f, 0x10},
- {0x4793, 0x10},
- {0x4797, 0x0e},
- {0x479b, 0x0e},
};
static const struct imx219_reg mode_1640_1232_regs[] = {
- {0x0100, 0x00},
- {0x30eb, 0x0c},
- {0x30eb, 0x05},
- {0x300a, 0xff},
- {0x300b, 0xff},
- {0x30eb, 0x05},
- {0x30eb, 0x09},
- {0x0114, 0x01},
- {0x0128, 0x00},
- {0x012a, 0x18},
- {0x012b, 0x00},
{0x0164, 0x00},
{0x0165, 0x00},
{0x0166, 0x0c},
@@ -288,53 +259,13 @@ static const struct imx219_reg mode_1640_1232_regs[] = {
{0x016d, 0x68},
{0x016e, 0x04},
{0x016f, 0xd0},
- {0x0170, 0x01},
- {0x0171, 0x01},
- {0x0174, 0x01},
- {0x0175, 0x01},
- {0x0301, 0x05},
- {0x0303, 0x01},
- {0x0304, 0x03},
- {0x0305, 0x03},
- {0x0306, 0x00},
- {0x0307, 0x39},
- {0x030b, 0x01},
- {0x030c, 0x00},
- {0x030d, 0x72},
{0x0624, 0x06},
{0x0625, 0x68},
{0x0626, 0x04},
{0x0627, 0xd0},
- {0x455e, 0x00},
- {0x471e, 0x4b},
- {0x4767, 0x0f},
- {0x4750, 0x14},
- {0x4540, 0x00},
- {0x47b4, 0x14},
- {0x4713, 0x30},
- {0x478b, 0x10},
- {0x478f, 0x10},
- {0x4793, 0x10},
- {0x4797, 0x0e},
- {0x479b, 0x0e},
- {0x0162, 0x0d},
- {0x0163, 0x78},
};
static const struct imx219_reg mode_640_480_regs[] = {
- {0x0100, 0x00},
- {0x30eb, 0x05},
- {0x30eb, 0x0c},
- {0x300a, 0xff},
- {0x300b, 0xff},
- {0x30eb, 0x05},
- {0x30eb, 0x09},
- {0x0114, 0x01},
- {0x0128, 0x00},
- {0x012a, 0x18},
- {0x012b, 0x00},
- {0x0162, 0x0d},
- {0x0163, 0x78},
{0x0164, 0x03},
{0x0165, 0xe8},
{0x0166, 0x08},
@@ -347,35 +278,10 @@ static const struct imx219_reg mode_640_480_regs[] = {
{0x016d, 0x80},
{0x016e, 0x01},
{0x016f, 0xe0},
- {0x0170, 0x01},
- {0x0171, 0x01},
- {0x0174, 0x03},
- {0x0175, 0x03},
- {0x0301, 0x05},
- {0x0303, 0x01},
- {0x0304, 0x03},
- {0x0305, 0x03},
- {0x0306, 0x00},
- {0x0307, 0x39},
- {0x030b, 0x01},
- {0x030c, 0x00},
- {0x030d, 0x72},
{0x0624, 0x06},
{0x0625, 0x68},
{0x0626, 0x04},
{0x0627, 0xd0},
- {0x455e, 0x00},
- {0x471e, 0x4b},
- {0x4767, 0x0f},
- {0x4750, 0x14},
- {0x4540, 0x00},
- {0x47b4, 0x14},
- {0x4713, 0x30},
- {0x478b, 0x10},
- {0x478f, 0x10},
- {0x4793, 0x10},
- {0x4797, 0x0e},
- {0x479b, 0x0e},
};
static const struct imx219_reg raw8_framefmt_regs[] = {
@@ -485,6 +391,7 @@ static const struct imx219_mode supported_modes[] = {
.num_of_regs = ARRAY_SIZE(mode_3280x2464_regs),
.regs = mode_3280x2464_regs,
},
+ .binning = false,
},
{
/* 1080P 30fps cropped */
@@ -501,6 +408,7 @@ static const struct imx219_mode supported_modes[] = {
.num_of_regs = ARRAY_SIZE(mode_1920_1080_regs),
.regs = mode_1920_1080_regs,
},
+ .binning = false,
},
{
/* 2x2 binned 30fps mode */
@@ -517,6 +425,7 @@ static const struct imx219_mode supported_modes[] = {
.num_of_regs = ARRAY_SIZE(mode_1640_1232_regs),
.regs = mode_1640_1232_regs,
},
+ .binning = true,
},
{
/* 640x480 30fps mode */
@@ -533,6 +442,7 @@ static const struct imx219_mode supported_modes[] = {
.num_of_regs = ARRAY_SIZE(mode_640_480_regs),
.regs = mode_640_480_regs,
},
+ .binning = true,
},
};
@@ -979,6 +889,35 @@ static int imx219_set_framefmt(struct imx219 *imx219)
return -EINVAL;
}
+static int imx219_set_binning(struct imx219 *imx219)
+{
+ if (!imx219->mode->binning) {
+ return imx219_write_reg(imx219, IMX219_REG_BINNING_MODE,
+ IMX219_REG_VALUE_16BIT,
+ IMX219_BINNING_NONE);
+ }
+
+ switch (imx219->fmt.code) {
+ case MEDIA_BUS_FMT_SRGGB8_1X8:
+ case MEDIA_BUS_FMT_SGRBG8_1X8:
+ case MEDIA_BUS_FMT_SGBRG8_1X8:
+ case MEDIA_BUS_FMT_SBGGR8_1X8:
+ return imx219_write_reg(imx219, IMX219_REG_BINNING_MODE,
+ IMX219_REG_VALUE_16BIT,
+ IMX219_BINNING_2X2_ANALOG);
+
+ case MEDIA_BUS_FMT_SRGGB10_1X10:
+ case MEDIA_BUS_FMT_SGRBG10_1X10:
+ case MEDIA_BUS_FMT_SGBRG10_1X10:
+ case MEDIA_BUS_FMT_SBGGR10_1X10:
+ return imx219_write_reg(imx219, IMX219_REG_BINNING_MODE,
+ IMX219_REG_VALUE_16BIT,
+ IMX219_BINNING_2X2);
+ }
+
+ return -EINVAL;
+}
+
static const struct v4l2_rect *
__imx219_get_pad_crop(struct imx219 *imx219,
struct v4l2_subdev_state *sd_state,
@@ -1041,6 +980,13 @@ static int imx219_start_streaming(struct imx219 *imx219)
if (ret < 0)
return ret;
+ /* Send all registers that are common to all modes */
+ ret = imx219_write_regs(imx219, imx219_common_regs, ARRAY_SIZE(imx219_common_regs));
+ if (ret) {
+ dev_err(&client->dev, "%s failed to send mfg header\n", __func__);
+ goto err_rpm_put;
+ }
+
/* Apply default values of current mode */
reg_list = &imx219->mode->reg_list;
ret = imx219_write_regs(imx219, reg_list->regs, reg_list->num_of_regs);
@@ -1056,6 +1002,13 @@ static int imx219_start_streaming(struct imx219 *imx219)
goto err_rpm_put;
}
+ ret = imx219_set_binning(imx219);
+ if (ret) {
+ dev_err(&client->dev, "%s failed to set binning: %d\n",
+ __func__, ret);
+ goto err_rpm_put;
+ }
+
/* Apply customized values from user */
ret = __v4l2_ctrl_handler_setup(imx219->sd.ctrl_handler);
if (ret)
diff --git a/drivers/media/i2c/m5mols/m5mols_core.c b/drivers/media/i2c/m5mols/m5mols_core.c
index e29be02..f4233fe 100644
--- a/drivers/media/i2c/m5mols/m5mols_core.c
+++ b/drivers/media/i2c/m5mols/m5mols_core.c
@@ -488,7 +488,7 @@ static enum m5mols_restype __find_restype(u32 code)
do {
if (code == m5mols_default_ffmt[type].code)
return type;
- } while (type++ != SIZE_DEFAULT_FFMT);
+ } while (++type != SIZE_DEFAULT_FFMT);
return 0;
}
diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c
index ce94370..b9513e9 100644
--- a/drivers/media/i2c/max9286.c
+++ b/drivers/media/i2c/max9286.c
@@ -928,6 +928,7 @@ static int max9286_v4l2_register(struct max9286_priv *priv)
err_put_node:
fwnode_handle_put(ep);
err_async:
+ v4l2_ctrl_handler_free(&priv->ctrls);
max9286_v4l2_notifier_unregister(priv);
return ret;
diff --git a/drivers/media/i2c/ov2740.c b/drivers/media/i2c/ov2740.c
index 934c9d6..4b1ab3e 100644
--- a/drivers/media/i2c/ov2740.c
+++ b/drivers/media/i2c/ov2740.c
@@ -603,8 +603,10 @@ static int ov2740_init_controls(struct ov2740 *ov2740)
V4L2_CID_TEST_PATTERN,
ARRAY_SIZE(ov2740_test_pattern_menu) - 1,
0, 0, ov2740_test_pattern_menu);
- if (ctrl_hdlr->error)
+ if (ctrl_hdlr->error) {
+ v4l2_ctrl_handler_free(ctrl_hdlr);
return ctrl_hdlr->error;
+ }
ov2740->sd.ctrl_handler = ctrl_hdlr;
diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
index db5a19b..a141552 100644
--- a/drivers/media/i2c/ov5640.c
+++ b/drivers/media/i2c/ov5640.c
@@ -2776,7 +2776,7 @@ static int ov5640_init_controls(struct ov5640_dev *sensor)
/* Auto/manual gain */
ctrls->auto_gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_AUTOGAIN,
0, 1, 1, 1);
- ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_GAIN,
+ ctrls->gain = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_ANALOGUE_GAIN,
0, 1023, 1, 0);
ctrls->saturation = v4l2_ctrl_new_std(hdl, ops, V4L2_CID_SATURATION,
diff --git a/drivers/media/i2c/ov5675.c b/drivers/media/i2c/ov5675.c
index da5850b..2104589 100644
--- a/drivers/media/i2c/ov5675.c
+++ b/drivers/media/i2c/ov5675.c
@@ -791,8 +791,10 @@ static int ov5675_init_controls(struct ov5675 *ov5675)
v4l2_ctrl_new_std(ctrl_hdlr, &ov5675_ctrl_ops,
V4L2_CID_VFLIP, 0, 1, 1, 0);
- if (ctrl_hdlr->error)
+ if (ctrl_hdlr->error) {
+ v4l2_ctrl_handler_free(ctrl_hdlr);
return ctrl_hdlr->error;
+ }
ov5675->sd.ctrl_handler = ctrl_hdlr;
diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c
index 1be2c0e..23001ed 100644
--- a/drivers/media/i2c/ov7670.c
+++ b/drivers/media/i2c/ov7670.c
@@ -1841,7 +1841,7 @@ static int ov7670_parse_dt(struct device *dev,
if (bus_cfg.bus_type != V4L2_MBUS_PARALLEL) {
dev_err(dev, "Unsupported media bus type\n");
- return ret;
+ return -EINVAL;
}
info->mbus_config = bus_cfg.bus.parallel.flags;
diff --git a/drivers/media/i2c/ov772x.c b/drivers/media/i2c/ov772x.c
index 78602a2..e05b48c 100644
--- a/drivers/media/i2c/ov772x.c
+++ b/drivers/media/i2c/ov772x.c
@@ -1462,7 +1462,7 @@ static int ov772x_probe(struct i2c_client *client)
priv->subdev.ctrl_handler = &priv->hdl;
if (priv->hdl.error) {
ret = priv->hdl.error;
- goto error_mutex_destroy;
+ goto error_ctrl_free;
}
priv->clk = clk_get(&client->dev, NULL);
@@ -1515,7 +1515,6 @@ static int ov772x_probe(struct i2c_client *client)
clk_put(priv->clk);
error_ctrl_free:
v4l2_ctrl_handler_free(&priv->hdl);
-error_mutex_destroy:
mutex_destroy(&priv->lock);
return ret;
diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
index 47db0ee..3a8af39 100644
--- a/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
+++ b/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c
@@ -1851,6 +1851,9 @@ static void cio2_pci_remove(struct pci_dev *pci_dev)
v4l2_device_unregister(&cio2->v4l2_dev);
media_device_cleanup(&cio2->media_dev);
mutex_destroy(&cio2->lock);
+
+ pm_runtime_forbid(&pci_dev->dev);
+ pm_runtime_get_noresume(&pci_dev->dev);
}
static int __maybe_unused cio2_runtime_suspend(struct device *dev)
diff --git a/drivers/media/pci/saa7134/saa7134-core.c b/drivers/media/pci/saa7134/saa7134-core.c
index 96328b0..cf28713 100644
--- a/drivers/media/pci/saa7134/saa7134-core.c
+++ b/drivers/media/pci/saa7134/saa7134-core.c
@@ -978,7 +978,7 @@ static void saa7134_unregister_video(struct saa7134_dev *dev)
}
if (dev->radio_dev) {
if (video_is_registered(dev->radio_dev))
- vb2_video_unregister_device(dev->radio_dev);
+ video_unregister_device(dev->radio_dev);
else
video_device_release(dev->radio_dev);
dev->radio_dev = NULL;
diff --git a/drivers/media/platform/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/imx-jpeg/mxc-jpeg.c
index 984fcdf..e515325 100644
--- a/drivers/media/platform/imx-jpeg/mxc-jpeg.c
+++ b/drivers/media/platform/imx-jpeg/mxc-jpeg.c
@@ -2105,19 +2105,12 @@ static int mxc_jpeg_probe(struct platform_device *pdev)
jpeg->mode = mode;
/* Get clocks */
- jpeg->clk_ipg = devm_clk_get(dev, "ipg");
- if (IS_ERR(jpeg->clk_ipg)) {
- dev_err(dev, "failed to get clock: ipg\n");
- ret = PTR_ERR(jpeg->clk_ipg);
+ ret = devm_clk_bulk_get_all(&pdev->dev, &jpeg->clks);
+ if (ret < 0) {
+ dev_err(dev, "failed to get clock\n");
goto err_clk;
}
-
- jpeg->clk_per = devm_clk_get(dev, "per");
- if (IS_ERR(jpeg->clk_per)) {
- dev_err(dev, "failed to get clock: per\n");
- ret = PTR_ERR(jpeg->clk_per);
- goto err_clk;
- }
+ jpeg->num_clks = ret;
ret = mxc_jpeg_attach_pm_domains(jpeg);
if (ret < 0) {
@@ -2214,32 +2207,20 @@ static int mxc_jpeg_runtime_resume(struct device *dev)
struct mxc_jpeg_dev *jpeg = dev_get_drvdata(dev);
int ret;
- ret = clk_prepare_enable(jpeg->clk_ipg);
+ ret = clk_bulk_prepare_enable(jpeg->num_clks, jpeg->clks);
if (ret < 0) {
- dev_err(dev, "failed to enable clock: ipg\n");
- goto err_ipg;
- }
-
- ret = clk_prepare_enable(jpeg->clk_per);
- if (ret < 0) {
- dev_err(dev, "failed to enable clock: per\n");
- goto err_per;
+ dev_err(dev, "failed to enable clock\n");
+ return ret;
}
return 0;
-
-err_per:
- clk_disable_unprepare(jpeg->clk_ipg);
-err_ipg:
- return ret;
}
static int mxc_jpeg_runtime_suspend(struct device *dev)
{
struct mxc_jpeg_dev *jpeg = dev_get_drvdata(dev);
- clk_disable_unprepare(jpeg->clk_ipg);
- clk_disable_unprepare(jpeg->clk_per);
+ clk_bulk_disable_unprepare(jpeg->num_clks, jpeg->clks);
return 0;
}
diff --git a/drivers/media/platform/imx-jpeg/mxc-jpeg.h b/drivers/media/platform/imx-jpeg/mxc-jpeg.h
index 542993e..4950008 100644
--- a/drivers/media/platform/imx-jpeg/mxc-jpeg.h
+++ b/drivers/media/platform/imx-jpeg/mxc-jpeg.h
@@ -112,8 +112,8 @@ struct mxc_jpeg_dev {
spinlock_t hw_lock; /* hardware access lock */
unsigned int mode;
struct mutex lock; /* v4l2 ioctls serialization */
- struct clk *clk_ipg;
- struct clk *clk_per;
+ struct clk_bulk_data *clks;
+ int num_clks;
struct platform_device *pdev;
struct device *dev;
void __iomem *base_reg;
diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
index 20f59c5..3222c98 100644
--- a/drivers/media/platform/omap3isp/isp.c
+++ b/drivers/media/platform/omap3isp/isp.c
@@ -2306,7 +2306,16 @@ static int isp_probe(struct platform_device *pdev)
/* Regulators */
isp->isp_csiphy1.vdd = devm_regulator_get(&pdev->dev, "vdd-csiphy1");
+ if (IS_ERR(isp->isp_csiphy1.vdd)) {
+ ret = PTR_ERR(isp->isp_csiphy1.vdd);
+ goto error;
+ }
+
isp->isp_csiphy2.vdd = devm_regulator_get(&pdev->dev, "vdd-csiphy2");
+ if (IS_ERR(isp->isp_csiphy2.vdd)) {
+ ret = PTR_ERR(isp->isp_csiphy2.vdd);
+ goto error;
+ }
/* Clocks
*
diff --git a/drivers/media/platform/ti-vpe/cal.c b/drivers/media/platform/ti-vpe/cal.c
index 8e469d5..35d62eb 100644
--- a/drivers/media/platform/ti-vpe/cal.c
+++ b/drivers/media/platform/ti-vpe/cal.c
@@ -940,8 +940,10 @@ static struct cal_ctx *cal_ctx_create(struct cal_dev *cal, int inst)
ctx->datatype = CAL_CSI2_CTX_DT_ANY;
ret = cal_ctx_v4l2_init(ctx);
- if (ret)
+ if (ret) {
+ kfree(ctx);
return NULL;
+ }
return ctx;
}
diff --git a/drivers/media/rc/ene_ir.c b/drivers/media/rc/ene_ir.c
index e0927091..11ee21a 100644
--- a/drivers/media/rc/ene_ir.c
+++ b/drivers/media/rc/ene_ir.c
@@ -1106,6 +1106,8 @@ static void ene_remove(struct pnp_dev *pnp_dev)
struct ene_device *dev = pnp_get_drvdata(pnp_dev);
unsigned long flags;
+ rc_unregister_device(dev->rdev);
+ del_timer_sync(&dev->tx_sim_timer);
spin_lock_irqsave(&dev->hw_lock, flags);
ene_rx_disable(dev);
ene_rx_restore_hw_buffer(dev);
@@ -1113,7 +1115,6 @@ static void ene_remove(struct pnp_dev *pnp_dev)
free_irq(dev->irq, dev);
release_region(dev->hw_io, ENE_IO_SIZE);
- rc_unregister_device(dev->rdev);
kfree(dev);
}
diff --git a/drivers/media/rc/gpio-ir-recv.c b/drivers/media/rc/gpio-ir-recv.c
index 22e524b..a56c844 100644
--- a/drivers/media/rc/gpio-ir-recv.c
+++ b/drivers/media/rc/gpio-ir-recv.c
@@ -130,6 +130,23 @@ static int gpio_ir_recv_probe(struct platform_device *pdev)
"gpio-ir-recv-irq", gpio_dev);
}
+static int gpio_ir_recv_remove(struct platform_device *pdev)
+{
+ struct gpio_rc_dev *gpio_dev = platform_get_drvdata(pdev);
+ struct device *pmdev = gpio_dev->pmdev;
+
+ if (pmdev) {
+ pm_runtime_get_sync(pmdev);
+ cpu_latency_qos_remove_request(&gpio_dev->qos);
+
+ pm_runtime_disable(pmdev);
+ pm_runtime_put_noidle(pmdev);
+ pm_runtime_set_suspended(pmdev);
+ }
+
+ return 0;
+}
+
#ifdef CONFIG_PM
static int gpio_ir_recv_suspend(struct device *dev)
{
@@ -189,6 +206,7 @@ MODULE_DEVICE_TABLE(of, gpio_ir_recv_of_match);
static struct platform_driver gpio_ir_recv_driver = {
.probe = gpio_ir_recv_probe,
+ .remove = gpio_ir_recv_remove,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = of_match_ptr(gpio_ir_recv_of_match),
diff --git a/drivers/media/usb/siano/smsusb.c b/drivers/media/usb/siano/smsusb.c
index df4c5dcb..1babfe6 100644
--- a/drivers/media/usb/siano/smsusb.c
+++ b/drivers/media/usb/siano/smsusb.c
@@ -179,6 +179,7 @@ static void smsusb_stop_streaming(struct smsusb_device_t *dev)
for (i = 0; i < MAX_URBS; i++) {
usb_kill_urb(&dev->surbs[i].urb);
+ cancel_work_sync(&dev->surbs[i].wq);
if (dev->surbs[i].cb) {
smscore_putbuffer(dev->coredev, dev->surbs[i].cb);
diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
index 5bb29fc..0533586 100644
--- a/drivers/media/usb/uvc/uvc_ctrl.c
+++ b/drivers/media/usb/uvc/uvc_ctrl.c
@@ -6,6 +6,7 @@
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*/
+#include <asm/barrier.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
@@ -357,6 +358,11 @@ static const struct uvc_control_info uvc_ctrls[] = {
},
};
+static const u32 uvc_control_classes[] = {
+ V4L2_CID_CAMERA_CLASS,
+ V4L2_CID_USER_CLASS,
+};
+
static const struct uvc_menu_info power_line_frequency_controls[] = {
{ 0, "Disabled" },
{ 1, "50 Hz" },
@@ -427,7 +433,6 @@ static void uvc_ctrl_set_rel_speed(struct uvc_control_mapping *mapping,
static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
{
.id = V4L2_CID_BRIGHTNESS,
- .name = "Brightness",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_BRIGHTNESS_CONTROL,
.size = 16,
@@ -437,7 +442,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_CONTRAST,
- .name = "Contrast",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_CONTRAST_CONTROL,
.size = 16,
@@ -447,7 +451,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_HUE,
- .name = "Hue",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_HUE_CONTROL,
.size = 16,
@@ -459,7 +462,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_SATURATION,
- .name = "Saturation",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_SATURATION_CONTROL,
.size = 16,
@@ -469,7 +471,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_SHARPNESS,
- .name = "Sharpness",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_SHARPNESS_CONTROL,
.size = 16,
@@ -479,7 +480,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_GAMMA,
- .name = "Gamma",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_GAMMA_CONTROL,
.size = 16,
@@ -489,7 +489,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_BACKLIGHT_COMPENSATION,
- .name = "Backlight Compensation",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_BACKLIGHT_COMPENSATION_CONTROL,
.size = 16,
@@ -499,7 +498,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_GAIN,
- .name = "Gain",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_GAIN_CONTROL,
.size = 16,
@@ -509,7 +507,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_POWER_LINE_FREQUENCY,
- .name = "Power Line Frequency",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_POWER_LINE_FREQUENCY_CONTROL,
.size = 2,
@@ -521,7 +518,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_HUE_AUTO,
- .name = "Hue, Auto",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_HUE_AUTO_CONTROL,
.size = 1,
@@ -532,7 +528,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_EXPOSURE_AUTO,
- .name = "Exposure, Auto",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_AE_MODE_CONTROL,
.size = 4,
@@ -545,7 +540,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_EXPOSURE_AUTO_PRIORITY,
- .name = "Exposure, Auto Priority",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_AE_PRIORITY_CONTROL,
.size = 1,
@@ -555,7 +549,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_EXPOSURE_ABSOLUTE,
- .name = "Exposure (Absolute)",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_EXPOSURE_TIME_ABSOLUTE_CONTROL,
.size = 32,
@@ -567,7 +560,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_AUTO_WHITE_BALANCE,
- .name = "White Balance Temperature, Auto",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_WHITE_BALANCE_TEMPERATURE_AUTO_CONTROL,
.size = 1,
@@ -578,7 +570,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_WHITE_BALANCE_TEMPERATURE,
- .name = "White Balance Temperature",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_WHITE_BALANCE_TEMPERATURE_CONTROL,
.size = 16,
@@ -590,7 +581,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_AUTO_WHITE_BALANCE,
- .name = "White Balance Component, Auto",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_WHITE_BALANCE_COMPONENT_AUTO_CONTROL,
.size = 1,
@@ -602,7 +592,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_BLUE_BALANCE,
- .name = "White Balance Blue Component",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_WHITE_BALANCE_COMPONENT_CONTROL,
.size = 16,
@@ -614,7 +603,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_RED_BALANCE,
- .name = "White Balance Red Component",
.entity = UVC_GUID_UVC_PROCESSING,
.selector = UVC_PU_WHITE_BALANCE_COMPONENT_CONTROL,
.size = 16,
@@ -626,7 +614,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_FOCUS_ABSOLUTE,
- .name = "Focus (absolute)",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_FOCUS_ABSOLUTE_CONTROL,
.size = 16,
@@ -638,7 +625,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_FOCUS_AUTO,
- .name = "Focus, Auto",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_FOCUS_AUTO_CONTROL,
.size = 1,
@@ -649,7 +635,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_IRIS_ABSOLUTE,
- .name = "Iris, Absolute",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_IRIS_ABSOLUTE_CONTROL,
.size = 16,
@@ -659,7 +644,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_IRIS_RELATIVE,
- .name = "Iris, Relative",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_IRIS_RELATIVE_CONTROL,
.size = 8,
@@ -669,7 +653,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_ZOOM_ABSOLUTE,
- .name = "Zoom, Absolute",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_ZOOM_ABSOLUTE_CONTROL,
.size = 16,
@@ -679,7 +662,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_ZOOM_CONTINUOUS,
- .name = "Zoom, Continuous",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_ZOOM_RELATIVE_CONTROL,
.size = 0,
@@ -691,7 +673,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_PAN_ABSOLUTE,
- .name = "Pan (Absolute)",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_PANTILT_ABSOLUTE_CONTROL,
.size = 32,
@@ -701,7 +682,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_TILT_ABSOLUTE,
- .name = "Tilt (Absolute)",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_PANTILT_ABSOLUTE_CONTROL,
.size = 32,
@@ -711,7 +691,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_PAN_SPEED,
- .name = "Pan (Speed)",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_PANTILT_RELATIVE_CONTROL,
.size = 16,
@@ -723,7 +702,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_TILT_SPEED,
- .name = "Tilt (Speed)",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_PANTILT_RELATIVE_CONTROL,
.size = 16,
@@ -735,7 +713,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_PRIVACY,
- .name = "Privacy",
.entity = UVC_GUID_UVC_CAMERA,
.selector = UVC_CT_PRIVACY_CONTROL,
.size = 1,
@@ -745,7 +722,6 @@ static const struct uvc_control_mapping uvc_ctrl_mappings[] = {
},
{
.id = V4L2_CID_PRIVACY,
- .name = "Privacy",
.entity = UVC_GUID_EXT_GPIO_CONTROLLER,
.selector = UVC_CT_PRIVACY_CONTROL,
.size = 1,
@@ -1044,6 +1020,125 @@ static int __uvc_ctrl_get(struct uvc_video_chain *chain,
return 0;
}
+static int __uvc_query_v4l2_class(struct uvc_video_chain *chain, u32 req_id,
+ u32 found_id)
+{
+ bool find_next = req_id & V4L2_CTRL_FLAG_NEXT_CTRL;
+ unsigned int i;
+
+ req_id &= V4L2_CTRL_ID_MASK;
+
+ for (i = 0; i < ARRAY_SIZE(uvc_control_classes); i++) {
+ if (!(chain->ctrl_class_bitmap & BIT(i)))
+ continue;
+ if (!find_next) {
+ if (uvc_control_classes[i] == req_id)
+ return i;
+ continue;
+ }
+ if (uvc_control_classes[i] > req_id &&
+ uvc_control_classes[i] < found_id)
+ return i;
+ }
+
+ return -ENODEV;
+}
+
+static int uvc_query_v4l2_class(struct uvc_video_chain *chain, u32 req_id,
+ u32 found_id, struct v4l2_queryctrl *v4l2_ctrl)
+{
+ int idx;
+
+ idx = __uvc_query_v4l2_class(chain, req_id, found_id);
+ if (idx < 0)
+ return -ENODEV;
+
+ memset(v4l2_ctrl, 0, sizeof(*v4l2_ctrl));
+ v4l2_ctrl->id = uvc_control_classes[idx];
+ strscpy(v4l2_ctrl->name, v4l2_ctrl_get_name(v4l2_ctrl->id),
+ sizeof(v4l2_ctrl->name));
+ v4l2_ctrl->type = V4L2_CTRL_TYPE_CTRL_CLASS;
+ v4l2_ctrl->flags = V4L2_CTRL_FLAG_WRITE_ONLY
+ | V4L2_CTRL_FLAG_READ_ONLY;
+ return 0;
+}
+
+/*
+ * Check if control @v4l2_id can be accessed by the given control @ioctl
+ * (VIDIOC_G_EXT_CTRLS, VIDIOC_TRY_EXT_CTRLS or VIDIOC_S_EXT_CTRLS).
+ *
+ * For set operations on slave controls, check if the master's value is set to
+ * manual, either in the other controls set in the same ioctl call, or from
+ * the master's current value. This catches VIDIOC_S_EXT_CTRLS calls that set
+ * both the master and the slave control, for instance setting
+ * auto_exposure=1, exposure_time_absolute=251.
+ */
+int uvc_ctrl_is_accessible(struct uvc_video_chain *chain, u32 v4l2_id,
+ const struct v4l2_ext_controls *ctrls,
+ unsigned long ioctl)
+{
+ struct uvc_control_mapping *master_map = NULL;
+ struct uvc_control *master_ctrl = NULL;
+ struct uvc_control_mapping *mapping;
+ struct uvc_control *ctrl;
+ bool read = ioctl == VIDIOC_G_EXT_CTRLS;
+ s32 val;
+ int ret;
+ int i;
+
+ if (__uvc_query_v4l2_class(chain, v4l2_id, 0) >= 0)
+ return -EACCES;
+
+ ctrl = uvc_find_control(chain, v4l2_id, &mapping);
+ if (!ctrl)
+ return -EINVAL;
+
+ if (!(ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR) && read)
+ return -EACCES;
+
+ if (!(ctrl->info.flags & UVC_CTRL_FLAG_SET_CUR) && !read)
+ return -EACCES;
+
+ if (ioctl != VIDIOC_S_EXT_CTRLS || !mapping->master_id)
+ return 0;
+
+ /*
+ * Iterate backwards in cases where the master control is accessed
+ * multiple times in the same ioctl. We want the last value.
+ */
+ for (i = ctrls->count - 1; i >= 0; i--) {
+ if (ctrls->controls[i].id == mapping->master_id)
+ return ctrls->controls[i].value ==
+ mapping->master_manual ? 0 : -EACCES;
+ }
+
+ __uvc_find_control(ctrl->entity, mapping->master_id, &master_map,
+ &master_ctrl, 0);
+
+ if (!master_ctrl || !(master_ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR))
+ return 0;
+
+ ret = __uvc_ctrl_get(chain, master_ctrl, master_map, &val);
+ if (ret >= 0 && val != mapping->master_manual)
+ return -EACCES;
+
+ return 0;
+}
+
+static const char *uvc_map_get_name(const struct uvc_control_mapping *map)
+{
+ const char *name;
+
+ if (map->name)
+ return map->name;
+
+ name = v4l2_ctrl_get_name(map->id);
+ if (name)
+ return name;
+
+ return "Unknown Control";
+}
+
static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
struct uvc_control *ctrl,
struct uvc_control_mapping *mapping,
@@ -1057,7 +1152,8 @@ static int __uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
memset(v4l2_ctrl, 0, sizeof(*v4l2_ctrl));
v4l2_ctrl->id = mapping->id;
v4l2_ctrl->type = mapping->v4l2_type;
- strscpy(v4l2_ctrl->name, mapping->name, sizeof(v4l2_ctrl->name));
+ strscpy(v4l2_ctrl->name, uvc_map_get_name(mapping),
+ sizeof(v4l2_ctrl->name));
v4l2_ctrl->flags = 0;
if (!(ctrl->info.flags & UVC_CTRL_FLAG_GET_CUR))
@@ -1147,12 +1243,31 @@ int uvc_query_v4l2_ctrl(struct uvc_video_chain *chain,
if (ret < 0)
return -ERESTARTSYS;
+ /* Check if the ctrl is a known class. */
+ if (!(v4l2_ctrl->id & V4L2_CTRL_FLAG_NEXT_CTRL)) {
+ ret = uvc_query_v4l2_class(chain, v4l2_ctrl->id, 0, v4l2_ctrl);
+ if (!ret)
+ goto done;
+ }
+
ctrl = uvc_find_control(chain, v4l2_ctrl->id, &mapping);
if (ctrl == NULL) {
ret = -EINVAL;
goto done;
}
+ /*
+ * If we're enumerating controls with V4L2_CTRL_FLAG_NEXT_CTRL, check if
+ * a class should be inserted between the previous control and the one
+ * we have just found.
+ */
+ if (v4l2_ctrl->id & V4L2_CTRL_FLAG_NEXT_CTRL) {
+ ret = uvc_query_v4l2_class(chain, v4l2_ctrl->id, mapping->id,
+ v4l2_ctrl);
+ if (!ret)
+ goto done;
+ }
+
ret = __uvc_query_v4l2_ctrl(chain, ctrl, mapping, v4l2_ctrl);
done:
mutex_unlock(&chain->ctrl_mutex);
@@ -1345,6 +1460,10 @@ static void uvc_ctrl_status_event_work(struct work_struct *work)
uvc_ctrl_status_event(w->chain, w->ctrl, w->data);
+ /* The barrier is needed to synchronize with uvc_status_stop(). */
+ if (smp_load_acquire(&dev->flush_status))
+ return;
+
/* Resubmit the URB. */
w->urb->interval = dev->int_ep->desc.bInterval;
ret = usb_submit_urb(w->urb, GFP_KERNEL);
@@ -1446,6 +1565,11 @@ static int uvc_ctrl_add_event(struct v4l2_subscribed_event *sev, unsigned elems)
if (ret < 0)
return -ERESTARTSYS;
+ if (__uvc_query_v4l2_class(handle->chain, sev->id, 0) >= 0) {
+ ret = 0;
+ goto done;
+ }
+
ctrl = uvc_find_control(handle->chain, sev->id, &mapping);
if (ctrl == NULL) {
ret = -EINVAL;
@@ -1479,7 +1603,10 @@ static void uvc_ctrl_del_event(struct v4l2_subscribed_event *sev)
struct uvc_fh *handle = container_of(sev->fh, struct uvc_fh, vfh);
mutex_lock(&handle->chain->ctrl_mutex);
+ if (__uvc_query_v4l2_class(handle->chain, sev->id, 0) >= 0)
+ goto done;
list_del(&sev->node);
+done:
mutex_unlock(&handle->chain->ctrl_mutex);
}
@@ -1597,6 +1724,9 @@ int uvc_ctrl_get(struct uvc_video_chain *chain,
struct uvc_control *ctrl;
struct uvc_control_mapping *mapping;
+ if (__uvc_query_v4l2_class(chain, xctrl->id, 0) >= 0)
+ return -EACCES;
+
ctrl = uvc_find_control(chain, xctrl->id, &mapping);
if (ctrl == NULL)
return -EINVAL;
@@ -1616,6 +1746,9 @@ int uvc_ctrl_set(struct uvc_fh *handle,
s32 max;
int ret;
+ if (__uvc_query_v4l2_class(chain, xctrl->id, 0) >= 0)
+ return -EACCES;
+
ctrl = uvc_find_control(chain, xctrl->id, &mapping);
if (ctrl == NULL)
return -EINVAL;
@@ -2066,11 +2199,12 @@ static int uvc_ctrl_add_info(struct uvc_device *dev, struct uvc_control *ctrl,
/*
* Add a control mapping to a given control.
*/
-static int __uvc_ctrl_add_mapping(struct uvc_device *dev,
+static int __uvc_ctrl_add_mapping(struct uvc_video_chain *chain,
struct uvc_control *ctrl, const struct uvc_control_mapping *mapping)
{
struct uvc_control_mapping *map;
unsigned int size;
+ unsigned int i;
/* Most mappings come from static kernel data and need to be duplicated.
* Mappings that come from userspace will be unnecessarily duplicated,
@@ -2094,9 +2228,18 @@ static int __uvc_ctrl_add_mapping(struct uvc_device *dev,
if (map->set == NULL)
map->set = uvc_set_le_value;
+ for (i = 0; i < ARRAY_SIZE(uvc_control_classes); i++) {
+ if (V4L2_CTRL_ID2WHICH(uvc_control_classes[i]) ==
+ V4L2_CTRL_ID2WHICH(map->id)) {
+ chain->ctrl_class_bitmap |= BIT(i);
+ break;
+ }
+ }
+
list_add_tail(&map->list, &ctrl->info.mappings);
- uvc_dbg(dev, CONTROL, "Adding mapping '%s' to control %pUl/%u\n",
- map->name, ctrl->info.entity, ctrl->info.selector);
+ uvc_dbg(chain->dev, CONTROL, "Adding mapping '%s' to control %pUl/%u\n",
+ uvc_map_get_name(map), ctrl->info.entity,
+ ctrl->info.selector);
return 0;
}
@@ -2114,7 +2257,7 @@ int uvc_ctrl_add_mapping(struct uvc_video_chain *chain,
if (mapping->id & ~V4L2_CTRL_ID_MASK) {
uvc_dbg(dev, CONTROL,
"Can't add mapping '%s', control id 0x%08x is invalid\n",
- mapping->name, mapping->id);
+ uvc_map_get_name(mapping), mapping->id);
return -EINVAL;
}
@@ -2161,7 +2304,7 @@ int uvc_ctrl_add_mapping(struct uvc_video_chain *chain,
if (mapping->id == map->id) {
uvc_dbg(dev, CONTROL,
"Can't add mapping '%s', control id 0x%08x already exists\n",
- mapping->name, mapping->id);
+ uvc_map_get_name(mapping), mapping->id);
ret = -EEXIST;
goto done;
}
@@ -2172,12 +2315,12 @@ int uvc_ctrl_add_mapping(struct uvc_video_chain *chain,
atomic_dec(&dev->nmappings);
uvc_dbg(dev, CONTROL,
"Can't add mapping '%s', maximum mappings count (%u) exceeded\n",
- mapping->name, UVC_MAX_CONTROL_MAPPINGS);
+ uvc_map_get_name(mapping), UVC_MAX_CONTROL_MAPPINGS);
ret = -ENOMEM;
goto done;
}
- ret = __uvc_ctrl_add_mapping(dev, ctrl, mapping);
+ ret = __uvc_ctrl_add_mapping(chain, ctrl, mapping);
if (ret < 0)
atomic_dec(&dev->nmappings);
@@ -2253,7 +2396,8 @@ static void uvc_ctrl_prune_entity(struct uvc_device *dev,
* Add control information and hardcoded stock control mappings to the given
* device.
*/
-static void uvc_ctrl_init_ctrl(struct uvc_device *dev, struct uvc_control *ctrl)
+static void uvc_ctrl_init_ctrl(struct uvc_video_chain *chain,
+ struct uvc_control *ctrl)
{
const struct uvc_control_info *info = uvc_ctrls;
const struct uvc_control_info *iend = info + ARRAY_SIZE(uvc_ctrls);
@@ -2272,14 +2416,14 @@ static void uvc_ctrl_init_ctrl(struct uvc_device *dev, struct uvc_control *ctrl)
for (; info < iend; ++info) {
if (uvc_entity_match_guid(ctrl->entity, info->entity) &&
ctrl->index == info->index) {
- uvc_ctrl_add_info(dev, ctrl, info);
+ uvc_ctrl_add_info(chain->dev, ctrl, info);
/*
* Retrieve control flags from the device. Ignore errors
* and work with default flag values from the uvc_ctrl
* array when the device doesn't properly implement
* GET_INFO on standard controls.
*/
- uvc_ctrl_get_flags(dev, ctrl, &ctrl->info);
+ uvc_ctrl_get_flags(chain->dev, ctrl, &ctrl->info);
break;
}
}
@@ -2290,22 +2434,20 @@ static void uvc_ctrl_init_ctrl(struct uvc_device *dev, struct uvc_control *ctrl)
for (; mapping < mend; ++mapping) {
if (uvc_entity_match_guid(ctrl->entity, mapping->entity) &&
ctrl->info.selector == mapping->selector)
- __uvc_ctrl_add_mapping(dev, ctrl, mapping);
+ __uvc_ctrl_add_mapping(chain, ctrl, mapping);
}
}
/*
* Initialize device controls.
*/
-int uvc_ctrl_init_device(struct uvc_device *dev)
+static int uvc_ctrl_init_chain(struct uvc_video_chain *chain)
{
struct uvc_entity *entity;
unsigned int i;
- INIT_WORK(&dev->async_ctrl.work, uvc_ctrl_status_event_work);
-
/* Walk the entities list and instantiate controls */
- list_for_each_entry(entity, &dev->entities, list) {
+ list_for_each_entry(entity, &chain->entities, chain) {
struct uvc_control *ctrl;
unsigned int bControlSize = 0, ncontrols;
u8 *bmControls = NULL;
@@ -2325,7 +2467,7 @@ int uvc_ctrl_init_device(struct uvc_device *dev)
}
/* Remove bogus/blacklisted controls */
- uvc_ctrl_prune_entity(dev, entity);
+ uvc_ctrl_prune_entity(chain->dev, entity);
/* Count supported controls and allocate the controls array */
ncontrols = memweight(bmControls, bControlSize);
@@ -2347,7 +2489,7 @@ int uvc_ctrl_init_device(struct uvc_device *dev)
ctrl->entity = entity;
ctrl->index = i;
- uvc_ctrl_init_ctrl(dev, ctrl);
+ uvc_ctrl_init_ctrl(chain, ctrl);
ctrl++;
}
}
@@ -2355,6 +2497,22 @@ int uvc_ctrl_init_device(struct uvc_device *dev)
return 0;
}
+int uvc_ctrl_init_device(struct uvc_device *dev)
+{
+ struct uvc_video_chain *chain;
+ int ret;
+
+ INIT_WORK(&dev->async_ctrl.work, uvc_ctrl_status_event_work);
+
+ list_for_each_entry(chain, &dev->chains, list) {
+ ret = uvc_ctrl_init_chain(chain);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
/*
* Cleanup device controls.
*/
@@ -2366,6 +2524,7 @@ static void uvc_ctrl_cleanup_mappings(struct uvc_device *dev,
list_for_each_entry_safe(mapping, nm, &ctrl->info.mappings, list) {
list_del(&mapping->list);
kfree(mapping->menu_info);
+ kfree(mapping->name);
kfree(mapping);
}
}
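For reference, a minimal userspace sketch (illustrative, not part of this patch) of the situation the new uvc_ctrl_is_accessible() handles: a single VIDIOC_S_EXT_CTRLS request that switches the master control to manual and sets the slave control in the same array. The helper name and an already-open /dev/video* fd are assumed:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int set_manual_exposure(int fd, int exposure_100us)
    {
        struct v4l2_ext_control ctrl[2];
        struct v4l2_ext_controls ctrls;

        memset(ctrl, 0, sizeof(ctrl));
        ctrl[0].id = V4L2_CID_EXPOSURE_AUTO;
        ctrl[0].value = V4L2_EXPOSURE_MANUAL;   /* master set to manual */
        ctrl[1].id = V4L2_CID_EXPOSURE_ABSOLUTE;
        ctrl[1].value = exposure_100us;          /* slave set in the same call */

        memset(&ctrls, 0, sizeof(ctrls));
        ctrls.which = V4L2_CTRL_WHICH_CUR_VAL;
        ctrls.count = 2;
        ctrls.controls = ctrl;

        /* The slave is accepted because the master appears earlier in the
         * same control array with its manual value. */
        return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
    }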
diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
index 72fff72..2e7df1d 100644
--- a/drivers/media/usb/uvc/uvc_driver.c
+++ b/drivers/media/usb/uvc/uvc_driver.c
@@ -531,14 +531,10 @@ static int uvc_parse_format(struct uvc_device *dev,
fmtdesc = uvc_format_by_guid(&buffer[5]);
if (fmtdesc != NULL) {
- strscpy(format->name, fmtdesc->name,
- sizeof(format->name));
format->fcc = fmtdesc->fcc;
} else {
dev_info(&streaming->intf->dev,
"Unknown video format %pUl\n", &buffer[5]);
- snprintf(format->name, sizeof(format->name), "%pUl\n",
- &buffer[5]);
format->fcc = 0;
}
@@ -549,8 +545,6 @@ static int uvc_parse_format(struct uvc_device *dev,
*/
if (dev->quirks & UVC_QUIRK_FORCE_Y8) {
if (format->fcc == V4L2_PIX_FMT_YUYV) {
- strscpy(format->name, "Greyscale 8-bit (Y8 )",
- sizeof(format->name));
format->fcc = V4L2_PIX_FMT_GREY;
format->bpp = 8;
width_multiplier = 2;
@@ -591,7 +585,6 @@ static int uvc_parse_format(struct uvc_device *dev,
return -EINVAL;
}
- strscpy(format->name, "MJPEG", sizeof(format->name));
format->fcc = V4L2_PIX_FMT_MJPEG;
format->flags = UVC_FMT_FLAG_COMPRESSED;
format->bpp = 0;
@@ -607,17 +600,7 @@ static int uvc_parse_format(struct uvc_device *dev,
return -EINVAL;
}
- switch (buffer[8] & 0x7f) {
- case 0:
- strscpy(format->name, "SD-DV", sizeof(format->name));
- break;
- case 1:
- strscpy(format->name, "SDL-DV", sizeof(format->name));
- break;
- case 2:
- strscpy(format->name, "HD-DV", sizeof(format->name));
- break;
- default:
+ if ((buffer[8] & 0x7f) > 2) {
uvc_dbg(dev, DESCR,
"device %d videostreaming interface %d: unknown DV format %u\n",
dev->udev->devnum,
@@ -625,9 +608,6 @@ static int uvc_parse_format(struct uvc_device *dev,
return -EINVAL;
}
- strlcat(format->name, buffer[8] & (1 << 7) ? " 60Hz" : " 50Hz",
- sizeof(format->name));
-
format->fcc = V4L2_PIX_FMT_DV;
format->flags = UVC_FMT_FLAG_COMPRESSED | UVC_FMT_FLAG_STREAM;
format->bpp = 0;
@@ -654,7 +634,7 @@ static int uvc_parse_format(struct uvc_device *dev,
return -EINVAL;
}
- uvc_dbg(dev, DESCR, "Found format %s\n", format->name);
+ uvc_dbg(dev, DESCR, "Found format %p4cc", &format->fcc);
buflen -= buffer[0];
buffer += buffer[0];
@@ -1151,10 +1131,8 @@ static int uvc_parse_vendor_control(struct uvc_device *dev,
+ n;
memcpy(unit->extension.bmControls, &buffer[23+p], 2*n);
- if (buffer[24+p+2*n] != 0)
- usb_string(udev, buffer[24+p+2*n], unit->name,
- sizeof(unit->name));
- else
+ if (buffer[24+p+2*n] == 0 ||
+ usb_string(udev, buffer[24+p+2*n], unit->name, sizeof(unit->name)) < 0)
sprintf(unit->name, "Extension %u", buffer[3]);
list_add_tail(&unit->list, &dev->entities);
@@ -1278,15 +1256,15 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
memcpy(term->media.bmTransportModes, &buffer[10+n], p);
}
- if (buffer[7] != 0)
- usb_string(udev, buffer[7], term->name,
- sizeof(term->name));
- else if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA)
- sprintf(term->name, "Camera %u", buffer[3]);
- else if (UVC_ENTITY_TYPE(term) == UVC_ITT_MEDIA_TRANSPORT_INPUT)
- sprintf(term->name, "Media %u", buffer[3]);
- else
- sprintf(term->name, "Input %u", buffer[3]);
+ if (buffer[7] == 0 ||
+ usb_string(udev, buffer[7], term->name, sizeof(term->name)) < 0) {
+ if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA)
+ sprintf(term->name, "Camera %u", buffer[3]);
+ else if (UVC_ENTITY_TYPE(term) == UVC_ITT_MEDIA_TRANSPORT_INPUT)
+ sprintf(term->name, "Media %u", buffer[3]);
+ else
+ sprintf(term->name, "Input %u", buffer[3]);
+ }
list_add_tail(&term->list, &dev->entities);
break;
@@ -1318,10 +1296,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
memcpy(term->baSourceID, &buffer[7], 1);
- if (buffer[8] != 0)
- usb_string(udev, buffer[8], term->name,
- sizeof(term->name));
- else
+ if (buffer[8] == 0 ||
+ usb_string(udev, buffer[8], term->name, sizeof(term->name)) < 0)
sprintf(term->name, "Output %u", buffer[3]);
list_add_tail(&term->list, &dev->entities);
@@ -1343,10 +1319,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
memcpy(unit->baSourceID, &buffer[5], p);
- if (buffer[5+p] != 0)
- usb_string(udev, buffer[5+p], unit->name,
- sizeof(unit->name));
- else
+ if (buffer[5+p] == 0 ||
+ usb_string(udev, buffer[5+p], unit->name, sizeof(unit->name)) < 0)
sprintf(unit->name, "Selector %u", buffer[3]);
list_add_tail(&unit->list, &dev->entities);
@@ -1376,10 +1350,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
if (dev->uvc_version >= 0x0110)
unit->processing.bmVideoStandards = buffer[9+n];
- if (buffer[8+n] != 0)
- usb_string(udev, buffer[8+n], unit->name,
- sizeof(unit->name));
- else
+ if (buffer[8+n] == 0 ||
+ usb_string(udev, buffer[8+n], unit->name, sizeof(unit->name)) < 0)
sprintf(unit->name, "Processing %u", buffer[3]);
list_add_tail(&unit->list, &dev->entities);
@@ -1407,10 +1379,8 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
unit->extension.bmControls = (u8 *)unit + sizeof(*unit);
memcpy(unit->extension.bmControls, &buffer[23+p], n);
- if (buffer[23+p+n] != 0)
- usb_string(udev, buffer[23+p+n], unit->name,
- sizeof(unit->name));
- else
+ if (buffer[23+p+n] == 0 ||
+ usb_string(udev, buffer[23+p+n], unit->name, sizeof(unit->name)) < 0)
sprintf(unit->name, "Extension %u", buffer[3]);
list_add_tail(&unit->list, &dev->entities);
@@ -2455,14 +2425,14 @@ static int uvc_probe(struct usb_interface *intf,
if (v4l2_device_register(&intf->dev, &dev->vdev) < 0)
goto error;
- /* Initialize controls. */
- if (uvc_ctrl_init_device(dev) < 0)
- goto error;
-
/* Scan the device for video chains. */
if (uvc_scan_device(dev) < 0)
goto error;
+ /* Initialize controls. */
+ if (uvc_ctrl_init_device(dev) < 0)
+ goto error;
+
/* Register video device nodes. */
if (uvc_register_chains(dev) < 0)
goto error;
@@ -2714,6 +2684,24 @@ static const struct usb_device_id uvc_ids[] = {
.bInterfaceSubClass = 1,
.bInterfaceProtocol = 0,
.driver_info = (kernel_ulong_t)&uvc_quirk_probe_minmax },
+ /* Logitech, Webcam C910 */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+ .idVendor = 0x046d,
+ .idProduct = 0x0821,
+ .bInterfaceClass = USB_CLASS_VIDEO,
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_WAKE_AUTOSUSPEND)},
+ /* Logitech, Webcam B910 */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+ .idVendor = 0x046d,
+ .idProduct = 0x0823,
+ .bInterfaceClass = USB_CLASS_VIDEO,
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_WAKE_AUTOSUSPEND)},
/* Logitech Quickcam Fusion */
{ .match_flags = USB_DEVICE_ID_MATCH_DEVICE
| USB_DEVICE_ID_MATCH_INT_INFO,
diff --git a/drivers/media/usb/uvc/uvc_entity.c b/drivers/media/usb/uvc/uvc_entity.c
index 7c4d2f9..cc68dd2 100644
--- a/drivers/media/usb/uvc/uvc_entity.c
+++ b/drivers/media/usb/uvc/uvc_entity.c
@@ -37,7 +37,7 @@ static int uvc_mc_create_links(struct uvc_video_chain *chain,
continue;
remote = uvc_entity_by_id(chain->dev, entity->baSourceID[i]);
- if (remote == NULL)
+ if (remote == NULL || remote->num_pads == 0)
return -EINVAL;
source = (UVC_ENTITY_TYPE(remote) == UVC_TT_STREAMING)
diff --git a/drivers/media/usb/uvc/uvc_status.c b/drivers/media/usb/uvc/uvc_status.c
index 753c822..3fa658b 100644
--- a/drivers/media/usb/uvc/uvc_status.c
+++ b/drivers/media/usb/uvc/uvc_status.c
@@ -6,6 +6,7 @@
* Laurent Pinchart (laurent.pinchart@ideasonboard.com)
*/
+#include <asm/barrier.h>
#include <linux/kernel.h>
#include <linux/input.h>
#include <linux/slab.h>
@@ -309,5 +310,41 @@ int uvc_status_start(struct uvc_device *dev, gfp_t flags)
void uvc_status_stop(struct uvc_device *dev)
{
+ struct uvc_ctrl_work *w = &dev->async_ctrl;
+
+ /*
+ * Prevent the asynchronous control handler from requeuing the URB. The
+ * barrier is needed so the flush_status change is visible to other
+ * CPUs running the asynchronous handler before usb_kill_urb() is
+ * called below.
+ */
+ smp_store_release(&dev->flush_status, true);
+
+ /*
+ * Cancel any pending asynchronous work. If any status event was queued,
+ * process it synchronously.
+ */
+ if (cancel_work_sync(&w->work))
+ uvc_ctrl_status_event(w->chain, w->ctrl, w->data);
+
+ /* Kill the urb. */
usb_kill_urb(dev->int_urb);
+
+ /*
+ * The URB completion handler may have queued asynchronous work. This
+ * won't resubmit the URB as flush_status is set, but it needs to be
+ * cancelled before returning or it could then race with a future
+ * uvc_status_start() call.
+ */
+ if (cancel_work_sync(&w->work))
+ uvc_ctrl_status_event(w->chain, w->ctrl, w->data);
+
+ /*
+ * From this point, there are no events on the queue and the status URB
+ * is dead. No events will be queued until uvc_status_start() is called.
+ * The barrier is needed to make sure that flush_status is visible to
+ * uvc_ctrl_status_event_work() when uvc_status_start() will be called
+ * again.
+ */
+ smp_store_release(&dev->flush_status, false);
}
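For reference, a minimal sketch (illustrative, not part of this patch) of the release/acquire pairing used between uvc_status_stop() and the status work handler above; struct foo and its fields are hypothetical:

    #include <asm/barrier.h>
    #include <linux/kernel.h>
    #include <linux/workqueue.h>
    #include <linux/usb.h>

    struct foo {
        bool flush;
        struct urb *urb;
        struct work_struct work;
    };

    static void foo_work(struct work_struct *work)
    {
        struct foo *foo = container_of(work, struct foo, work);

        /* Pairs with smp_store_release() in foo_stop(): once flush is
         * observed as true, the URB must not be resubmitted. */
        if (smp_load_acquire(&foo->flush))
            return;

        usb_submit_urb(foo->urb, GFP_KERNEL);
    }

    static void foo_stop(struct foo *foo)
    {
        /* Publish the flag before killing the URB so any handler running
         * afterwards sees it and does not requeue. */
        smp_store_release(&foo->flush, true);

        cancel_work_sync(&foo->work);   /* work queued before the flag */
        usb_kill_urb(foo->urb);
        cancel_work_sync(&foo->work);   /* work queued by the final completion */

        smp_store_release(&foo->flush, false);
    }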
diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
index 63842eb..ab535e5 100644
--- a/drivers/media/usb/uvc/uvc_v4l2.c
+++ b/drivers/media/usb/uvc/uvc_v4l2.c
@@ -40,7 +40,15 @@ static int uvc_ioctl_ctrl_map(struct uvc_video_chain *chain,
return -ENOMEM;
map->id = xmap->id;
- memcpy(map->name, xmap->name, sizeof(map->name));
+ /* Non-standard control id. */
+ if (v4l2_ctrl_get_name(map->id) == NULL) {
+ map->name = kmemdup(xmap->name, sizeof(xmap->name),
+ GFP_KERNEL);
+ if (!map->name) {
+ ret = -ENOMEM;
+ goto free_map;
+ }
+ }
memcpy(map->entity, xmap->entity, sizeof(map->entity));
map->selector = xmap->selector;
map->size = xmap->size;
@@ -649,8 +657,6 @@ static int uvc_ioctl_enum_fmt(struct uvc_streaming *stream,
fmt->flags = 0;
if (format->flags & UVC_FMT_FLAG_COMPRESSED)
fmt->flags |= V4L2_FMT_FLAG_COMPRESSED;
- strscpy(fmt->description, format->name, sizeof(fmt->description));
- fmt->description[sizeof(fmt->description) - 1] = 0;
fmt->pixelformat = format->fcc;
return 0;
}
@@ -1000,58 +1006,23 @@ static int uvc_ioctl_query_ext_ctrl(struct file *file, void *fh,
return 0;
}
-static int uvc_ioctl_g_ctrl(struct file *file, void *fh,
- struct v4l2_control *ctrl)
+static int uvc_ctrl_check_access(struct uvc_video_chain *chain,
+ struct v4l2_ext_controls *ctrls,
+ unsigned long ioctl)
{
- struct uvc_fh *handle = fh;
- struct uvc_video_chain *chain = handle->chain;
- struct v4l2_ext_control xctrl;
- int ret;
+ struct v4l2_ext_control *ctrl = ctrls->controls;
+ unsigned int i;
+ int ret = 0;
- memset(&xctrl, 0, sizeof(xctrl));
- xctrl.id = ctrl->id;
-
- ret = uvc_ctrl_begin(chain);
- if (ret < 0)
- return ret;
-
- ret = uvc_ctrl_get(chain, &xctrl);
- uvc_ctrl_rollback(handle);
- if (ret < 0)
- return ret;
-
- ctrl->value = xctrl.value;
- return 0;
-}
-
-static int uvc_ioctl_s_ctrl(struct file *file, void *fh,
- struct v4l2_control *ctrl)
-{
- struct uvc_fh *handle = fh;
- struct uvc_video_chain *chain = handle->chain;
- struct v4l2_ext_control xctrl;
- int ret;
-
- memset(&xctrl, 0, sizeof(xctrl));
- xctrl.id = ctrl->id;
- xctrl.value = ctrl->value;
-
- ret = uvc_ctrl_begin(chain);
- if (ret < 0)
- return ret;
-
- ret = uvc_ctrl_set(handle, &xctrl);
- if (ret < 0) {
- uvc_ctrl_rollback(handle);
- return ret;
+ for (i = 0; i < ctrls->count; ++ctrl, ++i) {
+ ret = uvc_ctrl_is_accessible(chain, ctrl->id, ctrls, ioctl);
+ if (ret)
+ break;
}
- ret = uvc_ctrl_commit(handle, &xctrl, 1);
- if (ret < 0)
- return ret;
+ ctrls->error_idx = ioctl == VIDIOC_TRY_EXT_CTRLS ? i : ctrls->count;
- ctrl->value = xctrl.value;
- return 0;
+ return ret;
}
static int uvc_ioctl_g_ext_ctrls(struct file *file, void *fh,
@@ -1063,6 +1034,10 @@ static int uvc_ioctl_g_ext_ctrls(struct file *file, void *fh,
unsigned int i;
int ret;
+ ret = uvc_ctrl_check_access(chain, ctrls, VIDIOC_G_EXT_CTRLS);
+ if (ret < 0)
+ return ret;
+
if (ctrls->which == V4L2_CTRL_WHICH_DEF_VAL) {
for (i = 0; i < ctrls->count; ++ctrl, ++i) {
struct v4l2_queryctrl qc = { .id = ctrl->id };
@@ -1099,16 +1074,16 @@ static int uvc_ioctl_g_ext_ctrls(struct file *file, void *fh,
static int uvc_ioctl_s_try_ext_ctrls(struct uvc_fh *handle,
struct v4l2_ext_controls *ctrls,
- bool commit)
+ unsigned long ioctl)
{
struct v4l2_ext_control *ctrl = ctrls->controls;
struct uvc_video_chain *chain = handle->chain;
unsigned int i;
int ret;
- /* Default value cannot be changed */
- if (ctrls->which == V4L2_CTRL_WHICH_DEF_VAL)
- return -EINVAL;
+ ret = uvc_ctrl_check_access(chain, ctrls, ioctl);
+ if (ret < 0)
+ return ret;
ret = uvc_ctrl_begin(chain);
if (ret < 0)
@@ -1118,14 +1093,15 @@ static int uvc_ioctl_s_try_ext_ctrls(struct uvc_fh *handle,
ret = uvc_ctrl_set(handle, ctrl);
if (ret < 0) {
uvc_ctrl_rollback(handle);
- ctrls->error_idx = commit ? ctrls->count : i;
+ ctrls->error_idx = ioctl == VIDIOC_S_EXT_CTRLS ?
+ ctrls->count : i;
return ret;
}
}
ctrls->error_idx = 0;
- if (commit)
+ if (ioctl == VIDIOC_S_EXT_CTRLS)
return uvc_ctrl_commit(handle, ctrls->controls, ctrls->count);
else
return uvc_ctrl_rollback(handle);
@@ -1136,7 +1112,7 @@ static int uvc_ioctl_s_ext_ctrls(struct file *file, void *fh,
{
struct uvc_fh *handle = fh;
- return uvc_ioctl_s_try_ext_ctrls(handle, ctrls, true);
+ return uvc_ioctl_s_try_ext_ctrls(handle, ctrls, VIDIOC_S_EXT_CTRLS);
}
static int uvc_ioctl_try_ext_ctrls(struct file *file, void *fh,
@@ -1144,7 +1120,7 @@ static int uvc_ioctl_try_ext_ctrls(struct file *file, void *fh,
{
struct uvc_fh *handle = fh;
- return uvc_ioctl_s_try_ext_ctrls(handle, ctrls, false);
+ return uvc_ioctl_s_try_ext_ctrls(handle, ctrls, VIDIOC_TRY_EXT_CTRLS);
}
static int uvc_ioctl_querymenu(struct file *file, void *fh,
@@ -1543,8 +1519,6 @@ const struct v4l2_ioctl_ops uvc_ioctl_ops = {
.vidioc_s_input = uvc_ioctl_s_input,
.vidioc_queryctrl = uvc_ioctl_queryctrl,
.vidioc_query_ext_ctrl = uvc_ioctl_query_ext_ctrl,
- .vidioc_g_ctrl = uvc_ioctl_g_ctrl,
- .vidioc_s_ctrl = uvc_ioctl_s_ctrl,
.vidioc_g_ext_ctrls = uvc_ioctl_g_ext_ctrls,
.vidioc_s_ext_ctrls = uvc_ioctl_s_ext_ctrls,
.vidioc_try_ext_ctrls = uvc_ioctl_try_ext_ctrls,
diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
index 1b4cc93..f477cfb 100644
--- a/drivers/media/usb/uvc/uvc_video.c
+++ b/drivers/media/usb/uvc/uvc_video.c
@@ -1334,7 +1334,9 @@ static void uvc_video_decode_meta(struct uvc_streaming *stream,
if (has_scr)
memcpy(stream->clock.last_scr, scr, 6);
- memcpy(&meta->length, mem, length);
+ meta->length = mem[0];
+ meta->flags = mem[1];
+ memcpy(meta->buf, &mem[2], length - 2);
meta_buf->bytesused += length + sizeof(meta->ns) + sizeof(meta->sof);
uvc_dbg(stream->dev, FRAME,
@@ -1951,6 +1953,17 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,
"Selecting alternate setting %u (%u B/frame bandwidth)\n",
altsetting, best_psize);
+ /*
+ * Some devices, namely the Logitech C910 and B910, are unable
+ * to recover from a USB autosuspend, unless the alternate
+ * setting of the streaming interface is toggled.
+ */
+ if (stream->dev->quirks & UVC_QUIRK_WAKE_AUTOSUSPEND) {
+ usb_set_interface(stream->dev->udev, intfnum,
+ altsetting);
+ usb_set_interface(stream->dev->udev, intfnum, 0);
+ }
+
ret = usb_set_interface(stream->dev->udev, intfnum, altsetting);
if (ret < 0)
return ret;
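For reference, the UAPI metadata layout the member-by-member copy above writes into, as defined in include/uapi/linux/uvcvideo.h; copying length and flags individually and then the payload keeps each write bounded to a single member (the field comments are editorial):

    #include <linux/types.h>

    struct uvc_meta_buf {
        __u64 ns;      /* host timestamp in nanoseconds */
        __u16 sof;     /* USB frame number */
        __u8 length;   /* length of the payload header */
        __u8 flags;    /* payload header bmHeaderInfo field */
        __u8 buf[];    /* remaining payload header bytes (PTS/SCR) */
    } __attribute__((packed));

A single memcpy starting at &meta->length spans the length, flags and buf members at once, which fortified memcpy checks flag; the per-member copy expresses the same data movement without crossing member boundaries.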
diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
index c3ea6a5..1aa2cc9 100644
--- a/drivers/media/usb/uvc/uvcvideo.h
+++ b/drivers/media/usb/uvc/uvcvideo.h
@@ -209,6 +209,7 @@
#define UVC_QUIRK_RESTORE_CTRLS_ON_INIT 0x00000400
#define UVC_QUIRK_FORCE_Y8 0x00000800
#define UVC_QUIRK_FORCE_BPP 0x00001000
+#define UVC_QUIRK_WAKE_AUTOSUSPEND 0x00002000
/* Format flags */
#define UVC_FMT_FLAG_COMPRESSED 0x00000001
@@ -241,7 +242,7 @@ struct uvc_control_mapping {
struct list_head ev_subs;
u32 id;
- u8 name[32];
+ char *name;
u8 entity[16];
u8 selector;
@@ -405,8 +406,6 @@ struct uvc_format {
u32 fcc;
u32 flags;
- char name[32];
-
unsigned int nframes;
struct uvc_frame *frame;
};
@@ -476,6 +475,7 @@ struct uvc_video_chain {
struct v4l2_prio_state prio; /* V4L2 priority state */
u32 caps; /* V4L2 chain-wide caps */
+ u8 ctrl_class_bitmap; /* Bitmap of valid classes */
};
struct uvc_stats_frame {
@@ -697,6 +697,7 @@ struct uvc_device {
/* Status Interrupt Endpoint */
struct usb_host_endpoint *int_ep;
struct urb *int_urb;
+ bool flush_status;
u8 *status;
struct input_dev *input;
char input_phys[64];
@@ -900,6 +901,9 @@ static inline int uvc_ctrl_rollback(struct uvc_fh *handle)
int uvc_ctrl_get(struct uvc_video_chain *chain, struct v4l2_ext_control *xctrl);
int uvc_ctrl_set(struct uvc_fh *handle, struct v4l2_ext_control *xctrl);
+int uvc_ctrl_is_accessible(struct uvc_video_chain *chain, u32 v4l2_id,
+ const struct v4l2_ext_controls *ctrls,
+ unsigned long ioctl);
int uvc_xu_ctrl_query(struct uvc_video_chain *chain,
struct uvc_xu_control_query *xqry);
diff --git a/drivers/media/v4l2-core/v4l2-jpeg.c b/drivers/media/v4l2-core/v4l2-jpeg.c
index c2513b77..94435a7 100644
--- a/drivers/media/v4l2-core/v4l2-jpeg.c
+++ b/drivers/media/v4l2-core/v4l2-jpeg.c
@@ -460,7 +460,7 @@ static int jpeg_parse_app14_data(struct jpeg_stream *stream,
/* Check for "Adobe\0" in Ap1..6 */
if (stream->curr + 6 > stream->end ||
strncmp(stream->curr, "Adobe\0", 6))
- return -EINVAL;
+ return jpeg_skip(stream, lp - 2);
/* get to Ap12 */
ret = jpeg_skip(stream, 11);
@@ -474,7 +474,7 @@ static int jpeg_parse_app14_data(struct jpeg_stream *stream,
*tf = ret;
/* skip the rest of the segment, this ensures at least it is complete */
- skip = lp - 2 - 11;
+ skip = lp - 2 - 11 - 1;
return jpeg_skip(stream, skip);
}
diff --git a/drivers/memory/atmel-sdramc.c b/drivers/memory/atmel-sdramc.c
index 9c49d00..ea6e9e1 100644
--- a/drivers/memory/atmel-sdramc.c
+++ b/drivers/memory/atmel-sdramc.c
@@ -47,19 +47,17 @@ static int atmel_ramc_probe(struct platform_device *pdev)
caps = of_device_get_match_data(&pdev->dev);
if (caps->has_ddrck) {
- clk = devm_clk_get(&pdev->dev, "ddrck");
+ clk = devm_clk_get_enabled(&pdev->dev, "ddrck");
if (IS_ERR(clk))
return PTR_ERR(clk);
- clk_prepare_enable(clk);
}
if (caps->has_mpddr_clk) {
- clk = devm_clk_get(&pdev->dev, "mpddr");
+ clk = devm_clk_get_enabled(&pdev->dev, "mpddr");
if (IS_ERR(clk)) {
pr_err("AT91 RAMC: couldn't get mpddr clock\n");
return PTR_ERR(clk);
}
- clk_prepare_enable(clk);
}
return 0;
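For reference, a minimal sketch (illustrative, not part of this patch) of the devm_clk_get_enabled() pattern the atmel-sdramc and mvebu-devbus conversions rely on; the bar_probe name is hypothetical and the "ddrck" consumer id is reused from the hunk above purely as an example:

    #include <linux/clk.h>
    #include <linux/platform_device.h>

    static int bar_probe(struct platform_device *pdev)
    {
        struct clk *clk;

        /* Replaces devm_clk_get() + clk_prepare_enable() + manual
         * clk_disable_unprepare() on error paths and in remove. */
        clk = devm_clk_get_enabled(&pdev->dev, "ddrck");
        if (IS_ERR(clk))
            return PTR_ERR(clk);

        /* The clock is already prepared and enabled here; it is disabled
         * and unprepared automatically when the driver detaches. */
        return 0;
    }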
diff --git a/drivers/memory/mvebu-devbus.c b/drivers/memory/mvebu-devbus.c
index 8450638..efc6c08 100644
--- a/drivers/memory/mvebu-devbus.c
+++ b/drivers/memory/mvebu-devbus.c
@@ -280,10 +280,9 @@ static int mvebu_devbus_probe(struct platform_device *pdev)
if (IS_ERR(devbus->base))
return PTR_ERR(devbus->base);
- clk = devm_clk_get(&pdev->dev, NULL);
+ clk = devm_clk_get_enabled(&pdev->dev, NULL);
if (IS_ERR(clk))
return PTR_ERR(clk);
- clk_prepare_enable(clk);
/*
* Obtain clock period in picoseconds,
diff --git a/drivers/memory/tegra/tegra186.c b/drivers/memory/tegra/tegra186.c
index 3d15388..4bed0e5 100644
--- a/drivers/memory/tegra/tegra186.c
+++ b/drivers/memory/tegra/tegra186.c
@@ -20,32 +20,6 @@
#define MC_SID_STREAMID_SECURITY_WRITE_ACCESS_DISABLED BIT(16)
#define MC_SID_STREAMID_SECURITY_OVERRIDE BIT(8)
-static void tegra186_mc_program_sid(struct tegra_mc *mc)
-{
- unsigned int i;
-
- for (i = 0; i < mc->soc->num_clients; i++) {
- const struct tegra_mc_client *client = &mc->soc->clients[i];
- u32 override, security;
-
- override = readl(mc->regs + client->regs.sid.override);
- security = readl(mc->regs + client->regs.sid.security);
-
- dev_dbg(mc->dev, "client %s: override: %x security: %x\n",
- client->name, override, security);
-
- dev_dbg(mc->dev, "setting SID %u for %s\n", client->sid,
- client->name);
- writel(client->sid, mc->regs + client->regs.sid.override);
-
- override = readl(mc->regs + client->regs.sid.override);
- security = readl(mc->regs + client->regs.sid.security);
-
- dev_dbg(mc->dev, "client %s: override: %x security: %x\n",
- client->name, override, security);
- }
-}
-
static int tegra186_mc_probe(struct tegra_mc *mc)
{
int err;
@@ -54,8 +28,6 @@ static int tegra186_mc_probe(struct tegra_mc *mc)
if (err < 0)
return err;
- tegra186_mc_program_sid(mc);
-
return 0;
}
@@ -64,13 +36,6 @@ static void tegra186_mc_remove(struct tegra_mc *mc)
of_platform_depopulate(mc->dev);
}
-static int tegra186_mc_resume(struct tegra_mc *mc)
-{
- tegra186_mc_program_sid(mc);
-
- return 0;
-}
-
#if IS_ENABLED(CONFIG_IOMMU_API)
static void tegra186_mc_client_sid_override(struct tegra_mc *mc,
const struct tegra_mc_client *client,
@@ -142,7 +107,6 @@ static int tegra186_mc_probe_device(struct tegra_mc *mc, struct device *dev)
const struct tegra_mc_ops tegra186_mc_ops = {
.probe = tegra186_mc_probe,
.remove = tegra186_mc_remove,
- .resume = tegra186_mc_resume,
.probe_device = tegra186_mc_probe_device,
};
diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
index 5dd7ea0..ef550d3 100644
--- a/drivers/mfd/Kconfig
+++ b/drivers/mfd/Kconfig
@@ -15,6 +15,7 @@
tristate "AMD CS5535 and CS5536 southbridge core functions"
select MFD_CORE
depends on PCI && (X86_32 || (X86 && COMPILE_TEST))
+ depends on !UML
help
This is the core driver for CS5535/CS5536 MFD functions. This is
necessary for using the board's GPIO and MFGPT functionality.
diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c
index 9323b1e3a6..5c8317b 100644
--- a/drivers/mfd/arizona-core.c
+++ b/drivers/mfd/arizona-core.c
@@ -45,7 +45,7 @@ int arizona_clk32k_enable(struct arizona *arizona)
if (arizona->clk32k_ref == 1) {
switch (arizona->pdata.clk32k_src) {
case ARIZONA_32KZ_MCLK1:
- ret = pm_runtime_get_sync(arizona->dev);
+ ret = pm_runtime_resume_and_get(arizona->dev);
if (ret != 0)
goto err_ref;
ret = clk_prepare_enable(arizona->mclk[ARIZONA_MCLK1]);
diff --git a/drivers/mfd/pcf50633-adc.c b/drivers/mfd/pcf50633-adc.c
index 5cd653e..191b1bc 100644
--- a/drivers/mfd/pcf50633-adc.c
+++ b/drivers/mfd/pcf50633-adc.c
@@ -136,6 +136,7 @@ int pcf50633_adc_async_read(struct pcf50633 *pcf, int mux, int avg,
void *callback_param)
{
struct pcf50633_adc_request *req;
+ int ret;
/* req is freed when the result is ready, in interrupt handler */
req = kmalloc(sizeof(*req), GFP_KERNEL);
@@ -147,7 +148,11 @@ int pcf50633_adc_async_read(struct pcf50633 *pcf, int mux, int avg,
req->callback = callback;
req->callback_param = callback_param;
- return adc_enqueue_request(pcf, req);
+ ret = adc_enqueue_request(pcf, req);
+ if (ret)
+ kfree(req);
+
+ return ret;
}
EXPORT_SYMBOL_GPL(pcf50633_adc_async_read);
diff --git a/drivers/misc/eeprom/idt_89hpesx.c b/drivers/misc/eeprom/idt_89hpesx.c
index 7f43074..5298be4c 100644
--- a/drivers/misc/eeprom/idt_89hpesx.c
+++ b/drivers/misc/eeprom/idt_89hpesx.c
@@ -1568,12 +1568,20 @@ static struct i2c_driver idt_driver = {
*/
static int __init idt_init(void)
{
+ int ret;
+
/* Create Debugfs directory first */
if (debugfs_initialized())
csr_dbgdir = debugfs_create_dir("idt_csr", NULL);
/* Add new i2c-device driver */
- return i2c_add_driver(&idt_driver);
+ ret = i2c_add_driver(&idt_driver);
+ if (ret) {
+ debugfs_remove_recursive(csr_dbgdir);
+ return ret;
+ }
+
+ return 0;
}
module_init(idt_init);
diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
index cf57057..55e42cc 100644
--- a/drivers/misc/fastrpc.c
+++ b/drivers/misc/fastrpc.c
@@ -247,6 +247,13 @@ static void fastrpc_free_map(struct kref *ref)
dma_buf_put(map->buf);
}
+ if (map->fl) {
+ spin_lock(&map->fl->lock);
+ list_del(&map->node);
+ spin_unlock(&map->fl->lock);
+ map->fl = NULL;
+ }
+
kfree(map);
}
@@ -256,10 +263,12 @@ static void fastrpc_map_put(struct fastrpc_map *map)
kref_put(&map->refcount, fastrpc_free_map);
}
-static void fastrpc_map_get(struct fastrpc_map *map)
+static int fastrpc_map_get(struct fastrpc_map *map)
{
- if (map)
- kref_get(&map->refcount);
+ if (!map)
+ return -ENOENT;
+
+ return kref_get_unless_zero(&map->refcount) ? 0 : -ENOENT;
}
static int fastrpc_map_find(struct fastrpc_user *fl, int fd,
@@ -1114,12 +1123,7 @@ static int fastrpc_init_create_process(struct fastrpc_user *fl,
fl->init_mem = NULL;
fastrpc_buf_free(imem);
err_alloc:
- if (map) {
- spin_lock(&fl->lock);
- list_del(&map->node);
- spin_unlock(&fl->lock);
- fastrpc_map_put(map);
- }
+ fastrpc_map_put(map);
err:
kfree(args);
@@ -1196,10 +1200,8 @@ static int fastrpc_device_release(struct inode *inode, struct file *file)
fastrpc_context_put(ctx);
}
- list_for_each_entry_safe(map, m, &fl->maps, node) {
- list_del(&map->node);
+ list_for_each_entry_safe(map, m, &fl->maps, node)
fastrpc_map_put(map);
- }
list_for_each_entry_safe(buf, b, &fl->mmaps, node) {
list_del(&buf->node);
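For reference, a minimal sketch (illustrative, not part of this patch) of the kref_get_unless_zero() lookup pattern the fastrpc_map_get() change enables: an object whose last reference is already gone cannot be revived by a concurrent lookup. struct obj and the list/lock names are hypothetical:

    #include <linux/kref.h>
    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/slab.h>

    struct obj {
        struct list_head node;
        struct kref refcount;
        int fd;
    };

    static void obj_release(struct kref *ref)
    {
        struct obj *o = container_of(ref, struct obj, refcount);

        kfree(o);
    }

    static struct obj *obj_find_get(struct list_head *head, spinlock_t *lock,
                                    int fd)
    {
        struct obj *o;

        spin_lock(lock);
        list_for_each_entry(o, head, node) {
            if (o->fd != fd)
                continue;
            /* Only take a reference if the object is still alive. */
            if (kref_get_unless_zero(&o->refcount)) {
                spin_unlock(lock);
                return o;
            }
            break;
        }
        spin_unlock(lock);
        return NULL;
    }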
diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
index 6784408..9d08228 100644
--- a/drivers/misc/mei/bus-fixup.c
+++ b/drivers/misc/mei/bus-fixup.c
@@ -175,7 +175,7 @@ static int mei_fwver(struct mei_cl_device *cldev)
ret = __mei_cl_send(cldev->cl, (u8 *)&req, sizeof(req), 0,
MEI_CL_IO_TX_BLOCKING);
if (ret < 0) {
- dev_err(&cldev->dev, "Could not send ReqFWVersion cmd\n");
+ dev_err(&cldev->dev, "Could not send ReqFWVersion cmd ret = %d\n", ret);
return ret;
}
@@ -187,7 +187,7 @@ static int mei_fwver(struct mei_cl_device *cldev)
* Should be at least one version block,
* error out if nothing found
*/
- dev_err(&cldev->dev, "Could not read FW version\n");
+ dev_err(&cldev->dev, "Could not read FW version ret = %d\n", bytes_recv);
return -EIO;
}
@@ -337,7 +337,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
ret = __mei_cl_send(cl, (u8 *)&cmd, sizeof(cmd), 0,
MEI_CL_IO_TX_BLOCKING);
if (ret < 0) {
- dev_err(bus->dev, "Could not send IF version cmd\n");
+ dev_err(bus->dev, "Could not send IF version cmd ret = %d\n", ret);
return ret;
}
@@ -352,7 +352,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, &vtag,
0, 0);
if (bytes_recv < 0 || (size_t)bytes_recv < if_version_length) {
- dev_err(bus->dev, "Could not read IF version\n");
+ dev_err(bus->dev, "Could not read IF version ret = %d\n", bytes_recv);
ret = -EIO;
goto err;
}
diff --git a/drivers/misc/mei/hdcp/mei_hdcp.c b/drivers/misc/mei/hdcp/mei_hdcp.c
index ec2a4fc..5c4295d 100644
--- a/drivers/misc/mei/hdcp/mei_hdcp.c
+++ b/drivers/misc/mei/hdcp/mei_hdcp.c
@@ -859,8 +859,8 @@ static void mei_hdcp_remove(struct mei_cl_device *cldev)
dev_warn(&cldev->dev, "mei_cldev_disable() failed\n");
}
-#define MEI_UUID_HDCP GUID_INIT(0xB638AB7E, 0x94E2, 0x4EA2, 0xA5, \
- 0x52, 0xD1, 0xC5, 0x4B, 0x62, 0x7F, 0x04)
+#define MEI_UUID_HDCP UUID_LE(0xB638AB7E, 0x94E2, 0x4EA2, 0xA5, \
+ 0x52, 0xD1, 0xC5, 0x4B, 0x62, 0x7F, 0x04)
static const struct mei_cl_device_id mei_hdcp_tbl[] = {
{ .uuid = MEI_UUID_HDCP, .version = MEI_CL_VERSION_ANY },
diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
index 15e8e2b..6095195 100644
--- a/drivers/misc/mei/hw-me-regs.h
+++ b/drivers/misc/mei/hw-me-regs.h
@@ -111,6 +111,8 @@
#define MEI_DEV_ID_RPL_S 0x7A68 /* Raptor Lake Point S */
+#define MEI_DEV_ID_MTL_M 0x7E70 /* Meteor Lake Point M */
+
/*
* MEI HW Section
*/
diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
index 5324b65..f2765d6 100644
--- a/drivers/misc/mei/pci-me.c
+++ b/drivers/misc/mei/pci-me.c
@@ -117,6 +117,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
{MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_MTL_M, MEI_ME_PCH15_CFG)},
+
/* required last entry */
{0, }
};
diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
index f1d8ba6..dab8ad9 100644
--- a/drivers/misc/vmw_balloon.c
+++ b/drivers/misc/vmw_balloon.c
@@ -1711,7 +1711,7 @@ static void __init vmballoon_debugfs_init(struct vmballoon *b)
static void __exit vmballoon_debugfs_exit(struct vmballoon *b)
{
static_key_disable(&balloon_stat_enabled.key);
- debugfs_remove(debugfs_lookup("vmmemctl", NULL));
+ debugfs_lookup_and_remove("vmmemctl", NULL);
kfree(b->stats);
b->stats = NULL;
}
diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
index da1e2a7..857b985 100644
--- a/drivers/misc/vmw_vmci/vmci_host.c
+++ b/drivers/misc/vmw_vmci/vmci_host.c
@@ -242,6 +242,8 @@ static int vmci_host_setup_notify(struct vmci_ctx *context,
context->notify_page = NULL;
return VMCI_ERROR_GENERIC;
}
+ if (context->notify_page == NULL)
+ return VMCI_ERROR_UNAVAILABLE;
/*
* Map the locked page and set up notify pointer.
diff --git a/drivers/mmc/core/sdio_bus.c b/drivers/mmc/core/sdio_bus.c
index cac9e0f..f6cdec0 100644
--- a/drivers/mmc/core/sdio_bus.c
+++ b/drivers/mmc/core/sdio_bus.c
@@ -293,6 +293,12 @@ static void sdio_release_func(struct device *dev)
if (!(func->card->quirks & MMC_QUIRK_NONSTD_SDIO))
sdio_free_func_cis(func);
+ /*
+ * We have now removed the link to the tuples in the
+ * card structure, so remove the reference.
+ */
+ put_device(&func->card->dev);
+
kfree(func->info);
kfree(func->tmpbuf);
kfree(func);
@@ -323,6 +329,12 @@ struct sdio_func *sdio_alloc_func(struct mmc_card *card)
device_initialize(&func->dev);
+ /*
+ * We may link to tuples in the card structure,
+ * we need to make sure we have a reference to it.
+ */
+ get_device(&func->card->dev);
+
func->dev.parent = &card->dev;
func->dev.bus = &sdio_bus_type;
func->dev.release = sdio_release_func;
@@ -376,10 +388,9 @@ int sdio_add_func(struct sdio_func *func)
*/
void sdio_remove_func(struct sdio_func *func)
{
- if (!sdio_func_present(func))
- return;
+ if (sdio_func_present(func))
+ device_del(&func->dev);
- device_del(&func->dev);
of_node_put(func->dev.of_node);
put_device(&func->dev);
}
diff --git a/drivers/mmc/core/sdio_cis.c b/drivers/mmc/core/sdio_cis.c
index a705ba6..afaa6ca 100644
--- a/drivers/mmc/core/sdio_cis.c
+++ b/drivers/mmc/core/sdio_cis.c
@@ -404,12 +404,6 @@ int sdio_read_func_cis(struct sdio_func *func)
return ret;
/*
- * Since we've linked to tuples in the card structure,
- * we must make sure we have a reference to it.
- */
- get_device(&func->card->dev);
-
- /*
* Vendor/device id is optional for function CIS, so
* copy it from the card structure as needed.
*/
@@ -434,11 +428,5 @@ void sdio_free_func_cis(struct sdio_func *func)
}
func->tuples = NULL;
-
- /*
- * We have now removed the link to the tuples in the
- * card structure, so remove the reference.
- */
- put_device(&func->card->dev);
}
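For reference, a minimal sketch (illustrative, not part of this patch) of the ownership rule the sdio_bus/sdio_cis hunks above implement: take the parent reference when the child device is allocated and drop it from the child's ->release() callback, so the reference lives exactly as long as the child. The names are hypothetical:

    #include <linux/device.h>
    #include <linux/slab.h>

    struct child {
        struct device dev;
        struct device *parent;
    };

    static void child_release(struct device *dev)
    {
        struct child *c = container_of(dev, struct child, dev);

        /* Last put on the child: drop the parent reference taken in alloc. */
        put_device(c->parent);
        kfree(c);
    }

    static struct child *child_alloc(struct device *parent)
    {
        struct child *c;

        c = kzalloc(sizeof(*c), GFP_KERNEL);
        if (!c)
            return NULL;

        device_initialize(&c->dev);
        c->dev.parent = parent;
        c->dev.release = child_release;
        /* Pin the parent for as long as the child device object exists. */
        c->parent = get_device(parent);

        return c;
    }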
diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
index 6f971a3..493ed8c 100644
--- a/drivers/mmc/host/atmel-mci.c
+++ b/drivers/mmc/host/atmel-mci.c
@@ -1818,7 +1818,6 @@ static void atmci_tasklet_func(struct tasklet_struct *t)
atmci_writel(host, ATMCI_IER, ATMCI_NOTBUSY);
state = STATE_WAITING_NOTBUSY;
} else if (host->mrq->stop) {
- atmci_writel(host, ATMCI_IER, ATMCI_CMDRDY);
atmci_send_stop_cmd(host, data);
state = STATE_SENDING_STOP;
} else {
@@ -1851,8 +1850,6 @@ static void atmci_tasklet_func(struct tasklet_struct *t)
* command to send.
*/
if (host->mrq->stop) {
- atmci_writel(host, ATMCI_IER,
- ATMCI_CMDRDY);
atmci_send_stop_cmd(host, data);
state = STATE_SENDING_STOP;
} else {
diff --git a/drivers/mmc/host/jz4740_mmc.c b/drivers/mmc/host/jz4740_mmc.c
index 3c59dec..8586447 100644
--- a/drivers/mmc/host/jz4740_mmc.c
+++ b/drivers/mmc/host/jz4740_mmc.c
@@ -1038,6 +1038,16 @@ static int jz4740_mmc_probe(struct platform_device* pdev)
mmc->ops = &jz4740_mmc_ops;
if (!mmc->f_max)
mmc->f_max = JZ_MMC_CLK_RATE;
+
+ /*
+ * There seems to be a problem with this driver on the JZ4760 and
+ * JZ4760B SoCs. There, when using the maximum rate supported (50 MHz),
+ * the communication fails with many SD cards.
+ * Until this bug is sorted out, limit the maximum rate to 24 MHz.
+ */
+ if (host->version == JZ_MMC_JZ4760 && mmc->f_max > JZ_MMC_CLK_RATE)
+ mmc->f_max = JZ_MMC_CLK_RATE;
+
mmc->f_min = mmc->f_max / 128;
mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
index b431cdd..91fde49 100644
--- a/drivers/mmc/host/mmc_spi.c
+++ b/drivers/mmc/host/mmc_spi.c
@@ -1441,7 +1441,7 @@ static int mmc_spi_probe(struct spi_device *spi)
status = mmc_add_host(mmc);
if (status != 0)
- goto fail_add_host;
+ goto fail_glue_init;
/*
* Index 0 is card detect
@@ -1449,7 +1449,7 @@ static int mmc_spi_probe(struct spi_device *spi)
*/
status = mmc_gpiod_request_cd(mmc, NULL, 0, false, 1000);
if (status == -EPROBE_DEFER)
- goto fail_add_host;
+ goto fail_gpiod_request;
if (!status) {
/*
* The platform has a CD GPIO signal that may support
@@ -1464,7 +1464,7 @@ static int mmc_spi_probe(struct spi_device *spi)
/* Index 1 is write protect/read only */
status = mmc_gpiod_request_ro(mmc, NULL, 1, 0);
if (status == -EPROBE_DEFER)
- goto fail_add_host;
+ goto fail_gpiod_request;
if (!status)
has_ro = true;
@@ -1478,7 +1478,7 @@ static int mmc_spi_probe(struct spi_device *spi)
? ", cd polling" : "");
return 0;
-fail_add_host:
+fail_gpiod_request:
mmc_remove_host(mmc);
fail_glue_init:
mmc_spi_dma_free(host);
diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
index c6111adf..794702e 100644
--- a/drivers/mmc/host/sdhci-esdhc-imx.c
+++ b/drivers/mmc/host/sdhci-esdhc-imx.c
@@ -107,6 +107,7 @@
#define ESDHC_TUNING_START_TAP_DEFAULT 0x1
#define ESDHC_TUNING_START_TAP_MASK 0x7f
#define ESDHC_TUNING_CMD_CRC_CHECK_DISABLE (1 << 7)
+#define ESDHC_TUNING_STEP_DEFAULT 0x1
#define ESDHC_TUNING_STEP_MASK 0x00070000
#define ESDHC_TUNING_STEP_SHIFT 16
@@ -1346,7 +1347,7 @@ static void sdhci_esdhc_imx_hwinit(struct sdhci_host *host)
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host);
struct cqhci_host *cq_host = host->mmc->cqe_private;
- int tmp;
+ u32 tmp;
if (esdhc_is_usdhc(imx_data)) {
/*
@@ -1399,17 +1400,24 @@ static void sdhci_esdhc_imx_hwinit(struct sdhci_host *host)
if (imx_data->socdata->flags & ESDHC_FLAG_STD_TUNING) {
tmp = readl(host->ioaddr + ESDHC_TUNING_CTRL);
- tmp |= ESDHC_STD_TUNING_EN |
- ESDHC_TUNING_START_TAP_DEFAULT;
- if (imx_data->boarddata.tuning_start_tap) {
- tmp &= ~ESDHC_TUNING_START_TAP_MASK;
+ tmp |= ESDHC_STD_TUNING_EN;
+
+ /*
+ * ROM code or bootloader may configure the start tap
+ * and step, so clear them first.
+ */
+ tmp &= ~(ESDHC_TUNING_START_TAP_MASK | ESDHC_TUNING_STEP_MASK);
+ if (imx_data->boarddata.tuning_start_tap)
tmp |= imx_data->boarddata.tuning_start_tap;
- }
+ else
+ tmp |= ESDHC_TUNING_START_TAP_DEFAULT;
if (imx_data->boarddata.tuning_step) {
- tmp &= ~ESDHC_TUNING_STEP_MASK;
tmp |= imx_data->boarddata.tuning_step
<< ESDHC_TUNING_STEP_SHIFT;
+ } else {
+ tmp |= ESDHC_TUNING_STEP_DEFAULT
+ << ESDHC_TUNING_STEP_SHIFT;
}
/* Disable the CMD CRC check for tuning, if not, need to
diff --git a/drivers/mmc/host/sdhci_am654.c b/drivers/mmc/host/sdhci_am654.c
index 9661e01..b3d3cb6 100644
--- a/drivers/mmc/host/sdhci_am654.c
+++ b/drivers/mmc/host/sdhci_am654.c
@@ -369,7 +369,7 @@ static void sdhci_am654_write_b(struct sdhci_host *host, u8 val, int reg)
MAX_POWER_ON_TIMEOUT, false, host, val,
reg);
if (ret)
- dev_warn(mmc_dev(host->mmc), "Power on failed\n");
+ dev_info(mmc_dev(host->mmc), "Power on failed\n");
}
}
diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c
index ce6cb8b..032f2c0 100644
--- a/drivers/mmc/host/sunxi-mmc.c
+++ b/drivers/mmc/host/sunxi-mmc.c
@@ -1483,9 +1483,11 @@ static int sunxi_mmc_remove(struct platform_device *pdev)
struct sunxi_mmc_host *host = mmc_priv(mmc);
mmc_remove_host(mmc);
- pm_runtime_force_suspend(&pdev->dev);
- disable_irq(host->irq);
- sunxi_mmc_disable(host);
+ pm_runtime_disable(&pdev->dev);
+ if (!pm_runtime_status_suspended(&pdev->dev)) {
+ disable_irq(host->irq);
+ sunxi_mmc_disable(host);
+ }
dma_free_coherent(&pdev->dev, PAGE_SIZE, host->sg_cpu, host->sg_dma);
mmc_free_host(mmc);
diff --git a/drivers/mtd/nand/raw/meson_nand.c b/drivers/mtd/nand/raw/meson_nand.c
index b97adee..4fd20e7 100644
--- a/drivers/mtd/nand/raw/meson_nand.c
+++ b/drivers/mtd/nand/raw/meson_nand.c
@@ -172,6 +172,7 @@ struct meson_nfc {
dma_addr_t daddr;
dma_addr_t iaddr;
+ u32 info_bytes;
unsigned long assigned_cs;
};
@@ -499,6 +500,7 @@ static int meson_nfc_dma_buffer_setup(struct nand_chip *nand, void *databuf,
nfc->daddr, datalen, dir);
return ret;
}
+ nfc->info_bytes = infolen;
cmd = GENCMDIADDRL(NFC_CMD_AIL, nfc->iaddr);
writel(cmd, nfc->reg_base + NFC_REG_CMD);
@@ -516,8 +518,10 @@ static void meson_nfc_dma_buffer_release(struct nand_chip *nand,
struct meson_nfc *nfc = nand_get_controller_data(nand);
dma_unmap_single(nfc->dev, nfc->daddr, datalen, dir);
- if (infolen)
+ if (infolen) {
dma_unmap_single(nfc->dev, nfc->iaddr, infolen, dir);
+ nfc->info_bytes = 0;
+ }
}
static int meson_nfc_read_buf(struct nand_chip *nand, u8 *buf, int len)
@@ -706,6 +710,8 @@ static void meson_nfc_check_ecc_pages_valid(struct meson_nfc *nfc,
usleep_range(10, 15);
/* info is updated by nfc dma engine*/
smp_rmb();
+ dma_sync_single_for_cpu(nfc->dev, nfc->iaddr, nfc->info_bytes,
+ DMA_FROM_DEVICE);
ret = *info & ECC_COMPLETE;
} while (!ret);
}
diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
index d5dcc74..1e61c23 100644
--- a/drivers/mtd/spi-nor/core.c
+++ b/drivers/mtd/spi-nor/core.c
@@ -2247,6 +2247,15 @@ void spi_nor_set_erase_type(struct spi_nor_erase_type *erase, u32 size,
}
/**
+ * spi_nor_mask_erase_type() - mask out a SPI NOR erase type
+ * @erase: pointer to a structure that describes a SPI NOR erase type
+ */
+void spi_nor_mask_erase_type(struct spi_nor_erase_type *erase)
+{
+ erase->size = 0;
+}
+
+/**
* spi_nor_init_uniform_erase_map() - Initialize uniform erase map
* @map: the erase map of the SPI NOR
* @erase_mask: bitmask encoding erase types that can erase the entire
diff --git a/drivers/mtd/spi-nor/core.h b/drivers/mtd/spi-nor/core.h
index 3348e1d..7eb2090 100644
--- a/drivers/mtd/spi-nor/core.h
+++ b/drivers/mtd/spi-nor/core.h
@@ -538,6 +538,7 @@ void spi_nor_set_pp_settings(struct spi_nor_pp_command *pp, u8 opcode,
void spi_nor_set_erase_type(struct spi_nor_erase_type *erase, u32 size,
u8 opcode);
+void spi_nor_mask_erase_type(struct spi_nor_erase_type *erase);
struct spi_nor_erase_region *
spi_nor_region_next(struct spi_nor_erase_region *region);
void spi_nor_init_uniform_erase_map(struct spi_nor_erase_map *map,
diff --git a/drivers/mtd/spi-nor/sfdp.c b/drivers/mtd/spi-nor/sfdp.c
index c500c21..c787fda 100644
--- a/drivers/mtd/spi-nor/sfdp.c
+++ b/drivers/mtd/spi-nor/sfdp.c
@@ -874,7 +874,7 @@ static int spi_nor_init_non_uniform_erase_map(struct spi_nor *nor,
*/
for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++)
if (!(regions_erase_type & BIT(erase[i].idx)))
- spi_nor_set_erase_type(&erase[i], 0, 0xFF);
+ spi_nor_mask_erase_type(&erase[i]);
return 0;
}
@@ -1088,7 +1088,7 @@ static int spi_nor_parse_4bait(struct spi_nor *nor,
erase_type[i].opcode = (dwords[1] >>
erase_type[i].idx * 8) & 0xFF;
else
- spi_nor_set_erase_type(&erase_type[i], 0u, 0xFF);
+ spi_nor_mask_erase_type(&erase_type[i]);
}
/*
@@ -1220,7 +1220,7 @@ static int spi_nor_parse_sccr(struct spi_nor *nor,
le32_to_cpu_array(dwords, sccr_header->length);
- if (FIELD_GET(SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE, dwords[22]))
+ if (FIELD_GET(SCCR_DWORD22_OCTAL_DTR_EN_VOLATILE, dwords[21]))
nor->flags |= SNOR_F_IO_MODE_EN_VOLATILE;
out:
diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
index ee82dcd..f3684b3 100644
--- a/drivers/mtd/spi-nor/spansion.c
+++ b/drivers/mtd/spi-nor/spansion.c
@@ -15,8 +15,13 @@
#define SPINOR_REG_CYPRESS_CFR3V 0x00800004
#define SPINOR_REG_CYPRESS_CFR3V_PGSZ BIT(4) /* Page size. */
#define SPINOR_REG_CYPRESS_CFR5V 0x00800006
-#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN 0x3
-#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS 0
+#define SPINOR_REG_CYPRESS_CFR5_BIT6 BIT(6)
+#define SPINOR_REG_CYPRESS_CFR5_DDR BIT(1)
+#define SPINOR_REG_CYPRESS_CFR5_OPI BIT(0)
+#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_EN \
+ (SPINOR_REG_CYPRESS_CFR5_BIT6 | SPINOR_REG_CYPRESS_CFR5_DDR | \
+ SPINOR_REG_CYPRESS_CFR5_OPI)
+#define SPINOR_REG_CYPRESS_CFR5V_OCT_DTR_DS SPINOR_REG_CYPRESS_CFR5_BIT6
#define SPINOR_OP_CYPRESS_RD_FAST 0xee
/**
diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
index a32050f..3499ff2 100644
--- a/drivers/mtd/ubi/build.c
+++ b/drivers/mtd/ubi/build.c
@@ -468,6 +468,7 @@ static int uif_init(struct ubi_device *ubi)
err = ubi_add_volume(ubi, ubi->volumes[i]);
if (err) {
ubi_err(ubi, "cannot add volume %d", i);
+ ubi->volumes[i] = NULL;
goto out_volumes;
}
}
@@ -663,6 +664,12 @@ static int io_init(struct ubi_device *ubi, int max_beb_per1024)
ubi->ec_hdr_alsize = ALIGN(UBI_EC_HDR_SIZE, ubi->hdrs_min_io_size);
ubi->vid_hdr_alsize = ALIGN(UBI_VID_HDR_SIZE, ubi->hdrs_min_io_size);
+ if (ubi->vid_hdr_offset && ((ubi->vid_hdr_offset + UBI_VID_HDR_SIZE) >
+ ubi->vid_hdr_alsize)) {
+ ubi_err(ubi, "VID header offset %d too large.", ubi->vid_hdr_offset);
+ return -EINVAL;
+ }
+
dbg_gen("min_io_size %d", ubi->min_io_size);
dbg_gen("max_write_size %d", ubi->max_write_size);
dbg_gen("hdrs_min_io_size %d", ubi->hdrs_min_io_size);
diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
index 053ab52..69592be 100644
--- a/drivers/mtd/ubi/fastmap-wl.c
+++ b/drivers/mtd/ubi/fastmap-wl.c
@@ -146,13 +146,15 @@ void ubi_refill_pools(struct ubi_device *ubi)
if (ubi->fm_anchor) {
wl_tree_add(ubi->fm_anchor, &ubi->free);
ubi->free_count++;
+ ubi->fm_anchor = NULL;
}
- /*
- * All available PEBs are in ubi->free, now is the time to get
- * the best anchor PEBs.
- */
- ubi->fm_anchor = ubi_wl_get_fm_peb(ubi, 1);
+ if (!ubi->fm_disabled)
+ /*
+ * All available PEBs are in ubi->free, now is the time to get
+ * the best anchor PEBs.
+ */
+ ubi->fm_anchor = ubi_wl_get_fm_peb(ubi, 1);
for (;;) {
enough = 0;
diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
index 6ea95ad..d79323e 100644
--- a/drivers/mtd/ubi/vmt.c
+++ b/drivers/mtd/ubi/vmt.c
@@ -464,7 +464,7 @@ int ubi_resize_volume(struct ubi_volume_desc *desc, int reserved_pebs)
for (i = 0; i < -pebs; i++) {
err = ubi_eba_unmap_leb(ubi, vol, reserved_pebs + i);
if (err)
- goto out_acc;
+ goto out_free;
}
spin_lock(&ubi->volumes_lock);
ubi->rsvd_pebs += pebs;
@@ -512,8 +512,10 @@ int ubi_resize_volume(struct ubi_volume_desc *desc, int reserved_pebs)
ubi->avail_pebs += pebs;
spin_unlock(&ubi->volumes_lock);
}
+ return err;
+
out_free:
- kfree(new_eba_tbl);
+ ubi_eba_destroy_table(new_eba_tbl);
return err;
}
@@ -580,6 +582,7 @@ int ubi_add_volume(struct ubi_device *ubi, struct ubi_volume *vol)
if (err) {
ubi_err(ubi, "cannot add character device for volume %d, error %d",
vol_id, err);
+ vol_release(&vol->dev);
return err;
}
@@ -590,15 +593,14 @@ int ubi_add_volume(struct ubi_device *ubi, struct ubi_volume *vol)
vol->dev.groups = volume_dev_groups;
dev_set_name(&vol->dev, "%s_%d", ubi->ubi_name, vol->vol_id);
err = device_register(&vol->dev);
- if (err)
- goto out_cdev;
+ if (err) {
+ cdev_del(&vol->cdev);
+ put_device(&vol->dev);
+ return err;
+ }
self_check_volumes(ubi);
return err;
-
-out_cdev:
- cdev_del(&vol->cdev);
- return err;
}
/**
diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index afcdacb..2ee0e60 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -886,8 +886,11 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
err = do_sync_erase(ubi, e1, vol_id, lnum, 0);
if (err) {
- if (e2)
+ if (e2) {
+ spin_lock(&ubi->wl_lock);
wl_entry_destroy(ubi, e2);
+ spin_unlock(&ubi->wl_lock);
+ }
goto out_ro;
}
@@ -969,11 +972,11 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
spin_lock(&ubi->wl_lock);
ubi->move_from = ubi->move_to = NULL;
ubi->move_to_put = ubi->wl_scheduled = 0;
+ wl_entry_destroy(ubi, e1);
+ wl_entry_destroy(ubi, e2);
spin_unlock(&ubi->wl_lock);
ubi_free_vid_buf(vidb);
- wl_entry_destroy(ubi, e1);
- wl_entry_destroy(ubi, e2);
out_ro:
ubi_ro_mode(ubi);
@@ -1120,14 +1123,18 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
/* Re-schedule the LEB for erasure */
err1 = schedule_erase(ubi, e, vol_id, lnum, 0, false);
if (err1) {
+ spin_lock(&ubi->wl_lock);
wl_entry_destroy(ubi, e);
+ spin_unlock(&ubi->wl_lock);
err = err1;
goto out_ro;
}
return err;
}
+ spin_lock(&ubi->wl_lock);
wl_entry_destroy(ubi, e);
+ spin_unlock(&ubi->wl_lock);
if (err != -EIO)
/*
* If this is not %-EIO, we have no idea what to do. Scheduling
@@ -1243,6 +1250,18 @@ int ubi_wl_put_peb(struct ubi_device *ubi, int vol_id, int lnum,
retry:
spin_lock(&ubi->wl_lock);
e = ubi->lookuptbl[pnum];
+ if (!e) {
+ /*
+ * This wl entry has been removed due to an error in another
+ * process (e.g. the wear-leveling worker). That process
+ * (except __erase_worker, which cannot run concurrently with
+ * ubi_wl_put_peb) will set ubi ro_mode at the same time,
+ * so just ignore this wl entry.
+ */
+ spin_unlock(&ubi->wl_lock);
+ up_read(&ubi->fm_protect);
+ return 0;
+ }
if (e == ubi->move_from) {
/*
* User is putting the physical eraseblock which was selected to
diff --git a/drivers/net/bonding/bond_debugfs.c b/drivers/net/bonding/bond_debugfs.c
index 4f9b4a1..5940945 100644
--- a/drivers/net/bonding/bond_debugfs.c
+++ b/drivers/net/bonding/bond_debugfs.c
@@ -76,7 +76,7 @@ void bond_debug_reregister(struct bonding *bond)
d = debugfs_rename(bonding_debug_root, bond->debug_dir,
bonding_debug_root, bond->dev->name);
- if (d) {
+ if (!IS_ERR(d)) {
bond->debug_dir = d;
} else {
netdev_warn(bond->dev, "failed to reregister, so just unregister old one\n");
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 4562989..e1dc94f 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1744,6 +1744,19 @@ void bond_lower_state_changed(struct slave *slave)
slave_err(bond_dev, slave_dev, "Error: %s\n", errmsg); \
} while (0)
+/* The bonding driver uses ether_setup() to convert a master bond device
+ * to ARPHRD_ETHER, which resets the target netdevice's flags, so we always
+ * have to restore the IFF_MASTER flag, and only restore IFF_SLAVE if it was set.
+ */
+static void bond_ether_setup(struct net_device *bond_dev)
+{
+ unsigned int slave_flag = bond_dev->flags & IFF_SLAVE;
+
+ ether_setup(bond_dev);
+ bond_dev->flags |= IFF_MASTER | slave_flag;
+ bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+}
+
/* enslave device <slave> to bond device <master> */
int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
struct netlink_ext_ack *extack)
@@ -1835,10 +1848,8 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
if (slave_dev->type != ARPHRD_ETHER)
bond_setup_by_slave(bond_dev, slave_dev);
- else {
- ether_setup(bond_dev);
- bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
- }
+ else
+ bond_ether_setup(bond_dev);
call_netdevice_notifiers(NETDEV_POST_TYPE_CHANGE,
bond_dev);
@@ -2256,9 +2267,7 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
eth_hw_addr_random(bond_dev);
if (bond_dev->type != ARPHRD_ETHER) {
dev_close(bond_dev);
- ether_setup(bond_dev);
- bond_dev->flags |= IFF_MASTER;
- bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+ bond_ether_setup(bond_dev);
}
}
diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
index 9ed048c..1abdf88 100644
--- a/drivers/net/can/usb/esd_usb2.c
+++ b/drivers/net/can/usb/esd_usb2.c
@@ -278,7 +278,6 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
cf->data[2] |= CAN_ERR_PROT_STUFF;
break;
default:
- cf->data[3] = ecc & SJA1000_ECC_SEG;
break;
}
@@ -286,6 +285,9 @@ static void esd_usb2_rx_event(struct esd_usb2_net_priv *priv,
if (!(ecc & SJA1000_ECC_DIR))
cf->data[2] |= CAN_ERR_PROT_TX;
+ /* Bit stream position in the CAN frame where the error was detected */
+ cf->data[3] = ecc & SJA1000_ECC_SEG;
+
if (priv->can.state == CAN_STATE_ERROR_WARNING ||
priv->can.state == CAN_STATE_ERROR_PASSIVE) {
cf->data[1] = (txerr > rxerr) ?
diff --git a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
index 6cc65bf..562105b 100644
--- a/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
+++ b/drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
@@ -545,6 +545,7 @@ static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
u8 cmd_no, int channel)
{
struct kvaser_cmd *cmd;
+ size_t cmd_len;
int err;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
@@ -552,6 +553,7 @@ static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
return -ENOMEM;
cmd->header.cmd_no = cmd_no;
+ cmd_len = kvaser_usb_hydra_cmd_size(cmd);
if (channel < 0) {
kvaser_usb_hydra_set_cmd_dest_he
(cmd, KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL);
@@ -568,7 +570,7 @@ static int kvaser_usb_hydra_send_simple_cmd(struct kvaser_usb *dev,
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
- err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
+ err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
if (err)
goto end;
@@ -584,6 +586,7 @@ kvaser_usb_hydra_send_simple_cmd_async(struct kvaser_usb_net_priv *priv,
{
struct kvaser_cmd *cmd;
struct kvaser_usb *dev = priv->dev;
+ size_t cmd_len;
int err;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_ATOMIC);
@@ -591,14 +594,14 @@ kvaser_usb_hydra_send_simple_cmd_async(struct kvaser_usb_net_priv *priv,
return -ENOMEM;
cmd->header.cmd_no = cmd_no;
+ cmd_len = kvaser_usb_hydra_cmd_size(cmd);
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
- err = kvaser_usb_send_cmd_async(priv, cmd,
- kvaser_usb_hydra_cmd_size(cmd));
+ err = kvaser_usb_send_cmd_async(priv, cmd, cmd_len);
if (err)
kfree(cmd);
@@ -742,6 +745,7 @@ static int kvaser_usb_hydra_get_single_capability(struct kvaser_usb *dev,
{
struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
struct kvaser_cmd *cmd;
+ size_t cmd_len;
u32 value = 0;
u32 mask = 0;
u16 cap_cmd_res;
@@ -753,13 +757,14 @@ static int kvaser_usb_hydra_get_single_capability(struct kvaser_usb *dev,
return -ENOMEM;
cmd->header.cmd_no = CMD_GET_CAPABILITIES_REQ;
+ cmd_len = kvaser_usb_hydra_cmd_size(cmd);
cmd->cap_req.cap_cmd = cpu_to_le16(cap_cmd_req);
kvaser_usb_hydra_set_cmd_dest_he(cmd, card_data->hydra.sysdbg_he);
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
- err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
+ err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
if (err)
goto end;
@@ -1582,6 +1587,7 @@ static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
struct kvaser_usb *dev = priv->dev;
struct kvaser_usb_net_hydra_priv *hydra = priv->sub_priv;
struct kvaser_cmd *cmd;
+ size_t cmd_len;
int err;
if (!hydra)
@@ -1592,6 +1598,7 @@ static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
return -ENOMEM;
cmd->header.cmd_no = CMD_GET_BUSPARAMS_REQ;
+ cmd_len = kvaser_usb_hydra_cmd_size(cmd);
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
@@ -1601,7 +1608,7 @@ static int kvaser_usb_hydra_get_busparams(struct kvaser_usb_net_priv *priv,
reinit_completion(&priv->get_busparams_comp);
- err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
+ err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
if (err)
return err;
@@ -1628,6 +1635,7 @@ static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
struct kvaser_cmd *cmd;
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
+ size_t cmd_len;
int err;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
@@ -1635,6 +1643,7 @@ static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
return -ENOMEM;
cmd->header.cmd_no = CMD_SET_BUSPARAMS_REQ;
+ cmd_len = kvaser_usb_hydra_cmd_size(cmd);
memcpy(&cmd->set_busparams_req.busparams_nominal, busparams,
sizeof(cmd->set_busparams_req.busparams_nominal));
@@ -1643,7 +1652,7 @@ static int kvaser_usb_hydra_set_bittiming(const struct net_device *netdev,
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
- err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
+ err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
kfree(cmd);
@@ -1656,6 +1665,7 @@ static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
struct kvaser_cmd *cmd;
struct kvaser_usb_net_priv *priv = netdev_priv(netdev);
struct kvaser_usb *dev = priv->dev;
+ size_t cmd_len;
int err;
cmd = kcalloc(1, sizeof(struct kvaser_cmd), GFP_KERNEL);
@@ -1663,6 +1673,7 @@ static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
return -ENOMEM;
cmd->header.cmd_no = CMD_SET_BUSPARAMS_FD_REQ;
+ cmd_len = kvaser_usb_hydra_cmd_size(cmd);
memcpy(&cmd->set_busparams_req.busparams_data, busparams,
sizeof(cmd->set_busparams_req.busparams_data));
@@ -1680,7 +1691,7 @@ static int kvaser_usb_hydra_set_data_bittiming(const struct net_device *netdev,
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
- err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
+ err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
kfree(cmd);
@@ -1808,6 +1819,7 @@ static int kvaser_usb_hydra_get_software_info(struct kvaser_usb *dev)
static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
{
struct kvaser_cmd *cmd;
+ size_t cmd_len;
int err;
u32 flags;
struct kvaser_usb_dev_card_data *card_data = &dev->card_data;
@@ -1817,6 +1829,7 @@ static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
return -ENOMEM;
cmd->header.cmd_no = CMD_GET_SOFTWARE_DETAILS_REQ;
+ cmd_len = kvaser_usb_hydra_cmd_size(cmd);
cmd->sw_detail_req.use_ext_cmd = 1;
kvaser_usb_hydra_set_cmd_dest_he
(cmd, KVASER_USB_HYDRA_HE_ADDRESS_ILLEGAL);
@@ -1824,7 +1837,7 @@ static int kvaser_usb_hydra_get_software_details(struct kvaser_usb *dev)
kvaser_usb_hydra_set_cmd_transid
(cmd, kvaser_usb_hydra_get_next_transid(dev));
- err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
+ err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
if (err)
goto end;
@@ -1942,6 +1955,7 @@ static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
{
struct kvaser_usb *dev = priv->dev;
struct kvaser_cmd *cmd;
+ size_t cmd_len;
int err;
if ((priv->can.ctrlmode &
@@ -1957,6 +1971,7 @@ static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
return -ENOMEM;
cmd->header.cmd_no = CMD_SET_DRIVERMODE_REQ;
+ cmd_len = kvaser_usb_hydra_cmd_size(cmd);
kvaser_usb_hydra_set_cmd_dest_he
(cmd, dev->card_data.hydra.channel_to_he[priv->channel]);
kvaser_usb_hydra_set_cmd_transid
@@ -1966,7 +1981,7 @@ static int kvaser_usb_hydra_set_opt_mode(const struct kvaser_usb_net_priv *priv)
else
cmd->set_ctrlmode.mode = KVASER_USB_HYDRA_CTRLMODE_NORMAL;
- err = kvaser_usb_send_cmd(dev, cmd, kvaser_usb_hydra_cmd_size(cmd));
+ err = kvaser_usb_send_cmd(dev, cmd, cmd_len);
kfree(cmd);
return err;
diff --git a/drivers/net/dsa/b53/b53_mmap.c b/drivers/net/dsa/b53/b53_mmap.c
index ae4c79d..3388f62 100644
--- a/drivers/net/dsa/b53/b53_mmap.c
+++ b/drivers/net/dsa/b53/b53_mmap.c
@@ -263,7 +263,7 @@ static int b53_mmap_probe_of(struct platform_device *pdev,
if (of_property_read_u32(of_port, "reg", &reg))
continue;
- if (reg < B53_CPU_PORT)
+ if (reg < B53_N_PORTS)
pdata->enabled_ports |= BIT(reg);
}
diff --git a/drivers/net/dsa/microchip/ksz8863_smi.c b/drivers/net/dsa/microchip/ksz8863_smi.c
index 5883fa7..54f3e033 100644
--- a/drivers/net/dsa/microchip/ksz8863_smi.c
+++ b/drivers/net/dsa/microchip/ksz8863_smi.c
@@ -86,22 +86,16 @@ static const struct regmap_bus regmap_smi[] = {
{
.read = ksz8863_mdio_read,
.write = ksz8863_mdio_write,
- .max_raw_read = 1,
- .max_raw_write = 1,
},
{
.read = ksz8863_mdio_read,
.write = ksz8863_mdio_write,
.val_format_endian_default = REGMAP_ENDIAN_BIG,
- .max_raw_read = 2,
- .max_raw_write = 2,
},
{
.read = ksz8863_mdio_read,
.write = ksz8863_mdio_write,
.val_format_endian_default = REGMAP_ENDIAN_BIG,
- .max_raw_read = 4,
- .max_raw_write = 4,
}
};
@@ -112,7 +106,6 @@ static const struct regmap_config ksz8863_regmap_config[] = {
.pad_bits = 24,
.val_bits = 8,
.cache_type = REGCACHE_NONE,
- .use_single_read = 1,
.lock = ksz_regmap_lock,
.unlock = ksz_regmap_unlock,
},
@@ -122,7 +115,6 @@ static const struct regmap_config ksz8863_regmap_config[] = {
.pad_bits = 24,
.val_bits = 16,
.cache_type = REGCACHE_NONE,
- .use_single_read = 1,
.lock = ksz_regmap_lock,
.unlock = ksz_regmap_unlock,
},
@@ -132,7 +124,6 @@ static const struct regmap_config ksz8863_regmap_config[] = {
.pad_bits = 24,
.val_bits = 32,
.cache_type = REGCACHE_NONE,
- .use_single_read = 1,
.lock = ksz_regmap_lock,
.unlock = ksz_regmap_unlock,
}
diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
index 379b38c..bf788e1 100644
--- a/drivers/net/dsa/microchip/ksz9477.c
+++ b/drivers/net/dsa/microchip/ksz9477.c
@@ -675,10 +675,10 @@ static int ksz9477_port_fdb_del(struct dsa_switch *ds, int port,
ksz_read32(dev, REG_SW_ALU_VAL_D, &alu_table[3]);
/* clear forwarding port */
- alu_table[2] &= ~BIT(port);
+ alu_table[1] &= ~BIT(port);
/* if there is no port to forward, clear table */
- if ((alu_table[2] & ALU_V_PORT_MAP) == 0) {
+ if ((alu_table[1] & ALU_V_PORT_MAP) == 0) {
alu_table[0] = 0;
alu_table[1] = 0;
alu_table[2] = 0;
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
index 704ba46..dfea2ab 100644
--- a/drivers/net/dsa/mt7530.c
+++ b/drivers/net/dsa/mt7530.c
@@ -388,12 +388,38 @@ mt7530_fdb_write(struct mt7530_priv *priv, u16 vid,
mt7530_write(priv, MT7530_ATA1 + (i * 4), reg[i]);
}
-/* Setup TX circuit including relevant PAD and driving */
+/* Set up switch core clock for MT7530 */
+static void mt7530_pll_setup(struct mt7530_priv *priv)
+{
+ /* Disable core clock */
+ core_clear(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
+
+ /* Disable PLL */
+ core_write(priv, CORE_GSWPLL_GRP1, 0);
+
+ /* Set core clock into 500Mhz */
+ core_write(priv, CORE_GSWPLL_GRP2,
+ RG_GSWPLL_POSDIV_500M(1) |
+ RG_GSWPLL_FBKDIV_500M(25));
+
+ /* Enable PLL */
+ core_write(priv, CORE_GSWPLL_GRP1,
+ RG_GSWPLL_EN_PRE |
+ RG_GSWPLL_POSDIV_200M(2) |
+ RG_GSWPLL_FBKDIV_200M(32));
+
+ udelay(20);
+
+ /* Enable core clock */
+ core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_GSWCK_EN);
+}
+
+/* Setup port 6 interface mode and TRGMII TX circuit */
static int
mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
{
struct mt7530_priv *priv = ds->priv;
- u32 ncpo1, ssc_delta, trgint, i, xtal;
+ u32 ncpo1, ssc_delta, trgint, xtal;
xtal = mt7530_read(priv, MT7530_MHWTRAP) & HWTRAP_XTAL_MASK;
@@ -407,11 +433,13 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
switch (interface) {
case PHY_INTERFACE_MODE_RGMII:
trgint = 0;
- /* PLL frequency: 125MHz */
- ncpo1 = 0x0c80;
break;
case PHY_INTERFACE_MODE_TRGMII:
trgint = 1;
+ if (xtal == HWTRAP_XTAL_25MHZ)
+ ssc_delta = 0x57;
+ else
+ ssc_delta = 0x87;
if (priv->id == ID_MT7621) {
/* PLL frequency: 150MHz: 1.2GBit */
if (xtal == HWTRAP_XTAL_40MHZ)
@@ -431,61 +459,32 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
return -EINVAL;
}
- if (xtal == HWTRAP_XTAL_25MHZ)
- ssc_delta = 0x57;
- else
- ssc_delta = 0x87;
-
mt7530_rmw(priv, MT7530_P6ECR, P6_INTF_MODE_MASK,
P6_INTF_MODE(trgint));
- /* Lower Tx Driving for TRGMII path */
- for (i = 0 ; i < NUM_TRGMII_CTRL ; i++)
- mt7530_write(priv, MT7530_TRGMII_TD_ODT(i),
- TD_DM_DRVP(8) | TD_DM_DRVN(8));
+ if (trgint) {
+ /* Disable the MT7530 TRGMII clocks */
+ core_clear(priv, CORE_TRGMII_GSW_CLK_CG, REG_TRGMIICK_EN);
- /* Disable MT7530 core and TRGMII Tx clocks */
- core_clear(priv, CORE_TRGMII_GSW_CLK_CG,
- REG_GSWCK_EN | REG_TRGMIICK_EN);
+ /* Setup the MT7530 TRGMII Tx Clock */
+ core_write(priv, CORE_PLL_GROUP5, RG_LCDDS_PCW_NCPO1(ncpo1));
+ core_write(priv, CORE_PLL_GROUP6, RG_LCDDS_PCW_NCPO0(0));
+ core_write(priv, CORE_PLL_GROUP10, RG_LCDDS_SSC_DELTA(ssc_delta));
+ core_write(priv, CORE_PLL_GROUP11, RG_LCDDS_SSC_DELTA1(ssc_delta));
+ core_write(priv, CORE_PLL_GROUP4,
+ RG_SYSPLL_DDSFBK_EN | RG_SYSPLL_BIAS_EN |
+ RG_SYSPLL_BIAS_LPF_EN);
+ core_write(priv, CORE_PLL_GROUP2,
+ RG_SYSPLL_EN_NORMAL | RG_SYSPLL_VODEN |
+ RG_SYSPLL_POSDIV(1));
+ core_write(priv, CORE_PLL_GROUP7,
+ RG_LCDDS_PCW_NCPO_CHG | RG_LCCDS_C(3) |
+ RG_LCDDS_PWDB | RG_LCDDS_ISO_EN);
- /* Setup core clock for MT7530 */
- /* Disable PLL */
- core_write(priv, CORE_GSWPLL_GRP1, 0);
+ /* Enable the MT7530 TRGMII clocks */
+ core_set(priv, CORE_TRGMII_GSW_CLK_CG, REG_TRGMIICK_EN);
+ }
- /* Set core clock into 500Mhz */
- core_write(priv, CORE_GSWPLL_GRP2,
- RG_GSWPLL_POSDIV_500M(1) |
- RG_GSWPLL_FBKDIV_500M(25));
-
- /* Enable PLL */
- core_write(priv, CORE_GSWPLL_GRP1,
- RG_GSWPLL_EN_PRE |
- RG_GSWPLL_POSDIV_200M(2) |
- RG_GSWPLL_FBKDIV_200M(32));
-
- /* Setup the MT7530 TRGMII Tx Clock */
- core_write(priv, CORE_PLL_GROUP5, RG_LCDDS_PCW_NCPO1(ncpo1));
- core_write(priv, CORE_PLL_GROUP6, RG_LCDDS_PCW_NCPO0(0));
- core_write(priv, CORE_PLL_GROUP10, RG_LCDDS_SSC_DELTA(ssc_delta));
- core_write(priv, CORE_PLL_GROUP11, RG_LCDDS_SSC_DELTA1(ssc_delta));
- core_write(priv, CORE_PLL_GROUP4,
- RG_SYSPLL_DDSFBK_EN | RG_SYSPLL_BIAS_EN |
- RG_SYSPLL_BIAS_LPF_EN);
- core_write(priv, CORE_PLL_GROUP2,
- RG_SYSPLL_EN_NORMAL | RG_SYSPLL_VODEN |
- RG_SYSPLL_POSDIV(1));
- core_write(priv, CORE_PLL_GROUP7,
- RG_LCDDS_PCW_NCPO_CHG | RG_LCCDS_C(3) |
- RG_LCDDS_PWDB | RG_LCDDS_ISO_EN);
-
- /* Enable MT7530 core and TRGMII Tx clocks */
- core_set(priv, CORE_TRGMII_GSW_CLK_CG,
- REG_GSWCK_EN | REG_TRGMIICK_EN);
-
- if (!trgint)
- for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
- mt7530_rmw(priv, MT7530_TRGMII_RD(i),
- RD_TAP_MASK, RD_TAP(16));
return 0;
}
@@ -1290,14 +1289,26 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port)
if (!priv->ports[port].pvid)
mt7530_rmw(priv, MT7530_PVC_P(port), ACC_FRM_MASK,
MT7530_VLAN_ACC_TAGGED);
- }
- /* Set the port as a user port which is to be able to recognize VID
- * from incoming packets before fetching entry within the VLAN table.
- */
- mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
- VLAN_ATTR(MT7530_VLAN_USER) |
- PVC_EG_TAG(MT7530_VLAN_EG_DISABLED));
+ /* Set the port as a user port so that it is able to recognize
+ * the VID from incoming packets before fetching the entry within
+ * the VLAN table.
+ */
+ mt7530_rmw(priv, MT7530_PVC_P(port),
+ VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
+ VLAN_ATTR(MT7530_VLAN_USER) |
+ PVC_EG_TAG(MT7530_VLAN_EG_DISABLED));
+ } else {
+ /* Also set CPU ports to the "user" VLAN port attribute, to
+ * allow VLAN classification, but keep the EG_TAG attribute as
+ * "consistent" (i.o.w. don't change its value) for packets
+ * received by the switch from the CPU, so that tagged packets
+ * are forwarded to user ports as tagged, and untagged as
+ * untagged.
+ */
+ mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK,
+ VLAN_ATTR(MT7530_VLAN_USER));
+ }
}
static void
@@ -2151,7 +2162,18 @@ mt7530_setup(struct dsa_switch *ds)
SYS_CTRL_PHY_RST | SYS_CTRL_SW_RST |
SYS_CTRL_REG_RST);
- /* Enable Port 6 only; P5 as GMAC5 which currently is not supported */
+ mt7530_pll_setup(priv);
+
+ /* Lower Tx driving for TRGMII path */
+ for (i = 0; i < NUM_TRGMII_CTRL; i++)
+ mt7530_write(priv, MT7530_TRGMII_TD_ODT(i),
+ TD_DM_DRVP(8) | TD_DM_DRVN(8));
+
+ for (i = 0; i < NUM_TRGMII_CTRL; i++)
+ mt7530_rmw(priv, MT7530_TRGMII_RD(i),
+ RD_TAP_MASK, RD_TAP(16));
+
+ /* Enable port 6 */
val = mt7530_read(priv, MT7530_MHWTRAP);
val &= ~MHWTRAP_P6_DIS & ~MHWTRAP_PHY_ACCESS;
val |= MHWTRAP_MANUAL;
diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index f9efd0c..bc363fc 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -2912,9 +2912,14 @@ static int mv88e6xxx_setup_port(struct mv88e6xxx_chip *chip, int port)
* If this is the upstream port for this switch, enable
* forwarding of unknown unicasts and multicasts.
*/
- reg = MV88E6XXX_PORT_CTL0_IGMP_MLD_SNOOP |
- MV88E6185_PORT_CTL0_USE_TAG | MV88E6185_PORT_CTL0_USE_IP |
+ reg = MV88E6185_PORT_CTL0_USE_TAG | MV88E6185_PORT_CTL0_USE_IP |
MV88E6XXX_PORT_CTL0_STATE_FORWARDING;
+ /* Forward any IPv4 IGMP or IPv6 MLD frames received
+ * by a USER port to the CPU port to allow snooping.
+ */
+ if (dsa_is_user_port(ds, port))
+ reg |= MV88E6XXX_PORT_CTL0_IGMP_MLD_SNOOP;
+
err = mv88e6xxx_port_write(chip, port, MV88E6XXX_PORT_CTL0, reg);
if (err)
return err;
@@ -3054,7 +3059,7 @@ static int mv88e6xxx_get_max_mtu(struct dsa_switch *ds, int port)
return 10240 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
else if (chip->info->ops->set_max_frame_size)
return 1632 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
- return 1522 - VLAN_ETH_HLEN - EDSA_HLEN - ETH_FCS_LEN;
+ return ETH_DATA_LEN;
}
static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
@@ -3062,6 +3067,17 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
struct mv88e6xxx_chip *chip = ds->priv;
int ret = 0;
+ /* For families where we don't know how to alter the MTU,
+ * just accept any value up to ETH_DATA_LEN
+ */
+ if (!chip->info->ops->port_set_jumbo_size &&
+ !chip->info->ops->set_max_frame_size) {
+ if (new_mtu > ETH_DATA_LEN)
+ return -EINVAL;
+
+ return 0;
+ }
+
if (dsa_is_dsa_port(ds, port) || dsa_is_cpu_port(ds, port))
new_mtu += EDSA_HLEN;
@@ -3070,9 +3086,6 @@ static int mv88e6xxx_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
ret = chip->info->ops->port_set_jumbo_size(chip, port, new_mtu);
else if (chip->info->ops->set_max_frame_size)
ret = chip->info->ops->set_max_frame_size(chip, new_mtu);
- else
- if (new_mtu > 1522)
- ret = -EINVAL;
mv88e6xxx_reg_unlock(chip);
return ret;
@@ -4954,7 +4967,7 @@ static const struct mv88e6xxx_ops mv88e6393x_ops = {
* .port_set_upstream_port method.
*/
.set_egress_port = mv88e6393x_set_egress_port,
- .watchdog_ops = &mv88e6390_watchdog_ops,
+ .watchdog_ops = &mv88e6393x_watchdog_ops,
.mgmt_rsvd2cpu = mv88e6393x_port_mgmt_rsvd2cpu,
.pot_clear = mv88e6xxx_g2_pot_clear,
.reset = mv88e6352_g1_reset,
diff --git a/drivers/net/dsa/mv88e6xxx/global2.c b/drivers/net/dsa/mv88e6xxx/global2.c
index fa65ecd..ec49939 100644
--- a/drivers/net/dsa/mv88e6xxx/global2.c
+++ b/drivers/net/dsa/mv88e6xxx/global2.c
@@ -931,6 +931,26 @@ const struct mv88e6xxx_irq_ops mv88e6390_watchdog_ops = {
.irq_free = mv88e6390_watchdog_free,
};
+static int mv88e6393x_watchdog_action(struct mv88e6xxx_chip *chip, int irq)
+{
+ mv88e6390_watchdog_action(chip, irq);
+
+ /* Fix for clearing the force WD event bit.
+ * Unreleased erratum on mv88e6393x.
+ */
+ mv88e6xxx_g2_write(chip, MV88E6390_G2_WDOG_CTL,
+ MV88E6390_G2_WDOG_CTL_UPDATE |
+ MV88E6390_G2_WDOG_CTL_PTR_EVENT);
+
+ return IRQ_HANDLED;
+}
+
+const struct mv88e6xxx_irq_ops mv88e6393x_watchdog_ops = {
+ .irq_action = mv88e6393x_watchdog_action,
+ .irq_setup = mv88e6390_watchdog_setup,
+ .irq_free = mv88e6390_watchdog_free,
+};
+
static irqreturn_t mv88e6xxx_g2_watchdog_thread_fn(int irq, void *dev_id)
{
struct mv88e6xxx_chip *chip = dev_id;
diff --git a/drivers/net/dsa/mv88e6xxx/global2.h b/drivers/net/dsa/mv88e6xxx/global2.h
index f3e2757..89ba09b 100644
--- a/drivers/net/dsa/mv88e6xxx/global2.h
+++ b/drivers/net/dsa/mv88e6xxx/global2.h
@@ -361,6 +361,7 @@ int mv88e6xxx_g2_device_mapping_write(struct mv88e6xxx_chip *chip, int target,
extern const struct mv88e6xxx_irq_ops mv88e6097_watchdog_ops;
extern const struct mv88e6xxx_irq_ops mv88e6250_watchdog_ops;
extern const struct mv88e6xxx_irq_ops mv88e6390_watchdog_ops;
+extern const struct mv88e6xxx_irq_ops mv88e6393x_watchdog_ops;
extern const struct mv88e6xxx_avb_ops mv88e6165_avb_ops;
extern const struct mv88e6xxx_avb_ops mv88e6352_avb_ops;
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
index d5fd49d..decc1c0 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
@@ -524,19 +524,28 @@ static void xgbe_disable_vxlan(struct xgbe_prv_data *pdata)
netif_dbg(pdata, drv, pdata->netdev, "VXLAN acceleration disabled\n");
}
+static unsigned int xgbe_get_fc_queue_count(struct xgbe_prv_data *pdata)
+{
+ unsigned int max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
+
+ /* From MAC ver 30H the TFCR is per priority, instead of per queue */
+ if (XGMAC_GET_BITS(pdata->hw_feat.version, MAC_VR, SNPSVER) >= 0x30)
+ return max_q_count;
+ else
+ return min_t(unsigned int, pdata->tx_q_count, max_q_count);
+}
+
static int xgbe_disable_tx_flow_control(struct xgbe_prv_data *pdata)
{
- unsigned int max_q_count, q_count;
unsigned int reg, reg_val;
- unsigned int i;
+ unsigned int i, q_count;
/* Clear MTL flow control */
for (i = 0; i < pdata->rx_q_count; i++)
XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, EHFC, 0);
/* Clear MAC flow control */
- max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
- q_count = min_t(unsigned int, pdata->tx_q_count, max_q_count);
+ q_count = xgbe_get_fc_queue_count(pdata);
reg = MAC_Q0TFCR;
for (i = 0; i < q_count; i++) {
reg_val = XGMAC_IOREAD(pdata, reg);
@@ -553,9 +562,8 @@ static int xgbe_enable_tx_flow_control(struct xgbe_prv_data *pdata)
{
struct ieee_pfc *pfc = pdata->pfc;
struct ieee_ets *ets = pdata->ets;
- unsigned int max_q_count, q_count;
unsigned int reg, reg_val;
- unsigned int i;
+ unsigned int i, q_count;
/* Set MTL flow control */
for (i = 0; i < pdata->rx_q_count; i++) {
@@ -579,8 +587,7 @@ static int xgbe_enable_tx_flow_control(struct xgbe_prv_data *pdata)
}
/* Set MAC flow control */
- max_q_count = XGMAC_MAX_FLOW_CONTROL_QUEUES;
- q_count = min_t(unsigned int, pdata->tx_q_count, max_q_count);
+ q_count = xgbe_get_fc_queue_count(pdata);
reg = MAC_Q0TFCR;
for (i = 0; i < q_count; i++) {
reg_val = XGMAC_IOREAD(pdata, reg);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
index 0c5c1b1..43fdd11 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
@@ -496,6 +496,7 @@ static enum xgbe_an xgbe_an73_tx_training(struct xgbe_prv_data *pdata,
reg |= XGBE_KR_TRAINING_ENABLE;
reg |= XGBE_KR_TRAINING_START;
XMDIO_WRITE(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, reg);
+ pdata->kr_start_time = jiffies;
netif_dbg(pdata, link, pdata->netdev,
"KR training initiated\n");
@@ -632,6 +633,8 @@ static enum xgbe_an xgbe_an73_incompat_link(struct xgbe_prv_data *pdata)
xgbe_switch_mode(pdata);
+ pdata->an_result = XGBE_AN_READY;
+
xgbe_an_restart(pdata);
return XGBE_AN_INCOMPAT_LINK;
@@ -1275,9 +1278,30 @@ static bool xgbe_phy_aneg_done(struct xgbe_prv_data *pdata)
static void xgbe_check_link_timeout(struct xgbe_prv_data *pdata)
{
unsigned long link_timeout;
+ unsigned long kr_time;
+ int wait;
link_timeout = pdata->link_check + (XGBE_LINK_TIMEOUT * HZ);
if (time_after(jiffies, link_timeout)) {
+ if ((xgbe_cur_mode(pdata) == XGBE_MODE_KR) &&
+ pdata->phy.autoneg == AUTONEG_ENABLE) {
+ /* AN restart should not happen while KR training is in progress.
+ * The while loop ensures no AN restart during KR training,
+ * waits up to 500ms, and an AN restart is triggered only if KR
+ * training has failed.
+ */
+ wait = XGBE_KR_TRAINING_WAIT_ITER;
+ while (wait--) {
+ kr_time = pdata->kr_start_time +
+ msecs_to_jiffies(XGBE_AN_MS_TIMEOUT);
+ if (time_after(jiffies, kr_time))
+ break;
+ /* AN restart is not required if the AN result is COMPLETE */
+ if (pdata->an_result == XGBE_AN_COMPLETE)
+ return;
+ usleep_range(10000, 11000);
+ }
+ }
netif_dbg(pdata, link, pdata->netdev, "AN link timeout\n");
xgbe_phy_config_aneg(pdata);
}
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
index 3305979..e0b8f3c4 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
+++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
@@ -289,6 +289,7 @@
/* Auto-negotiation */
#define XGBE_AN_MS_TIMEOUT 500
#define XGBE_LINK_TIMEOUT 5
+#define XGBE_KR_TRAINING_WAIT_ITER 50
#define XGBE_SGMII_AN_LINK_STATUS BIT(1)
#define XGBE_SGMII_AN_LINK_SPEED (BIT(2) | BIT(3))
@@ -1253,6 +1254,7 @@ struct xgbe_prv_data {
unsigned int parallel_detect;
unsigned int fec_ability;
unsigned long an_start;
+ unsigned long kr_start_time;
enum xgbe_an_mode an_mode;
/* I2C support */
diff --git a/drivers/net/ethernet/broadcom/bgmac-bcma.c b/drivers/net/ethernet/broadcom/bgmac-bcma.c
index 0ce28bc..92453e6 100644
--- a/drivers/net/ethernet/broadcom/bgmac-bcma.c
+++ b/drivers/net/ethernet/broadcom/bgmac-bcma.c
@@ -228,12 +228,12 @@ static int bgmac_probe(struct bcma_device *core)
bgmac->feature_flags |= BGMAC_FEAT_CLKCTLST;
bgmac->feature_flags |= BGMAC_FEAT_FLW_CTRL1;
bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_PHY;
- if (ci->pkg == BCMA_PKG_ID_BCM47188 ||
- ci->pkg == BCMA_PKG_ID_BCM47186) {
+ if ((ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM47186) ||
+ (ci->id == BCMA_CHIP_ID_BCM53572 && ci->pkg == BCMA_PKG_ID_BCM47188)) {
bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_RGMII;
bgmac->feature_flags |= BGMAC_FEAT_IOST_ATTACHED;
}
- if (ci->pkg == BCMA_PKG_ID_BCM5358)
+ if (ci->id == BCMA_CHIP_ID_BCM5357 && ci->pkg == BCMA_PKG_ID_BCM5358)
bgmac->feature_flags |= BGMAC_FEAT_SW_TYPE_EPHYRMII;
break;
case BCMA_CHIP_ID_BCM53573:
diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
index fa2a43d..f8fd65a 100644
--- a/drivers/net/ethernet/broadcom/bgmac.c
+++ b/drivers/net/ethernet/broadcom/bgmac.c
@@ -890,13 +890,13 @@ static void bgmac_chip_reset_idm_config(struct bgmac *bgmac)
if (iost & BGMAC_BCMA_IOST_ATTACHED) {
flags = BGMAC_BCMA_IOCTL_SW_CLKEN;
- if (!bgmac->has_robosw)
+ if (bgmac->in_init || !bgmac->has_robosw)
flags |= BGMAC_BCMA_IOCTL_SW_RESET;
}
bgmac_clk_enable(bgmac, flags);
}
- if (iost & BGMAC_BCMA_IOST_ATTACHED && !bgmac->has_robosw)
+ if (iost & BGMAC_BCMA_IOST_ATTACHED && (bgmac->in_init || !bgmac->has_robosw))
bgmac_idm_write(bgmac, BCMA_IOCTL,
bgmac_idm_read(bgmac, BCMA_IOCTL) &
~BGMAC_BCMA_IOCTL_SW_RESET);
@@ -1490,6 +1490,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
struct net_device *net_dev = bgmac->net_dev;
int err;
+ bgmac->in_init = true;
+
bgmac_chip_intrs_off(bgmac);
net_dev->irq = bgmac->irq;
@@ -1542,6 +1544,8 @@ int bgmac_enet_probe(struct bgmac *bgmac)
/* Omit FCS from max MTU size */
net_dev->max_mtu = BGMAC_RX_MAX_FRAME_SIZE - ETH_FCS_LEN;
+ bgmac->in_init = false;
+
err = register_netdev(bgmac->net_dev);
if (err) {
dev_err(bgmac->dev, "Cannot register net device\n");
diff --git a/drivers/net/ethernet/broadcom/bgmac.h b/drivers/net/ethernet/broadcom/bgmac.h
index 110088e..99a3441 100644
--- a/drivers/net/ethernet/broadcom/bgmac.h
+++ b/drivers/net/ethernet/broadcom/bgmac.h
@@ -474,6 +474,8 @@ struct bgmac {
int irq;
u32 int_mask;
+ bool in_init;
+
/* Current MAC state */
int mac_speed;
int mac_duplex;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 117f5cc..4ef90e0 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -223,12 +223,12 @@ static const struct pci_device_id bnxt_pci_tbl[] = {
{ PCI_VDEVICE(BROADCOM, 0x1750), .driver_data = BCM57508 },
{ PCI_VDEVICE(BROADCOM, 0x1751), .driver_data = BCM57504 },
{ PCI_VDEVICE(BROADCOM, 0x1752), .driver_data = BCM57502 },
- { PCI_VDEVICE(BROADCOM, 0x1800), .driver_data = BCM57508_NPAR },
+ { PCI_VDEVICE(BROADCOM, 0x1800), .driver_data = BCM57502_NPAR },
{ PCI_VDEVICE(BROADCOM, 0x1801), .driver_data = BCM57504_NPAR },
- { PCI_VDEVICE(BROADCOM, 0x1802), .driver_data = BCM57502_NPAR },
- { PCI_VDEVICE(BROADCOM, 0x1803), .driver_data = BCM57508_NPAR },
+ { PCI_VDEVICE(BROADCOM, 0x1802), .driver_data = BCM57508_NPAR },
+ { PCI_VDEVICE(BROADCOM, 0x1803), .driver_data = BCM57502_NPAR },
{ PCI_VDEVICE(BROADCOM, 0x1804), .driver_data = BCM57504_NPAR },
- { PCI_VDEVICE(BROADCOM, 0x1805), .driver_data = BCM57502_NPAR },
+ { PCI_VDEVICE(BROADCOM, 0x1805), .driver_data = BCM57508_NPAR },
{ PCI_VDEVICE(BROADCOM, 0xd802), .driver_data = BCM58802 },
{ PCI_VDEVICE(BROADCOM, 0xd804), .driver_data = BCM58804 },
#ifdef CONFIG_BNXT_SRIOV
@@ -2999,7 +2999,7 @@ static int bnxt_alloc_ring(struct bnxt *bp, struct bnxt_ring_mem_info *rmem)
static void bnxt_free_tpa_info(struct bnxt *bp)
{
- int i;
+ int i, j;
for (i = 0; i < bp->rx_nr_rings; i++) {
struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
@@ -3007,8 +3007,10 @@ static void bnxt_free_tpa_info(struct bnxt *bp)
kfree(rxr->rx_tpa_idx_map);
rxr->rx_tpa_idx_map = NULL;
if (rxr->rx_tpa) {
- kfree(rxr->rx_tpa[0].agg_arr);
- rxr->rx_tpa[0].agg_arr = NULL;
+ for (j = 0; j < bp->max_tpa; j++) {
+ kfree(rxr->rx_tpa[j].agg_arr);
+ rxr->rx_tpa[j].agg_arr = NULL;
+ }
}
kfree(rxr->rx_tpa);
rxr->rx_tpa = NULL;
@@ -3017,14 +3019,13 @@ static void bnxt_free_tpa_info(struct bnxt *bp)
static int bnxt_alloc_tpa_info(struct bnxt *bp)
{
- int i, j, total_aggs = 0;
+ int i, j;
bp->max_tpa = MAX_TPA;
if (bp->flags & BNXT_FLAG_CHIP_P5) {
if (!bp->max_tpa_v2)
return 0;
bp->max_tpa = max_t(u16, bp->max_tpa_v2, MAX_TPA_P5);
- total_aggs = bp->max_tpa * MAX_SKB_FRAGS;
}
for (i = 0; i < bp->rx_nr_rings; i++) {
@@ -3038,12 +3039,12 @@ static int bnxt_alloc_tpa_info(struct bnxt *bp)
if (!(bp->flags & BNXT_FLAG_CHIP_P5))
continue;
- agg = kcalloc(total_aggs, sizeof(*agg), GFP_KERNEL);
- rxr->rx_tpa[0].agg_arr = agg;
- if (!agg)
- return -ENOMEM;
- for (j = 1; j < bp->max_tpa; j++)
- rxr->rx_tpa[j].agg_arr = agg + j * MAX_SKB_FRAGS;
+ for (j = 0; j < bp->max_tpa; j++) {
+ agg = kcalloc(MAX_SKB_FRAGS, sizeof(*agg), GFP_KERNEL);
+ if (!agg)
+ return -ENOMEM;
+ rxr->rx_tpa[j].agg_arr = agg;
+ }
rxr->rx_tpa_idx_map = kzalloc(sizeof(*rxr->rx_tpa_idx_map),
GFP_KERNEL);
if (!rxr->rx_tpa_idx_map)
@@ -9023,10 +9024,14 @@ int bnxt_reserve_rings(struct bnxt *bp, bool irq_re_init)
netdev_err(bp->dev, "ring reservation/IRQ init failure rc: %d\n", rc);
return rc;
}
- if (tcs && (bp->tx_nr_rings_per_tc * tcs != bp->tx_nr_rings)) {
+ if (tcs && (bp->tx_nr_rings_per_tc * tcs !=
+ bp->tx_nr_rings - bp->tx_nr_rings_xdp)) {
netdev_err(bp->dev, "tx ring reservation failure\n");
netdev_reset_tc(bp->dev);
- bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
+ if (bp->tx_nr_rings_xdp)
+ bp->tx_nr_rings_per_tc = bp->tx_nr_rings_xdp;
+ else
+ bp->tx_nr_rings_per_tc = bp->tx_nr_rings;
return -ENOMEM;
}
return 0;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index e5874c8..ae4695f 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1202,6 +1202,7 @@ struct bnxt_link_info {
#define BNXT_LINK_SPEED_40GB PORT_PHY_QCFG_RESP_LINK_SPEED_40GB
#define BNXT_LINK_SPEED_50GB PORT_PHY_QCFG_RESP_LINK_SPEED_50GB
#define BNXT_LINK_SPEED_100GB PORT_PHY_QCFG_RESP_LINK_SPEED_100GB
+#define BNXT_LINK_SPEED_200GB PORT_PHY_QCFG_RESP_LINK_SPEED_200GB
u16 support_speeds;
u16 support_pam4_speeds;
u16 auto_link_speeds; /* fw adv setting */
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 586311a..bc9812a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -1670,6 +1670,8 @@ u32 bnxt_fw_to_ethtool_speed(u16 fw_link_speed)
return SPEED_50000;
case BNXT_LINK_SPEED_100GB:
return SPEED_100000;
+ case BNXT_LINK_SPEED_200GB:
+ return SPEED_200000;
default:
return SPEED_UNKNOWN;
}
@@ -3504,6 +3506,7 @@ static void bnxt_self_test(struct net_device *dev, struct ethtool_test *etest,
bnxt_ulp_stop(bp);
rc = bnxt_close_nic(bp, true, false);
if (rc) {
+ etest->flags |= ETH_TEST_FL_FAILED;
bnxt_ulp_start(bp, rc);
return;
}
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index ea13917..92cd291 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -2267,6 +2267,14 @@ static unsigned int bcmgenet_desc_rx(struct bcmgenet_rx_ring *ring,
__func__, p_index, ring->c_index,
ring->read_ptr, dma_length_status);
+ if (unlikely(len > RX_BUF_LENGTH)) {
+ netif_err(priv, rx_status, dev, "oversized packet\n");
+ dev->stats.rx_length_errors++;
+ dev->stats.rx_errors++;
+ dev_kfree_skb_any(skb);
+ goto next;
+ }
+
if (unlikely(!(dma_flag & DMA_EOP) || !(dma_flag & DMA_SOP))) {
netif_err(priv, rx_status, dev,
"dropping fragmented packet!\n");
diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
index dbd2ede..f61f832 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
@@ -165,15 +165,6 @@ void bcmgenet_phy_power_set(struct net_device *dev, bool enable)
static void bcmgenet_moca_phy_setup(struct bcmgenet_priv *priv)
{
- u32 reg;
-
- if (!GENET_IS_V5(priv)) {
- /* Speed settings are set in bcmgenet_mii_setup() */
- reg = bcmgenet_sys_readl(priv, SYS_PORT_CTRL);
- reg |= LED_ACT_SOURCE_MAC;
- bcmgenet_sys_writel(priv, reg, SYS_PORT_CTRL);
- }
-
if (priv->hw_params->flags & GENET_HAS_MOCA_LINK_DET)
fixed_phy_set_link_update(priv->dev->phydev,
bcmgenet_fixed_phy_link_update);
@@ -206,6 +197,8 @@ int bcmgenet_mii_config(struct net_device *dev, bool init)
if (!phy_name) {
phy_name = "MoCA";
+ if (!GENET_IS_V5(priv))
+ port_ctrl |= LED_ACT_SOURCE_MAC;
bcmgenet_moca_phy_setup(priv);
}
break;
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 8aab074..50f86be 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -11176,7 +11176,7 @@ static void tg3_reset_task(struct work_struct *work)
rtnl_lock();
tg3_full_lock(tp, 0);
- if (!netif_running(tp->dev)) {
+ if (tp->pcierr_recovery || !netif_running(tp->dev)) {
tg3_flag_clear(tp, RESET_TASK_PENDING);
tg3_full_unlock(tp);
rtnl_unlock();
@@ -18111,6 +18111,9 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,
netdev_info(netdev, "PCI I/O error detected\n");
+ /* Want to make sure that the reset task doesn't run */
+ tg3_reset_task_cancel(tp);
+
rtnl_lock();
/* Could be second call or maybe we don't have netdev yet */
@@ -18127,9 +18130,6 @@ static pci_ers_result_t tg3_io_error_detected(struct pci_dev *pdev,
tg3_timer_stop(tp);
- /* Want to make sure that the reset task doesn't run */
- tg3_reset_task_cancel(tp);
-
netif_device_detach(netdev);
/* Clean up software state, even if MMIO is blocked */
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 61efb23..906c5bb 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -2154,7 +2154,6 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
bool cloned = skb_cloned(*skb) || skb_header_cloned(*skb) ||
skb_is_nonlinear(*skb);
int padlen = ETH_ZLEN - (*skb)->len;
- int headroom = skb_headroom(*skb);
int tailroom = skb_tailroom(*skb);
struct sk_buff *nskb;
u32 fcs;
@@ -2168,9 +2167,6 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
/* FCS could be appeded to tailroom. */
if (tailroom >= ETH_FCS_LEN)
goto add_fcs;
- /* FCS could be appeded by moving data to headroom. */
- else if (!cloned && headroom + tailroom >= ETH_FCS_LEN)
- padlen = 0;
/* No room for FCS, need to reallocate skb. */
else
padlen = ETH_FCS_LEN;
@@ -2179,10 +2175,7 @@ static int macb_pad_and_fcs(struct sk_buff **skb, struct net_device *ndev)
padlen += ETH_FCS_LEN;
}
- if (!cloned && headroom + tailroom >= padlen) {
- (*skb)->data = memmove((*skb)->head, (*skb)->data, (*skb)->len);
- skb_set_tail_pointer(*skb, (*skb)->len);
- } else {
+ if (cloned || tailroom < padlen) {
nskb = skb_copy_expand(*skb, 0, padlen, GFP_ATOMIC);
if (!nskb)
return -ENOMEM;
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 685d2d8..fe5fc2b 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2395,6 +2395,9 @@ static int dpaa_eth_poll(struct napi_struct *napi, int budget)
cleaned = qman_p_poll_dqrr(np->p, budget);
+ if (np->xdp_act & XDP_REDIRECT)
+ xdp_do_flush();
+
if (cleaned < budget) {
napi_complete_done(napi, cleaned);
qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
@@ -2402,9 +2405,6 @@ static int dpaa_eth_poll(struct napi_struct *napi, int budget)
qman_p_irqsource_add(np->p, QM_PIRQ_DQRI);
}
- if (np->xdp_act & XDP_REDIRECT)
- xdp_do_flush();
-
return cleaned;
}
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
index 5899139a..c48d410 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -1597,10 +1597,15 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
if (rx_cleaned >= budget ||
txconf_cleaned >= DPAA2_ETH_TXCONF_PER_NAPI) {
work_done = budget;
+ if (ch->xdp.res & XDP_REDIRECT)
+ xdp_do_flush();
goto out;
}
} while (store_cleaned);
+ if (ch->xdp.res & XDP_REDIRECT)
+ xdp_do_flush();
+
/* We didn't consume the entire budget, so finish napi and
* re-enable data availability notifications
*/
@@ -1625,9 +1630,7 @@ static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
txc_fq->dq_bytes = 0;
}
- if (ch->xdp.res & XDP_REDIRECT)
- xdp_do_flush_map();
- else if (rx_cleaned && ch->xdp.res & XDP_TX)
+ if (rx_cleaned && ch->xdp.res & XDP_TX)
dpaa2_eth_xdp_tx_flush(priv, ch, &priv->fq[flowid]);
return work_done;
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index adccb14..8b7c934 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -2000,14 +2000,14 @@ static void enetc_tx_onestep_tstamp(struct work_struct *work)
priv = container_of(work, struct enetc_ndev_priv, tx_onestep_tstamp);
- netif_tx_lock(priv->ndev);
+ netif_tx_lock_bh(priv->ndev);
clear_bit_unlock(ENETC_TX_ONESTEP_TSTAMP_IN_PROGRESS, &priv->flags);
skb = skb_dequeue(&priv->tx_skbs);
if (skb)
enetc_start_xmit(skb, priv->ndev);
- netif_tx_unlock(priv->ndev);
+ netif_tx_unlock_bh(priv->ndev);
}
static void enetc_tx_onestep_tstamp_init(struct enetc_ndev_priv *priv)
diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
index 1607354..458149a 100644
--- a/drivers/net/ethernet/google/gve/gve.h
+++ b/drivers/net/ethernet/google/gve/gve.h
@@ -47,6 +47,8 @@
#define GVE_RX_BUFFER_SIZE_DQO 2048
+#define GVE_GQ_TX_MIN_PKT_DESC_BYTES 182
+
/* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
struct gve_rx_desc_queue {
struct gve_rx_desc *desc_ring; /* the descriptor ring */
diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
index 0bceb5e..e25322e 100644
--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
+++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
@@ -533,7 +533,10 @@ static int gve_get_link_ksettings(struct net_device *netdev,
struct ethtool_link_ksettings *cmd)
{
struct gve_priv *priv = netdev_priv(netdev);
- int err = gve_adminq_report_link_speed(priv);
+ int err = 0;
+
+ if (priv->link_speed == 0)
+ err = gve_adminq_report_link_speed(priv);
cmd->base.speed = priv->link_speed;
return err;
diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
index 4888bf0..5e11b82 100644
--- a/drivers/net/ethernet/google/gve/gve_tx.c
+++ b/drivers/net/ethernet/google/gve/gve_tx.c
@@ -284,8 +284,8 @@ static inline int gve_skb_fifo_bytes_required(struct gve_tx_ring *tx,
int bytes;
int hlen;
- hlen = skb_is_gso(skb) ? skb_checksum_start_offset(skb) +
- tcp_hdrlen(skb) : skb_headlen(skb);
+ hlen = skb_is_gso(skb) ? skb_checksum_start_offset(skb) + tcp_hdrlen(skb) :
+ min_t(int, GVE_GQ_TX_MIN_PKT_DESC_BYTES, skb->len);
pad_bytes = gve_tx_fifo_pad_alloc_one_frag(&tx->tx_fifo,
hlen);
@@ -454,13 +454,11 @@ static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, st
pkt_desc = &tx->desc[idx];
l4_hdr_offset = skb_checksum_start_offset(skb);
- /* If the skb is gso, then we want the tcp header in the first segment
- * otherwise we want the linear portion of the skb (which will contain
- * the checksum because skb->csum_start and skb->csum_offset are given
- * relative to skb->head) in the first segment.
+ /* If the skb is gso, then we want the tcp header alone in the first segment
+ * otherwise we want the minimum required by the gVNIC spec.
*/
hlen = is_gso ? l4_hdr_offset + tcp_hdrlen(skb) :
- skb_headlen(skb);
+ min_t(int, GVE_GQ_TX_MIN_PKT_DESC_BYTES, skb->len);
info->skb = skb;
/* We don't want to split the header, so if necessary, pad to the end
diff --git a/drivers/net/ethernet/intel/i40e/i40e_diag.c b/drivers/net/ethernet/intel/i40e/i40e_diag.c
index ef4d3762..ca229b0 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_diag.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_diag.c
@@ -44,7 +44,7 @@ static i40e_status i40e_diag_reg_pattern_test(struct i40e_hw *hw,
return 0;
}
-struct i40e_diag_reg_test_info i40e_reg_list[] = {
+const struct i40e_diag_reg_test_info i40e_reg_list[] = {
/* offset mask elements stride */
{I40E_QTX_CTL(0), 0x0000FFBF, 1,
I40E_QTX_CTL(1) - I40E_QTX_CTL(0)},
@@ -78,27 +78,28 @@ i40e_status i40e_diag_reg_test(struct i40e_hw *hw)
{
i40e_status ret_code = 0;
u32 reg, mask;
+ u32 elements;
u32 i, j;
for (i = 0; i40e_reg_list[i].offset != 0 &&
!ret_code; i++) {
+ elements = i40e_reg_list[i].elements;
/* set actual reg range for dynamically allocated resources */
if (i40e_reg_list[i].offset == I40E_QTX_CTL(0) &&
hw->func_caps.num_tx_qp != 0)
- i40e_reg_list[i].elements = hw->func_caps.num_tx_qp;
+ elements = hw->func_caps.num_tx_qp;
if ((i40e_reg_list[i].offset == I40E_PFINT_ITRN(0, 0) ||
i40e_reg_list[i].offset == I40E_PFINT_ITRN(1, 0) ||
i40e_reg_list[i].offset == I40E_PFINT_ITRN(2, 0) ||
i40e_reg_list[i].offset == I40E_QINT_TQCTL(0) ||
i40e_reg_list[i].offset == I40E_QINT_RQCTL(0)) &&
hw->func_caps.num_msix_vectors != 0)
- i40e_reg_list[i].elements =
- hw->func_caps.num_msix_vectors - 1;
+ elements = hw->func_caps.num_msix_vectors - 1;
/* test register access */
mask = i40e_reg_list[i].mask;
- for (j = 0; j < i40e_reg_list[i].elements && !ret_code; j++) {
+ for (j = 0; j < elements && !ret_code; j++) {
reg = i40e_reg_list[i].offset +
(j * i40e_reg_list[i].stride);
ret_code = i40e_diag_reg_pattern_test(hw, reg, mask);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_diag.h b/drivers/net/ethernet/intel/i40e/i40e_diag.h
index c3340f3..1db7c6d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_diag.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_diag.h
@@ -20,7 +20,7 @@ struct i40e_diag_reg_test_info {
u32 stride; /* bytes between each element */
};
-extern struct i40e_diag_reg_test_info i40e_reg_list[];
+extern const struct i40e_diag_reg_test_info i40e_reg_list[];
i40e_status i40e_diag_reg_test(struct i40e_hw *hw);
i40e_status i40e_diag_eeprom_test(struct i40e_hw *hw);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index c013d86..85d48ef 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -2789,7 +2789,7 @@ static int i40e_change_mtu(struct net_device *netdev, int new_mtu)
struct i40e_pf *pf = vsi->back;
if (i40e_enabled_xdp_vsi(vsi)) {
- int frame_size = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+ int frame_size = new_mtu + I40E_PACKET_HDR_PAD;
if (frame_size > i40e_max_xdp_frame_size(vsi))
return -EINVAL;
@@ -13005,6 +13005,8 @@ static int i40e_ndo_bridge_setlink(struct net_device *dev,
}
br_spec = nlmsg_find_attr(nlh, sizeof(struct ifinfomsg), IFLA_AF_SPEC);
+ if (!br_spec)
+ return -EINVAL;
nla_for_each_nested(attr, br_spec, rem) {
__u16 mode;
@@ -15336,6 +15338,7 @@ static int i40e_init_recovery_mode(struct i40e_pf *pf, struct i40e_hw *hw)
int err;
int v_idx;
+ pci_set_drvdata(pf->pdev, pf);
pci_save_state(pf->pdev);
/* set up periodic task facility */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 8f5aad9..9787e79 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -170,10 +170,10 @@ static char *i40e_create_dummy_packet(u8 *dummy_packet, bool ipv4, u8 l4proto,
struct i40e_fdir_filter *data)
{
bool is_vlan = !!data->vlan_tag;
- struct vlan_hdr vlan;
- struct ipv6hdr ipv6;
- struct ethhdr eth;
- struct iphdr ip;
+ struct vlan_hdr vlan = {};
+ struct ipv6hdr ipv6 = {};
+ struct ethhdr eth = {};
+ struct iphdr ip = {};
u8 *tmp;
if (ipv4) {
diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c
index e9cc7f6..c423e73 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_common.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_common.c
@@ -661,7 +661,7 @@ struct iavf_rx_ptype_decoded iavf_ptype_lookup[BIT(8)] = {
/* Non Tunneled IPv6 */
IAVF_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
IAVF_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
- IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY3),
+ IAVF_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4),
IAVF_PTT_UNUSED_ENTRY(91),
IAVF_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
IAVF_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 82c4f11..3b62f37 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -3103,7 +3103,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad ether dest mask %pM\n",
match.mask->dst);
- return IAVF_ERR_CONFIG;
+ return -EINVAL;
}
}
@@ -3113,7 +3113,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad ether src mask %pM\n",
match.mask->src);
- return IAVF_ERR_CONFIG;
+ return -EINVAL;
}
}
@@ -3148,7 +3148,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad vlan mask %u\n",
match.mask->vlan_id);
- return IAVF_ERR_CONFIG;
+ return -EINVAL;
}
}
vf->mask.tcp_spec.vlan_id |= cpu_to_be16(0xffff);
@@ -3172,7 +3172,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad ip dst mask 0x%08x\n",
be32_to_cpu(match.mask->dst));
- return IAVF_ERR_CONFIG;
+ return -EINVAL;
}
}
@@ -3181,14 +3181,14 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
field_flags |= IAVF_CLOUD_FIELD_IIP;
} else {
dev_err(&adapter->pdev->dev, "Bad ip src mask 0x%08x\n",
- be32_to_cpu(match.mask->dst));
- return IAVF_ERR_CONFIG;
+ be32_to_cpu(match.mask->src));
+ return -EINVAL;
}
}
if (field_flags & IAVF_CLOUD_FIELD_TEN_ID) {
dev_info(&adapter->pdev->dev, "Tenant id not allowed for ip filter\n");
- return IAVF_ERR_CONFIG;
+ return -EINVAL;
}
if (match.key->dst) {
vf->mask.tcp_spec.dst_ip[0] |= cpu_to_be32(0xffffffff);
@@ -3209,7 +3209,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
if (ipv6_addr_any(&match.mask->dst)) {
dev_err(&adapter->pdev->dev, "Bad ipv6 dst mask 0x%02x\n",
IPV6_ADDR_ANY);
- return IAVF_ERR_CONFIG;
+ return -EINVAL;
}
/* src and dest IPv6 address should not be LOOPBACK
@@ -3219,7 +3219,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
ipv6_addr_loopback(&match.key->src)) {
dev_err(&adapter->pdev->dev,
"ipv6 addr should not be loopback\n");
- return IAVF_ERR_CONFIG;
+ return -EINVAL;
}
if (!ipv6_addr_any(&match.mask->dst) ||
!ipv6_addr_any(&match.mask->src))
@@ -3244,7 +3244,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad src port mask %u\n",
be16_to_cpu(match.mask->src));
- return IAVF_ERR_CONFIG;
+ return -EINVAL;
}
}
@@ -3254,7 +3254,7 @@ static int iavf_parse_cls_flower(struct iavf_adapter *adapter,
} else {
dev_err(&adapter->pdev->dev, "Bad dst port mask %u\n",
be16_to_cpu(match.mask->dst));
- return IAVF_ERR_CONFIG;
+ return -EINVAL;
}
}
if (match.key->dst) {
@@ -4213,6 +4213,11 @@ static void iavf_remove(struct pci_dev *pdev)
mutex_unlock(&adapter->crit_lock);
break;
}
+ /* Simply return if we already went through iavf_shutdown */
+ if (adapter->state == __IAVF_REMOVE) {
+ mutex_unlock(&adapter->crit_lock);
+ return;
+ }
mutex_unlock(&adapter->crit_lock);
usleep_range(500, 1000);
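The iavf flower-parsing hunks above switch the failure paths from the driver-private IAVF_ERR_CONFIG status to a standard negative errno, which is what the TC offload callers expect. A rough sketch of that convention with a made-up validator (not the driver's real code):

#include <errno.h>
#include <stdio.h>

/* Hypothetical parser: ndo/tc hooks are expected to return negative
 * errno values, not driver-private positive status codes.
 */
static int parse_vlan_mask(unsigned int mask)
{
	if (mask != 0xffff) {
		fprintf(stderr, "Bad vlan mask %u\n", mask);
		return -EINVAL;	/* standard errno, understood by the stack */
	}
	return 0;
}

int main(void)
{
	int ret = parse_vlan_mask(0x0fff);

	if (ret < 0)
		printf("filter rejected: %d\n", ret);
	return 0;
}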
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index e76e3df..643dbe5b 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -1061,7 +1061,7 @@ static inline void iavf_rx_hash(struct iavf_ring *ring,
cpu_to_le64((u64)IAVF_RX_DESC_FLTSTAT_RSS_HASH <<
IAVF_RX_DESC_STATUS_FLTSTAT_SHIFT);
- if (ring->netdev->features & NETIF_F_RXHASH)
+ if (!(ring->netdev->features & NETIF_F_RXHASH))
return;
if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) {
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 89bca2e..43fe912 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -399,6 +399,7 @@ enum ice_pf_flags {
ICE_FLAG_MDD_AUTO_RESET_VF,
ICE_FLAG_LINK_LENIENT_MODE_ENA,
ICE_FLAG_PLUG_AUX_DEV,
+ ICE_FLAG_UNPLUG_AUX_DEV,
ICE_FLAG_MTU_CHANGED,
ICE_PF_FLAGS_NBITS /* must be last */
};
@@ -641,7 +642,7 @@ void ice_set_ethtool_ops(struct net_device *netdev);
void ice_set_ethtool_safe_mode_ops(struct net_device *netdev);
u16 ice_get_avail_txq_count(struct ice_pf *pf);
u16 ice_get_avail_rxq_count(struct ice_pf *pf);
-int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx);
+int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked);
void ice_update_vsi_stats(struct ice_vsi *vsi);
void ice_update_pf_stats(struct ice_pf *pf);
int ice_up(struct ice_vsi *vsi);
@@ -706,16 +707,11 @@ static inline void ice_set_rdma_cap(struct ice_pf *pf)
*/
static inline void ice_clear_rdma_cap(struct ice_pf *pf)
{
- /* We can directly unplug aux device here only if the flag bit
- * ICE_FLAG_PLUG_AUX_DEV is not set because ice_unplug_aux_dev()
- * could race with ice_plug_aux_dev() called from
- * ice_service_task(). In this case we only clear that bit now and
- * aux device will be unplugged later once ice_plug_aux_device()
- * called from ice_service_task() finishes (see ice_service_task()).
+ /* defer unplug to service task to avoid RTNL lock and
+ * clear PLUG bit so that pending plugs don't interfere
*/
- if (!test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
- ice_unplug_aux_dev(pf);
-
+ clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags);
+ set_bit(ICE_FLAG_UNPLUG_AUX_DEV, pf->flags);
clear_bit(ICE_FLAG_RDMA_ENA, pf->flags);
clear_bit(ICE_FLAG_AUX_ENA, pf->flags);
}
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
index 926cf74..dd4195e 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
@@ -355,7 +355,7 @@ int ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked)
goto out;
}
- ice_pf_dcb_recfg(pf);
+ ice_pf_dcb_recfg(pf, false);
out:
ice_ena_vsi(pf_vsi, true);
@@ -644,12 +644,13 @@ static int ice_dcb_noncontig_cfg(struct ice_pf *pf)
/**
* ice_pf_dcb_recfg - Reconfigure all VEBs and VSIs
* @pf: pointer to the PF struct
+ * @locked: is adev device lock held
*
* Assumed caller has already disabled all VSIs before
* calling this function. Reconfiguring DCB based on
* local_dcbx_cfg.
*/
-void ice_pf_dcb_recfg(struct ice_pf *pf)
+void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked)
{
struct ice_dcbx_cfg *dcbcfg = &pf->hw.port_info->qos_cfg.local_dcbx_cfg;
struct iidc_event *event;
@@ -688,14 +689,16 @@ void ice_pf_dcb_recfg(struct ice_pf *pf)
if (vsi->type == ICE_VSI_PF)
ice_dcbnl_set_all(vsi);
}
- /* Notify the AUX drivers that TC change is finished */
- event = kzalloc(sizeof(*event), GFP_KERNEL);
- if (!event)
- return;
+ if (!locked) {
+ /* Notify the AUX drivers that TC change is finished */
+ event = kzalloc(sizeof(*event), GFP_KERNEL);
+ if (!event)
+ return;
- set_bit(IIDC_EVENT_AFTER_TC_CHANGE, event->type);
- ice_send_event_to_aux(pf, event);
- kfree(event);
+ set_bit(IIDC_EVENT_AFTER_TC_CHANGE, event->type);
+ ice_send_event_to_aux(pf, event);
+ kfree(event);
+ }
}
/**
@@ -943,7 +946,7 @@ ice_dcb_process_lldp_set_mib_change(struct ice_pf *pf,
}
/* changes in configuration update VSI */
- ice_pf_dcb_recfg(pf);
+ ice_pf_dcb_recfg(pf, false);
ice_ena_vsi(pf_vsi, true);
unlock_rtnl:
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.h b/drivers/net/ethernet/intel/ice/ice_dcb_lib.h
index 261b6e2..33a609e 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.h
@@ -23,7 +23,7 @@ u8 ice_dcb_get_tc(struct ice_vsi *vsi, int queue_index);
int
ice_pf_dcb_cfg(struct ice_pf *pf, struct ice_dcbx_cfg *new_cfg, bool locked);
int ice_dcb_bwchk(struct ice_pf *pf, struct ice_dcbx_cfg *dcbcfg);
-void ice_pf_dcb_recfg(struct ice_pf *pf);
+void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked);
void ice_vsi_cfg_dcb_rings(struct ice_vsi *vsi);
int ice_init_pf_dcb(struct ice_pf *pf, bool locked);
void ice_update_dcb_stats(struct ice_pf *pf);
@@ -113,7 +113,7 @@ ice_is_pfc_causing_hung_q(struct ice_pf __always_unused *pf,
return false;
}
-static inline void ice_pf_dcb_recfg(struct ice_pf *pf) { }
+static inline void ice_pf_dcb_recfg(struct ice_pf *pf, bool locked) { }
static inline void ice_vsi_cfg_dcb_rings(struct ice_vsi *vsi) { }
static inline void ice_update_dcb_stats(struct ice_pf *pf) { }
static inline void
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index f10d9c3..60f73e7 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -3393,7 +3393,9 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
struct ice_vsi *vsi = np->vsi;
struct ice_pf *pf = vsi->back;
int new_rx = 0, new_tx = 0;
+ bool locked = false;
u32 curr_combined;
+ int ret = 0;
/* do not support changing channels in Safe Mode */
if (ice_is_safe_mode(pf)) {
@@ -3442,15 +3444,33 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch)
return -EINVAL;
}
- ice_vsi_recfg_qs(vsi, new_rx, new_tx);
+ if (pf->adev) {
+ mutex_lock(&pf->adev_mutex);
+ device_lock(&pf->adev->dev);
+ locked = true;
+ if (pf->adev->dev.driver) {
+ netdev_err(dev, "Cannot change channels when RDMA is active\n");
+ ret = -EBUSY;
+ goto adev_unlock;
+ }
+ }
- if (!netif_is_rxfh_configured(dev))
- return ice_vsi_set_dflt_rss_lut(vsi, new_rx);
+ ice_vsi_recfg_qs(vsi, new_rx, new_tx, locked);
+
+ if (!netif_is_rxfh_configured(dev)) {
+ ret = ice_vsi_set_dflt_rss_lut(vsi, new_rx);
+ goto adev_unlock;
+ }
/* Update rss_size due to change in Rx queues */
vsi->rss_size = ice_get_valid_rss_size(&pf->hw, new_rx);
- return 0;
+adev_unlock:
+ if (locked) {
+ device_unlock(&pf->adev->dev);
+ mutex_unlock(&pf->adev_mutex);
+ }
+ return ret;
}
/**
@@ -3978,6 +3998,8 @@ ice_get_module_eeprom(struct net_device *netdev,
* SFP modules only ever use page 0.
*/
if (page == 0 || !(data[0x2] & 0x4)) {
+ u32 copy_len;
+
/* If i2c bus is busy due to slow page change or
* link management access, call can fail. This is normal.
* So we retry this a few times.
@@ -4001,8 +4023,8 @@ ice_get_module_eeprom(struct net_device *netdev,
}
/* Make sure we have enough room for the new block */
- if ((i + SFF_READ_BLOCK_SIZE) < ee->len)
- memcpy(data + i, value, SFF_READ_BLOCK_SIZE);
+ copy_len = min_t(u32, SFF_READ_BLOCK_SIZE, ee->len - i);
+ memcpy(data + i, value, copy_len);
}
}
return 0;
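The ice_get_module_eeprom change above clamps each block copy to the space left in the caller's buffer, so a full SFF read block can never run past ee->len. A small stand-alone sketch of the bounded-copy pattern (buffer sizes are illustrative only):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define READ_BLOCK_SIZE 8

static uint32_t min_u32(uint32_t a, uint32_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	uint8_t src[READ_BLOCK_SIZE] = { 1, 2, 3, 4, 5, 6, 7, 8 };
	uint8_t dst[10] = { 0 };
	uint32_t len = sizeof(dst);

	for (uint32_t i = 0; i < len; i += READ_BLOCK_SIZE) {
		/* never copy past the end of the caller's buffer */
		uint32_t copy_len = min_u32(READ_BLOCK_SIZE, len - i);

		memcpy(dst + i, src, copy_len);
	}
	printf("last byte copied: %u\n", dst[len - 1]);
	return 0;
}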
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index ffbba5f..bf9fe38 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -2156,18 +2156,15 @@ static void ice_service_task(struct work_struct *work)
}
}
- if (test_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags)) {
- /* Plug aux device per request */
- ice_plug_aux_dev(pf);
+ /* unplug aux dev per request; if an unplug request came in
+ * while processing a plug request, this will handle it
+ */
+ if (test_and_clear_bit(ICE_FLAG_UNPLUG_AUX_DEV, pf->flags))
+ ice_unplug_aux_dev(pf);
- /* Mark plugging as done but check whether unplug was
- * requested during ice_plug_aux_dev() call
- * (e.g. from ice_clear_rdma_cap()) and if so then
- * plug aux device.
- */
- if (!test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
- ice_unplug_aux_dev(pf);
- }
+ /* Plug aux device per request */
+ if (test_and_clear_bit(ICE_FLAG_PLUG_AUX_DEV, pf->flags))
+ ice_plug_aux_dev(pf);
if (test_and_clear_bit(ICE_FLAG_MTU_CHANGED, pf->flags)) {
struct iidc_event *event;
@@ -3776,12 +3773,13 @@ bool ice_is_wol_supported(struct ice_hw *hw)
* @vsi: VSI being changed
* @new_rx: new number of Rx queues
* @new_tx: new number of Tx queues
+ * @locked: is adev device_lock held
*
* Only change the number of queues if new_tx, or new_rx is non-0.
*
* Returns 0 on success.
*/
-int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx)
+int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx, bool locked)
{
struct ice_pf *pf = vsi->back;
int err = 0, timeout = 50;
@@ -3810,7 +3808,7 @@ int ice_vsi_recfg_qs(struct ice_vsi *vsi, int new_rx, int new_tx)
ice_vsi_close(vsi);
ice_vsi_rebuild(vsi, false);
- ice_pf_dcb_recfg(pf);
+ ice_pf_dcb_recfg(pf, locked);
ice_vsi_open(vsi);
done:
clear_bit(ICE_CFG_BUSY, pf->state);
@@ -5129,7 +5127,7 @@ static int __init ice_module_init(void)
pr_info("%s\n", ice_driver_string);
pr_info("%s\n", ice_copyright);
- ice_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME);
+ ice_wq = alloc_workqueue("%s", 0, 0, KBUILD_MODNAME);
if (!ice_wq) {
pr_err("Failed to create workqueue\n");
return -ENOMEM;
@@ -5476,15 +5474,12 @@ int ice_vsi_cfg(struct ice_vsi *vsi)
{
int err;
- if (vsi->netdev) {
+ if (vsi->netdev && vsi->type == ICE_VSI_PF) {
ice_set_rx_mode(vsi->netdev);
- if (vsi->type != ICE_VSI_LB) {
- err = ice_vsi_vlan_setup(vsi);
-
- if (err)
- return err;
- }
+ err = ice_vsi_vlan_setup(vsi);
+ if (err)
+ return err;
}
ice_vsi_cfg_dcb_rings(vsi);
@@ -5650,7 +5645,7 @@ static int ice_up_complete(struct ice_vsi *vsi)
if (vsi->port_info &&
(vsi->port_info->phy.link_info.link_info & ICE_AQ_LINK_UP) &&
- vsi->netdev) {
+ vsi->netdev && vsi->type == ICE_VSI_PF) {
ice_print_link_msg(vsi, true);
netif_tx_start_all_queues(vsi->netdev);
netif_carrier_on(vsi->netdev);
@@ -5660,7 +5655,9 @@ static int ice_up_complete(struct ice_vsi *vsi)
* set the baseline so counters are ready when interface is up
*/
ice_update_eth_stats(vsi);
- ice_service_task_schedule(pf);
+
+ if (vsi->type == ICE_VSI_PF)
+ ice_service_task_schedule(pf);
return 0;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
index 9b50e9e..4d7aa49 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -1090,7 +1090,7 @@ static void ice_ptp_set_caps(struct ice_pf *pf)
snprintf(info->name, sizeof(info->name) - 1, "%s-%s-clk",
dev_driver_string(dev), dev_name(dev));
info->owner = THIS_MODULE;
- info->max_adj = 999999999;
+ info->max_adj = 100000000;
info->adjtime = ice_ptp_adjtime;
info->adjfine = ice_ptp_adjfine;
info->gettimex64 = ice_ptp_gettimex64;
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index 2d9b102..0b61fde 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -2758,7 +2758,7 @@ static enum ice_status
ice_sched_assoc_vsi_to_agg(struct ice_port_info *pi, u32 agg_id,
u16 vsi_handle, unsigned long *tc_bitmap)
{
- struct ice_sched_agg_vsi_info *agg_vsi_info, *old_agg_vsi_info = NULL;
+ struct ice_sched_agg_vsi_info *agg_vsi_info, *iter, *old_agg_vsi_info = NULL;
struct ice_sched_agg_info *agg_info, *old_agg_info;
enum ice_status status = 0;
struct ice_hw *hw = pi->hw;
@@ -2776,11 +2776,13 @@ ice_sched_assoc_vsi_to_agg(struct ice_port_info *pi, u32 agg_id,
if (old_agg_info && old_agg_info != agg_info) {
struct ice_sched_agg_vsi_info *vtmp;
- list_for_each_entry_safe(old_agg_vsi_info, vtmp,
+ list_for_each_entry_safe(iter, vtmp,
&old_agg_info->agg_vsi_list,
list_entry)
- if (old_agg_vsi_info->vsi_handle == vsi_handle)
+ if (iter->vsi_handle == vsi_handle) {
+ old_agg_vsi_info = iter;
break;
+ }
}
/* check if entry already exist */
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
index eee180d..412deb3 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
@@ -732,6 +732,87 @@ static void ice_vc_fdir_rem_prof_all(struct ice_vf *vf)
}
/**
+ * ice_vc_fdir_reset_cnt_all - reset all FDIR counters for this VF FDIR
+ * @fdir: pointer to the VF FDIR structure
+ */
+static void ice_vc_fdir_reset_cnt_all(struct ice_vf_fdir *fdir)
+{
+ enum ice_fltr_ptype flow;
+
+ for (flow = ICE_FLTR_PTYPE_NONF_NONE;
+ flow < ICE_FLTR_PTYPE_MAX; flow++) {
+ fdir->fdir_fltr_cnt[flow][0] = 0;
+ fdir->fdir_fltr_cnt[flow][1] = 0;
+ }
+}
+
+/**
+ * ice_vc_fdir_has_prof_conflict
+ * @vf: pointer to the VF structure
+ * @conf: FDIR configuration for each filter
+ *
+ * Check if @conf has a conflicting profile with any existing profile
+ *
+ * Return: true if a conflicting profile is found, false otherwise.
+ */
+static bool
+ice_vc_fdir_has_prof_conflict(struct ice_vf *vf,
+ struct virtchnl_fdir_fltr_conf *conf)
+{
+ struct ice_fdir_fltr *desc;
+
+ list_for_each_entry(desc, &vf->fdir.fdir_rule_list, fltr_node) {
+ struct virtchnl_fdir_fltr_conf *existing_conf;
+ enum ice_fltr_ptype flow_type_a, flow_type_b;
+ struct ice_fdir_fltr *a, *b;
+
+ existing_conf = to_fltr_conf_from_desc(desc);
+ a = &existing_conf->input;
+ b = &conf->input;
+ flow_type_a = a->flow_type;
+ flow_type_b = b->flow_type;
+
+ /* No need to compare two rules with different tunnel types or
+ * with the same protocol type.
+ */
+ if (existing_conf->ttype != conf->ttype ||
+ flow_type_a == flow_type_b)
+ continue;
+
+ switch (flow_type_a) {
+ case ICE_FLTR_PTYPE_NONF_IPV4_UDP:
+ case ICE_FLTR_PTYPE_NONF_IPV4_TCP:
+ case ICE_FLTR_PTYPE_NONF_IPV4_SCTP:
+ if (flow_type_b == ICE_FLTR_PTYPE_NONF_IPV4_OTHER)
+ return true;
+ break;
+ case ICE_FLTR_PTYPE_NONF_IPV4_OTHER:
+ if (flow_type_b == ICE_FLTR_PTYPE_NONF_IPV4_UDP ||
+ flow_type_b == ICE_FLTR_PTYPE_NONF_IPV4_TCP ||
+ flow_type_b == ICE_FLTR_PTYPE_NONF_IPV4_SCTP)
+ return true;
+ break;
+ case ICE_FLTR_PTYPE_NONF_IPV6_UDP:
+ case ICE_FLTR_PTYPE_NONF_IPV6_TCP:
+ case ICE_FLTR_PTYPE_NONF_IPV6_SCTP:
+ if (flow_type_b == ICE_FLTR_PTYPE_NONF_IPV6_OTHER)
+ return true;
+ break;
+ case ICE_FLTR_PTYPE_NONF_IPV6_OTHER:
+ if (flow_type_b == ICE_FLTR_PTYPE_NONF_IPV6_UDP ||
+ flow_type_b == ICE_FLTR_PTYPE_NONF_IPV6_TCP ||
+ flow_type_b == ICE_FLTR_PTYPE_NONF_IPV6_SCTP)
+ return true;
+ break;
+ default:
+ break;
+ }
+ }
+
+ return false;
+}
+
+/**
* ice_vc_fdir_write_flow_prof
* @vf: pointer to the VF structure
* @flow: filter flow type
@@ -871,6 +952,13 @@ ice_vc_fdir_config_input_set(struct ice_vf *vf, struct virtchnl_fdir_add *fltr,
enum ice_fltr_ptype flow;
int ret;
+ ret = ice_vc_fdir_has_prof_conflict(vf, conf);
+ if (ret) {
+ dev_dbg(dev, "Found flow profile conflict for VF %d\n",
+ vf->vf_id);
+ return ret;
+ }
+
flow = input->flow_type;
ret = ice_vc_fdir_alloc_prof(vf, flow);
if (ret) {
@@ -2063,7 +2151,7 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
v_ret = VIRTCHNL_STATUS_SUCCESS;
stat->status = VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE;
dev_dbg(dev, "VF %d: set FDIR context failed\n", vf->vf_id);
- goto err_free_conf;
+ goto err_rem_entry;
}
ret = ice_vc_fdir_write_fltr(vf, conf, true, is_tun);
@@ -2072,15 +2160,16 @@ int ice_vc_add_fdir_fltr(struct ice_vf *vf, u8 *msg)
stat->status = VIRTCHNL_FDIR_FAILURE_RULE_NORESOURCE;
dev_err(dev, "VF %d: writing FDIR rule failed, ret:%d\n",
vf->vf_id, ret);
- goto err_rem_entry;
+ goto err_clr_irq;
}
exit:
kfree(stat);
return ret;
-err_rem_entry:
+err_clr_irq:
ice_vc_fdir_clear_irq_ctx(vf);
+err_rem_entry:
ice_vc_fdir_remove_entry(vf, conf, conf->flow_id);
err_free_conf:
devm_kfree(dev, conf);
@@ -2189,6 +2278,7 @@ void ice_vf_fdir_init(struct ice_vf *vf)
spin_lock_init(&fdir->ctx_lock);
fdir->ctx_irq.flags = 0;
fdir->ctx_done.flags = 0;
+ ice_vc_fdir_reset_cnt_all(fdir);
}
/**
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 60d8ef0..070be30 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -166,8 +166,6 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
}
netif_tx_stop_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
- ice_qvec_dis_irq(vsi, rx_ring, q_vector);
-
ice_fill_txq_meta(vsi, tx_ring, &txq_meta);
err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta);
if (err)
@@ -182,6 +180,8 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
if (err)
return err;
}
+ ice_qvec_dis_irq(vsi, rx_ring, q_vector);
+
err = ice_vsi_ctrl_one_rx_ring(vsi, false, q_idx, true);
if (err)
return err;
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index b246ff8..bff9649 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -3820,9 +3820,7 @@ static void igb_remove(struct pci_dev *pdev)
igb_release_hw_control(adapter);
#ifdef CONFIG_PCI_IOV
- rtnl_lock();
igb_disable_sriov(pdev);
- rtnl_unlock();
#endif
unregister_netdev(netdev);
diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
index d051918..ebd6d46 100644
--- a/drivers/net/ethernet/intel/igbvf/netdev.c
+++ b/drivers/net/ethernet/intel/igbvf/netdev.c
@@ -1074,7 +1074,7 @@ static int igbvf_request_msix(struct igbvf_adapter *adapter)
igbvf_intr_msix_rx, 0, adapter->rx_ring->name,
netdev);
if (err)
- goto out;
+ goto free_irq_tx;
adapter->rx_ring->itr_register = E1000_EITR(vector);
adapter->rx_ring->itr_val = adapter->current_itr;
@@ -1083,10 +1083,14 @@ static int igbvf_request_msix(struct igbvf_adapter *adapter)
err = request_irq(adapter->msix_entries[vector].vector,
igbvf_msix_other, 0, netdev->name, netdev);
if (err)
- goto out;
+ goto free_irq_rx;
igbvf_configure_msix(adapter);
return 0;
+free_irq_rx:
+ free_irq(adapter->msix_entries[--vector].vector, netdev);
+free_irq_tx:
+ free_irq(adapter->msix_entries[--vector].vector, netdev);
out:
return err;
}
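The igbvf_request_msix fix above unwinds already-acquired vectors when a later request_irq() call fails, using labels in reverse acquisition order. A self-contained sketch of that goto-unwind shape with placeholder resources (not the driver's IRQ API):

#include <stdio.h>

/* Hypothetical resources standing in for the tx/rx/other IRQs. */
static int request(const char *name, int fail)
{
	if (fail) {
		printf("request %s failed\n", name);
		return -1;
	}
	printf("requested %s\n", name);
	return 0;
}

static void release(const char *name)
{
	printf("released %s\n", name);
}

static int setup(void)
{
	int err;

	err = request("tx", 0);
	if (err)
		goto out;
	err = request("rx", 0);
	if (err)
		goto free_tx;
	err = request("other", 1);	/* simulate a late failure */
	if (err)
		goto free_rx;
	return 0;

free_rx:	/* unwind in reverse order of acquisition */
	release("rx");
free_tx:
	release("tx");
out:
	return err;
}

int main(void)
{
	return setup() ? 1 : 0;
}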
diff --git a/drivers/net/ethernet/intel/igbvf/vf.c b/drivers/net/ethernet/intel/igbvf/vf.c
index b8ba3f9..a47a2e3 100644
--- a/drivers/net/ethernet/intel/igbvf/vf.c
+++ b/drivers/net/ethernet/intel/igbvf/vf.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2009 - 2018 Intel Corporation. */
+#include <linux/etherdevice.h>
+
#include "vf.h"
static s32 e1000_check_for_link_vf(struct e1000_hw *hw);
@@ -131,11 +133,16 @@ static s32 e1000_reset_hw_vf(struct e1000_hw *hw)
/* set our "perm_addr" based on info provided by PF */
ret_val = mbx->ops.read_posted(hw, msgbuf, 3);
if (!ret_val) {
- if (msgbuf[0] == (E1000_VF_RESET |
- E1000_VT_MSGTYPE_ACK))
+ switch (msgbuf[0]) {
+ case E1000_VF_RESET | E1000_VT_MSGTYPE_ACK:
memcpy(hw->mac.perm_addr, addr, ETH_ALEN);
- else
+ break;
+ case E1000_VF_RESET | E1000_VT_MSGTYPE_NACK:
+ eth_zero_addr(hw->mac.perm_addr);
+ break;
+ default:
ret_val = -E1000_ERR_MAC_INIT;
+ }
}
}
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 3726c84..e255b0a 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -2892,7 +2892,9 @@ static bool igc_clean_tx_irq(struct igc_q_vector *q_vector, int napi_budget)
if (tx_buffer->next_to_watch &&
time_after(jiffies, tx_buffer->time_stamp +
(adapter->tx_timeout_factor * HZ)) &&
- !(rd32(IGC_STATUS) & IGC_STATUS_TXOFF)) {
+ !(rd32(IGC_STATUS) & IGC_STATUS_TXOFF) &&
+ (rd32(IGC_TDH(tx_ring->reg_idx)) !=
+ readl(tx_ring->tail))) {
/* detected Tx unit hang */
netdev_err(tx_ring->netdev,
"Detected Tx Unit Hang\n"
@@ -5020,6 +5022,24 @@ static int igc_change_mtu(struct net_device *netdev, int new_mtu)
}
/**
+ * igc_tx_timeout - Respond to a Tx Hang
+ * @netdev: network interface device structure
+ * @txqueue: queue number that timed out
+ **/
+static void igc_tx_timeout(struct net_device *netdev,
+ unsigned int __always_unused txqueue)
+{
+ struct igc_adapter *adapter = netdev_priv(netdev);
+ struct igc_hw *hw = &adapter->hw;
+
+ /* Do the reset outside of interrupt context */
+ adapter->tx_timeout_count++;
+ schedule_work(&adapter->reset_task);
+ wr32(IGC_EICS,
+ (adapter->eims_enable_mask & ~adapter->eims_other));
+}
+
+/**
* igc_get_stats64 - Get System Network Statistics
* @netdev: network interface device structure
* @stats: rtnl_link_stats64 pointer
@@ -5446,7 +5466,7 @@ static void igc_watchdog_task(struct work_struct *work)
case SPEED_100:
case SPEED_1000:
case SPEED_2500:
- adapter->tx_timeout_factor = 7;
+ adapter->tx_timeout_factor = 1;
break;
}
@@ -5931,18 +5951,18 @@ static bool validate_schedule(struct igc_adapter *adapter,
if (e->command != TC_TAPRIO_CMD_SET_GATES)
return false;
- for (i = 0; i < adapter->num_tx_queues; i++) {
- if (e->gate_mask & BIT(i))
+ for (i = 0; i < adapter->num_tx_queues; i++)
+ if (e->gate_mask & BIT(i)) {
queue_uses[i]++;
- /* There are limitations: A single queue cannot be
- * opened and closed multiple times per cycle unless the
- * gate stays open. Check for it.
- */
- if (queue_uses[i] > 1 &&
- !(prev->gate_mask & BIT(i)))
- return false;
- }
+ /* There are limitations: A single queue cannot
+ * be opened and closed multiple times per cycle
+ * unless the gate stays open. Check for it.
+ */
+ if (queue_uses[i] > 1 &&
+ !(prev->gate_mask & BIT(i)))
+ return false;
+ }
}
return true;
@@ -6264,6 +6284,7 @@ static const struct net_device_ops igc_netdev_ops = {
.ndo_set_rx_mode = igc_set_rx_mode,
.ndo_set_mac_address = igc_set_mac,
.ndo_change_mtu = igc_change_mtu,
+ .ndo_tx_timeout = igc_tx_timeout,
.ndo_get_stats64 = igc_get_stats64,
.ndo_fix_features = igc_fix_features,
.ndo_set_features = igc_set_features,
diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
index fbde782..743c316 100644
--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
+++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
@@ -418,10 +418,12 @@ static int igc_ptp_verify_pin(struct ptp_clock_info *ptp, unsigned int pin,
*
* We need to convert the system time value stored in the RX/TXSTMP registers
* into a hwtstamp which can be used by the upper level timestamping functions.
+ *
+ * Returns 0 on success.
**/
-static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
- struct skb_shared_hwtstamps *hwtstamps,
- u64 systim)
+static int igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
+ struct skb_shared_hwtstamps *hwtstamps,
+ u64 systim)
{
switch (adapter->hw.mac.type) {
case igc_i225:
@@ -431,8 +433,9 @@ static void igc_ptp_systim_to_hwtstamp(struct igc_adapter *adapter,
systim & 0xFFFFFFFF);
break;
default:
- break;
+ return -EINVAL;
}
+ return 0;
}
/**
@@ -657,7 +660,8 @@ static void igc_ptp_tx_hwtstamp(struct igc_adapter *adapter)
regval = rd32(IGC_TXSTMPL);
regval |= (u64)rd32(IGC_TXSTMPH) << 32;
- igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);
+ if (igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval))
+ return;
switch (adapter->link_speed) {
case SPEED_10:
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index c375a5d..737590a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -67,6 +67,8 @@
#define IXGBE_RXBUFFER_4K 4096
#define IXGBE_MAX_RXBUFFER 16384 /* largest size for a single descriptor */
+#define IXGBE_PKT_HDR_PAD (ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
+
/* Attempt to maximize the headroom available for incoming frames. We
* use a 2K buffer for receives and need 1536/1534 to store the data for
* the frame. This leaves us with 512 bytes of room. From that we need
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 8cb20af..6fb9c18 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -6730,6 +6730,18 @@ static void ixgbe_free_all_rx_resources(struct ixgbe_adapter *adapter)
}
/**
+ * ixgbe_max_xdp_frame_size - returns the maximum allowed frame size for XDP
+ * @adapter: device handle, pointer to adapter
+ */
+static int ixgbe_max_xdp_frame_size(struct ixgbe_adapter *adapter)
+{
+ if (PAGE_SIZE >= 8192 || adapter->flags2 & IXGBE_FLAG2_RX_LEGACY)
+ return IXGBE_RXBUFFER_2K;
+ else
+ return IXGBE_RXBUFFER_3K;
+}
+
+/**
* ixgbe_change_mtu - Change the Maximum Transfer Unit
* @netdev: network interface device structure
* @new_mtu: new value for maximum frame size
@@ -6740,18 +6752,12 @@ static int ixgbe_change_mtu(struct net_device *netdev, int new_mtu)
{
struct ixgbe_adapter *adapter = netdev_priv(netdev);
- if (adapter->xdp_prog) {
- int new_frame_size = new_mtu + ETH_HLEN + ETH_FCS_LEN +
- VLAN_HLEN;
- int i;
+ if (ixgbe_enabled_xdp_adapter(adapter)) {
+ int new_frame_size = new_mtu + IXGBE_PKT_HDR_PAD;
- for (i = 0; i < adapter->num_rx_queues; i++) {
- struct ixgbe_ring *ring = adapter->rx_ring[i];
-
- if (new_frame_size > ixgbe_rx_bufsz(ring)) {
- e_warn(probe, "Requested MTU size is not supported with XDP\n");
- return -EINVAL;
- }
+ if (new_frame_size > ixgbe_max_xdp_frame_size(adapter)) {
+ e_warn(probe, "Requested MTU size is not supported with XDP\n");
+ return -EINVAL;
}
}
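The ixgbe MTU hunk above validates XDP MTU changes against a single worst-case buffer size (2K for legacy/order-0 Rx, 3K otherwise) instead of walking every ring. A hedged user-space sketch of the size check; the constants mirror the idea only, not the exact driver values:

#include <stdio.h>

/* Illustrative constants; the real values live in the driver headers. */
#define ETH_HLEN	14
#define ETH_FCS_LEN	4
#define VLAN_HLEN	4
#define PKT_HDR_PAD	(ETH_HLEN + ETH_FCS_LEN + (VLAN_HLEN * 2))
#define RXBUF_2K	2048
#define RXBUF_3K	3072

static int max_xdp_frame_size(int legacy_rx)
{
	/* legacy/order-0 Rx limits us to a 2K buffer, otherwise 3K */
	return legacy_rx ? RXBUF_2K : RXBUF_3K;
}

static int change_mtu(int new_mtu, int xdp_enabled, int legacy_rx)
{
	if (xdp_enabled) {
		int frame_size = new_mtu + PKT_HDR_PAD;

		if (frame_size > max_xdp_frame_size(legacy_rx))
			return -1;	/* MTU not supported with XDP */
	}
	return 0;
}

int main(void)
{
	printf("mtu 1500: %d\n", change_mtu(1500, 1, 1));
	printf("mtu 3000: %d\n", change_mtu(3000, 1, 1));
	return 0;
}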
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
index 41d935d..40aeaa7 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
@@ -62,35 +62,38 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_UNTAG,
MVPP22_CLS_HEK_IP4_2T,
MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4 |
- MVPP2_PRS_RI_L4_TCP,
+ MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_UNTAG,
MVPP22_CLS_HEK_IP4_2T,
MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OPT |
- MVPP2_PRS_RI_L4_TCP,
+ MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_UNTAG,
MVPP22_CLS_HEK_IP4_2T,
MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OTHER |
- MVPP2_PRS_RI_L4_TCP,
+ MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
/* TCP over IPv4 flows, fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_TAG,
MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
- MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_TCP,
+ MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_IP_FRAG_TRUE |
+ MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_TAG,
MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
- MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_TCP,
+ MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_IP_FRAG_TRUE |
+ MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_TCP4, MVPP2_FL_IP4_TCP_FRAG_TAG,
MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
- MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_TCP,
+ MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_IP_FRAG_TRUE |
+ MVPP2_PRS_RI_L4_TCP,
MVPP2_PRS_IP_MASK),
/* UDP over IPv4 flows, Not fragmented, no vlan tag */
@@ -132,35 +135,38 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_UNTAG,
MVPP22_CLS_HEK_IP4_2T,
MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4 |
- MVPP2_PRS_RI_L4_UDP,
+ MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_UNTAG,
MVPP22_CLS_HEK_IP4_2T,
MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OPT |
- MVPP2_PRS_RI_L4_UDP,
+ MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_UNTAG,
MVPP22_CLS_HEK_IP4_2T,
MVPP2_PRS_RI_VLAN_NONE | MVPP2_PRS_RI_L3_IP4_OTHER |
- MVPP2_PRS_RI_L4_UDP,
+ MVPP2_PRS_RI_IP_FRAG_TRUE | MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK | MVPP2_PRS_RI_VLAN_MASK),
/* UDP over IPv4 flows, fragmented, with vlan tag */
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_TAG,
MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
- MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_L4_UDP,
+ MVPP2_PRS_RI_L3_IP4 | MVPP2_PRS_RI_IP_FRAG_TRUE |
+ MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_TAG,
MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
- MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_L4_UDP,
+ MVPP2_PRS_RI_L3_IP4_OPT | MVPP2_PRS_RI_IP_FRAG_TRUE |
+ MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
MVPP2_DEF_FLOW(MVPP22_FLOW_UDP4, MVPP2_FL_IP4_UDP_FRAG_TAG,
MVPP22_CLS_HEK_IP4_2T | MVPP22_CLS_HEK_TAGGED,
- MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_L4_UDP,
+ MVPP2_PRS_RI_L3_IP4_OTHER | MVPP2_PRS_RI_IP_FRAG_TRUE |
+ MVPP2_PRS_RI_L4_UDP,
MVPP2_PRS_IP_MASK),
/* TCP over IPv6 flows, not fragmented, no vlan tag */
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
index 9357580..a8188b9 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
@@ -1539,8 +1539,8 @@ static int mvpp2_prs_vlan_init(struct platform_device *pdev, struct mvpp2 *priv)
if (!priv->prs_double_vlans)
return -ENOMEM;
- /* Double VLAN: 0x8100, 0x88A8 */
- err = mvpp2_prs_double_vlan_add(priv, ETH_P_8021Q, ETH_P_8021AD,
+ /* Double VLAN: 0x88A8, 0x8100 */
+ err = mvpp2_prs_double_vlan_add(priv, ETH_P_8021AD, ETH_P_8021Q,
MVPP2_PRS_PORT_MASK);
if (err)
return err;
@@ -1607,59 +1607,45 @@ static int mvpp2_prs_vlan_init(struct platform_device *pdev, struct mvpp2 *priv)
static int mvpp2_prs_pppoe_init(struct mvpp2 *priv)
{
struct mvpp2_prs_entry pe;
- int tid;
+ int tid, ihl;
- /* IPv4 over PPPoE with options */
- tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
- MVPP2_PE_LAST_FREE_TID);
- if (tid < 0)
- return tid;
+ /* IPv4 over PPPoE with header length >= 5 */
+ for (ihl = MVPP2_PRS_IPV4_IHL_MIN; ihl <= MVPP2_PRS_IPV4_IHL_MAX; ihl++) {
+ tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
+ MVPP2_PE_LAST_FREE_TID);
+ if (tid < 0)
+ return tid;
- memset(&pe, 0, sizeof(pe));
- mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_PPPOE);
- pe.index = tid;
+ memset(&pe, 0, sizeof(pe));
+ mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_PPPOE);
+ pe.index = tid;
- mvpp2_prs_match_etype(&pe, 0, PPP_IP);
+ mvpp2_prs_match_etype(&pe, 0, PPP_IP);
+ mvpp2_prs_tcam_data_byte_set(&pe, MVPP2_ETH_TYPE_LEN,
+ MVPP2_PRS_IPV4_HEAD | ihl,
+ MVPP2_PRS_IPV4_HEAD_MASK |
+ MVPP2_PRS_IPV4_IHL_MASK);
- mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
- mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4_OPT,
- MVPP2_PRS_RI_L3_PROTO_MASK);
- /* goto ipv4 dest-address (skip eth_type + IP-header-size - 4) */
- mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN +
- sizeof(struct iphdr) - 4,
- MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
- /* Set L3 offset */
- mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
- MVPP2_ETH_TYPE_LEN,
- MVPP2_PRS_SRAM_OP_SEL_UDF_ADD);
+ mvpp2_prs_sram_next_lu_set(&pe, MVPP2_PRS_LU_IP4);
+ mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4,
+ MVPP2_PRS_RI_L3_PROTO_MASK);
+ /* goto ipv4 dst-address (skip eth_type + IP-header-size - 4) */
+ mvpp2_prs_sram_shift_set(&pe, MVPP2_ETH_TYPE_LEN +
+ sizeof(struct iphdr) - 4,
+ MVPP2_PRS_SRAM_OP_SEL_SHIFT_ADD);
+ /* Set L3 offset */
+ mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L3,
+ MVPP2_ETH_TYPE_LEN,
+ MVPP2_PRS_SRAM_OP_SEL_UDF_ADD);
+ /* Set L4 offset */
+ mvpp2_prs_sram_offset_set(&pe, MVPP2_PRS_SRAM_UDF_TYPE_L4,
+ MVPP2_ETH_TYPE_LEN + (ihl * 4),
+ MVPP2_PRS_SRAM_OP_SEL_UDF_ADD);
- /* Update shadow table and hw entry */
- mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_PPPOE);
- mvpp2_prs_hw_write(priv, &pe);
-
- /* IPv4 over PPPoE without options */
- tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
- MVPP2_PE_LAST_FREE_TID);
- if (tid < 0)
- return tid;
-
- pe.index = tid;
-
- mvpp2_prs_tcam_data_byte_set(&pe, MVPP2_ETH_TYPE_LEN,
- MVPP2_PRS_IPV4_HEAD |
- MVPP2_PRS_IPV4_IHL_MIN,
- MVPP2_PRS_IPV4_HEAD_MASK |
- MVPP2_PRS_IPV4_IHL_MASK);
-
- /* Clear ri before updating */
- pe.sram[MVPP2_PRS_SRAM_RI_WORD] = 0x0;
- pe.sram[MVPP2_PRS_SRAM_RI_CTRL_WORD] = 0x0;
- mvpp2_prs_sram_ri_update(&pe, MVPP2_PRS_RI_L3_IP4,
- MVPP2_PRS_RI_L3_PROTO_MASK);
-
- /* Update shadow table and hw entry */
- mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_PPPOE);
- mvpp2_prs_hw_write(priv, &pe);
+ /* Update shadow table and hw entry */
+ mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_PPPOE);
+ mvpp2_prs_hw_write(priv, &pe);
+ }
/* IPv6 over PPPoE */
tid = mvpp2_prs_tcam_first_free(priv, MVPP2_PE_FIRST_FREE_TID,
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index a7213db..fed49d6 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -811,6 +811,9 @@ bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr,
/* CPT APIs */
int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int lf, int slot);
+#define NDC_AF_BANK_MASK GENMASK_ULL(7, 0)
+#define NDC_AF_BANK_LINE_MASK GENMASK_ULL(31, 16)
+
/* CN10K RVU */
int rvu_set_channels_base(struct rvu *rvu);
void rvu_program_channels(struct rvu *rvu);
@@ -826,6 +829,8 @@ static inline void rvu_dbg_init(struct rvu *rvu) {}
static inline void rvu_dbg_exit(struct rvu *rvu) {}
#endif
+int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr);
+
/* RVU Switch */
void rvu_switch_enable(struct rvu *rvu);
void rvu_switch_disable(struct rvu *rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
index 66d3469..4dddf6e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
@@ -196,9 +196,6 @@ enum cpt_eng_type {
CPT_IE_TYPE = 3,
};
-#define NDC_MAX_BANK(rvu, blk_addr) (rvu_read64(rvu, \
- blk_addr, NDC_AF_CONST) & 0xFF)
-
#define rvu_dbg_NULL NULL
#define rvu_dbg_open_NULL NULL
@@ -1009,6 +1006,7 @@ static int ndc_blk_hits_miss_stats(struct seq_file *s, int idx, int blk_addr)
struct nix_hw *nix_hw;
struct rvu *rvu;
int bank, max_bank;
+ u64 ndc_af_const;
if (blk_addr == BLKADDR_NDC_NPA0) {
rvu = s->private;
@@ -1017,7 +1015,8 @@ static int ndc_blk_hits_miss_stats(struct seq_file *s, int idx, int blk_addr)
rvu = nix_hw->rvu;
}
- max_bank = NDC_MAX_BANK(rvu, blk_addr);
+ ndc_af_const = rvu_read64(rvu, blk_addr, NDC_AF_CONST);
+ max_bank = FIELD_GET(NDC_AF_BANK_MASK, ndc_af_const);
for (bank = 0; bank < max_bank; bank++) {
seq_printf(s, "BANK:%d\n", bank);
seq_printf(s, "\tHits:\t%lld\n",
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index 0989270..d274d55 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -797,6 +797,7 @@ static int nix_aq_enqueue_wait(struct rvu *rvu, struct rvu_block *block,
struct nix_aq_res_s *result;
int timeout = 1000;
u64 reg, head;
+ int ret;
result = (struct nix_aq_res_s *)aq->res->base;
@@ -820,9 +821,22 @@ static int nix_aq_enqueue_wait(struct rvu *rvu, struct rvu_block *block,
return -EBUSY;
}
- if (result->compcode != NIX_AQ_COMP_GOOD)
+ if (result->compcode != NIX_AQ_COMP_GOOD) {
/* TODO: Replace this with some error code */
+ if (result->compcode == NIX_AQ_COMP_CTX_FAULT ||
+ result->compcode == NIX_AQ_COMP_LOCKERR ||
+ result->compcode == NIX_AQ_COMP_CTX_POISON) {
+ ret = rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX0_RX);
+ ret |= rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX0_TX);
+ ret |= rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX1_RX);
+ ret |= rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NIX1_TX);
+ if (ret)
+ dev_err(rvu->dev,
+ "%s: Not able to unlock cachelines\n", __func__);
+ }
+
return -EBUSY;
+ }
return 0;
}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
index 70bd036..4f5ca5a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npa.c
@@ -4,7 +4,7 @@
* Copyright (C) 2018 Marvell.
*
*/
-
+#include <linux/bitfield.h>
#include <linux/module.h>
#include <linux/pci.h>
@@ -42,9 +42,18 @@ static int npa_aq_enqueue_wait(struct rvu *rvu, struct rvu_block *block,
return -EBUSY;
}
- if (result->compcode != NPA_AQ_COMP_GOOD)
+ if (result->compcode != NPA_AQ_COMP_GOOD) {
/* TODO: Replace this with some error code */
+ if (result->compcode == NPA_AQ_COMP_CTX_FAULT ||
+ result->compcode == NPA_AQ_COMP_LOCKERR ||
+ result->compcode == NPA_AQ_COMP_CTX_POISON) {
+ if (rvu_ndc_fix_locked_cacheline(rvu, BLKADDR_NDC_NPA0))
+ dev_err(rvu->dev,
+ "%s: Not able to unlock cachelines\n", __func__);
+ }
+
return -EBUSY;
+ }
return 0;
}
@@ -545,3 +554,48 @@ void rvu_npa_lf_teardown(struct rvu *rvu, u16 pcifunc, int npalf)
npa_ctx_free(rvu, pfvf);
}
+
+/* Due to a hardware erratum, in some corner cases, AQ context lock
+ * operations can result in an NDC way getting into an illegal state
+ * of not valid but locked.
+ *
+ * This API solves the problem by clearing the lock bit of the NDC block.
+ * The operation needs to be done for each line of all the NDC banks.
+ */
+int rvu_ndc_fix_locked_cacheline(struct rvu *rvu, int blkaddr)
+{
+ int bank, max_bank, line, max_line, err;
+ u64 reg, ndc_af_const;
+
+ /* Set the ENABLE bit(63) to '0' */
+ reg = rvu_read64(rvu, blkaddr, NDC_AF_CAMS_RD_INTERVAL);
+ rvu_write64(rvu, blkaddr, NDC_AF_CAMS_RD_INTERVAL, reg & GENMASK_ULL(62, 0));
+
+ /* Poll until the BUSY bits(47:32) are set to '0' */
+ err = rvu_poll_reg(rvu, blkaddr, NDC_AF_CAMS_RD_INTERVAL, GENMASK_ULL(47, 32), true);
+ if (err) {
+ dev_err(rvu->dev, "Timed out while polling for NDC CAM busy bits.\n");
+ return err;
+ }
+
+ ndc_af_const = rvu_read64(rvu, blkaddr, NDC_AF_CONST);
+ max_bank = FIELD_GET(NDC_AF_BANK_MASK, ndc_af_const);
+ max_line = FIELD_GET(NDC_AF_BANK_LINE_MASK, ndc_af_const);
+ for (bank = 0; bank < max_bank; bank++) {
+ for (line = 0; line < max_line; line++) {
+ * Check if 'cache line valid bit(63)' is not set
+ * but 'cache line lock bit(60)' is set, and if so,
+ * reset the lock bit(60).
+ */
+ reg = rvu_read64(rvu, blkaddr,
+ NDC_AF_BANKX_LINEX_METADATA(bank, line));
+ if (!(reg & BIT_ULL(63)) && (reg & BIT_ULL(60))) {
+ rvu_write64(rvu, blkaddr,
+ NDC_AF_BANKX_LINEX_METADATA(bank, line),
+ reg & ~BIT_ULL(60));
+ }
+ }
+ }
+
+ return 0;
+}
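rvu_ndc_fix_locked_cacheline() above walks every bank and line reported by NDC_AF_CONST and clears the lock bit on lines that are locked but not valid. A compact user-space model of that scrub, using a stand-in for FIELD_GET/GENMASK and a fake metadata array (the bit positions match the comment; everything else is illustrative):

#include <stdio.h>
#include <stdint.h>

#define BIT_ULL(n)	(1ULL << (n))
#define LINE_VALID	BIT_ULL(63)
#define LINE_LOCKED	BIT_ULL(60)

#define BANK_MASK	0x00000000000000ffULL	/* bits 7:0   */
#define LINE_MASK	0x00000000ffff0000ULL	/* bits 31:16 */

/* field_get: illustrative stand-in for FIELD_GET(GENMASK_ULL(..), reg) */
static uint64_t field_get(uint64_t mask, uint64_t reg)
{
	return (reg & mask) >> __builtin_ctzll(mask);
}

int main(void)
{
	/* pretend NDC_AF_CONST reports 2 banks x 4 lines */
	uint64_t ndc_const = (4ULL << 16) | 2ULL;
	uint64_t meta[2][4] = { { LINE_LOCKED, LINE_VALID | LINE_LOCKED } };

	uint64_t max_bank = field_get(BANK_MASK, ndc_const);
	uint64_t max_line = field_get(LINE_MASK, ndc_const);

	for (uint64_t bank = 0; bank < max_bank; bank++)
		for (uint64_t line = 0; line < max_line; line++) {
			uint64_t reg = meta[bank][line];

			/* invalid-but-locked line: clear the lock bit */
			if (!(reg & LINE_VALID) && (reg & LINE_LOCKED))
				meta[bank][line] = reg & ~LINE_LOCKED;
		}

	printf("line[0][0] locked after scrub: %d\n",
	       !!(meta[0][0] & LINE_LOCKED));
	return 0;
}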
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
index 21f1ed4..d81b63a 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h
@@ -670,6 +670,7 @@
#define NDC_AF_INTR_ENA_W1S (0x00068)
#define NDC_AF_INTR_ENA_W1C (0x00070)
#define NDC_AF_ACTIVE_PC (0x00078)
+#define NDC_AF_CAMS_RD_INTERVAL (0x00080)
#define NDC_AF_BP_TEST_ENABLE (0x001F8)
#define NDC_AF_BP_TEST(a) (0x00200 | (a) << 3)
#define NDC_AF_BLK_RST (0x002F0)
@@ -685,6 +686,8 @@
(0x00F00 | (a) << 5 | (b) << 4)
#define NDC_AF_BANKX_HIT_PC(a) (0x01000 | (a) << 3)
#define NDC_AF_BANKX_MISS_PC(a) (0x01100 | (a) << 3)
+#define NDC_AF_BANKX_LINEX_METADATA(a, b) \
+ (0x10000 | (a) << 12 | (b) << 3)
/* LBK */
#define LBK_CONST (0x10ull)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index f6306ee..2e22530 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -962,7 +962,6 @@ static void otx2_pool_refill_task(struct work_struct *work)
rbpool = cq->rbpool;
free_ptrs = cq->pool_ptrs;
- get_cpu();
while (cq->pool_ptrs) {
if (otx2_alloc_rbuf(pfvf, rbpool, &bufptr)) {
/* Schedule a WQ if we fail to free at least half of the
@@ -982,7 +981,6 @@ static void otx2_pool_refill_task(struct work_struct *work)
pfvf->hw_ops->aura_freeptr(pfvf, qidx, bufptr + OTX2_HEAD_ROOM);
cq->pool_ptrs--;
}
- put_cpu();
cq->refill_task_sched = false;
}
@@ -1316,7 +1314,6 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
if (err)
goto fail;
- get_cpu();
/* Allocate pointers and free them to aura/pool */
for (qidx = 0; qidx < hw->tx_queues; qidx++) {
pool_id = otx2_get_pool_idx(pfvf, AURA_NIX_SQ, qidx);
@@ -1340,7 +1337,6 @@ int otx2_sq_aura_pool_init(struct otx2_nic *pfvf)
}
err_mem:
- put_cpu();
return err ? -ENOMEM : 0;
fail:
@@ -1381,21 +1377,18 @@ int otx2_rq_aura_pool_init(struct otx2_nic *pfvf)
if (err)
goto fail;
- get_cpu();
/* Allocate pointers and free them to aura/pool */
for (pool_id = 0; pool_id < hw->rqpool_cnt; pool_id++) {
pool = &pfvf->qset.pool[pool_id];
for (ptr = 0; ptr < num_ptrs; ptr++) {
err = otx2_alloc_rbuf(pfvf, pool, &bufptr);
if (err)
- goto err_mem;
+ return -ENOMEM;
pfvf->hw_ops->aura_freeptr(pfvf, pool_id,
bufptr + OTX2_HEAD_ROOM);
}
}
-err_mem:
- put_cpu();
- return err ? -ENOMEM : 0;
+ return 0;
fail:
otx2_mbox_reset(&pfvf->mbox.mbox, 0);
otx2_aura_pool_free(pfvf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index 095e5de..e685628 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -605,8 +605,10 @@ static inline void cn10k_aura_freeptr(void *dev, int aura, u64 buf)
u64 ptrs[2];
ptrs[1] = buf;
+ get_cpu();
/* Free only one buffer at time during init and teardown */
__cn10k_aura_freeptr(pfvf, aura, ptrs, 2);
+ put_cpu();
}
/* Alloc pointer from pool/aura */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
index 77a13fb..6388944 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
@@ -748,7 +748,7 @@ static int otx2_prepare_ipv6_flow(struct ethtool_rx_flow_spec *fsp,
/* NPC profile doesn't extract AH/ESP header fields */
if ((ah_esp_mask->spi & ah_esp_hdr->spi) ||
- (ah_esp_mask->tclass & ah_esp_mask->tclass))
+ (ah_esp_mask->tclass & ah_esp_hdr->tclass))
return -EOPNOTSUPP;
if (flow_type == AH_V6_FLOW)
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
index 03b4ec6..9822db3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
@@ -704,6 +704,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
err_unreg_netdev:
unregister_netdev(netdev);
err_detach_rsrc:
+ free_percpu(vf->hw.lmt_info);
if (test_bit(CN10K_LMTST, &vf->hw.cap_flag))
qmem_free(vf->dev, vf->dync_lmt);
otx2_detach_resources(&vf->mbox);
@@ -738,6 +739,7 @@ static void otx2vf_remove(struct pci_dev *pdev)
destroy_workqueue(vf->otx2_wq);
otx2vf_disable_mbox_intr(vf);
otx2_detach_resources(&vf->mbox);
+ free_percpu(vf->hw.lmt_info);
if (test_bit(CN10K_LMTST, &vf->hw.cap_flag))
qmem_free(vf->dev, vf->dync_lmt);
otx2vf_vfaf_mbox_destroy(vf);
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index cc6a5b2..bb1acdb 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -363,7 +363,8 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
mcr_new = mcr_cur;
mcr_new |= MAC_MCR_IPG_CFG | MAC_MCR_FORCE_MODE |
- MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK;
+ MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK |
+ MAC_MCR_RX_FIFO_CLR_DIS;
/* Only update control register when needed! */
if (mcr_new != mcr_cur)
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index f2d9063..d60260e 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -369,6 +369,7 @@
#define MAC_MCR_FORCE_MODE BIT(15)
#define MAC_MCR_TX_EN BIT(14)
#define MAC_MCR_RX_EN BIT(13)
+#define MAC_MCR_RX_FIFO_CLR_DIS BIT(12)
#define MAC_MCR_BACKOFF_EN BIT(9)
#define MAC_MCR_BACKPR_EN BIT(8)
#define MAC_MCR_FORCE_RX_FC BIT(5)
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
index 7bb1f20..7c5403c 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
@@ -462,6 +462,7 @@ mtk_eth_setup_tc_block(struct net_device *dev, struct flow_block_offload *f)
if (IS_ERR(block_cb))
return PTR_ERR(block_cb);
+ flow_block_cb_incref(block_cb);
flow_block_cb_add(block_cb, f);
list_add_tail(&block_cb->driver_list, &block_cb_list);
return 0;
@@ -470,7 +471,7 @@ mtk_eth_setup_tc_block(struct net_device *dev, struct flow_block_offload *f)
if (!block_cb)
return -ENOENT;
- if (flow_block_cb_decref(block_cb)) {
+ if (!flow_block_cb_decref(block_cb)) {
flow_block_cb_remove(block_cb, f);
list_del(&block_cb->driver_list);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
index 326e0b1..05c7c214 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
@@ -64,6 +64,7 @@ static int mlx5_query_mtrc_caps(struct mlx5_fw_tracer *tracer)
MLX5_GET(mtrc_cap, out, num_string_trace);
tracer->str_db.num_string_db = MLX5_GET(mtrc_cap, out, num_string_db);
tracer->owner = !!MLX5_GET(mtrc_cap, out, trace_owner);
+ tracer->str_db.loaded = false;
for (i = 0; i < tracer->str_db.num_string_db; i++) {
mtrc_cap_sp = MLX5_ADDR_OF(mtrc_cap, out, string_db_param[i]);
@@ -602,7 +603,7 @@ static int mlx5_tracer_handle_string_trace(struct mlx5_fw_tracer *tracer,
} else {
cur_string = mlx5_tracer_message_get(tracer, tracer_event);
if (!cur_string) {
- pr_debug("%s Got string event for unknown string tdsm: %d\n",
+ pr_debug("%s Got string event for unknown string tmsn: %d\n",
__func__, tracer_event->string_event.tmsn);
return -1;
}
@@ -756,6 +757,7 @@ static int mlx5_fw_tracer_set_mtrc_conf(struct mlx5_fw_tracer *tracer)
if (err)
mlx5_core_warn(dev, "FWTracer: Failed to set tracer configurations %d\n", err);
+ tracer->buff.consumer_index = 0;
return err;
}
@@ -820,7 +822,6 @@ static void mlx5_fw_tracer_ownership_change(struct work_struct *work)
mlx5_core_dbg(tracer->dev, "FWTracer: ownership changed, current=(%d)\n", tracer->owner);
if (tracer->owner) {
tracer->owner = false;
- tracer->buff.consumer_index = 0;
return;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index c22a38e..c822c3a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -1002,7 +1002,7 @@ void mlx5e_activate_priv_channels(struct mlx5e_priv *priv);
void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv);
int mlx5e_ptp_rx_manage_fs_ctx(struct mlx5e_priv *priv, void *ctx);
-int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state, int next_state);
+int mlx5e_flush_rq(struct mlx5e_rq *rq, int curr_state);
void mlx5e_activate_rq(struct mlx5e_rq *rq);
void mlx5e_deactivate_rq(struct mlx5e_rq *rq);
void mlx5e_activate_icosq(struct mlx5e_icosq *icosq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
index 8e7177d..291bd59 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
@@ -432,10 +432,6 @@ static int mlx5_esw_bridge_switchdev_event(struct notifier_block *nb,
switch (event) {
case SWITCHDEV_FDB_ADD_TO_BRIDGE:
- /* only handle the event on native eswtich of representor */
- if (!mlx5_esw_bridge_is_local(dev, rep, esw))
- break;
-
fdb_info = container_of(info,
struct switchdev_notifier_fdb_info,
info);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
index e329158..899a9a73 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
@@ -129,44 +129,20 @@ static int mlx5e_rx_reporter_err_icosq_cqe_recover(void *ctx)
return err;
}
-static int mlx5e_rq_to_ready(struct mlx5e_rq *rq, int curr_state)
-{
- struct net_device *dev = rq->netdev;
- int err;
-
- err = mlx5e_modify_rq_state(rq, curr_state, MLX5_RQC_STATE_RST);
- if (err) {
- netdev_err(dev, "Failed to move rq 0x%x to reset\n", rq->rqn);
- return err;
- }
- err = mlx5e_modify_rq_state(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY);
- if (err) {
- netdev_err(dev, "Failed to move rq 0x%x to ready\n", rq->rqn);
- return err;
- }
-
- return 0;
-}
-
static int mlx5e_rx_reporter_err_rq_cqe_recover(void *ctx)
{
struct mlx5e_rq *rq = ctx;
int err;
mlx5e_deactivate_rq(rq);
- mlx5e_free_rx_descs(rq);
-
- err = mlx5e_rq_to_ready(rq, MLX5_RQC_STATE_ERR);
- if (err)
- goto out;
-
+ err = mlx5e_flush_rq(rq, MLX5_RQC_STATE_ERR);
clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
+ if (err)
+ return err;
+
mlx5e_activate_rq(rq);
rq->stats->recover++;
return 0;
-out:
- clear_bit(MLX5E_RQ_STATE_RECOVERING, &rq->state);
- return err;
}
static int mlx5e_rx_reporter_timeout_recover(void *ctx)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
index 72e0855..f286210 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
@@ -117,12 +117,14 @@ static int mlx5e_dcbnl_ieee_getets(struct net_device *netdev,
if (!MLX5_CAP_GEN(priv->mdev, ets))
return -EOPNOTSUPP;
- ets->ets_cap = mlx5_max_tc(priv->mdev) + 1;
- for (i = 0; i < ets->ets_cap; i++) {
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
err = mlx5_query_port_prio_tc(mdev, i, &ets->prio_tc[i]);
if (err)
return err;
+ }
+ ets->ets_cap = mlx5_max_tc(priv->mdev) + 1;
+ for (i = 0; i < ets->ets_cap; i++) {
err = mlx5_query_port_tc_group(mdev, i, &tc_group[i]);
if (err)
return err;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index be19f5c..ec1c667 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -392,7 +392,8 @@ static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *param
rq->icosq = &c->icosq;
rq->ix = c->ix;
rq->mdev = mdev;
- rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu);
+ rq->hw_mtu =
+ MLX5E_SW2HW_MTU(params, params->sw_mtu) - ETH_FCS_LEN * !params->scatter_fcs_en;
rq->xdpsq = &c->rq_xdpsq;
rq->stats = &c->priv->channel_stats[c->ix].rq;
rq->ptp_cyc2time = mlx5_rq_ts_translator(mdev);
@@ -672,7 +673,7 @@ int mlx5e_create_rq(struct mlx5e_rq *rq, struct mlx5e_rq_param *param)
return err;
}
-int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state, int next_state)
+static int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state, int next_state)
{
struct mlx5_core_dev *mdev = rq->mdev;
@@ -701,33 +702,30 @@ int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state, int next_state)
return err;
}
-static int mlx5e_modify_rq_scatter_fcs(struct mlx5e_rq *rq, bool enable)
+static int mlx5e_rq_to_ready(struct mlx5e_rq *rq, int curr_state)
{
- struct mlx5_core_dev *mdev = rq->mdev;
-
- void *in;
- void *rqc;
- int inlen;
+ struct net_device *dev = rq->netdev;
int err;
- inlen = MLX5_ST_SZ_BYTES(modify_rq_in);
- in = kvzalloc(inlen, GFP_KERNEL);
- if (!in)
- return -ENOMEM;
+ err = mlx5e_modify_rq_state(rq, curr_state, MLX5_RQC_STATE_RST);
+ if (err) {
+ netdev_err(dev, "Failed to move rq 0x%x to reset\n", rq->rqn);
+ return err;
+ }
+ err = mlx5e_modify_rq_state(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY);
+ if (err) {
+ netdev_err(dev, "Failed to move rq 0x%x to ready\n", rq->rqn);
+ return err;
+ }
- rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx);
+ return 0;
+}
- MLX5_SET(modify_rq_in, in, rq_state, MLX5_RQC_STATE_RDY);
- MLX5_SET64(modify_rq_in, in, modify_bitmask,
- MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_SCATTER_FCS);
- MLX5_SET(rqc, rqc, scatter_fcs, enable);
- MLX5_SET(rqc, rqc, state, MLX5_RQC_STATE_RDY);
+int mlx5e_flush_rq(struct mlx5e_rq *rq, int curr_state)
+{
+ mlx5e_free_rx_descs(rq);
- err = mlx5_core_modify_rq(mdev, rq->rqn, in);
-
- kvfree(in);
-
- return err;
+ return mlx5e_rq_to_ready(rq, curr_state);
}
static int mlx5e_modify_rq_vsd(struct mlx5e_rq *rq, bool vsd)
@@ -2850,20 +2848,6 @@ static void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv)
mlx5e_destroy_tises(priv);
}
-static int mlx5e_modify_channels_scatter_fcs(struct mlx5e_channels *chs, bool enable)
-{
- int err = 0;
- int i;
-
- for (i = 0; i < chs->num; i++) {
- err = mlx5e_modify_rq_scatter_fcs(&chs->c[i]->rq, enable);
- if (err)
- return err;
- }
-
- return 0;
-}
-
static int mlx5e_modify_channels_vsd(struct mlx5e_channels *chs, bool vsd)
{
int err;
@@ -3375,41 +3359,27 @@ static int mlx5e_set_rx_port_ts(struct mlx5_core_dev *mdev, bool enable)
return mlx5_set_ports_check(mdev, in, sizeof(in));
}
+static int mlx5e_set_rx_port_ts_wrap(struct mlx5e_priv *priv, void *ctx)
+{
+ struct mlx5_core_dev *mdev = priv->mdev;
+ bool enable = *(bool *)ctx;
+
+ return mlx5e_set_rx_port_ts(mdev, enable);
+}
+
static int set_feature_rx_fcs(struct net_device *netdev, bool enable)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5e_channels *chs = &priv->channels;
- struct mlx5_core_dev *mdev = priv->mdev;
+ struct mlx5e_params new_params;
int err;
mutex_lock(&priv->state_lock);
- if (enable) {
- err = mlx5e_set_rx_port_ts(mdev, false);
- if (err)
- goto out;
-
- chs->params.scatter_fcs_en = true;
- err = mlx5e_modify_channels_scatter_fcs(chs, true);
- if (err) {
- chs->params.scatter_fcs_en = false;
- mlx5e_set_rx_port_ts(mdev, true);
- }
- } else {
- chs->params.scatter_fcs_en = false;
- err = mlx5e_modify_channels_scatter_fcs(chs, false);
- if (err) {
- chs->params.scatter_fcs_en = true;
- goto out;
- }
- err = mlx5e_set_rx_port_ts(mdev, true);
- if (err) {
- mlx5_core_warn(mdev, "Failed to set RX port timestamp %d\n", err);
- err = 0;
- }
- }
-
-out:
+ new_params = chs->params;
+ new_params.scatter_fcs_en = enable;
+ err = mlx5e_safe_switch_params(priv, &new_params, mlx5e_set_rx_port_ts_wrap,
+ &new_params.scatter_fcs_en, true);
mutex_unlock(&priv->state_lock);
return err;
}
@@ -3557,8 +3527,12 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
netdev_warn(netdev, "Disabling rxhash, not supported when CQE compress is active\n");
}
- if (mlx5e_is_uplink_rep(priv))
+ if (mlx5e_is_uplink_rep(priv)) {
features = mlx5e_fix_uplink_rep_features(netdev, features);
+ features |= NETIF_F_NETNS_LOCAL;
+ } else {
+ features &= ~NETIF_F_NETNS_LOCAL;
+ }
mutex_unlock(&priv->state_lock);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
index 39e948b..34a6542 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/acl/ingress_ofld.c
@@ -301,8 +301,7 @@ int mlx5_esw_acl_ingress_vport_bond_update(struct mlx5_eswitch *esw, u16 vport_n
if (WARN_ON_ONCE(IS_ERR(vport))) {
esw_warn(esw->dev, "vport(%d) invalid!\n", vport_num);
- err = PTR_ERR(vport);
- goto out;
+ return PTR_ERR(vport);
}
esw_acl_ingress_ofld_rules_destroy(esw, vport);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
index 317d76b..aec0f67 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c
@@ -1270,7 +1270,7 @@ void mlx5_esw_bridge_fdb_update_used(struct net_device *dev, u16 vport_num, u16
struct mlx5_esw_bridge *bridge;
port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads);
- if (!port || port->flags & MLX5_ESW_BRIDGE_PORT_FLAG_PEER)
+ if (!port)
return;
bridge = port->bridge;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index d377ddc..65c8f1f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -22,15 +22,13 @@ struct mlx5_esw_rate_group {
};
static int esw_qos_tsar_config(struct mlx5_core_dev *dev, u32 *sched_ctx,
- u32 parent_ix, u32 tsar_ix,
- u32 max_rate, u32 bw_share)
+ u32 tsar_ix, u32 max_rate, u32 bw_share)
{
u32 bitmask = 0;
if (!MLX5_CAP_GEN(dev, qos) || !MLX5_CAP_QOS(dev, esw_scheduling))
return -EOPNOTSUPP;
- MLX5_SET(scheduling_context, sched_ctx, parent_element_id, parent_ix);
MLX5_SET(scheduling_context, sched_ctx, max_average_bw, max_rate);
MLX5_SET(scheduling_context, sched_ctx, bw_share, bw_share);
bitmask |= MODIFY_SCHEDULING_ELEMENT_IN_MODIFY_BITMASK_MAX_AVERAGE_BW;
@@ -51,7 +49,7 @@ static int esw_qos_group_config(struct mlx5_eswitch *esw, struct mlx5_esw_rate_g
int err;
err = esw_qos_tsar_config(dev, sched_ctx,
- esw->qos.root_tsar_ix, group->tsar_ix,
+ group->tsar_ix,
max_rate, bw_share);
if (err)
NL_SET_ERR_MSG_MOD(extack, "E-Switch modify group TSAR element failed");
@@ -67,23 +65,13 @@ static int esw_qos_vport_config(struct mlx5_eswitch *esw,
struct netlink_ext_ack *extack)
{
u32 sched_ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
- struct mlx5_esw_rate_group *group = vport->qos.group;
struct mlx5_core_dev *dev = esw->dev;
- u32 parent_tsar_ix;
- void *vport_elem;
int err;
if (!vport->qos.enabled)
return -EIO;
- parent_tsar_ix = group ? group->tsar_ix : esw->qos.root_tsar_ix;
- MLX5_SET(scheduling_context, sched_ctx, element_type,
- SCHEDULING_CONTEXT_ELEMENT_TYPE_VPORT);
- vport_elem = MLX5_ADDR_OF(scheduling_context, sched_ctx,
- element_attributes);
- MLX5_SET(vport_element, vport_elem, vport_number, vport->vport);
-
- err = esw_qos_tsar_config(dev, sched_ctx, parent_tsar_ix, vport->qos.esw_tsar_ix,
+ err = esw_qos_tsar_config(dev, sched_ctx, vport->qos.esw_tsar_ix,
max_rate, bw_share);
if (err) {
esw_warn(esw->dev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index 2b92780..7315bf4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -918,6 +918,7 @@ void mlx5_esw_vport_disable(struct mlx5_eswitch *esw, u16 vport_num)
*/
esw_vport_change_handle_locked(vport);
vport->enabled_events = 0;
+ esw_apply_vport_rx_mode(esw, vport, false, false);
esw_vport_cleanup(esw, vport);
esw->enabled_vports--;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index 3194cdc..0025677 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -962,7 +962,8 @@ mlx5_eswitch_add_send_to_vport_rule(struct mlx5_eswitch *on_esw,
dest.vport.flags |= MLX5_FLOW_DEST_VPORT_VHCA_ID;
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
- if (rep->vport == MLX5_VPORT_UPLINK)
+ if (MLX5_CAP_ESW_FLOWTABLE(on_esw->dev, flow_source) &&
+ rep->vport == MLX5_VPORT_UPLINK)
spec->flow_context.flow_source = MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT;
flow_rule = mlx5_add_flow_rules(on_esw->fdb_table.offloads.slow_fdb,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
index 3dceab4..1504856 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
@@ -617,6 +617,7 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work)
mutex_lock(&dev->intf_state_mutex);
if (test_bit(MLX5_DROP_NEW_HEALTH_WORK, &health->flags)) {
mlx5_core_err(dev, "health works are not permitted at this stage\n");
+ mutex_unlock(&dev->intf_state_mutex);
return;
}
mutex_unlock(&dev->intf_state_mutex);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
index 0c8594c..908e5ee 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c
@@ -172,16 +172,16 @@ static inline int mlx5_ptys_rate_enum_to_int(enum mlx5_ptys_rate rate)
}
}
-static int mlx5i_get_speed_settings(u16 ib_link_width_oper, u16 ib_proto_oper)
+static u32 mlx5i_get_speed_settings(u16 ib_link_width_oper, u16 ib_proto_oper)
{
int rate, width;
rate = mlx5_ptys_rate_enum_to_int(ib_proto_oper);
if (rate < 0)
- return -EINVAL;
+ return SPEED_UNKNOWN;
width = mlx5_ptys_width_enum_to_int(ib_link_width_oper);
if (width < 0)
- return -EINVAL;
+ return SPEED_UNKNOWN;
return rate * width;
}
@@ -204,16 +204,13 @@ static int mlx5i_get_link_ksettings(struct net_device *netdev,
ethtool_link_ksettings_zero_link_mode(link_ksettings, advertising);
speed = mlx5i_get_speed_settings(ib_link_width_oper, ib_proto_oper);
- if (speed < 0)
- return -EINVAL;
+ link_ksettings->base.speed = speed;
+ link_ksettings->base.duplex = speed == SPEED_UNKNOWN ? DUPLEX_UNKNOWN : DUPLEX_FULL;
- link_ksettings->base.duplex = DUPLEX_FULL;
link_ksettings->base.port = PORT_OTHER;
link_ksettings->base.autoneg = AUTONEG_DISABLE;
- link_ksettings->base.speed = speed;
-
return 0;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c
index 23361a9..6dc83e8 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/geneve.c
@@ -105,6 +105,7 @@ int mlx5_geneve_tlv_option_add(struct mlx5_geneve *geneve, struct geneve_opt *op
geneve->opt_type = opt->type;
geneve->obj_id = res;
geneve->refcount++;
+ res = 0;
}
unlock:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 145e56f..485a6a6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1849,7 +1849,7 @@ static void mlx5_core_verify_params(void)
}
}
-static int __init init(void)
+static int __init mlx5_init(void)
{
int err;
@@ -1862,7 +1862,7 @@ static int __init init(void)
mlx5_fpga_ipsec_build_fs_cmds();
mlx5_register_debugfs();
- err = pci_register_driver(&mlx5_core_driver);
+ err = mlx5e_init();
if (err)
goto err_debug;
@@ -1870,28 +1870,28 @@ static int __init init(void)
if (err)
goto err_sf;
- err = mlx5e_init();
+ err = pci_register_driver(&mlx5_core_driver);
if (err)
- goto err_en;
+ goto err_pci;
return 0;
-err_en:
+err_pci:
mlx5_sf_driver_unregister();
err_sf:
- pci_unregister_driver(&mlx5_core_driver);
+ mlx5e_cleanup();
err_debug:
mlx5_unregister_debugfs();
return err;
}
-static void __exit cleanup(void)
+static void __exit mlx5_cleanup(void)
{
- mlx5e_cleanup();
- mlx5_sf_driver_unregister();
pci_unregister_driver(&mlx5_core_driver);
+ mlx5_sf_driver_unregister();
+ mlx5e_cleanup();
mlx5_unregister_debugfs();
}
-module_init(init);
-module_exit(cleanup);
+module_init(mlx5_init);
+module_exit(mlx5_cleanup);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
index 110c083..ae6ac51 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -216,7 +216,8 @@ static int alloc_4k(struct mlx5_core_dev *dev, u64 *addr, u32 function)
n = find_first_bit(&fp->bitmask, 8 * sizeof(fp->bitmask));
if (n >= MLX5_NUM_4K_IN_PAGE) {
- mlx5_core_warn(dev, "alloc 4k bug\n");
+ mlx5_core_warn(dev, "alloc 4k bug: fw page = 0x%llx, n = %u, bitmask: %lu, max num of 4K pages: %d\n",
+ fp->addr, n, fp->bitmask, MLX5_NUM_4K_IN_PAGE);
return -ENOENT;
}
clear_bit(n, &fp->bitmask);
diff --git a/drivers/net/ethernet/microsoft/mana/gdma.h b/drivers/net/ethernet/microsoft/mana/gdma.h
index 1038bdf..f74f416 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma.h
+++ b/drivers/net/ethernet/microsoft/mana/gdma.h
@@ -324,9 +324,12 @@ struct gdma_queue_spec {
};
};
+#define MANA_IRQ_NAME_SZ 32
+
struct gdma_irq_context {
void (*handler)(void *arg);
void *arg;
+ char name[MANA_IRQ_NAME_SZ];
};
struct gdma_context {
diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index f577507..0fb4219 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -1195,13 +1195,20 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
gic->handler = NULL;
gic->arg = NULL;
+ if (!i)
+ snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_hwc@pci:%s",
+ pci_name(pdev));
+ else
+ snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
+ i - 1, pci_name(pdev));
+
irq = pci_irq_vector(pdev, i);
if (irq < 0) {
err = irq;
goto free_irq;
}
- err = request_irq(irq, mana_gd_intr, 0, "mana_intr", gic);
+ err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
if (err)
goto free_irq;
}
diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c
index a3a5ad5..b7e7bd7 100644
--- a/drivers/net/ethernet/mscc/ocelot_flower.c
+++ b/drivers/net/ethernet/mscc/ocelot_flower.c
@@ -473,6 +473,18 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
flow_rule_match_control(rule, &match);
}
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
+ struct flow_match_vlan match;
+
+ flow_rule_match_vlan(rule, &match);
+ filter->key_type = OCELOT_VCAP_KEY_ANY;
+ filter->vlan.vid.value = match.key->vlan_id;
+ filter->vlan.vid.mask = match.mask->vlan_id;
+ filter->vlan.pcp.value[0] = match.key->vlan_priority;
+ filter->vlan.pcp.mask[0] = match.mask->vlan_priority;
+ match_protocol = false;
+ }
+
if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
struct flow_match_eth_addrs match;
@@ -605,18 +617,6 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress,
match_protocol = false;
}
- if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) {
- struct flow_match_vlan match;
-
- flow_rule_match_vlan(rule, &match);
- filter->key_type = OCELOT_VCAP_KEY_ANY;
- filter->vlan.vid.value = match.key->vlan_id;
- filter->vlan.vid.mask = match.mask->vlan_id;
- filter->vlan.pcp.value[0] = match.key->vlan_priority;
- filter->vlan.pcp.mask[0] = match.mask->vlan_priority;
- match_protocol = false;
- }
-
finished_key_parsing:
if (match_protocol && proto != ETH_P_ALL) {
if (filter->block_id == VCAP_ES0) {
diff --git a/drivers/net/ethernet/natsemi/sonic.c b/drivers/net/ethernet/natsemi/sonic.c
index d17d1b4..825356e 100644
--- a/drivers/net/ethernet/natsemi/sonic.c
+++ b/drivers/net/ethernet/natsemi/sonic.c
@@ -292,7 +292,7 @@ static int sonic_send_packet(struct sk_buff *skb, struct net_device *dev)
*/
laddr = dma_map_single(lp->device, skb->data, length, DMA_TO_DEVICE);
- if (!laddr) {
+ if (dma_mapping_error(lp->device, laddr)) {
pr_err_ratelimited("%s: failed to map tx DMA buffer.\n", dev->name);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
@@ -509,7 +509,7 @@ static bool sonic_alloc_rb(struct net_device *dev, struct sonic_local *lp,
*new_addr = dma_map_single(lp->device, skb_put(*new_skb, SONIC_RBSIZE),
SONIC_RBSIZE, DMA_FROM_DEVICE);
- if (!*new_addr) {
+ if (dma_mapping_error(lp->device, *new_addr)) {
dev_kfree_skb(*new_skb);
*new_skb = NULL;
return false;
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index 886c997..6fbd2a5 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -268,6 +268,7 @@ static int ionic_qcq_enable(struct ionic_qcq *qcq)
.oper = IONIC_Q_ENABLE,
},
};
+ int ret;
idev = &lif->ionic->idev;
dev = lif->ionic->dev;
@@ -275,16 +276,24 @@ static int ionic_qcq_enable(struct ionic_qcq *qcq)
dev_dbg(dev, "q_enable.index %d q_enable.qtype %d\n",
ctx.cmd.q_control.index, ctx.cmd.q_control.type);
+ if (qcq->flags & IONIC_QCQ_F_INTR)
+ ionic_intr_clean(idev->intr_ctrl, qcq->intr.index);
+
+ ret = ionic_adminq_post_wait(lif, &ctx);
+ if (ret)
+ return ret;
+
+ if (qcq->napi.poll)
+ napi_enable(&qcq->napi);
+
if (qcq->flags & IONIC_QCQ_F_INTR) {
irq_set_affinity_hint(qcq->intr.vector,
&qcq->intr.affinity_mask);
- napi_enable(&qcq->napi);
- ionic_intr_clean(idev->intr_ctrl, qcq->intr.index);
ionic_intr_mask(idev->intr_ctrl, qcq->intr.index,
IONIC_INTR_MASK_CLEAR);
}
- return ionic_adminq_post_wait(lif, &ctx);
+ return 0;
}
static int ionic_qcq_disable(struct ionic_qcq *qcq, bool send_to_hw)
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
index 37c3958..376f97b 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
@@ -353,16 +353,25 @@ void ionic_rx_fill(struct ionic_queue *q)
struct ionic_rxq_sg_desc *sg_desc;
struct ionic_rxq_sg_elem *sg_elem;
struct ionic_buf_info *buf_info;
+ unsigned int fill_threshold;
struct ionic_rxq_desc *desc;
unsigned int remain_len;
unsigned int frag_len;
unsigned int nfrags;
+ unsigned int n_fill;
unsigned int i, j;
unsigned int len;
+ n_fill = ionic_q_space_avail(q);
+
+ fill_threshold = min_t(unsigned int, IONIC_RX_FILL_THRESHOLD,
+ q->num_descs / IONIC_RX_FILL_DIV);
+ if (n_fill < fill_threshold)
+ return;
+
len = netdev->mtu + ETH_HLEN + VLAN_HLEN;
- for (i = ionic_q_space_avail(q); i; i--) {
+ for (i = n_fill; i; i--) {
nfrags = 0;
remain_len = len;
desc_info = &q->info[q->head_idx];
@@ -518,7 +527,6 @@ int ionic_rx_napi(struct napi_struct *napi, int budget)
struct ionic_cq *cq = napi_to_cq(napi);
struct ionic_dev *idev;
struct ionic_lif *lif;
- u16 rx_fill_threshold;
u32 work_done = 0;
u32 flags = 0;
@@ -528,10 +536,7 @@ int ionic_rx_napi(struct napi_struct *napi, int budget)
work_done = ionic_cq_service(cq, budget,
ionic_rx_service, NULL, NULL);
- rx_fill_threshold = min_t(u16, IONIC_RX_FILL_THRESHOLD,
- cq->num_descs / IONIC_RX_FILL_DIV);
- if (work_done && ionic_q_space_avail(cq->bound_q) >= rx_fill_threshold)
- ionic_rx_fill(cq->bound_q);
+ ionic_rx_fill(cq->bound_q);
if (work_done < budget && napi_complete_done(napi, work_done)) {
ionic_dim_update(qcq, IONIC_LIF_F_RX_DIM_INTR);
@@ -559,7 +564,6 @@ int ionic_txrx_napi(struct napi_struct *napi, int budget)
struct ionic_dev *idev;
struct ionic_lif *lif;
struct ionic_cq *txcq;
- u16 rx_fill_threshold;
u32 rx_work_done = 0;
u32 tx_work_done = 0;
u32 flags = 0;
@@ -574,10 +578,7 @@ int ionic_txrx_napi(struct napi_struct *napi, int budget)
rx_work_done = ionic_cq_service(rxcq, budget,
ionic_rx_service, NULL, NULL);
- rx_fill_threshold = min_t(u16, IONIC_RX_FILL_THRESHOLD,
- rxcq->num_descs / IONIC_RX_FILL_DIV);
- if (rx_work_done && ionic_q_space_avail(rxcq->bound_q) >= rx_fill_threshold)
- ionic_rx_fill(rxcq->bound_q);
+ ionic_rx_fill(rxcq->bound_q);
if (rx_work_done < budget && napi_complete_done(napi, rx_work_done)) {
ionic_dim_update(qcq, 0);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
index 0410c36..ba44572 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
@@ -5022,6 +5022,11 @@ static int qed_init_wfq_param(struct qed_hwfn *p_hwfn,
num_vports = p_hwfn->qm_info.num_vports;
+ if (num_vports < 2) {
+ DP_NOTICE(p_hwfn, "Unexpected num_vports: %d\n", num_vports);
+ return -EINVAL;
+ }
+
/* Accounting for the vports which are configured for WFQ explicitly */
for (i = 0; i < num_vports; i++) {
u32 tmp_speed;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c b/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
index 6190adf..f55eed0 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_mng_tlv.c
@@ -422,7 +422,7 @@ qed_mfw_get_tlv_time_value(struct qed_mfw_tlv_time *p_time,
if (p_time->hour > 23)
p_time->hour = 0;
if (p_time->min > 59)
- p_time->hour = 0;
+ p_time->min = 0;
if (p_time->msec > 999)
p_time->msec = 0;
if (p_time->usec > 999)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
index 3eb0537..bf0ba38 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_sriov.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
@@ -4378,6 +4378,9 @@ qed_iov_configure_min_tx_rate(struct qed_dev *cdev, int vfid, u32 rate)
}
vf = qed_iov_get_vf_info(QED_LEADING_HWFN(cdev), (u16)vfid, true);
+ if (!vf)
+ return -EINVAL;
+
vport_id = vf->vport_id;
return qed_configure_vport_wfq(cdev, vport_id, rate);
@@ -5124,7 +5127,7 @@ static void qed_iov_handle_trust_change(struct qed_hwfn *hwfn)
/* Validate that the VF has a configured vport */
vf = qed_iov_get_vf_info(hwfn, i, true);
- if (!vf->vport_instance)
+ if (!vf || !vf->vport_instance)
continue;
memset(&params, 0, sizeof(params));
diff --git a/drivers/net/ethernet/qlogic/qede/qede_fp.c b/drivers/net/ethernet/qlogic/qede/qede_fp.c
index 17f8952..d67d4e7 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_fp.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_fp.c
@@ -1439,6 +1439,10 @@ int qede_poll(struct napi_struct *napi, int budget)
rx_work_done = (likely(fp->type & QEDE_FASTPATH_RX) &&
qede_has_rx_work(fp->rxq)) ?
qede_rx_int(fp, budget) : 0;
+
+ if (fp->xdp_xmit & QEDE_XDP_REDIRECT)
+ xdp_do_flush();
+
/* Handle case where we are called by netpoll with a budget of 0 */
if (rx_work_done < budget || !budget) {
if (!qede_poll_is_more_work(fp)) {
@@ -1458,9 +1462,6 @@ int qede_poll(struct napi_struct *napi, int budget)
qede_update_tx_producer(fp->xdp_tx);
}
- if (fp->xdp_xmit & QEDE_XDP_REDIRECT)
- xdp_do_flush_map();
-
return rx_work_done;
}
diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
index ee4c3bd..2d3f0ae 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
@@ -899,7 +899,6 @@ static int qede_alloc_fp_array(struct qede_dev *edev)
{
u8 fp_combined, fp_rx = edev->fp_num_rx;
struct qede_fastpath *fp;
- void *mem;
int i;
edev->fp_array = kcalloc(QEDE_QUEUE_CNT(edev),
@@ -909,14 +908,15 @@ static int qede_alloc_fp_array(struct qede_dev *edev)
goto err;
}
- mem = krealloc(edev->coal_entry, QEDE_QUEUE_CNT(edev) *
- sizeof(*edev->coal_entry), GFP_KERNEL);
- if (!mem) {
- DP_ERR(edev, "coalesce entry allocation failed\n");
- kfree(edev->coal_entry);
- goto err;
+ if (!edev->coal_entry) {
+ edev->coal_entry = kcalloc(QEDE_MAX_RSS_CNT(edev),
+ sizeof(*edev->coal_entry),
+ GFP_KERNEL);
+ if (!edev->coal_entry) {
+ DP_ERR(edev, "coalesce entry allocation failed\n");
+ goto err;
+ }
}
- edev->coal_entry = mem;
fp_combined = QEDE_QUEUE_CNT(edev) - fp_rx - edev->fp_num_tx;
diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c
index 9015a38..bb7f328 100644
--- a/drivers/net/ethernet/qualcomm/emac/emac.c
+++ b/drivers/net/ethernet/qualcomm/emac/emac.c
@@ -728,9 +728,15 @@ static int emac_remove(struct platform_device *pdev)
struct net_device *netdev = dev_get_drvdata(&pdev->dev);
struct emac_adapter *adpt = netdev_priv(netdev);
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+
unregister_netdev(netdev);
netif_napi_del(&adpt->rx_q.napi);
+ free_irq(adpt->irq.irq, &adpt->irq);
+ cancel_work_sync(&adpt->work_thread);
+
emac_clks_teardown(adpt);
put_device(&adpt->phydev->mdio.dev);
diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
index 2af4c76..264bb3e 100644
--- a/drivers/net/ethernet/realtek/r8169_main.c
+++ b/drivers/net/ethernet/realtek/r8169_main.c
@@ -2251,28 +2251,6 @@ static int rtl_set_mac_address(struct net_device *dev, void *p)
return 0;
}
-static void rtl_wol_enable_rx(struct rtl8169_private *tp)
-{
- if (tp->mac_version >= RTL_GIGA_MAC_VER_25)
- RTL_W32(tp, RxConfig, RTL_R32(tp, RxConfig) |
- AcceptBroadcast | AcceptMulticast | AcceptMyPhys);
-}
-
-static void rtl_prepare_power_down(struct rtl8169_private *tp)
-{
- if (tp->dash_type != RTL_DASH_NONE)
- return;
-
- if (tp->mac_version == RTL_GIGA_MAC_VER_32 ||
- tp->mac_version == RTL_GIGA_MAC_VER_33)
- rtl_ephy_write(tp, 0x19, 0xff64);
-
- if (device_may_wakeup(tp_to_dev(tp))) {
- phy_speed_down(tp->phydev, false);
- rtl_wol_enable_rx(tp);
- }
-}
-
static void rtl_init_rxcfg(struct rtl8169_private *tp)
{
switch (tp->mac_version) {
@@ -2492,6 +2470,28 @@ static void rtl_enable_rxdvgate(struct rtl8169_private *tp)
rtl_wait_txrx_fifo_empty(tp);
}
+static void rtl_wol_enable_rx(struct rtl8169_private *tp)
+{
+ if (tp->mac_version >= RTL_GIGA_MAC_VER_25)
+ RTL_W32(tp, RxConfig, RTL_R32(tp, RxConfig) |
+ AcceptBroadcast | AcceptMulticast | AcceptMyPhys);
+}
+
+static void rtl_prepare_power_down(struct rtl8169_private *tp)
+{
+ if (tp->dash_type != RTL_DASH_NONE)
+ return;
+
+ if (tp->mac_version == RTL_GIGA_MAC_VER_32 ||
+ tp->mac_version == RTL_GIGA_MAC_VER_33)
+ rtl_ephy_write(tp, 0x19, 0xff64);
+
+ if (device_may_wakeup(tp_to_dev(tp))) {
+ phy_speed_down(tp->phydev, false);
+ rtl_wol_enable_rx(tp);
+ }
+}
+
static void rtl_set_tx_config_registers(struct rtl8169_private *tp)
{
u32 val = TX_DMA_BURST << TxDMAShift |
diff --git a/drivers/net/ethernet/realtek/r8169_phy_config.c b/drivers/net/ethernet/realtek/r8169_phy_config.c
index 50f0f62..a84fd85 100644
--- a/drivers/net/ethernet/realtek/r8169_phy_config.c
+++ b/drivers/net/ethernet/realtek/r8169_phy_config.c
@@ -970,6 +970,9 @@ static void rtl8168h_2_hw_phy_config(struct rtl8169_private *tp,
/* disable phy pfm mode */
phy_modify_paged(phydev, 0x0a44, 0x11, BIT(7), 0);
+ /* disable 10m pll off */
+ phy_modify_paged(phydev, 0x0a43, 0x10, BIT(0), 0);
+
rtl8168g_disable_aldps(phydev);
rtl8168g_config_eee_phy(phydev);
}
diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
index 47c5377..a475f54 100644
--- a/drivers/net/ethernet/renesas/ravb.h
+++ b/drivers/net/ethernet/renesas/ravb.h
@@ -1000,8 +1000,8 @@ struct ravb_hw_info {
unsigned internal_delay:1; /* AVB-DMAC has internal delays */
unsigned tx_counters:1; /* E-MAC has TX counters */
unsigned multi_irqs:1; /* AVB-DMAC and E-MAC has multiple irqs */
- unsigned no_ptp_cfg_active:1; /* AVB-DMAC does not support gPTP active in config mode */
- unsigned ptp_cfg_active:1; /* AVB-DMAC has gPTP support active in config mode */
+ unsigned gptp:1; /* AVB-DMAC has gPTP support */
+ unsigned ccc_gac:1; /* AVB-DMAC has gPTP support active in config mode */
};
struct ravb_private {
diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index c89bcdd..12548ee 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -792,14 +792,14 @@ static void ravb_error_interrupt(struct net_device *ndev)
ravb_write(ndev, ~(EIS_QFS | EIS_RESERVED), EIS);
if (eis & EIS_QFS) {
ris2 = ravb_read(ndev, RIS2);
- ravb_write(ndev, ~(RIS2_QFF0 | RIS2_RFFF | RIS2_RESERVED),
+ ravb_write(ndev, ~(RIS2_QFF0 | RIS2_QFF1 | RIS2_RFFF | RIS2_RESERVED),
RIS2);
/* Receive Descriptor Empty int */
if (ris2 & RIS2_QFF0)
priv->stats[RAVB_BE].rx_over_errors++;
- /* Receive Descriptor Empty int */
+ /* Receive Descriptor Empty int */
if (ris2 & RIS2_QFF1)
priv->stats[RAVB_NC].rx_over_errors++;
@@ -1115,8 +1115,6 @@ static int ravb_phy_init(struct net_device *ndev)
phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
- /* Indicate that the MAC is responsible for managing PHY PM */
- phydev->mac_managed_pm = true;
phy_attached_info(phydev);
return 0;
@@ -1275,7 +1273,7 @@ static int ravb_set_ringparam(struct net_device *ndev,
if (netif_running(ndev)) {
netif_device_detach(ndev);
/* Stop PTP Clock driver */
- if (info->no_ptp_cfg_active)
+ if (info->gptp)
ravb_ptp_stop(ndev);
/* Wait for DMA stopping */
error = ravb_stop_dma(ndev);
@@ -1307,7 +1305,7 @@ static int ravb_set_ringparam(struct net_device *ndev,
ravb_emac_init(ndev);
/* Initialise PTP Clock driver */
- if (info->no_ptp_cfg_active)
+ if (info->gptp)
ravb_ptp_init(ndev, priv->pdev);
netif_device_attach(ndev);
@@ -1447,7 +1445,7 @@ static int ravb_open(struct net_device *ndev)
ravb_emac_init(ndev);
/* Initialise PTP Clock driver */
- if (info->no_ptp_cfg_active)
+ if (info->gptp)
ravb_ptp_init(ndev, priv->pdev);
netif_tx_start_all_queues(ndev);
@@ -1461,7 +1459,7 @@ static int ravb_open(struct net_device *ndev)
out_ptp_stop:
/* Stop PTP Clock driver */
- if (info->no_ptp_cfg_active)
+ if (info->gptp)
ravb_ptp_stop(ndev);
out_free_irq_nc_tx:
if (!info->multi_irqs)
@@ -1509,7 +1507,7 @@ static void ravb_tx_timeout_work(struct work_struct *work)
netif_tx_stop_all_queues(ndev);
/* Stop PTP Clock driver */
- if (info->no_ptp_cfg_active)
+ if (info->gptp)
ravb_ptp_stop(ndev);
/* Wait for DMA stopping */
@@ -1544,7 +1542,7 @@ static void ravb_tx_timeout_work(struct work_struct *work)
out:
/* Initialise PTP Clock driver */
- if (info->no_ptp_cfg_active)
+ if (info->gptp)
ravb_ptp_init(ndev, priv->pdev);
netif_tx_start_all_queues(ndev);
@@ -1753,7 +1751,7 @@ static int ravb_close(struct net_device *ndev)
ravb_write(ndev, 0, TIC);
/* Stop PTP Clock driver */
- if (info->no_ptp_cfg_active)
+ if (info->gptp)
ravb_ptp_stop(ndev);
/* Set the config mode to stop the AVB-DMAC's processes */
@@ -1961,6 +1959,8 @@ static int ravb_mdio_init(struct ravb_private *priv)
{
struct platform_device *pdev = priv->pdev;
struct device *dev = &pdev->dev;
+ struct phy_device *phydev;
+ struct device_node *pn;
int error;
/* Bitbang init */
@@ -1982,6 +1982,14 @@ static int ravb_mdio_init(struct ravb_private *priv)
if (error)
goto out_free_bus;
+ pn = of_parse_phandle(dev->of_node, "phy-handle", 0);
+ phydev = of_phy_find_device(pn);
+ if (phydev) {
+ phydev->mac_managed_pm = true;
+ put_device(&phydev->mdio.dev);
+ }
+ of_node_put(pn);
+
return 0;
out_free_bus:
@@ -2019,7 +2027,7 @@ static const struct ravb_hw_info ravb_gen3_hw_info = {
.internal_delay = 1,
.tx_counters = 1,
.multi_irqs = 1,
- .ptp_cfg_active = 1,
+ .ccc_gac = 1,
};
static const struct ravb_hw_info ravb_gen2_hw_info = {
@@ -2038,7 +2046,7 @@ static const struct ravb_hw_info ravb_gen2_hw_info = {
.stats_len = ARRAY_SIZE(ravb_gstrings_stats),
.max_rx_len = RX_BUF_SZ + RAVB_ALIGN - 1,
.aligned_tx = 1,
- .no_ptp_cfg_active = 1,
+ .gptp = 1,
};
static const struct of_device_id ravb_match_table[] = {
@@ -2080,7 +2088,7 @@ static void ravb_set_config_mode(struct net_device *ndev)
struct ravb_private *priv = netdev_priv(ndev);
const struct ravb_hw_info *info = priv->info;
- if (info->no_ptp_cfg_active) {
+ if (info->gptp) {
ravb_modify(ndev, CCC, CCC_OPC, CCC_OPC_CONFIG);
/* Set CSEL value */
ravb_modify(ndev, CCC, CCC_CSEL, CCC_CSEL_HPB);
@@ -2301,7 +2309,7 @@ static int ravb_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&priv->ts_skb_list);
/* Initialise PTP Clock driver */
- if (info->ptp_cfg_active)
+ if (info->ccc_gac)
ravb_ptp_init(ndev, pdev);
/* Debug message level */
@@ -2349,7 +2357,7 @@ static int ravb_probe(struct platform_device *pdev)
priv->desc_bat_dma);
/* Stop PTP Clock driver */
- if (info->ptp_cfg_active)
+ if (info->ccc_gac)
ravb_ptp_stop(ndev);
out_disable_refclk:
clk_disable_unprepare(priv->refclk);
@@ -2369,7 +2377,7 @@ static int ravb_remove(struct platform_device *pdev)
const struct ravb_hw_info *info = priv->info;
/* Stop PTP Clock driver */
- if (info->ptp_cfg_active)
+ if (info->ccc_gac)
ravb_ptp_stop(ndev);
clk_disable_unprepare(priv->refclk);
@@ -2446,6 +2454,9 @@ static int __maybe_unused ravb_suspend(struct device *dev)
else
ret = ravb_close(ndev);
+ if (priv->info->ccc_gac)
+ ravb_ptp_stop(ndev);
+
return ret;
}
@@ -2482,6 +2493,9 @@ static int __maybe_unused ravb_resume(struct device *dev)
/* Restore descriptor base address table */
ravb_write(ndev, priv->desc_bat_dma, DBAT);
+ if (priv->info->ccc_gac)
+ ravb_ptp_init(ndev, priv->pdev);
+
if (netif_running(ndev)) {
if (priv->wol_enabled) {
ret = ravb_wol_restore(ndev);
diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
index 4e190f5e..b6e426d 100644
--- a/drivers/net/ethernet/renesas/sh_eth.c
+++ b/drivers/net/ethernet/renesas/sh_eth.c
@@ -2033,8 +2033,6 @@ static int sh_eth_phy_init(struct net_device *ndev)
}
}
- /* Indicate that the MAC is responsible for managing PHY PM */
- phydev->mac_managed_pm = true;
phy_attached_info(phydev);
return 0;
@@ -3074,6 +3072,8 @@ static int sh_mdio_init(struct sh_eth_private *mdp,
struct bb_info *bitbang;
struct platform_device *pdev = mdp->pdev;
struct device *dev = &mdp->pdev->dev;
+ struct phy_device *phydev;
+ struct device_node *pn;
/* create bit control struct for PHY */
bitbang = devm_kzalloc(dev, sizeof(struct bb_info), GFP_KERNEL);
@@ -3108,6 +3108,14 @@ static int sh_mdio_init(struct sh_eth_private *mdp,
if (ret)
goto out_free_bus;
+ pn = of_parse_phandle(dev->of_node, "phy-handle", 0);
+ phydev = of_phy_find_device(pn);
+ if (phydev) {
+ phydev->mac_managed_pm = true;
+ put_device(&phydev->mdio.dev);
+ }
+ of_node_put(pn);
+
return 0;
out_free_bus:
diff --git a/drivers/net/ethernet/sfc/ef10.c b/drivers/net/ethernet/sfc/ef10.c
index c316a9e..302b97c 100644
--- a/drivers/net/ethernet/sfc/ef10.c
+++ b/drivers/net/ethernet/sfc/ef10.c
@@ -1304,7 +1304,8 @@ static void efx_ef10_fini_nic(struct efx_nic *efx)
static int efx_ef10_init_nic(struct efx_nic *efx)
{
struct efx_ef10_nic_data *nic_data = efx->nic_data;
- netdev_features_t hw_enc_features = 0;
+ struct net_device *net_dev = efx->net_dev;
+ netdev_features_t tun_feats, tso_feats;
int rc;
if (nic_data->must_check_datapath_caps) {
@@ -1349,20 +1350,30 @@ static int efx_ef10_init_nic(struct efx_nic *efx)
nic_data->must_restore_piobufs = false;
}
- /* add encapsulated checksum offload features */
+ /* encap features might change during reset if fw variant changed */
if (efx_has_cap(efx, VXLAN_NVGRE) && !efx_ef10_is_vf(efx))
- hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
- /* add encapsulated TSO features */
+ net_dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
+ else
+ net_dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
+
+ tun_feats = NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM;
+ tso_feats = NETIF_F_TSO | NETIF_F_TSO6;
+
if (efx_has_cap(efx, TX_TSO_V2_ENCAP)) {
- netdev_features_t encap_tso_features;
-
- encap_tso_features = NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
- NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM;
-
- hw_enc_features |= encap_tso_features | NETIF_F_TSO;
- efx->net_dev->features |= encap_tso_features;
+ /* If this is first nic_init, or if it is a reset and a new fw
+ * variant has added new features, enable them by default.
+ * If the features are not new, maintain their current value.
+ */
+ if (!(net_dev->hw_features & tun_feats))
+ net_dev->features |= tun_feats;
+ net_dev->hw_enc_features |= tun_feats | tso_feats;
+ net_dev->hw_features |= tun_feats;
+ } else {
+ net_dev->hw_enc_features &= ~(tun_feats | tso_feats);
+ net_dev->hw_features &= ~tun_feats;
+ net_dev->features &= ~tun_feats;
}
- efx->net_dev->hw_enc_features = hw_enc_features;
/* don't fail init if RSS setup doesn't work */
rc = efx->type->rx_push_rss_config(efx, false,
@@ -3993,7 +4004,10 @@ static unsigned int ef10_check_caps(const struct efx_nic *efx,
NETIF_F_HW_VLAN_CTAG_FILTER | \
NETIF_F_IPV6_CSUM | \
NETIF_F_RXHASH | \
- NETIF_F_NTUPLE)
+ NETIF_F_NTUPLE | \
+ NETIF_F_SG | \
+ NETIF_F_RXCSUM | \
+ NETIF_F_RXALL)
const struct efx_nic_type efx_hunt_a0_vf_nic_type = {
.is_vf = true,
diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
index 43ef4f5..16a89636 100644
--- a/drivers/net/ethernet/sfc/efx.c
+++ b/drivers/net/ethernet/sfc/efx.c
@@ -1003,18 +1003,18 @@ static int efx_pci_probe_post_io(struct efx_nic *efx)
}
/* Determine netdevice features */
- net_dev->features |= (efx->type->offload_features | NETIF_F_SG |
- NETIF_F_TSO | NETIF_F_RXCSUM | NETIF_F_RXALL);
- if (efx->type->offload_features & (NETIF_F_IPV6_CSUM | NETIF_F_HW_CSUM))
- net_dev->features |= NETIF_F_TSO6;
- /* Check whether device supports TSO */
- if (!efx->type->tso_versions || !efx->type->tso_versions(efx))
- net_dev->features &= ~NETIF_F_ALL_TSO;
+ net_dev->features |= efx->type->offload_features;
+
+ /* Add TSO features */
+ if (efx->type->tso_versions && efx->type->tso_versions(efx))
+ net_dev->features |= NETIF_F_TSO | NETIF_F_TSO6;
+
/* Mask for features that also apply to VLAN devices */
net_dev->vlan_features |= (NETIF_F_HW_CSUM | NETIF_F_SG |
NETIF_F_HIGHDMA | NETIF_F_ALL_TSO |
NETIF_F_RXCSUM);
+ /* Determine user configurable features */
net_dev->hw_features |= net_dev->features & ~efx->fixed_features;
/* Disable receiving frames with bad FCS, by default. */
diff --git a/drivers/net/ethernet/smsc/smsc911x.c b/drivers/net/ethernet/smsc/smsc911x.c
index 63b99dd..b330dcb 100644
--- a/drivers/net/ethernet/smsc/smsc911x.c
+++ b/drivers/net/ethernet/smsc/smsc911x.c
@@ -1037,8 +1037,6 @@ static int smsc911x_mii_probe(struct net_device *dev)
return ret;
}
- /* Indicate that the MAC is responsible for managing PHY PM */
- phydev->mac_managed_pm = true;
phy_attached_info(phydev);
phy_set_max_speed(phydev, SPEED_100);
@@ -1066,6 +1064,7 @@ static int smsc911x_mii_init(struct platform_device *pdev,
struct net_device *dev)
{
struct smsc911x_data *pdata = netdev_priv(dev);
+ struct phy_device *phydev;
int err = -ENXIO;
pdata->mii_bus = mdiobus_alloc();
@@ -1108,6 +1107,10 @@ static int smsc911x_mii_init(struct platform_device *pdev,
goto err_out_free_bus_2;
}
+ phydev = phy_find_first(pdata->mii_bus);
+ if (phydev)
+ phydev->mac_managed_pm = true;
+
return 0;
err_out_free_bus_2:
diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
index b6d945e..c113ec5 100644
--- a/drivers/net/ethernet/stmicro/stmmac/common.h
+++ b/drivers/net/ethernet/stmicro/stmmac/common.h
@@ -530,7 +530,6 @@ struct mac_device_info {
unsigned int xlgmac;
unsigned int num_vlan;
u32 vlan_filter[32];
- unsigned int promisc;
bool vlan_fail_q_en;
u8 vlan_fail_q;
};
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
index 6b1d9e8..d0c7f22 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
@@ -505,6 +505,8 @@ static int qcom_ethqos_probe(struct platform_device *pdev)
plat_dat->has_gmac4 = 1;
plat_dat->pmt = 1;
plat_dat->tso_en = of_property_read_bool(np, "snps,tso");
+ if (of_device_is_compatible(np, "qcom,qcs404-ethqos"))
+ plat_dat->rx_clk_runs_in_lpi = 1;
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (ret)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
index 60638bf..cd85a2d 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
@@ -478,12 +478,6 @@ static int dwmac4_add_hw_vlan_rx_fltr(struct net_device *dev,
if (vid > 4095)
return -EINVAL;
- if (hw->promisc) {
- netdev_err(dev,
- "Adding VLAN in promisc mode not supported\n");
- return -EPERM;
- }
-
/* Single Rx VLAN Filter */
if (hw->num_vlan == 1) {
/* For single VLAN filter, VID 0 means VLAN promiscuous */
@@ -533,12 +527,6 @@ static int dwmac4_del_hw_vlan_rx_fltr(struct net_device *dev,
{
int i, ret = 0;
- if (hw->promisc) {
- netdev_err(dev,
- "Deleting VLAN in promisc mode not supported\n");
- return -EPERM;
- }
-
/* Single Rx VLAN Filter */
if (hw->num_vlan == 1) {
if ((hw->vlan_filter[0] & GMAC_VLAN_TAG_VID) == vid) {
@@ -563,39 +551,6 @@ static int dwmac4_del_hw_vlan_rx_fltr(struct net_device *dev,
return ret;
}
-static void dwmac4_vlan_promisc_enable(struct net_device *dev,
- struct mac_device_info *hw)
-{
- void __iomem *ioaddr = hw->pcsr;
- u32 value;
- u32 hash;
- u32 val;
- int i;
-
- /* Single Rx VLAN Filter */
- if (hw->num_vlan == 1) {
- dwmac4_write_single_vlan(dev, 0);
- return;
- }
-
- /* Extended Rx VLAN Filter Enable */
- for (i = 0; i < hw->num_vlan; i++) {
- if (hw->vlan_filter[i] & GMAC_VLAN_TAG_DATA_VEN) {
- val = hw->vlan_filter[i] & ~GMAC_VLAN_TAG_DATA_VEN;
- dwmac4_write_vlan_filter(dev, hw, i, val);
- }
- }
-
- hash = readl(ioaddr + GMAC_VLAN_HASH_TABLE);
- if (hash & GMAC_VLAN_VLHT) {
- value = readl(ioaddr + GMAC_VLAN_TAG);
- if (value & GMAC_VLAN_VTHM) {
- value &= ~GMAC_VLAN_VTHM;
- writel(value, ioaddr + GMAC_VLAN_TAG);
- }
- }
-}
-
static void dwmac4_restore_hw_vlan_rx_fltr(struct net_device *dev,
struct mac_device_info *hw)
{
@@ -715,22 +670,12 @@ static void dwmac4_set_filter(struct mac_device_info *hw,
}
/* VLAN filtering */
- if (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
+ if (dev->flags & IFF_PROMISC && !hw->vlan_fail_q_en)
+ value &= ~GMAC_PACKET_FILTER_VTFE;
+ else if (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)
value |= GMAC_PACKET_FILTER_VTFE;
writel(value, ioaddr + GMAC_PACKET_FILTER);
-
- if (dev->flags & IFF_PROMISC && !hw->vlan_fail_q_en) {
- if (!hw->promisc) {
- hw->promisc = 1;
- dwmac4_vlan_promisc_enable(dev, hw);
- }
- } else {
- if (hw->promisc) {
- hw->promisc = 0;
- dwmac4_restore_hw_vlan_rx_fltr(dev, hw);
- }
- }
}
static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
index 9c2d40f..e95d35f 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
@@ -186,11 +186,25 @@ static void dwmac5_handle_dma_err(struct net_device *ndev,
int dwmac5_safety_feat_config(void __iomem *ioaddr, unsigned int asp,
struct stmmac_safety_feature_cfg *safety_feat_cfg)
{
+ struct stmmac_safety_feature_cfg all_safety_feats = {
+ .tsoee = 1,
+ .mrxpee = 1,
+ .mestee = 1,
+ .mrxee = 1,
+ .mtxee = 1,
+ .epsi = 1,
+ .edpp = 1,
+ .prtyen = 1,
+ .tmouten = 1,
+ };
u32 value;
if (!asp)
return -EINVAL;
+ if (!safety_feat_cfg)
+ safety_feat_cfg = &all_safety_feats;
+
/* 1. Enable Safety Features */
value = readl(ioaddr + MTL_ECC_CONTROL);
value |= MEEAO; /* MTL ECC Error Addr Status Override */
@@ -527,9 +541,9 @@ int dwmac5_flex_pps_config(void __iomem *ioaddr, int index,
return 0;
}
- val |= PPSCMDx(index, 0x2);
val |= TRGTMODSELx(index, 0x2);
val |= PPSEN0;
+ writel(val, ioaddr + MAC_PPS_CONTROL);
writel(cfg->start.tv_sec, ioaddr + MAC_PPSx_TARGET_TIME_SEC(index));
@@ -554,6 +568,7 @@ int dwmac5_flex_pps_config(void __iomem *ioaddr, int index,
writel(period - 1, ioaddr + MAC_PPSx_WIDTH(index));
/* Finally, activate it */
+ val |= PPSCMDx(index, 0x2);
writel(val, ioaddr + MAC_PPS_CONTROL);
return 0;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
index dc31501f..9e8ae43 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
@@ -548,16 +548,16 @@ static void stmmac_get_per_qstats(struct stmmac_priv *priv, u64 *data)
p = (char *)priv + offsetof(struct stmmac_priv,
xstats.txq_stats[q].tx_pkt_n);
for (stat = 0; stat < STMMAC_TXQ_STATS; stat++) {
- *data++ = (*(u64 *)p);
- p += sizeof(u64 *);
+ *data++ = (*(unsigned long *)p);
+ p += sizeof(unsigned long);
}
}
for (q = 0; q < rx_cnt; q++) {
p = (char *)priv + offsetof(struct stmmac_priv,
xstats.rxq_stats[q].rx_pkt_n);
for (stat = 0; stat < STMMAC_RXQ_STATS; stat++) {
- *data++ = (*(u64 *)p);
- p += sizeof(u64 *);
+ *data++ = (*(unsigned long *)p);
+ p += sizeof(unsigned long);
}
}
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 15b0daf..a3bd539 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -1174,7 +1174,8 @@ static void stmmac_mac_link_up(struct phylink_config *config,
stmmac_mac_set(priv, priv->ioaddr, true);
if (phy && priv->dma_cap.eee) {
- priv->eee_active = phy_init_eee(phy, 1) >= 0;
+ priv->eee_active =
+ phy_init_eee(phy, !priv->plat->rx_clk_runs_in_lpi) >= 0;
priv->eee_enabled = stmmac_eee_init(priv);
priv->tx_lpi_enabled = priv->eee_enabled;
stmmac_set_eee_pls(priv, priv->hw, true);
@@ -1242,6 +1243,11 @@ static int stmmac_init_phy(struct net_device *dev)
int addr = priv->plat->phy_addr;
struct phy_device *phydev;
+ if (addr < 0) {
+ netdev_err(priv->dev, "no phy found\n");
+ return -ENODEV;
+ }
+
phydev = mdiobus_get_phy(priv->mii, addr);
if (!phydev) {
netdev_err(priv->dev, "no phy at addr %d\n", addr);
@@ -1256,6 +1262,7 @@ static int stmmac_init_phy(struct net_device *dev)
phylink_ethtool_get_wol(priv->phylink, &wol);
device_set_wakeup_capable(priv->device, !!wol.supported);
+ device_set_wakeup_enable(priv->device, !!wol.wolopts);
}
return ret;
@@ -6886,7 +6893,7 @@ static void stmmac_napi_del(struct net_device *dev)
int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt)
{
struct stmmac_priv *priv = netdev_priv(dev);
- int ret = 0;
+ int ret = 0, i;
if (netif_running(dev))
stmmac_release(dev);
@@ -6895,6 +6902,10 @@ int stmmac_reinit_queues(struct net_device *dev, u32 rx_cnt, u32 tx_cnt)
priv->plat->rx_queues_to_use = rx_cnt;
priv->plat->tx_queues_to_use = tx_cnt;
+ if (!netif_is_rxfh_configured(dev))
+ for (i = 0; i < ARRAY_SIZE(priv->rss.table); i++)
+ priv->rss.table[i] = ethtool_rxfh_indir_default(i,
+ rx_cnt);
stmmac_napi_add(dev);
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
index 5c234a8..e12df9d 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
@@ -558,7 +558,7 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
dma_cfg->mixed_burst = of_property_read_bool(np, "snps,mixed-burst");
plat->force_thresh_dma_mode = of_property_read_bool(np, "snps,force_thresh_dma_mode");
- if (plat->force_thresh_dma_mode) {
+ if (plat->force_thresh_dma_mode && plat->force_sf_dma_mode) {
plat->force_sf_dma_mode = 0;
dev_warn(&pdev->dev,
"force_sf_dma_mode is ignored if force_thresh_dma_mode is set.\n");
diff --git a/drivers/net/ethernet/sun/ldmvsw.c b/drivers/net/ethernet/sun/ldmvsw.c
index 50bd4e3..cde65f7 100644
--- a/drivers/net/ethernet/sun/ldmvsw.c
+++ b/drivers/net/ethernet/sun/ldmvsw.c
@@ -290,6 +290,9 @@ static int vsw_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
hp = mdesc_grab();
+ if (!hp)
+ return -ENODEV;
+
rmac = mdesc_get_property(hp, vdev->mp, remote_macaddr_prop, &len);
err = -ENODEV;
if (!rmac) {
diff --git a/drivers/net/ethernet/sun/sunvnet.c b/drivers/net/ethernet/sun/sunvnet.c
index 58ee892..dcdfc1f 100644
--- a/drivers/net/ethernet/sun/sunvnet.c
+++ b/drivers/net/ethernet/sun/sunvnet.c
@@ -431,6 +431,9 @@ static int vnet_port_probe(struct vio_dev *vdev, const struct vio_device_id *id)
hp = mdesc_grab();
+ if (!hp)
+ return -ENODEV;
+
vp = vnet_find_parent(hp, vdev->mp, vdev);
if (IS_ERR(vp)) {
pr_err("Cannot find port parent vnet\n");
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
index 2298b3c..692c291 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
@@ -564,7 +564,15 @@ static int am65_cpsw_nuss_common_stop(struct am65_cpsw_common *common)
k3_udma_glue_disable_tx_chn(common->tx_chns[i].tx_chn);
}
+ reinit_completion(&common->tdown_complete);
k3_udma_glue_tdown_rx_chn(common->rx_chns.rx_chn, true);
+
+ if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ) {
+ i = wait_for_completion_timeout(&common->tdown_complete, msecs_to_jiffies(1000));
+ if (!i)
+ dev_err(common->dev, "rx teardown timeout\n");
+ }
+
napi_disable(&common->napi_rx);
for (i = 0; i < AM65_CPSW_MAX_RX_FLOWS; i++)
@@ -786,6 +794,8 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_common *common,
if (cppi5_desc_is_tdcm(desc_dma)) {
dev_dbg(dev, "%s RX tdown flow: %u\n", __func__, flow_idx);
+ if (common->pdata.quirks & AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ)
+ complete(&common->tdown_complete);
return 0;
}
@@ -2609,7 +2619,7 @@ static const struct am65_cpsw_pdata j721e_pdata = {
};
static const struct am65_cpsw_pdata am64x_cpswxg_pdata = {
- .quirks = 0,
+ .quirks = AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ,
.ale_dev_id = "am64-cpswxg",
.fdqring_mode = K3_RINGACC_RING_MODE_RING,
};
@@ -2774,7 +2784,8 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
return 0;
err_of_clear:
- of_platform_device_destroy(common->mdio_dev, NULL);
+ if (common->mdio_dev)
+ of_platform_device_destroy(common->mdio_dev, NULL);
err_pm_clear:
pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
@@ -2803,7 +2814,8 @@ static int am65_cpsw_nuss_remove(struct platform_device *pdev)
*/
am65_cpsw_nuss_cleanup_ndev(common);
- of_platform_device_destroy(common->mdio_dev, NULL);
+ if (common->mdio_dev)
+ of_platform_device_destroy(common->mdio_dev, NULL);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.h b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
index 048ed10..74569c8 100644
--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.h
+++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.h
@@ -84,6 +84,7 @@ struct am65_cpsw_rx_chn {
};
#define AM65_CPSW_QUIRK_I2027_NO_TX_CSUM BIT(0)
+#define AM64_CPSW_QUIRK_DMA_RX_TDOWN_IRQ BIT(1)
struct am65_cpsw_pdata {
u32 quirks;
diff --git a/drivers/net/ethernet/toshiba/ps3_gelic_net.c b/drivers/net/ethernet/toshiba/ps3_gelic_net.c
index 55e6526..78e484e 100644
--- a/drivers/net/ethernet/toshiba/ps3_gelic_net.c
+++ b/drivers/net/ethernet/toshiba/ps3_gelic_net.c
@@ -317,15 +317,17 @@ static int gelic_card_init_chain(struct gelic_card *card,
/* set up the hardware pointers in each descriptor */
for (i = 0; i < no; i++, descr++) {
- gelic_descr_set_status(descr, GELIC_DESCR_DMA_NOT_IN_USE);
- descr->bus_addr =
- dma_map_single(ctodev(card), descr,
- GELIC_DESCR_SIZE,
- DMA_BIDIRECTIONAL);
+ dma_addr_t cpu_addr;
- if (!descr->bus_addr)
+ gelic_descr_set_status(descr, GELIC_DESCR_DMA_NOT_IN_USE);
+
+ cpu_addr = dma_map_single(ctodev(card), descr,
+ GELIC_DESCR_SIZE, DMA_BIDIRECTIONAL);
+
+ if (dma_mapping_error(ctodev(card), cpu_addr))
goto iommu_error;
+ descr->bus_addr = cpu_to_be32(cpu_addr);
descr->next = descr + 1;
descr->prev = descr - 1;
}
@@ -365,26 +367,28 @@ static int gelic_card_init_chain(struct gelic_card *card,
*
* allocates a new rx skb, iommu-maps it and attaches it to the descriptor.
* Activate the descriptor state-wise
+ *
+ * Gelic RX sk_buffs must be aligned to GELIC_NET_RXBUF_ALIGN and the length
+ * must be a multiple of GELIC_NET_RXBUF_ALIGN.
*/
static int gelic_descr_prepare_rx(struct gelic_card *card,
struct gelic_descr *descr)
{
+ static const unsigned int rx_skb_size =
+ ALIGN(GELIC_NET_MAX_FRAME, GELIC_NET_RXBUF_ALIGN) +
+ GELIC_NET_RXBUF_ALIGN - 1;
+ dma_addr_t cpu_addr;
int offset;
- unsigned int bufsize;
if (gelic_descr_get_status(descr) != GELIC_DESCR_DMA_NOT_IN_USE)
dev_info(ctodev(card), "%s: ERROR status\n", __func__);
- /* we need to round up the buffer size to a multiple of 128 */
- bufsize = ALIGN(GELIC_NET_MAX_MTU, GELIC_NET_RXBUF_ALIGN);
- /* and we need to have it 128 byte aligned, therefore we allocate a
- * bit more */
- descr->skb = dev_alloc_skb(bufsize + GELIC_NET_RXBUF_ALIGN - 1);
+ descr->skb = netdev_alloc_skb(*card->netdev, rx_skb_size);
if (!descr->skb) {
descr->buf_addr = 0; /* tell DMAC don't touch memory */
return -ENOMEM;
}
- descr->buf_size = cpu_to_be32(bufsize);
+ descr->buf_size = cpu_to_be32(rx_skb_size);
descr->dmac_cmd_status = 0;
descr->result_size = 0;
descr->valid_size = 0;
@@ -395,11 +399,10 @@ static int gelic_descr_prepare_rx(struct gelic_card *card,
if (offset)
skb_reserve(descr->skb, GELIC_NET_RXBUF_ALIGN - offset);
/* io-mmu-map the skb */
- descr->buf_addr = cpu_to_be32(dma_map_single(ctodev(card),
- descr->skb->data,
- GELIC_NET_MAX_MTU,
- DMA_FROM_DEVICE));
- if (!descr->buf_addr) {
+ cpu_addr = dma_map_single(ctodev(card), descr->skb->data,
+ GELIC_NET_MAX_FRAME, DMA_FROM_DEVICE);
+ descr->buf_addr = cpu_to_be32(cpu_addr);
+ if (dma_mapping_error(ctodev(card), cpu_addr)) {
dev_kfree_skb_any(descr->skb);
descr->skb = NULL;
dev_info(ctodev(card),
@@ -779,7 +782,7 @@ static int gelic_descr_prepare_tx(struct gelic_card *card,
buf = dma_map_single(ctodev(card), skb->data, skb->len, DMA_TO_DEVICE);
- if (!buf) {
+ if (dma_mapping_error(ctodev(card), buf)) {
dev_err(ctodev(card),
"dma map 2 failed (%p, %i). Dropping packet\n",
skb->data, skb->len);
@@ -915,7 +918,7 @@ static void gelic_net_pass_skb_up(struct gelic_descr *descr,
data_error = be32_to_cpu(descr->data_error);
/* unmap skb buffer */
dma_unmap_single(ctodev(card), be32_to_cpu(descr->buf_addr),
- GELIC_NET_MAX_MTU,
+ GELIC_NET_MAX_FRAME,
DMA_FROM_DEVICE);
skb_put(skb, be32_to_cpu(descr->valid_size)?
diff --git a/drivers/net/ethernet/toshiba/ps3_gelic_net.h b/drivers/net/ethernet/toshiba/ps3_gelic_net.h
index 68f324e..0d98def 100644
--- a/drivers/net/ethernet/toshiba/ps3_gelic_net.h
+++ b/drivers/net/ethernet/toshiba/ps3_gelic_net.h
@@ -19,8 +19,9 @@
#define GELIC_NET_RX_DESCRIPTORS 128 /* num of descriptors */
#define GELIC_NET_TX_DESCRIPTORS 128 /* num of descriptors */
-#define GELIC_NET_MAX_MTU VLAN_ETH_FRAME_LEN
-#define GELIC_NET_MIN_MTU VLAN_ETH_ZLEN
+#define GELIC_NET_MAX_FRAME 2312
+#define GELIC_NET_MAX_MTU 2294
+#define GELIC_NET_MIN_MTU 64
#define GELIC_NET_RXBUF_ALIGN 128
#define GELIC_CARD_RX_CSUM_DEFAULT 1 /* hw chksum */
#define GELIC_NET_WATCHDOG_TIMEOUT 5*HZ
diff --git a/drivers/net/ethernet/xircom/xirc2ps_cs.c b/drivers/net/ethernet/xircom/xirc2ps_cs.c
index ae611e4..f8bbd14 100644
--- a/drivers/net/ethernet/xircom/xirc2ps_cs.c
+++ b/drivers/net/ethernet/xircom/xirc2ps_cs.c
@@ -503,6 +503,11 @@ static void
xirc2ps_detach(struct pcmcia_device *link)
{
struct net_device *dev = link->priv;
+ struct local_info *local = netdev_priv(dev);
+
+ netif_carrier_off(dev);
+ netif_tx_disable(dev);
+ cancel_work_sync(&local->tx_timeout_task);
dev_dbg(&link->dev, "detach\n");
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index fb2448f..4156299 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -814,6 +814,7 @@ static void netvsc_send_completion(struct net_device *ndev,
u32 msglen = hv_pkt_datalen(desc);
struct nvsp_message *pkt_rqst;
u64 cmd_rqst;
+ u32 status;
/* First check if this is a VMBUS completion without data payload */
if (!msglen) {
@@ -885,6 +886,23 @@ static void netvsc_send_completion(struct net_device *ndev,
break;
case NVSP_MSG1_TYPE_SEND_RNDIS_PKT_COMPLETE:
+ if (msglen < sizeof(struct nvsp_message_header) +
+ sizeof(struct nvsp_1_message_send_rndis_packet_complete)) {
+ if (net_ratelimit())
+ netdev_err(ndev, "nvsp_rndis_pkt_complete length too small: %u\n",
+ msglen);
+ return;
+ }
+
+ /* If status indicates an error, output a message so we know
+ * there's a problem. But process the completion anyway so the
+ * resources are released.
+ */
+ status = nvsp_packet->msg.v1_msg.send_rndis_pkt_complete.status;
+ if (status != NVSP_STAT_SUCCESS && net_ratelimit())
+ netdev_err(ndev, "nvsp_rndis_pkt_complete error status: %x\n",
+ status);
+
netvsc_send_tx_complete(ndev, net_device, incoming_channel,
desc, budget);
break;
diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
index 0362917..5834d3e 100644
--- a/drivers/net/ieee802154/ca8210.c
+++ b/drivers/net/ieee802154/ca8210.c
@@ -1945,10 +1945,9 @@ static int ca8210_skb_tx(
struct ca8210_priv *priv
)
{
- int status;
struct ieee802154_hdr header = { };
struct secspec secspec;
- unsigned int mac_len;
+ int mac_len, status;
dev_dbg(&priv->spi->dev, "%s called\n", __func__);
@@ -1956,6 +1955,8 @@ static int ca8210_skb_tx(
* packet
*/
mac_len = ieee802154_hdr_peek_addrs(skb, &header);
+ if (mac_len < 0)
+ return mac_len;
secspec.security_level = header.sec.level;
secspec.key_id_mode = header.sec.key_id_mode;
diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
index 1544564b..d709e69 100644
--- a/drivers/net/ipa/gsi_trans.c
+++ b/drivers/net/ipa/gsi_trans.c
@@ -155,7 +155,7 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
* gsi_trans_pool_exit_dma() can assume the total allocated
* size is exactly (count * size).
*/
- total_size = get_order(total_size) << PAGE_SHIFT;
+ total_size = PAGE_SIZE << get_order(total_size);
virt = dma_alloc_coherent(dev, total_size, &addr, GFP_KERNEL);
if (!virt)
diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
index b35170a..0c9ff8c 100644
--- a/drivers/net/ipa/ipa_interrupt.c
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -122,6 +122,16 @@ static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
return IRQ_HANDLED;
}
+void ipa_interrupt_irq_disable(struct ipa *ipa)
+{
+ disable_irq(ipa->interrupt->irq);
+}
+
+void ipa_interrupt_irq_enable(struct ipa *ipa)
+{
+ enable_irq(ipa->interrupt->irq);
+}
+
/* Common function used to enable/disable TX_SUSPEND for an endpoint */
static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
u32 endpoint_id, bool enable)
diff --git a/drivers/net/ipa/ipa_interrupt.h b/drivers/net/ipa/ipa_interrupt.h
index 231390ce..16aa84e 100644
--- a/drivers/net/ipa/ipa_interrupt.h
+++ b/drivers/net/ipa/ipa_interrupt.h
@@ -86,6 +86,22 @@ void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt);
void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt);
/**
+ * ipa_interrupt_irq_enable() - Enable IPA interrupts
+ * @ipa: IPA pointer
+ *
+ * This enables the IPA interrupt line
+ */
+void ipa_interrupt_irq_enable(struct ipa *ipa);
+
+/**
+ * ipa_interrupt_irq_disable() - Disable IPA interrupts
+ * @ipa: IPA pointer
+ *
+ * This disables the IPA interrupt line
+ */
+void ipa_interrupt_irq_disable(struct ipa *ipa);
+
+/**
* ipa_interrupt_config() - Configure the IPA interrupt framework
* @ipa: IPA pointer
*
diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
index f2989aa..07fb367 100644
--- a/drivers/net/ipa/ipa_power.c
+++ b/drivers/net/ipa/ipa_power.c
@@ -277,6 +277,17 @@ static int ipa_suspend(struct device *dev)
__set_bit(IPA_POWER_FLAG_SYSTEM, ipa->power->flags);
+ /* Increment the disable depth to ensure that the IRQ won't
+ * be re-enabled until the matching _enable call in
+ * ipa_resume(). We do this to ensure that the interrupt
+ * handler won't run whilst PM runtime is disabled.
+ *
+ * Note that disabling the IRQ is NOT the same as disabling
+ * irq wake. If wakeup is enabled for the IPA then the IRQ
+ * will still cause the system to wake up, see irq_set_irq_wake().
+ */
+ ipa_interrupt_irq_disable(ipa);
+
return pm_runtime_force_suspend(dev);
}
@@ -289,6 +300,12 @@ static int ipa_resume(struct device *dev)
__clear_bit(IPA_POWER_FLAG_SYSTEM, ipa->power->flags);
+ /* Now that PM runtime is enabled again it's safe
+ * to turn the IRQ back on and process any data
+ * that was received during suspend.
+ */
+ ipa_interrupt_irq_enable(ipa);
+
return ret;
}
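
The suspend/resume hunks above bracket pm_runtime_force_suspend()/pm_runtime_force_resume() with an IRQ disable/enable pair so the handler cannot run while runtime PM is off. A generic sketch of that pattern with a hypothetical my_dev context (not the IPA driver's types):

#include <linux/interrupt.h>
#include <linux/pm_runtime.h>

struct my_dev {				/* hypothetical device context */
	struct device *dev;
	int irq;
};

static int my_suspend(struct my_dev *md)
{
	/* Bump the IRQ disable depth before runtime PM is forced off, so the
	 * handler cannot run against a suspended device; wake-up configured
	 * via irq_set_irq_wake() is unaffected by this.
	 */
	disable_irq(md->irq);
	return pm_runtime_force_suspend(md->dev);
}

static int my_resume(struct my_dev *md)
{
	int ret = pm_runtime_force_resume(md->dev);

	/* Runtime PM is usable again; let pending interrupts be handled. */
	enable_irq(md->irq);
	return ret;
}
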
diff --git a/drivers/net/ipvlan/ipvlan_l3s.c b/drivers/net/ipvlan/ipvlan_l3s.c
index 943d26c..71712ea 100644
--- a/drivers/net/ipvlan/ipvlan_l3s.c
+++ b/drivers/net/ipvlan/ipvlan_l3s.c
@@ -101,6 +101,7 @@ static unsigned int ipvlan_nf_input(void *priv, struct sk_buff *skb,
goto out;
skb->dev = addr->master->dev;
+ skb->skb_iif = skb->dev->ifindex;
len = skb->len + ETH_HLEN;
ipvlan_count_rx(addr->master, len, true, false);
out:
diff --git a/drivers/net/mdio/acpi_mdio.c b/drivers/net/mdio/acpi_mdio.c
index d77c987..4630dde 100644
--- a/drivers/net/mdio/acpi_mdio.c
+++ b/drivers/net/mdio/acpi_mdio.c
@@ -18,16 +18,18 @@ MODULE_AUTHOR("Calvin Johnson <calvin.johnson@oss.nxp.com>");
MODULE_LICENSE("GPL");
/**
- * acpi_mdiobus_register - Register mii_bus and create PHYs from the ACPI ASL.
+ * __acpi_mdiobus_register - Register mii_bus and create PHYs from the ACPI ASL.
* @mdio: pointer to mii_bus structure
 * @fwnode: pointer to fwnode of MDIO bus. This fwnode is expected to represent
 * an ACPI device object corresponding to the MDIO bus and its children are
 * expected to correspond to the PHY devices on that bus.
+ * @owner: module owning this @mdio object.
*
* This function registers the mii_bus structure and registers a phy_device
* for each child node of @fwnode.
*/
-int acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode)
+int __acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode,
+ struct module *owner)
{
struct fwnode_handle *child;
u32 addr;
@@ -35,7 +37,7 @@ int acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode)
/* Mask out all PHYs from auto probing. */
mdio->phy_mask = GENMASK(31, 0);
- ret = mdiobus_register(mdio);
+ ret = __mdiobus_register(mdio, owner);
if (ret)
return ret;
@@ -55,4 +57,4 @@ int acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode)
}
return 0;
}
-EXPORT_SYMBOL(acpi_mdiobus_register);
+EXPORT_SYMBOL(__acpi_mdiobus_register);
diff --git a/drivers/net/mdio/mdio-mux-meson-g12a.c b/drivers/net/mdio/mdio-mux-meson-g12a.c
index b8866bc..917c8a10 100644
--- a/drivers/net/mdio/mdio-mux-meson-g12a.c
+++ b/drivers/net/mdio/mdio-mux-meson-g12a.c
@@ -4,6 +4,7 @@
*/
#include <linux/bitfield.h>
+#include <linux/delay.h>
#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/device.h>
@@ -150,6 +151,7 @@ static const struct clk_ops g12a_ephy_pll_ops = {
static int g12a_enable_internal_mdio(struct g12a_mdio_mux *priv)
{
+ u32 value;
int ret;
/* Enable the phy clock */
@@ -163,18 +165,25 @@ static int g12a_enable_internal_mdio(struct g12a_mdio_mux *priv)
/* Initialize ephy control */
writel(EPHY_G12A_ID, priv->regs + ETH_PHY_CNTL0);
- writel(FIELD_PREP(PHY_CNTL1_ST_MODE, 3) |
- FIELD_PREP(PHY_CNTL1_ST_PHYADD, EPHY_DFLT_ADD) |
- FIELD_PREP(PHY_CNTL1_MII_MODE, EPHY_MODE_RMII) |
- PHY_CNTL1_CLK_EN |
- PHY_CNTL1_CLKFREQ |
- PHY_CNTL1_PHY_ENB,
- priv->regs + ETH_PHY_CNTL1);
+
+ /* Make sure we get a 0 -> 1 transition on the enable bit */
+ value = FIELD_PREP(PHY_CNTL1_ST_MODE, 3) |
+ FIELD_PREP(PHY_CNTL1_ST_PHYADD, EPHY_DFLT_ADD) |
+ FIELD_PREP(PHY_CNTL1_MII_MODE, EPHY_MODE_RMII) |
+ PHY_CNTL1_CLK_EN |
+ PHY_CNTL1_CLKFREQ;
+ writel(value, priv->regs + ETH_PHY_CNTL1);
writel(PHY_CNTL2_USE_INTERNAL |
PHY_CNTL2_SMI_SRC_MAC |
PHY_CNTL2_RX_CLK_EPHY,
priv->regs + ETH_PHY_CNTL2);
+ value |= PHY_CNTL1_PHY_ENB;
+ writel(value, priv->regs + ETH_PHY_CNTL1);
+
+ /* The phy needs a bit of time to power up */
+ mdelay(10);
+
return 0;
}
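
The rework above splits the single register write so the enable bit sees an explicit 0 -> 1 transition and then waits for the PHY to power up. A stripped-down sketch of the same idea, using a made-up register layout and MY_ENABLE_BIT:

#include <linux/bits.h>
#include <linux/delay.h>
#include <linux/io.h>

#define MY_ENABLE_BIT	BIT(3)		/* made-up bit position */

static void my_enable_block(void __iomem *reg, u32 cfg)
{
	writel(cfg & ~MY_ENABLE_BIT, reg);	/* program config, enable low */
	writel(cfg | MY_ENABLE_BIT, reg);	/* guaranteed 0 -> 1 edge */
	mdelay(10);				/* give the block time to power up */
}
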
diff --git a/drivers/net/mdio/mdio-thunder.c b/drivers/net/mdio/mdio-thunder.c
index 822d2cd..394b864 100644
--- a/drivers/net/mdio/mdio-thunder.c
+++ b/drivers/net/mdio/mdio-thunder.c
@@ -104,6 +104,7 @@ static int thunder_mdiobus_pci_probe(struct pci_dev *pdev,
if (i >= ARRAY_SIZE(nexus->buses))
break;
}
+ fwnode_handle_put(fwn);
return 0;
err_release_regions:
diff --git a/drivers/net/mdio/of_mdio.c b/drivers/net/mdio/of_mdio.c
index 510822d..1e46e39 100644
--- a/drivers/net/mdio/of_mdio.c
+++ b/drivers/net/mdio/of_mdio.c
@@ -139,21 +139,23 @@ bool of_mdiobus_child_is_phy(struct device_node *child)
EXPORT_SYMBOL(of_mdiobus_child_is_phy);
/**
- * of_mdiobus_register - Register mii_bus and create PHYs from the device tree
+ * __of_mdiobus_register - Register mii_bus and create PHYs from the device tree
* @mdio: pointer to mii_bus structure
* @np: pointer to device_node of MDIO bus.
+ * @owner: module owning the @mdio object.
*
* This function registers the mii_bus structure and registers a phy_device
* for each child node of @np.
*/
-int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
+int __of_mdiobus_register(struct mii_bus *mdio, struct device_node *np,
+ struct module *owner)
{
struct device_node *child;
bool scanphys = false;
int addr, rc;
if (!np)
- return mdiobus_register(mdio);
+ return __mdiobus_register(mdio, owner);
/* Do not continue if the node is disabled */
if (!of_device_is_available(np))
@@ -172,7 +174,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
of_property_read_u32(np, "reset-post-delay-us", &mdio->reset_post_delay_us);
/* Register the MDIO bus */
- rc = mdiobus_register(mdio);
+ rc = __mdiobus_register(mdio, owner);
if (rc)
return rc;
@@ -236,7 +238,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
mdiobus_unregister(mdio);
return rc;
}
-EXPORT_SYMBOL(of_mdiobus_register);
+EXPORT_SYMBOL(__of_mdiobus_register);
/**
* of_mdio_find_device - Given a device tree node, find the mdio_device
diff --git a/drivers/net/net_failover.c b/drivers/net/net_failover.c
index 2a48924..16b36e9 100644
--- a/drivers/net/net_failover.c
+++ b/drivers/net/net_failover.c
@@ -130,14 +130,10 @@ static u16 net_failover_select_queue(struct net_device *dev,
txq = ops->ndo_select_queue(primary_dev, skb, sb_dev);
else
txq = netdev_pick_tx(primary_dev, skb, NULL);
-
- qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping;
-
- return txq;
+ } else {
+ txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 0;
}
- txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) : 0;
-
/* Save the original txq to restore before passing to the driver */
qdisc_skb_cb(skb)->slave_dev_queue_mapping = skb->queue_mapping;
diff --git a/drivers/net/phy/dp83822.c b/drivers/net/phy/dp83822.c
index 0b511ab..f070aa97 100644
--- a/drivers/net/phy/dp83822.c
+++ b/drivers/net/phy/dp83822.c
@@ -232,7 +232,8 @@ static int dp83822_config_intr(struct phy_device *phydev)
DP83822_ENERGY_DET_INT_EN |
DP83822_LINK_QUAL_INT_EN);
- if (!dp83822->fx_enabled)
+ /* Private data pointer is NULL on DP83825/26 */
+ if (!dp83822 || !dp83822->fx_enabled)
misr_status |= DP83822_ANEG_COMPLETE_INT_EN |
DP83822_DUP_MODE_CHANGE_INT_EN |
DP83822_SPEED_CHANGED_INT_EN;
@@ -252,7 +253,8 @@ static int dp83822_config_intr(struct phy_device *phydev)
DP83822_PAGE_RX_INT_EN |
DP83822_EEE_ERROR_CHANGE_INT_EN);
- if (!dp83822->fx_enabled)
+ /* Private data pointer is NULL on DP83825/26 */
+ if (!dp83822 || !dp83822->fx_enabled)
misr_status |= DP83822_ANEG_ERR_INT_EN |
DP83822_WOL_PKT_INT_EN;
diff --git a/drivers/net/phy/dp83869.c b/drivers/net/phy/dp83869.c
index 755220c..cdf4e22 100644
--- a/drivers/net/phy/dp83869.c
+++ b/drivers/net/phy/dp83869.c
@@ -587,15 +587,13 @@ static int dp83869_of_init(struct phy_device *phydev)
&dp83869_internal_delay[0],
delay_size, true);
if (dp83869->rx_int_delay < 0)
- dp83869->rx_int_delay =
- dp83869_internal_delay[DP83869_CLK_DELAY_DEF];
+ dp83869->rx_int_delay = DP83869_CLK_DELAY_DEF;
dp83869->tx_int_delay = phy_get_internal_delay(phydev, dev,
&dp83869_internal_delay[0],
delay_size, false);
if (dp83869->tx_int_delay < 0)
- dp83869->tx_int_delay =
- dp83869_internal_delay[DP83869_CLK_DELAY_DEF];
+ dp83869->tx_int_delay = DP83869_CLK_DELAY_DEF;
return ret;
}
diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
index dd7739b..5f89828 100644
--- a/drivers/net/phy/mdio_bus.c
+++ b/drivers/net/phy/mdio_bus.c
@@ -108,7 +108,12 @@ EXPORT_SYMBOL(mdiobus_unregister_device);
struct phy_device *mdiobus_get_phy(struct mii_bus *bus, int addr)
{
- struct mdio_device *mdiodev = bus->mdio_map[addr];
+ struct mdio_device *mdiodev;
+
+ if (addr < 0 || addr >= ARRAY_SIZE(bus->mdio_map))
+ return NULL;
+
+ mdiodev = bus->mdio_map[addr];
if (!mdiodev)
return NULL;
diff --git a/drivers/net/phy/mdio_devres.c b/drivers/net/phy/mdio_devres.c
index b560e99..69b829e 100644
--- a/drivers/net/phy/mdio_devres.c
+++ b/drivers/net/phy/mdio_devres.c
@@ -98,13 +98,14 @@ EXPORT_SYMBOL(__devm_mdiobus_register);
#if IS_ENABLED(CONFIG_OF_MDIO)
/**
- * devm_of_mdiobus_register - Resource managed variant of of_mdiobus_register()
+ * __devm_of_mdiobus_register - Resource managed variant of of_mdiobus_register()
* @dev: Device to register mii_bus for
* @mdio: MII bus structure to register
* @np: Device node to parse
+ * @owner: Owning module
*/
-int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
- struct device_node *np)
+int __devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
+ struct device_node *np, struct module *owner)
{
struct mdiobus_devres *dr;
int ret;
@@ -117,7 +118,7 @@ int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
if (!dr)
return -ENOMEM;
- ret = of_mdiobus_register(mdio, np);
+ ret = __of_mdiobus_register(mdio, np, owner);
if (ret) {
devres_free(dr);
return ret;
@@ -127,7 +128,7 @@ int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
devres_add(dev, dr);
return 0;
}
-EXPORT_SYMBOL(devm_of_mdiobus_register);
+EXPORT_SYMBOL(__devm_of_mdiobus_register);
#endif /* CONFIG_OF_MDIO */
MODULE_LICENSE("GPL");
diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c
index c49062a..a6015cd 100644
--- a/drivers/net/phy/meson-gxl.c
+++ b/drivers/net/phy/meson-gxl.c
@@ -261,6 +261,8 @@ static struct phy_driver meson_gxl_phy[] = {
.handle_interrupt = meson_gxl_handle_interrupt,
.suspend = genphy_suspend,
.resume = genphy_resume,
+ .read_mmd = genphy_read_mmd_unsupported,
+ .write_mmd = genphy_write_mmd_unsupported,
}, {
PHY_ID_MATCH_EXACT(0x01803301),
.name = "Meson G12A Internal PHY",
@@ -271,6 +273,8 @@ static struct phy_driver meson_gxl_phy[] = {
.handle_interrupt = meson_gxl_handle_interrupt,
.suspend = genphy_suspend,
.resume = genphy_resume,
+ .read_mmd = genphy_read_mmd_unsupported,
+ .write_mmd = genphy_write_mmd_unsupported,
},
};
diff --git a/drivers/net/phy/microchip.c b/drivers/net/phy/microchip.c
index 9f1f2b6..230f2fc 100644
--- a/drivers/net/phy/microchip.c
+++ b/drivers/net/phy/microchip.c
@@ -342,6 +342,37 @@ static int lan88xx_config_aneg(struct phy_device *phydev)
return genphy_config_aneg(phydev);
}
+static void lan88xx_link_change_notify(struct phy_device *phydev)
+{
+ int temp;
+
+ /* In forced 100 F/H mode, the chip may fail to set the mode correctly
+ * when the cable is switched between a long (~50+ m) and a short one.
+ * As a workaround, set the speed to 10 before setting it to 100
+ * while in forced 100 F/H mode.
+ */
+ if (!phydev->autoneg && phydev->speed == 100) {
+ /* disable phy interrupt */
+ temp = phy_read(phydev, LAN88XX_INT_MASK);
+ temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
+ phy_write(phydev, LAN88XX_INT_MASK, temp);
+
+ temp = phy_read(phydev, MII_BMCR);
+ temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000);
+ phy_write(phydev, MII_BMCR, temp); /* set to 10 first */
+ temp |= BMCR_SPEED100;
+ phy_write(phydev, MII_BMCR, temp); /* set to 100 later */
+
+ /* clear pending interrupt generated during the workaround */
+ temp = phy_read(phydev, LAN88XX_INT_STS);
+
+ /* enable phy interrupt back */
+ temp = phy_read(phydev, LAN88XX_INT_MASK);
+ temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
+ phy_write(phydev, LAN88XX_INT_MASK, temp);
+ }
+}
+
static struct phy_driver microchip_phy_driver[] = {
{
.phy_id = 0x0007c130,
@@ -355,6 +386,7 @@ static struct phy_driver microchip_phy_driver[] = {
.config_init = lan88xx_config_init,
.config_aneg = lan88xx_config_aneg,
+ .link_change_notify = lan88xx_link_change_notify,
.config_intr = lan88xx_phy_config_intr,
.handle_interrupt = lan88xx_handle_interrupt,
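
The workaround is moved into the PHY driver by wiring it to the phylib link_change_notify hook, which (with the phy.c change later in this series) also fires from phy_stop(). A minimal, hypothetical phy_driver showing how such a hook is attached; the ID, name and callback body are placeholders, not the LAN88xx values:

#include <linux/module.h>
#include <linux/phy.h>

static void my_phy_link_change_notify(struct phy_device *phydev)
{
	/* Called by phylib on every PHY state change; a driver can apply
	 * quirks for forced-speed modes here.
	 */
	if (!phydev->autoneg && phydev->speed == SPEED_100)
		phydev_dbg(phydev, "forced 100M link change\n");
}

static struct phy_driver my_phy_driver[] = {
	{
		PHY_ID_MATCH_EXACT(0x00000000),	/* placeholder PHY ID */
		.name			= "Example PHY",
		.link_change_notify	= my_phy_link_change_notify,
	},
};
module_phy_driver(my_phy_driver);

MODULE_LICENSE("GPL");
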
diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c
index 91a327f..fbb64aa 100644
--- a/drivers/net/phy/nxp-c45-tja11xx.c
+++ b/drivers/net/phy/nxp-c45-tja11xx.c
@@ -79,7 +79,7 @@
#define SGMII_ABILITY BIT(0)
#define VEND1_MII_BASIC_CONFIG 0xAFC6
-#define MII_BASIC_CONFIG_REV BIT(8)
+#define MII_BASIC_CONFIG_REV BIT(4)
#define MII_BASIC_CONFIG_SGMII 0x9
#define MII_BASIC_CONFIG_RGMII 0x7
#define MII_BASIC_CONFIG_RMII 0x5
diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index 2fc85108..1135e63 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -57,6 +57,18 @@ static const char *phy_state_to_str(enum phy_state st)
return NULL;
}
+static void phy_process_state_change(struct phy_device *phydev,
+ enum phy_state old_state)
+{
+ if (old_state != phydev->state) {
+ phydev_dbg(phydev, "PHY state change %s -> %s\n",
+ phy_state_to_str(old_state),
+ phy_state_to_str(phydev->state));
+ if (phydev->drv && phydev->drv->link_change_notify)
+ phydev->drv->link_change_notify(phydev);
+ }
+}
+
static void phy_link_up(struct phy_device *phydev)
{
phydev->phy_link_change(phydev, true);
@@ -1061,6 +1073,7 @@ EXPORT_SYMBOL(phy_free_interrupt);
void phy_stop(struct phy_device *phydev)
{
struct net_device *dev = phydev->attached_dev;
+ enum phy_state old_state;
if (!phy_is_started(phydev) && phydev->state != PHY_DOWN) {
WARN(1, "called from state %s\n",
@@ -1069,6 +1082,7 @@ void phy_stop(struct phy_device *phydev)
}
mutex_lock(&phydev->lock);
+ old_state = phydev->state;
if (phydev->state == PHY_CABLETEST) {
phy_abort_cable_test(phydev);
@@ -1079,6 +1093,7 @@ void phy_stop(struct phy_device *phydev)
sfp_upstream_stop(phydev->sfp_bus);
phydev->state = PHY_HALTED;
+ phy_process_state_change(phydev, old_state);
mutex_unlock(&phydev->lock);
@@ -1196,13 +1211,7 @@ void phy_state_machine(struct work_struct *work)
if (err < 0)
phy_error(phydev);
- if (old_state != phydev->state) {
- phydev_dbg(phydev, "PHY state change %s -> %s\n",
- phy_state_to_str(old_state),
- phy_state_to_str(phydev->state));
- if (phydev->drv && phydev->drv->link_change_notify)
- phydev->drv->link_change_notify(phydev);
- }
+ phy_process_state_change(phydev, old_state);
/* Only re-schedule a PHY state machine change if we are polling the
* PHY, if PHY_MAC_INTERRUPT is set, then we will be moving
diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
index 996842a..7348538 100644
--- a/drivers/net/phy/phy_device.c
+++ b/drivers/net/phy/phy_device.c
@@ -3052,8 +3052,6 @@ static int phy_probe(struct device *dev)
if (phydrv->flags & PHY_IS_INTERNAL)
phydev->is_internal = true;
- mutex_lock(&phydev->lock);
-
/* Deassert the reset signal */
phy_device_reset(phydev, 0);
@@ -3121,12 +3119,10 @@ static int phy_probe(struct device *dev)
phydev->state = PHY_READY;
out:
- /* Assert the reset signal */
+ /* Re-assert the reset signal on error */
if (err)
phy_device_reset(phydev, 1);
- mutex_unlock(&phydev->lock);
-
return err;
}
@@ -3136,9 +3132,7 @@ static int phy_remove(struct device *dev)
cancel_delayed_work_sync(&phydev->state_queue);
- mutex_lock(&phydev->lock);
phydev->state = PHY_DOWN;
- mutex_unlock(&phydev->lock);
sfp_bus_del_upstream(phydev->sfp_bus);
phydev->sfp_bus = NULL;
diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
index 7afcf63..422dc92 100644
--- a/drivers/net/phy/phylink.c
+++ b/drivers/net/phy/phylink.c
@@ -1166,10 +1166,9 @@ int phylink_fwnode_phy_connect(struct phylink *pl,
ret = phy_attach_direct(pl->netdev, phy_dev, flags,
pl->link_interface);
- if (ret) {
- phy_device_free(phy_dev);
+ phy_device_free(phy_dev);
+ if (ret)
return ret;
- }
ret = phylink_bringup_phy(pl, phy_dev, pl->link_config.interface);
if (ret)
diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
index 636b090..36dcf6c 100644
--- a/drivers/net/phy/smsc.c
+++ b/drivers/net/phy/smsc.c
@@ -57,8 +57,6 @@ static int smsc_phy_ack_interrupt(struct phy_device *phydev)
static int smsc_phy_config_intr(struct phy_device *phydev)
{
- struct smsc_phy_priv *priv = phydev->priv;
- u16 intmask = 0;
int rc;
if (phydev->interrupts == PHY_INTERRUPT_ENABLED) {
@@ -66,12 +64,10 @@ static int smsc_phy_config_intr(struct phy_device *phydev)
if (rc)
return rc;
- intmask = MII_LAN83C185_ISF_INT4 | MII_LAN83C185_ISF_INT6;
- if (priv->energy_enable)
- intmask |= MII_LAN83C185_ISF_INT7;
- rc = phy_write(phydev, MII_LAN83C185_IM, intmask);
+ rc = phy_write(phydev, MII_LAN83C185_IM,
+ MII_LAN83C185_ISF_INT_PHYLIB_EVENTS);
} else {
- rc = phy_write(phydev, MII_LAN83C185_IM, intmask);
+ rc = phy_write(phydev, MII_LAN83C185_IM, 0);
if (rc)
return rc;
@@ -83,13 +79,7 @@ static int smsc_phy_config_intr(struct phy_device *phydev)
static irqreturn_t smsc_phy_handle_interrupt(struct phy_device *phydev)
{
- int irq_status, irq_enabled;
-
- irq_enabled = phy_read(phydev, MII_LAN83C185_IM);
- if (irq_enabled < 0) {
- phy_error(phydev);
- return IRQ_NONE;
- }
+ int irq_status;
irq_status = phy_read(phydev, MII_LAN83C185_ISF);
if (irq_status < 0) {
@@ -97,7 +87,7 @@ static irqreturn_t smsc_phy_handle_interrupt(struct phy_device *phydev)
return IRQ_NONE;
}
- if (!(irq_status & irq_enabled))
+ if (!(irq_status & MII_LAN83C185_ISF_INT_PHYLIB_EVENTS))
return IRQ_NONE;
phy_trigger_machine(phydev);
@@ -216,8 +206,11 @@ static int lan95xx_config_aneg_ext(struct phy_device *phydev)
static int lan87xx_read_status(struct phy_device *phydev)
{
struct smsc_phy_priv *priv = phydev->priv;
+ int err;
- int err = genphy_read_status(phydev);
+ err = genphy_read_status(phydev);
+ if (err)
+ return err;
if (!phydev->link && priv->energy_enable && phydev->irq == PHY_POLL) {
/* Disable EDPD to wake up PHY */
diff --git a/drivers/net/tap.c b/drivers/net/tap.c
index ba2ef54..854ed2f2 100644
--- a/drivers/net/tap.c
+++ b/drivers/net/tap.c
@@ -523,7 +523,7 @@ static int tap_open(struct inode *inode, struct file *file)
q->sock.state = SS_CONNECTED;
q->sock.file = file;
q->sock.ops = &tap_socket_ops;
- sock_init_data(&q->sock, &q->sk);
+ sock_init_data_uid(&q->sock, &q->sk, inode->i_uid);
q->sk.sk_write_space = tap_sock_write_space;
q->sk.sk_destruct = tap_sock_destruct;
q->flags = IFF_VNET_HDR | IFF_NO_PI | IFF_TAP;
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index a1dda57..30eea82 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -3411,7 +3411,7 @@ static int tun_chr_open(struct inode *inode, struct file * file)
tfile->socket.file = file;
tfile->socket.ops = &tun_socket_ops;
- sock_init_data(&tfile->socket, &tfile->sk);
+ sock_init_data_uid(&tfile->socket, &tfile->sk, inode->i_uid);
tfile->sk.sk_write_space = tun_sock_write_space;
tfile->sk.sk_sndbuf = INT_MAX;
diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
index c6b0de1..3497b5a 100644
--- a/drivers/net/usb/cdc_ether.c
+++ b/drivers/net/usb/cdc_ether.c
@@ -1001,6 +1001,12 @@ static const struct usb_device_id products[] = {
USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&wwan_info,
}, {
+ /* Cinterion PLS62-W modem by GEMALTO/THALES */
+ USB_DEVICE_AND_INTERFACE_INFO(0x1e2d, 0x005b, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&wwan_info,
+}, {
/* Cinterion PLS83/PLS63 modem by GEMALTO/THALES */
USB_DEVICE_AND_INTERFACE_INFO(0x1e2d, 0x0069, USB_CLASS_COMM,
USB_CDC_SUBCLASS_ETHERNET,
diff --git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c
index c0b8b4a..a3ccf0c 100644
--- a/drivers/net/usb/cdc_mbim.c
+++ b/drivers/net/usb/cdc_mbim.c
@@ -664,6 +664,11 @@ static const struct usb_device_id mbim_devs[] = {
.driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
},
+ /* Telit FE990 */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1081, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle,
+ },
+
/* default entry */
{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&cdc_mbim_info_zlp,
diff --git a/drivers/net/usb/kalmia.c b/drivers/net/usb/kalmia.c
index fc5895f..a552bb1 100644
--- a/drivers/net/usb/kalmia.c
+++ b/drivers/net/usb/kalmia.c
@@ -65,8 +65,8 @@ kalmia_send_init_packet(struct usbnet *dev, u8 *init_msg, u8 init_msg_len,
init_msg, init_msg_len, &act_len, KALMIA_USB_TIMEOUT);
if (status != 0) {
netdev_err(dev->net,
- "Error sending init packet. Status %i, length %i\n",
- status, act_len);
+ "Error sending init packet. Status %i\n",
+ status);
return status;
}
else if (act_len != init_msg_len) {
@@ -83,8 +83,8 @@ kalmia_send_init_packet(struct usbnet *dev, u8 *init_msg, u8 init_msg_len,
if (status != 0)
netdev_err(dev->net,
- "Error receiving init result. Status %i, length %i\n",
- status, act_len);
+ "Error receiving init result. Status %i\n",
+ status);
else if (act_len != expected_len)
netdev_err(dev->net, "Unexpected init result length: %i\n",
act_len);
diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
index 3e1a83a..5700c9d 100644
--- a/drivers/net/usb/lan78xx.c
+++ b/drivers/net/usb/lan78xx.c
@@ -1950,33 +1950,8 @@ static void lan78xx_remove_mdio(struct lan78xx_net *dev)
static void lan78xx_link_status_change(struct net_device *net)
{
struct phy_device *phydev = net->phydev;
- int temp;
- /* At forced 100 F/H mode, chip may fail to set mode correctly
- * when cable is switched between long(~50+m) and short one.
- * As workaround, set to 10 before setting to 100
- * at forced 100 F/H mode.
- */
- if (!phydev->autoneg && (phydev->speed == 100)) {
- /* disable phy interrupt */
- temp = phy_read(phydev, LAN88XX_INT_MASK);
- temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
- phy_write(phydev, LAN88XX_INT_MASK, temp);
-
- temp = phy_read(phydev, MII_BMCR);
- temp &= ~(BMCR_SPEED100 | BMCR_SPEED1000);
- phy_write(phydev, MII_BMCR, temp); /* set to 10 first */
- temp |= BMCR_SPEED100;
- phy_write(phydev, MII_BMCR, temp); /* set to 100 later */
-
- /* clear pending interrupt generated while workaround */
- temp = phy_read(phydev, LAN88XX_INT_STS);
-
- /* enable phy interrupt back */
- temp = phy_read(phydev, LAN88XX_INT_MASK);
- temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
- phy_write(phydev, LAN88XX_INT_MASK, temp);
- }
+ phy_print_status(phydev);
}
static int irq_map(struct irq_domain *d, unsigned int irq,
diff --git a/drivers/net/usb/plusb.c b/drivers/net/usb/plusb.c
index 17c9c63..ce7862d 100644
--- a/drivers/net/usb/plusb.c
+++ b/drivers/net/usb/plusb.c
@@ -57,9 +57,7 @@
static inline int
pl_vendor_req(struct usbnet *dev, u8 req, u8 val, u8 index)
{
- return usbnet_read_cmd(dev, req,
- USB_DIR_IN | USB_TYPE_VENDOR |
- USB_RECIP_DEVICE,
+ return usbnet_write_cmd(dev, req, USB_TYPE_VENDOR | USB_RECIP_DEVICE,
val, index, NULL, 0);
}
diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
index 7b358b8..8646c4d 100644
--- a/drivers/net/usb/qmi_wwan.c
+++ b/drivers/net/usb/qmi_wwan.c
@@ -1358,6 +1358,7 @@ static const struct usb_device_id products[] = {
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)}, /* Telit FN980 */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1060, 2)}, /* Telit LN920 */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1070, 2)}, /* Telit FN990 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x1080, 2)}, /* Telit FE990 */
{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */
{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index 109c288..cf6941b 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -9809,6 +9809,7 @@ static const struct usb_device_id rtl8152_table[] = {
REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07ab),
REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07c6),
REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927),
+ REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0c5e),
REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101),
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x304f),
REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x3054),
diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
index 76f7af1..7c3e866 100644
--- a/drivers/net/usb/smsc75xx.c
+++ b/drivers/net/usb/smsc75xx.c
@@ -2199,6 +2199,13 @@ static int smsc75xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
size = (rx_cmd_a & RX_CMD_A_LEN) - RXW_PADDING;
align_count = (4 - ((size + RXW_PADDING) % 4)) % 4;
+ if (unlikely(size > skb->len)) {
+ netif_dbg(dev, rx_err, dev->net,
+ "size err rx_cmd_a=0x%08x\n",
+ rx_cmd_a);
+ return 0;
+ }
+
if (unlikely(rx_cmd_a & RX_CMD_A_RED)) {
netif_dbg(dev, rx_err, dev->net,
"Error rx_cmd_a=0x%08x\n", rx_cmd_a);
diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
index 7cf9206..649d9f9 100644
--- a/drivers/net/usb/smsc95xx.c
+++ b/drivers/net/usb/smsc95xx.c
@@ -1808,6 +1808,12 @@ static int smsc95xx_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
size = (u16)((header & RX_STS_FL_) >> 16);
align_count = (4 - ((size + NET_IP_ALIGN) % 4)) % 4;
+ if (unlikely(size > skb->len)) {
+ netif_dbg(dev, rx_err, dev->net,
+ "size err header=0x%08x\n", header);
+ return 0;
+ }
+
if (unlikely(header & RX_STS_ES_)) {
netif_dbg(dev, rx_err, dev->net,
"Error header=0x%08x\n", header);
diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c
index 0c50f24..1fac6ee 100644
--- a/drivers/net/usb/sr9700.c
+++ b/drivers/net/usb/sr9700.c
@@ -410,7 +410,7 @@ static int sr9700_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
/* ignore the CRC length */
len = (skb->data[1] | (skb->data[2] << 8)) - 4;
- if (len > ETH_FRAME_LEN || len > skb->len)
+ if (len > ETH_FRAME_LEN || len > skb->len || len < 0)
return 0;
/* the last packet of current skb */
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 48fb7bd..66ca2ea 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1580,13 +1580,13 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
received = virtnet_receive(rq, budget, &xdp_xmit);
+ if (xdp_xmit & VIRTIO_XDP_REDIR)
+ xdp_do_flush();
+
/* Out of packets? */
if (received < budget)
virtqueue_napi_complete(napi, rq->vq, received);
- if (xdp_xmit & VIRTIO_XDP_REDIR)
- xdp_do_flush();
-
if (xdp_xmit & VIRTIO_XDP_TX) {
sq = virtnet_xdp_get_sq(vi);
if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) {
@@ -1780,8 +1780,10 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
*/
if (sq->vq->num_free < 2+MAX_SKB_FRAGS) {
netif_stop_subqueue(dev, qnum);
- if (!use_napi &&
- unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
+ if (use_napi) {
+ if (unlikely(!virtqueue_enable_cb_delayed(sq->vq)))
+ virtqueue_napi_schedule(&sq->napi, sq->vq);
+ } else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
/* More just got used, free them then recheck. */
free_old_xmit_skbs(sq, false);
if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
@@ -1993,8 +1995,8 @@ static int virtnet_close(struct net_device *dev)
cancel_delayed_work_sync(&vi->refill);
for (i = 0; i < vi->max_queue_pairs; i++) {
- xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
napi_disable(&vi->rq[i].napi);
+ xdp_rxq_info_unreg(&vi->rq[i].xdp_rxq);
virtnet_napi_tx_disable(&vi->sq[i].napi);
}
diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c
index cda1b4c..8305df1 100644
--- a/drivers/net/wan/fsl_ucc_hdlc.c
+++ b/drivers/net/wan/fsl_ucc_hdlc.c
@@ -1241,9 +1241,11 @@ static int ucc_hdlc_probe(struct platform_device *pdev)
free_dev:
free_netdev(dev);
undo_uhdlc_init:
- iounmap(utdm->siram);
+ if (utdm)
+ iounmap(utdm->siram);
unmap_si_regs:
- iounmap(utdm->si_regs);
+ if (utdm)
+ iounmap(utdm->si_regs);
free_utdm:
if (uhdlc_priv->tsa)
kfree(utdm);
diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
index caa8f6e..fda1c2d 100644
--- a/drivers/net/wireless/ath/ath11k/core.h
+++ b/drivers/net/wireless/ath/ath11k/core.h
@@ -731,7 +731,6 @@ struct ath11k_base {
enum ath11k_dfs_region dfs_region;
#ifdef CONFIG_ATH11K_DEBUGFS
struct dentry *debugfs_soc;
- struct dentry *debugfs_ath11k;
#endif
struct ath11k_soc_dp_stats soc_stats;
diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c
index 554feaf..f827035 100644
--- a/drivers/net/wireless/ath/ath11k/debugfs.c
+++ b/drivers/net/wireless/ath/ath11k/debugfs.c
@@ -836,10 +836,6 @@ int ath11k_debugfs_pdev_create(struct ath11k_base *ab)
if (test_bit(ATH11K_FLAG_REGISTERED, &ab->dev_flags))
return 0;
- ab->debugfs_soc = debugfs_create_dir(ab->hw_params.name, ab->debugfs_ath11k);
- if (IS_ERR(ab->debugfs_soc))
- return PTR_ERR(ab->debugfs_soc);
-
debugfs_create_file("simulate_fw_crash", 0600, ab->debugfs_soc, ab,
&fops_simulate_fw_crash);
@@ -857,15 +853,51 @@ void ath11k_debugfs_pdev_destroy(struct ath11k_base *ab)
int ath11k_debugfs_soc_create(struct ath11k_base *ab)
{
- ab->debugfs_ath11k = debugfs_create_dir("ath11k", NULL);
+ struct dentry *root;
+ bool dput_needed;
+ char name[64];
+ int ret;
- return PTR_ERR_OR_ZERO(ab->debugfs_ath11k);
+ root = debugfs_lookup("ath11k", NULL);
+ if (!root) {
+ root = debugfs_create_dir("ath11k", NULL);
+ if (IS_ERR_OR_NULL(root))
+ return PTR_ERR(root);
+
+ dput_needed = false;
+ } else {
+ /* a dentry from lookup() needs dput() once we no longer use it */
+ dput_needed = true;
+ }
+
+ scnprintf(name, sizeof(name), "%s-%s", ath11k_bus_str(ab->hif.bus),
+ dev_name(ab->dev));
+
+ ab->debugfs_soc = debugfs_create_dir(name, root);
+ if (IS_ERR_OR_NULL(ab->debugfs_soc)) {
+ ret = PTR_ERR(ab->debugfs_soc);
+ goto out;
+ }
+
+ ret = 0;
+
+out:
+ if (dput_needed)
+ dput(root);
+
+ return ret;
}
void ath11k_debugfs_soc_destroy(struct ath11k_base *ab)
{
- debugfs_remove_recursive(ab->debugfs_ath11k);
- ab->debugfs_ath11k = NULL;
+ debugfs_remove_recursive(ab->debugfs_soc);
+ ab->debugfs_soc = NULL;
+
+ /* We are not removing the ath11k directory on purpose, even if it
+ * would be empty. This simplifies the directory handling, and it's
+ * only a minor cosmetic issue to leave an empty ath11k directory in
+ * debugfs.
+ */
}
EXPORT_SYMBOL(ath11k_debugfs_soc_destroy);
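
The new ath11k_debugfs_soc_create() looks up a shared top-level debugfs directory and creates it only when it does not exist, remembering to dput() the dentry that debugfs_lookup() returned. A condensed sketch of that lookup-or-create pattern with a hypothetical directory name:

#include <linux/dcache.h>
#include <linux/debugfs.h>
#include <linux/err.h>

static struct dentry *my_debugfs_subdir(const char *name)
{
	struct dentry *root, *sub;
	bool put_root = true;

	root = debugfs_lookup("example", NULL);		/* made-up dir name */
	if (!root) {
		root = debugfs_create_dir("example", NULL);
		if (IS_ERR_OR_NULL(root))
			return root;
		put_root = false;	/* create_dir does not take a reference */
	}

	sub = debugfs_create_dir(name, root);
	if (put_root)
		dput(root);		/* drop the reference from lookup() */
	return sub;
}
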
diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
index 0ae6beb..3c64d33 100644
--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
+++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
@@ -3056,6 +3056,7 @@ int ath11k_peer_rx_frag_setup(struct ath11k *ar, const u8 *peer_mac, int vdev_id
if (!peer) {
ath11k_warn(ab, "failed to find the peer to set up fragment info\n");
spin_unlock_bh(&ab->base_lock);
+ crypto_free_shash(tfm);
return -ENOENT;
}
diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
index 7d0be93..bfa7f8d 100644
--- a/drivers/net/wireless/ath/ath11k/pci.c
+++ b/drivers/net/wireless/ath/ath11k/pci.c
@@ -1395,7 +1395,7 @@ static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev)
if (ret)
ath11k_warn(ab, "failed to suspend core: %d\n", ret);
- return ret;
+ return 0;
}
static __maybe_unused int ath11k_pci_pm_resume(struct device *dev)
diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
index f938ac1..f521dfa 100644
--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
+++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
@@ -244,11 +244,11 @@ static inline void ath9k_skb_queue_complete(struct hif_device_usb *hif_dev,
ath9k_htc_txcompletion_cb(hif_dev->htc_handle,
skb, txok);
if (txok) {
- TX_STAT_INC(skb_success);
- TX_STAT_ADD(skb_success_bytes, ln);
+ TX_STAT_INC(hif_dev, skb_success);
+ TX_STAT_ADD(hif_dev, skb_success_bytes, ln);
}
else
- TX_STAT_INC(skb_failed);
+ TX_STAT_INC(hif_dev, skb_failed);
}
}
@@ -302,7 +302,7 @@ static void hif_usb_tx_cb(struct urb *urb)
hif_dev->tx.tx_buf_cnt++;
if (!(hif_dev->tx.flags & HIF_USB_TX_STOP))
__hif_usb_tx(hif_dev); /* Check for pending SKBs */
- TX_STAT_INC(buf_completed);
+ TX_STAT_INC(hif_dev, buf_completed);
spin_unlock(&hif_dev->tx.tx_lock);
}
@@ -353,7 +353,7 @@ static int __hif_usb_tx(struct hif_device_usb *hif_dev)
tx_buf->len += tx_buf->offset;
__skb_queue_tail(&tx_buf->skb_queue, nskb);
- TX_STAT_INC(skb_queued);
+ TX_STAT_INC(hif_dev, skb_queued);
}
usb_fill_bulk_urb(tx_buf->urb, hif_dev->udev,
@@ -368,11 +368,10 @@ static int __hif_usb_tx(struct hif_device_usb *hif_dev)
__skb_queue_head_init(&tx_buf->skb_queue);
list_move_tail(&tx_buf->list, &hif_dev->tx.tx_buf);
hif_dev->tx.tx_buf_cnt++;
+ } else {
+ TX_STAT_INC(hif_dev, buf_queued);
}
- if (!ret)
- TX_STAT_INC(buf_queued);
-
return ret;
}
@@ -515,7 +514,7 @@ static void hif_usb_sta_drain(void *hif_handle, u8 idx)
ath9k_htc_txcompletion_cb(hif_dev->htc_handle,
skb, false);
hif_dev->tx.tx_skb_cnt--;
- TX_STAT_INC(skb_failed);
+ TX_STAT_INC(hif_dev, skb_failed);
}
}
@@ -562,11 +561,11 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
memcpy(ptr, skb->data, rx_remain_len);
rx_pkt_len += rx_remain_len;
- hif_dev->rx_remain_len = 0;
skb_put(remain_skb, rx_pkt_len);
skb_pool[pool_index++] = remain_skb;
-
+ hif_dev->remain_skb = NULL;
+ hif_dev->rx_remain_len = 0;
} else {
index = rx_remain_len;
}
@@ -585,16 +584,21 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
pkt_len = get_unaligned_le16(ptr + index);
pkt_tag = get_unaligned_le16(ptr + index + 2);
+ /* If we encounter an invalid pkt_tag or pkt_len, the whole
+ * input SKB is considered invalid and dropped; the associated
+ * packets already in skb_pool are dropped, too.
+ */
if (pkt_tag != ATH_USB_RX_STREAM_MODE_TAG) {
- RX_STAT_INC(skb_dropped);
- return;
+ RX_STAT_INC(hif_dev, skb_dropped);
+ goto invalid_pkt;
}
if (pkt_len > 2 * MAX_RX_BUF_SIZE) {
dev_err(&hif_dev->udev->dev,
"ath9k_htc: invalid pkt_len (%x)\n", pkt_len);
- RX_STAT_INC(skb_dropped);
- return;
+ RX_STAT_INC(hif_dev, skb_dropped);
+ goto invalid_pkt;
}
pad_len = 4 - (pkt_len & 0x3);
@@ -606,11 +610,6 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
if (index > MAX_RX_BUF_SIZE) {
spin_lock(&hif_dev->rx_lock);
- hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
- hif_dev->rx_transfer_len =
- MAX_RX_BUF_SIZE - chk_idx - 4;
- hif_dev->rx_pad_len = pad_len;
-
nskb = __dev_alloc_skb(pkt_len + 32, GFP_ATOMIC);
if (!nskb) {
dev_err(&hif_dev->udev->dev,
@@ -618,8 +617,14 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
spin_unlock(&hif_dev->rx_lock);
goto err;
}
+
+ hif_dev->rx_remain_len = index - MAX_RX_BUF_SIZE;
+ hif_dev->rx_transfer_len =
+ MAX_RX_BUF_SIZE - chk_idx - 4;
+ hif_dev->rx_pad_len = pad_len;
+
skb_reserve(nskb, 32);
- RX_STAT_INC(skb_allocated);
+ RX_STAT_INC(hif_dev, skb_allocated);
memcpy(nskb->data, &(skb->data[chk_idx+4]),
hif_dev->rx_transfer_len);
@@ -640,7 +645,7 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
goto err;
}
skb_reserve(nskb, 32);
- RX_STAT_INC(skb_allocated);
+ RX_STAT_INC(hif_dev, skb_allocated);
memcpy(nskb->data, &(skb->data[chk_idx+4]), pkt_len);
skb_put(nskb, pkt_len);
@@ -650,11 +655,18 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
err:
for (i = 0; i < pool_index; i++) {
- RX_STAT_ADD(skb_completed_bytes, skb_pool[i]->len);
+ RX_STAT_ADD(hif_dev, skb_completed_bytes, skb_pool[i]->len);
ath9k_htc_rx_msg(hif_dev->htc_handle, skb_pool[i],
skb_pool[i]->len, USB_WLAN_RX_PIPE);
- RX_STAT_INC(skb_completed);
+ RX_STAT_INC(hif_dev, skb_completed);
}
+ return;
+invalid_pkt:
+ for (i = 0; i < pool_index; i++) {
+ dev_kfree_skb_any(skb_pool[i]);
+ RX_STAT_INC(hif_dev, skb_dropped);
+ }
+ return;
}
static void ath9k_hif_usb_rx_cb(struct urb *urb)
@@ -1412,8 +1424,6 @@ static void ath9k_hif_usb_disconnect(struct usb_interface *interface)
if (hif_dev->flags & HIF_USB_READY) {
ath9k_htc_hw_deinit(hif_dev->htc_handle, unplugged);
- ath9k_hif_usb_dev_deinit(hif_dev);
- ath9k_destroy_wmi(hif_dev->htc_handle->drv_priv);
ath9k_htc_hw_free(hif_dev->htc_handle);
}
diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h
index e3d546e..237f4ec 100644
--- a/drivers/net/wireless/ath/ath9k/htc.h
+++ b/drivers/net/wireless/ath/ath9k/htc.h
@@ -327,14 +327,18 @@ static inline struct ath9k_htc_tx_ctl *HTC_SKB_CB(struct sk_buff *skb)
}
#ifdef CONFIG_ATH9K_HTC_DEBUGFS
-#define __STAT_SAFE(expr) (hif_dev->htc_handle->drv_priv ? (expr) : 0)
-#define TX_STAT_INC(c) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.tx_stats.c++)
-#define TX_STAT_ADD(c, a) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.tx_stats.c += a)
-#define RX_STAT_INC(c) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c++)
-#define RX_STAT_ADD(c, a) __STAT_SAFE(hif_dev->htc_handle->drv_priv->debug.skbrx_stats.c += a)
-#define CAB_STAT_INC priv->debug.tx_stats.cab_queued++
+#define __STAT_SAFE(hif_dev, expr) do { ((hif_dev)->htc_handle->drv_priv ? (expr) : 0); } while (0)
+#define CAB_STAT_INC(priv) do { ((priv)->debug.tx_stats.cab_queued++); } while (0)
+#define TX_QSTAT_INC(priv, q) do { ((priv)->debug.tx_stats.queue_stats[q]++); } while (0)
-#define TX_QSTAT_INC(q) (priv->debug.tx_stats.queue_stats[q]++)
+#define TX_STAT_INC(hif_dev, c) \
+ __STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.tx_stats.c++)
+#define TX_STAT_ADD(hif_dev, c, a) \
+ __STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.tx_stats.c += a)
+#define RX_STAT_INC(hif_dev, c) \
+ __STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.skbrx_stats.c++)
+#define RX_STAT_ADD(hif_dev, c, a) \
+ __STAT_SAFE((hif_dev), (hif_dev)->htc_handle->drv_priv->debug.skbrx_stats.c += a)
void ath9k_htc_err_stat_rx(struct ath9k_htc_priv *priv,
struct ath_rx_status *rs);
@@ -374,13 +378,13 @@ void ath9k_htc_get_et_stats(struct ieee80211_hw *hw,
struct ethtool_stats *stats, u64 *data);
#else
-#define TX_STAT_INC(c) do { } while (0)
-#define TX_STAT_ADD(c, a) do { } while (0)
-#define RX_STAT_INC(c) do { } while (0)
-#define RX_STAT_ADD(c, a) do { } while (0)
-#define CAB_STAT_INC do { } while (0)
+#define TX_STAT_INC(hif_dev, c) do { } while (0)
+#define TX_STAT_ADD(hif_dev, c, a) do { } while (0)
+#define RX_STAT_INC(hif_dev, c) do { } while (0)
+#define RX_STAT_ADD(hif_dev, c, a) do { } while (0)
-#define TX_QSTAT_INC(c) do { } while (0)
+#define CAB_STAT_INC(priv)
+#define TX_QSTAT_INC(priv, c)
static inline void ath9k_htc_err_stat_rx(struct ath9k_htc_priv *priv,
struct ath_rx_status *rs)
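
The reworked stat macros take the owning object explicitly and wrap the statement in do { ... } while (0), the usual way to make a statement-like macro behave as a single statement after an unbraced if/else. An illustrative (non-ath9k) version of that macro shape:

#include <linux/types.h>

struct my_stats { unsigned long ok, failed; };		/* illustrative */

/* Expands to exactly one statement, even without surrounding braces. */
#define MY_STAT_INC(stats, field) \
	do { (stats)->field++; } while (0)

static void my_record(struct my_stats *s, bool ok)
{
	if (ok)
		MY_STAT_INC(s, ok);
	else
		MY_STAT_INC(s, failed);
}
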
diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
index 07ac88f..96a3185 100644
--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
@@ -988,6 +988,8 @@ void ath9k_htc_disconnect_device(struct htc_target *htc_handle, bool hotunplug)
ath9k_deinit_device(htc_handle->drv_priv);
ath9k_stop_wmi(htc_handle->drv_priv);
+ ath9k_hif_usb_dealloc_urbs((struct hif_device_usb *)htc_handle->hif_dev);
+ ath9k_destroy_wmi(htc_handle->drv_priv);
ieee80211_free_hw(htc_handle->drv_priv->hw);
}
}
diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
index a23eaca..672789e 100644
--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
@@ -106,20 +106,20 @@ static inline enum htc_endpoint_id get_htc_epid(struct ath9k_htc_priv *priv,
switch (qnum) {
case 0:
- TX_QSTAT_INC(IEEE80211_AC_VO);
+ TX_QSTAT_INC(priv, IEEE80211_AC_VO);
epid = priv->data_vo_ep;
break;
case 1:
- TX_QSTAT_INC(IEEE80211_AC_VI);
+ TX_QSTAT_INC(priv, IEEE80211_AC_VI);
epid = priv->data_vi_ep;
break;
case 2:
- TX_QSTAT_INC(IEEE80211_AC_BE);
+ TX_QSTAT_INC(priv, IEEE80211_AC_BE);
epid = priv->data_be_ep;
break;
case 3:
default:
- TX_QSTAT_INC(IEEE80211_AC_BK);
+ TX_QSTAT_INC(priv, IEEE80211_AC_BK);
epid = priv->data_bk_ep;
break;
}
@@ -328,7 +328,7 @@ static void ath9k_htc_tx_data(struct ath9k_htc_priv *priv,
memcpy(tx_fhdr, (u8 *) &tx_hdr, sizeof(tx_hdr));
if (is_cab) {
- CAB_STAT_INC;
+ CAB_STAT_INC(priv);
tx_ctl->epid = priv->cab_ep;
return;
}
diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
index ca05b07..fe62ff6 100644
--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
+++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
@@ -391,7 +391,7 @@ static void ath9k_htc_fw_panic_report(struct htc_target *htc_handle,
* HTC Messages are handled directly here and the obtained SKB
* is freed.
*
- * Service messages (Data, WMI) passed to the corresponding
+ * Service messages (Data, WMI) are passed to the corresponding
* endpoint RX handlers, which have to free the SKB.
*/
void ath9k_htc_rx_msg(struct htc_target *htc_handle,
@@ -478,6 +478,8 @@ void ath9k_htc_rx_msg(struct htc_target *htc_handle,
if (endpoint->ep_callbacks.rx)
endpoint->ep_callbacks.rx(endpoint->ep_callbacks.priv,
skb, epid);
+ else
+ goto invalid;
}
}
diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
index f315c54..19345b8f7 100644
--- a/drivers/net/wireless/ath/ath9k/wmi.c
+++ b/drivers/net/wireless/ath/ath9k/wmi.c
@@ -341,6 +341,7 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
if (!time_left) {
ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
wmi_cmd_to_name(cmd_id));
+ wmi->last_seq_id = 0;
mutex_unlock(&wmi->op_mutex);
return -ETIMEDOUT;
}
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
index 9db12ff..fc622e6 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
@@ -90,6 +90,9 @@
#define BRCMF_ASSOC_PARAMS_FIXED_SIZE \
(sizeof(struct brcmf_assoc_params_le) - sizeof(u16))
+#define BRCMF_MAX_CHANSPEC_LIST \
+ (BRCMF_DCMD_MEDLEN / sizeof(__le32) - 1)
+
static bool check_vif_up(struct brcmf_cfg80211_vif *vif)
{
if (!test_bit(BRCMF_VIF_STATUS_READY, &vif->sme_state)) {
@@ -6557,6 +6560,13 @@ static int brcmf_construct_chaninfo(struct brcmf_cfg80211_info *cfg,
band->channels[i].flags = IEEE80211_CHAN_DISABLED;
total = le32_to_cpu(list->count);
+ if (total > BRCMF_MAX_CHANSPEC_LIST) {
+ bphy_err(drvr, "Invalid count of channel Spec. (%u)\n",
+ total);
+ err = -EINVAL;
+ goto fail_pbuf;
+ }
+
for (i = 0; i < total; i++) {
ch.chspec = (u16)le32_to_cpu(list->element[i]);
cfg->d11inf.decchspec(&ch);
@@ -6702,6 +6712,13 @@ static int brcmf_enable_bw40_2g(struct brcmf_cfg80211_info *cfg)
band = cfg_to_wiphy(cfg)->bands[NL80211_BAND_2GHZ];
list = (struct brcmf_chanspec_list *)pbuf;
num_chan = le32_to_cpu(list->count);
+ if (num_chan > BRCMF_MAX_CHANSPEC_LIST) {
+ bphy_err(drvr, "Invalid count of channel Spec. (%u)\n",
+ num_chan);
+ kfree(pbuf);
+ return -EINVAL;
+ }
+
for (i = 0; i < num_chan; i++) {
ch.chspec = (u16)le32_to_cpu(list->element[i]);
cfg->d11inf.decchspec(&ch);
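
Both hunks cap the firmware-reported channel count at BRCMF_MAX_CHANSPEC_LIST, i.e. the number of __le32 elements that actually fit in the DCMD buffer, before walking the list. A generic sketch of that bounds check with made-up buffer and limit names:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <asm/byteorder.h>

#define MY_BUF_LEN	1536					/* made-up buffer size */
#define MY_MAX_ELEMS	(MY_BUF_LEN / sizeof(__le32) - 1)	/* room after the count word */

static int my_parse_list(const __le32 *elems, u32 count)
{
	u32 i;

	if (count > MY_MAX_ELEMS)
		return -EINVAL;	/* firmware-reported count exceeds the buffer */

	for (i = 0; i < count; i++)
		(void)le32_to_cpu(elems[i]);	/* walk only validated entries */
	return 0;
}
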
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
index e3758bd..f29de63 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c
@@ -264,6 +264,7 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
err);
goto done;
}
+ buf[sizeof(buf) - 1] = '\0';
ptr = (char *)buf;
strsep(&ptr, "\n");
@@ -280,15 +281,17 @@ int brcmf_c_preinit_dcmds(struct brcmf_if *ifp)
if (err) {
brcmf_dbg(TRACE, "retrieving clmver failed, %d\n", err);
} else {
+ buf[sizeof(buf) - 1] = '\0';
clmver = (char *)buf;
- /* store CLM version for adding it to revinfo debugfs file */
- memcpy(ifp->drvr->clmver, clmver, sizeof(ifp->drvr->clmver));
/* Replace all newline/linefeed characters with space
* character
*/
strreplace(clmver, '\n', ' ');
+ /* store CLM version for adding it to revinfo debugfs file */
+ memcpy(ifp->drvr->clmver, clmver, sizeof(ifp->drvr->clmver));
+
brcmf_dbg(INFO, "CLM version = %s\n", clmver);
}
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
index e5bae62..f03fc6f 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
@@ -338,6 +338,7 @@ static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
bphy_err(drvr, "%s: failed to expand headroom\n",
brcmf_ifname(ifp));
atomic_inc(&drvr->bus_if->stats.pktcow_failed);
+ dev_kfree_skb(skb);
goto done;
}
}
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
index 7c8e08e..bd3b234 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c
@@ -346,8 +346,11 @@ brcmf_msgbuf_alloc_pktid(struct device *dev,
count++;
} while (count < pktids->array_size);
- if (count == pktids->array_size)
+ if (count == pktids->array_size) {
+ dma_unmap_single(dev, *physaddr, skb->len - data_offset,
+ pktids->direction);
return -ENOMEM;
+ }
array[*idx].data_offset = data_offset;
array[*idx].physaddr = *physaddr;
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
index f5f5a00..6d8a042 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c
@@ -1118,7 +1118,7 @@ static int brcmf_pcie_init_ringbuffers(struct brcmf_pciedev_info *devinfo)
BRCMF_NROF_H2D_COMMON_MSGRINGS;
max_completionrings = BRCMF_NROF_D2H_COMMON_MSGRINGS;
}
- if (max_flowrings > 256) {
+ if (max_flowrings > 512) {
brcmf_err(bus, "invalid max_flowrings(%d)\n", max_flowrings);
return -EIO;
}
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.c b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
index ada6ce3..bb728fb 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2200.c
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
@@ -3444,7 +3444,7 @@ static void ipw_rx_queue_reset(struct ipw_priv *priv,
dma_unmap_single(&priv->pci_dev->dev,
rxq->pool[i].dma_addr,
IPW_RX_BUF_SIZE, DMA_FROM_DEVICE);
- dev_kfree_skb(rxq->pool[i].skb);
+ dev_kfree_skb_irq(rxq->pool[i].skb);
rxq->pool[i].skb = NULL;
}
list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
@@ -11400,9 +11400,14 @@ static int ipw_wdev_init(struct net_device *dev)
set_wiphy_dev(wdev->wiphy, &priv->pci_dev->dev);
/* With that information in place, we can now register the wiphy... */
- if (wiphy_register(wdev->wiphy))
- rc = -EIO;
+ rc = wiphy_register(wdev->wiphy);
+ if (rc)
+ goto out;
+
+ return 0;
out:
+ kfree(priv->ieee->a_band.channels);
+ kfree(priv->ieee->bg_band.channels);
return rc;
}
diff --git a/drivers/net/wireless/intel/iwlegacy/3945-mac.c b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
index 45abb25..04c149f 100644
--- a/drivers/net/wireless/intel/iwlegacy/3945-mac.c
+++ b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
@@ -3378,10 +3378,12 @@ static DEVICE_ATTR(dump_errors, 0200, NULL, il3945_dump_error_log);
*
*****************************************************************************/
-static void
+static int
il3945_setup_deferred_work(struct il_priv *il)
{
il->workqueue = create_singlethread_workqueue(DRV_NAME);
+ if (!il->workqueue)
+ return -ENOMEM;
init_waitqueue_head(&il->wait_command_queue);
@@ -3398,6 +3400,8 @@ il3945_setup_deferred_work(struct il_priv *il)
timer_setup(&il->watchdog, il_bg_watchdog, 0);
tasklet_setup(&il->irq_tasklet, il3945_irq_tasklet);
+
+ return 0;
}
static void
@@ -3717,7 +3721,10 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
il_set_rxon_channel(il, &il->bands[NL80211_BAND_2GHZ].channels[5]);
- il3945_setup_deferred_work(il);
+ err = il3945_setup_deferred_work(il);
+ if (err)
+ goto out_remove_sysfs;
+
il3945_setup_handlers(il);
il_power_initialize(il);
@@ -3729,7 +3736,7 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
err = il3945_setup_mac(il);
if (err)
- goto out_remove_sysfs;
+ goto out_destroy_workqueue;
il_dbgfs_register(il, DRV_NAME);
@@ -3738,9 +3745,10 @@ il3945_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
return 0;
-out_remove_sysfs:
+out_destroy_workqueue:
destroy_workqueue(il->workqueue);
il->workqueue = NULL;
+out_remove_sysfs:
sysfs_remove_group(&pdev->dev.kobj, &il3945_attribute_group);
out_release_irq:
free_irq(il->pci_dev->irq, il);
diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
index 0223532..ff04282 100644
--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
@@ -6211,10 +6211,12 @@ il4965_bg_txpower_work(struct work_struct *work)
mutex_unlock(&il->mutex);
}
-static void
+static int
il4965_setup_deferred_work(struct il_priv *il)
{
il->workqueue = create_singlethread_workqueue(DRV_NAME);
+ if (!il->workqueue)
+ return -ENOMEM;
init_waitqueue_head(&il->wait_command_queue);
@@ -6233,6 +6235,8 @@ il4965_setup_deferred_work(struct il_priv *il)
timer_setup(&il->watchdog, il_bg_watchdog, 0);
tasklet_setup(&il->irq_tasklet, il4965_irq_tasklet);
+
+ return 0;
}
static void
@@ -6617,7 +6621,10 @@ il4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_disable_msi;
}
- il4965_setup_deferred_work(il);
+ err = il4965_setup_deferred_work(il);
+ if (err)
+ goto out_free_irq;
+
il4965_setup_handlers(il);
/*********************************************
@@ -6655,6 +6662,7 @@ il4965_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
out_destroy_workqueue:
destroy_workqueue(il->workqueue);
il->workqueue = NULL;
+out_free_irq:
free_irq(il->pci_dev->irq, il);
out_disable_msi:
pci_disable_msi(il->pci_dev);
diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c
index 683b632..83c1ff0 100644
--- a/drivers/net/wireless/intel/iwlegacy/common.c
+++ b/drivers/net/wireless/intel/iwlegacy/common.c
@@ -5173,7 +5173,7 @@ il_mac_reset_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
memset(&il->current_ht_config, 0, sizeof(struct il_ht_config));
/* new association get rid of ibss beacon skb */
- dev_kfree_skb(il->beacon_skb);
+ dev_consume_skb_irq(il->beacon_skb);
il->beacon_skb = NULL;
il->timestamp = 0;
@@ -5292,7 +5292,7 @@ il_beacon_update(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
}
spin_lock_irqsave(&il->lock, flags);
- dev_kfree_skb(il->beacon_skb);
+ dev_consume_skb_irq(il->beacon_skb);
il->beacon_skb = skb;
timestamp = ((struct ieee80211_mgmt *)skb->data)->u.beacon.timestamp;
diff --git a/drivers/net/wireless/intersil/orinoco/hw.c b/drivers/net/wireless/intersil/orinoco/hw.c
index 0aea35c..4fcca08 100644
--- a/drivers/net/wireless/intersil/orinoco/hw.c
+++ b/drivers/net/wireless/intersil/orinoco/hw.c
@@ -931,6 +931,8 @@ int __orinoco_hw_setup_enc(struct orinoco_private *priv)
err = hermes_write_wordrec(hw, USER_BAP,
HERMES_RID_CNFAUTHENTICATION_AGERE,
auth_flag);
+ if (err)
+ return err;
}
err = hermes_write_wordrec(hw, USER_BAP,
HERMES_RID_CNFWEPENABLED_AGERE,
diff --git a/drivers/net/wireless/marvell/libertas/cmdresp.c b/drivers/net/wireless/marvell/libertas/cmdresp.c
index cb515c5..74cb755 100644
--- a/drivers/net/wireless/marvell/libertas/cmdresp.c
+++ b/drivers/net/wireless/marvell/libertas/cmdresp.c
@@ -48,7 +48,7 @@ void lbs_mac_event_disconnected(struct lbs_private *priv,
/* Free Tx and Rx packets */
spin_lock_irqsave(&priv->driver_lock, flags);
- kfree_skb(priv->currenttxskb);
+ dev_kfree_skb_irq(priv->currenttxskb);
priv->currenttxskb = NULL;
priv->tx_pending_len = 0;
spin_unlock_irqrestore(&priv->driver_lock, flags);
diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
index 32fdc41..2240b4db 100644
--- a/drivers/net/wireless/marvell/libertas/if_usb.c
+++ b/drivers/net/wireless/marvell/libertas/if_usb.c
@@ -637,7 +637,7 @@ static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
priv->resp_len[i] = (recvlength - MESSAGE_HEADER_LEN);
memcpy(priv->resp_buf[i], recvbuff + MESSAGE_HEADER_LEN,
priv->resp_len[i]);
- kfree_skb(skb);
+ dev_kfree_skb_irq(skb);
lbs_notify_command_response(priv, i);
spin_unlock_irqrestore(&priv->driver_lock, flags);
diff --git a/drivers/net/wireless/marvell/libertas/main.c b/drivers/net/wireless/marvell/libertas/main.c
index 64fc5e4..4687777 100644
--- a/drivers/net/wireless/marvell/libertas/main.c
+++ b/drivers/net/wireless/marvell/libertas/main.c
@@ -217,7 +217,7 @@ int lbs_stop_iface(struct lbs_private *priv)
spin_lock_irqsave(&priv->driver_lock, flags);
priv->iface_running = false;
- kfree_skb(priv->currenttxskb);
+ dev_kfree_skb_irq(priv->currenttxskb);
priv->currenttxskb = NULL;
priv->tx_pending_len = 0;
spin_unlock_irqrestore(&priv->driver_lock, flags);
@@ -870,6 +870,7 @@ static int lbs_init_adapter(struct lbs_private *priv)
ret = kfifo_alloc(&priv->event_fifo, sizeof(u32) * 16, GFP_KERNEL);
if (ret) {
pr_err("Out of memory allocating event FIFO buffer\n");
+ lbs_free_cmd_buffer(priv);
goto out;
}
diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
index 75b5319..1750f5e 100644
--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
@@ -613,7 +613,7 @@ static inline void process_cmdrequest(int recvlength, uint8_t *recvbuff,
spin_lock_irqsave(&priv->driver_lock, flags);
memcpy(priv->cmd_resp_buff, recvbuff + MESSAGE_HEADER_LEN,
recvlength - MESSAGE_HEADER_LEN);
- kfree_skb(skb);
+ dev_kfree_skb_irq(skb);
lbtf_cmd_response_rx(priv);
spin_unlock_irqrestore(&priv->driver_lock, flags);
}
diff --git a/drivers/net/wireless/marvell/mwifiex/11n.c b/drivers/net/wireless/marvell/mwifiex/11n.c
index cf08a4a..b99381e 100644
--- a/drivers/net/wireless/marvell/mwifiex/11n.c
+++ b/drivers/net/wireless/marvell/mwifiex/11n.c
@@ -890,7 +890,7 @@ mwifiex_send_delba_txbastream_tbl(struct mwifiex_private *priv, u8 tid)
*/
void mwifiex_update_ampdu_txwinsize(struct mwifiex_adapter *adapter)
{
- u8 i;
+ u8 i, j;
u32 tx_win_size;
struct mwifiex_private *priv;
@@ -921,8 +921,8 @@ void mwifiex_update_ampdu_txwinsize(struct mwifiex_adapter *adapter)
if (tx_win_size != priv->add_ba_param.tx_win_size) {
if (!priv->media_connected)
continue;
- for (i = 0; i < MAX_NUM_TID; i++)
- mwifiex_send_delba_txbastream_tbl(priv, i);
+ for (j = 0; j < MAX_NUM_TID; j++)
+ mwifiex_send_delba_txbastream_tbl(priv, j);
}
}
}
diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.c b/drivers/net/wireless/marvell/mwifiex/sdio.c
index bde9e4b..7fb6eef 100644
--- a/drivers/net/wireless/marvell/mwifiex/sdio.c
+++ b/drivers/net/wireless/marvell/mwifiex/sdio.c
@@ -485,6 +485,7 @@ static struct memory_type_mapping mem_type_mapping_tbl[] = {
};
static const struct of_device_id mwifiex_sdio_of_match_table[] = {
+ { .compatible = "marvell,sd8787" },
{ .compatible = "marvell,sd8897" },
{ .compatible = "marvell,sd8997" },
{ }
diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
index 7aecde3..1aa0479 100644
--- a/drivers/net/wireless/mediatek/mt76/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/dma.c
@@ -486,6 +486,7 @@ mt76_dma_rx_cleanup(struct mt76_dev *dev, struct mt76_queue *q)
bool more;
spin_lock_bh(&q->lock);
+
do {
buf = mt76_dma_dequeue(dev, q, true, NULL, NULL, &more);
if (!buf)
@@ -493,6 +494,12 @@ mt76_dma_rx_cleanup(struct mt76_dev *dev, struct mt76_queue *q)
skb_free_frag(buf);
} while (1);
+
+ if (q->rx_head) {
+ dev_kfree_skb(q->rx_head);
+ q->rx_head = NULL;
+ }
+
spin_unlock_bh(&q->lock);
if (!q->rx_page.va)
@@ -515,12 +522,6 @@ mt76_dma_rx_reset(struct mt76_dev *dev, enum mt76_rxq_id qid)
mt76_dma_rx_cleanup(dev, q);
mt76_dma_sync_idx(dev, q);
mt76_dma_rx_fill(dev, q);
-
- if (!q->rx_head)
- return;
-
- dev_kfree_skb(q->rx_head);
- q->rx_head = NULL;
}
static void
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
index e9d854e..1c90045 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
@@ -2689,7 +2689,7 @@ static int mt7915_driver_own(struct mt7915_dev *dev)
{
mt76_wr(dev, MT_TOP_LPCR_HOST_BAND0, MT_TOP_LPCR_HOST_DRV_OWN);
if (!mt76_poll_msec(dev, MT_TOP_LPCR_HOST_BAND0,
- MT_TOP_LPCR_HOST_FW_OWN, 0, 500)) {
+ MT_TOP_LPCR_HOST_FW_OWN_STAT, 0, 500)) {
dev_err(dev->mt76.dev, "Timeout for driver own\n");
return -EIO;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
index a213b5c..f4101cc 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
@@ -426,6 +426,7 @@
#define MT_TOP_LPCR_HOST_BAND0 MT_TOP(0x10)
#define MT_TOP_LPCR_HOST_FW_OWN BIT(0)
#define MT_TOP_LPCR_HOST_DRV_OWN BIT(1)
+#define MT_TOP_LPCR_HOST_FW_OWN_STAT BIT(2)
#define MT_TOP_MISC MT_TOP(0xf0)
#define MT_TOP_MISC_FW_STATE GENMASK(2, 0)
diff --git a/drivers/net/wireless/mediatek/mt7601u/dma.c b/drivers/net/wireless/mediatek/mt7601u/dma.c
index ed78d2c..fd3b768 100644
--- a/drivers/net/wireless/mediatek/mt7601u/dma.c
+++ b/drivers/net/wireless/mediatek/mt7601u/dma.c
@@ -123,7 +123,8 @@ static u16 mt7601u_rx_next_seg_len(u8 *data, u32 data_len)
if (data_len < min_seg_len ||
WARN_ON_ONCE(!dma_len) ||
WARN_ON_ONCE(dma_len + MT_DMA_HDRS > data_len) ||
- WARN_ON_ONCE(dma_len & 0x3))
+ WARN_ON_ONCE(dma_len & 0x3) ||
+ WARN_ON_ONCE(dma_len < min_seg_len))
return 0;
return MT_DMA_HDRS + dma_len;
diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
index 9dfb1a2..5e3ec20 100644
--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
+++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
@@ -724,6 +724,7 @@ netdev_tx_t wilc_mac_xmit(struct sk_buff *skb, struct net_device *ndev)
if (skb->dev != ndev) {
netdev_err(ndev, "Packet not destined to this device\n");
+ dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
index b06508d..46767dc 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c
@@ -1669,6 +1669,11 @@ static void rtl8192e_enable_rf(struct rtl8xxxu_priv *priv)
val8 = rtl8xxxu_read8(priv, REG_PAD_CTRL1);
val8 &= ~BIT(0);
rtl8xxxu_write8(priv, REG_PAD_CTRL1, val8);
+
+ /*
+ * Fix transmission failure of rtl8192e.
+ */
+ rtl8xxxu_write8(priv, REG_TXPAUSE, 0x00);
}
struct rtl8xxxu_fileops rtl8192eu_fops = {
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
index 7370d92..8873070 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
@@ -4369,12 +4369,9 @@ void rtl8xxxu_gen1_report_connect(struct rtl8xxxu_priv *priv,
void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
u8 macid, bool connect)
{
-#ifdef RTL8XXXU_GEN2_REPORT_CONNECT
/*
- * Barry Day reports this causes issues with 8192eu and 8723bu
- * devices reconnecting. The reason for this is unclear, but
- * until it is better understood, leave the code in place but
- * disabled, so it is not lost.
+ * The firmware turns on the rate control when it knows it's
+ * connected to a network.
*/
struct h2c_cmd h2c;
@@ -4387,7 +4384,6 @@ void rtl8xxxu_gen2_report_connect(struct rtl8xxxu_priv *priv,
h2c.media_status_rpt.parm &= ~BIT(0);
rtl8xxxu_gen2_h2c_cmd(priv, &h2c, sizeof(h2c.media_status_rpt));
-#endif
}
void rtl8xxxu_gen1_init_aggregation(struct rtl8xxxu_priv *priv)
@@ -5188,7 +5184,7 @@ static void rtl8xxxu_queue_rx_urb(struct rtl8xxxu_priv *priv,
pending = priv->rx_urb_pending_count;
} else {
skb = (struct sk_buff *)rx_urb->urb.context;
- dev_kfree_skb(skb);
+ dev_kfree_skb_irq(skb);
usb_free_urb(&rx_urb->urb);
}
@@ -5494,9 +5490,6 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
btcoex = &priv->bt_coex;
rarpt = &priv->ra_report;
- if (priv->rf_paths > 1)
- goto out;
-
while (!skb_queue_empty(&priv->c2hcmd_queue)) {
skb = skb_dequeue(&priv->c2hcmd_queue);
@@ -5548,10 +5541,9 @@ static void rtl8xxxu_c2hcmd_callback(struct work_struct *work)
default:
break;
}
- }
-out:
- dev_kfree_skb(skb);
+ dev_kfree_skb(skb);
+ }
}
static void rtl8723bu_handle_c2h(struct rtl8xxxu_priv *priv,
@@ -5917,7 +5909,6 @@ static int rtl8xxxu_config(struct ieee80211_hw *hw, u32 changed)
{
struct rtl8xxxu_priv *priv = hw->priv;
struct device *dev = &priv->udev->dev;
- u16 val16;
int ret = 0, channel;
bool ht40;
@@ -5927,14 +5918,6 @@ static int rtl8xxxu_config(struct ieee80211_hw *hw, u32 changed)
__func__, hw->conf.chandef.chan->hw_value,
changed, hw->conf.chandef.width);
- if (changed & IEEE80211_CONF_CHANGE_RETRY_LIMITS) {
- val16 = ((hw->conf.long_frame_max_tx_count <<
- RETRY_LIMIT_LONG_SHIFT) & RETRY_LIMIT_LONG_MASK) |
- ((hw->conf.short_frame_max_tx_count <<
- RETRY_LIMIT_SHORT_SHIFT) & RETRY_LIMIT_SHORT_MASK);
- rtl8xxxu_write16(priv, REG_RETRY_LIMIT, val16);
- }
-
if (changed & IEEE80211_CONF_CHANGE_CHANNEL) {
switch (hw->conf.chandef.width) {
case NL80211_CHAN_WIDTH_20_NOHT:
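
In the rtl8xxxu_c2hcmd_callback() hunk above, the dev_kfree_skb() call moves inside the while loop so every dequeued skb is freed, not just the last one handled before the function returned. A small stand-alone model of the leak this avoids (list and values are made up):

#include <stdlib.h>
#include <stdio.h>

struct node { struct node *next; int val; };

int main(void)
{
        struct node *head = NULL, *n;
        int i;

        for (i = 0; i < 3; i++) {
                n = malloc(sizeof(*n));
                if (!n)
                        break;
                n->val = i;
                n->next = head;
                head = n;
        }

        while (head) {
                n = head;
                head = head->next;
                printf("handled %d\n", n->val);
                free(n);        /* freeing once after the loop would leak all but the last */
        }
        return 0;
}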
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
index bf686a9..13e9717 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
@@ -68,8 +68,10 @@ static void _rtl88ee_return_beacon_queue_skb(struct ieee80211_hw *hw)
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
+ struct sk_buff_head free_list;
unsigned long flags;
+ skb_queue_head_init(&free_list);
spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
while (skb_queue_len(&ring->queue)) {
struct rtl_tx_desc *entry = &ring->desc[ring->idx];
@@ -79,10 +81,12 @@ static void _rtl88ee_return_beacon_queue_skb(struct ieee80211_hw *hw)
rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
true, HW_DESC_TXBUFF_ADDR),
skb->len, DMA_TO_DEVICE);
- kfree_skb(skb);
+ __skb_queue_tail(&free_list, skb);
ring->idx = (ring->idx + 1) % ring->entries;
}
spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
+
+ __skb_queue_purge(&free_list);
}
static void _rtl88ee_disable_bcn_sub_func(struct ieee80211_hw *hw)
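
This beacon-queue cleanup (repeated for rtl8723be and rtl8821ae in the hunks that follow) moves the actual kfree_skb() calls out of the irq_th_lock critical section: skbs are collected on a local list while the lock is held and purged after it is dropped. A kernel-style sketch of the pattern, with example_ring standing in for the driver's tx ring (an assumption, not part of the patch):

static void example_drain_ring(struct example_ring *ring, spinlock_t *lock)
{
        struct sk_buff_head free_list;
        struct sk_buff *skb;
        unsigned long flags;

        skb_queue_head_init(&free_list);

        spin_lock_irqsave(lock, flags);
        while ((skb = __skb_dequeue(&ring->queue)) != NULL)
                __skb_queue_tail(&free_list, skb);      /* no freeing under the lock */
        spin_unlock_irqrestore(lock, flags);

        __skb_queue_purge(&free_list);  /* free after interrupts are re-enabled */
}

This keeps the IRQs-off window short and avoids running the skb freeing path from an unsuitable context.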
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
index 0748aed..ccbb082d 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
@@ -30,8 +30,10 @@ static void _rtl8723be_return_beacon_queue_skb(struct ieee80211_hw *hw)
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
+ struct sk_buff_head free_list;
unsigned long flags;
+ skb_queue_head_init(&free_list);
spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
while (skb_queue_len(&ring->queue)) {
struct rtl_tx_desc *entry = &ring->desc[ring->idx];
@@ -41,10 +43,12 @@ static void _rtl8723be_return_beacon_queue_skb(struct ieee80211_hw *hw)
rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
true, HW_DESC_TXBUFF_ADDR),
skb->len, DMA_TO_DEVICE);
- kfree_skb(skb);
+ __skb_queue_tail(&free_list, skb);
ring->idx = (ring->idx + 1) % ring->entries;
}
spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
+
+ __skb_queue_purge(&free_list);
}
static void _rtl8723be_set_bcn_ctrl_reg(struct ieee80211_hw *hw,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
index 33ffc24..c4ee65c 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
@@ -26,8 +26,10 @@ static void _rtl8821ae_return_beacon_queue_skb(struct ieee80211_hw *hw)
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[BEACON_QUEUE];
+ struct sk_buff_head free_list;
unsigned long flags;
+ skb_queue_head_init(&free_list);
spin_lock_irqsave(&rtlpriv->locks.irq_th_lock, flags);
while (skb_queue_len(&ring->queue)) {
struct rtl_tx_desc *entry = &ring->desc[ring->idx];
@@ -37,10 +39,12 @@ static void _rtl8821ae_return_beacon_queue_skb(struct ieee80211_hw *hw)
rtlpriv->cfg->ops->get_desc(hw, (u8 *)entry,
true, HW_DESC_TXBUFF_ADDR),
skb->len, DMA_TO_DEVICE);
- kfree_skb(skb);
+ __skb_queue_tail(&free_list, skb);
ring->idx = (ring->idx + 1) % ring->entries;
}
spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
+
+ __skb_queue_purge(&free_list);
}
static void _rtl8821ae_set_bcn_ctrl_reg(struct ieee80211_hw *hw,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
index a29321e..5323ead 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
@@ -1598,18 +1598,6 @@ static bool _rtl8812ae_get_integer_from_string(const char *str, u8 *pint)
return true;
}
-static bool _rtl8812ae_eq_n_byte(const char *str1, const char *str2, u32 num)
-{
- if (num == 0)
- return false;
- while (num > 0) {
- num--;
- if (str1[num] != str2[num])
- return false;
- }
- return true;
-}
-
static s8 _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(struct ieee80211_hw *hw,
u8 band, u8 channel)
{
@@ -1659,42 +1647,42 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw,
power_limit = power_limit > MAX_POWER_INDEX ?
MAX_POWER_INDEX : power_limit;
- if (_rtl8812ae_eq_n_byte(pregulation, "FCC", 3))
+ if (strcmp(pregulation, "FCC") == 0)
regulation = 0;
- else if (_rtl8812ae_eq_n_byte(pregulation, "MKK", 3))
+ else if (strcmp(pregulation, "MKK") == 0)
regulation = 1;
- else if (_rtl8812ae_eq_n_byte(pregulation, "ETSI", 4))
+ else if (strcmp(pregulation, "ETSI") == 0)
regulation = 2;
- else if (_rtl8812ae_eq_n_byte(pregulation, "WW13", 4))
+ else if (strcmp(pregulation, "WW13") == 0)
regulation = 3;
- if (_rtl8812ae_eq_n_byte(prate_section, "CCK", 3))
+ if (strcmp(prate_section, "CCK") == 0)
rate_section = 0;
- else if (_rtl8812ae_eq_n_byte(prate_section, "OFDM", 4))
+ else if (strcmp(prate_section, "OFDM") == 0)
rate_section = 1;
- else if (_rtl8812ae_eq_n_byte(prate_section, "HT", 2) &&
- _rtl8812ae_eq_n_byte(prf_path, "1T", 2))
+ else if (strcmp(prate_section, "HT") == 0 &&
+ strcmp(prf_path, "1T") == 0)
rate_section = 2;
- else if (_rtl8812ae_eq_n_byte(prate_section, "HT", 2) &&
- _rtl8812ae_eq_n_byte(prf_path, "2T", 2))
+ else if (strcmp(prate_section, "HT") == 0 &&
+ strcmp(prf_path, "2T") == 0)
rate_section = 3;
- else if (_rtl8812ae_eq_n_byte(prate_section, "VHT", 3) &&
- _rtl8812ae_eq_n_byte(prf_path, "1T", 2))
+ else if (strcmp(prate_section, "VHT") == 0 &&
+ strcmp(prf_path, "1T") == 0)
rate_section = 4;
- else if (_rtl8812ae_eq_n_byte(prate_section, "VHT", 3) &&
- _rtl8812ae_eq_n_byte(prf_path, "2T", 2))
+ else if (strcmp(prate_section, "VHT") == 0 &&
+ strcmp(prf_path, "2T") == 0)
rate_section = 5;
- if (_rtl8812ae_eq_n_byte(pbandwidth, "20M", 3))
+ if (strcmp(pbandwidth, "20M") == 0)
bandwidth = 0;
- else if (_rtl8812ae_eq_n_byte(pbandwidth, "40M", 3))
+ else if (strcmp(pbandwidth, "40M") == 0)
bandwidth = 1;
- else if (_rtl8812ae_eq_n_byte(pbandwidth, "80M", 3))
+ else if (strcmp(pbandwidth, "80M") == 0)
bandwidth = 2;
- else if (_rtl8812ae_eq_n_byte(pbandwidth, "160M", 4))
+ else if (strcmp(pbandwidth, "160M") == 0)
bandwidth = 3;
- if (_rtl8812ae_eq_n_byte(pband, "2.4G", 4)) {
+ if (strcmp(pband, "2.4G") == 0) {
ret = _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(hw,
BAND_ON_2_4G,
channel);
@@ -1718,7 +1706,7 @@ static void _rtl8812ae_phy_set_txpower_limit(struct ieee80211_hw *hw,
regulation, bandwidth, rate_section, channel_index,
rtlphy->txpwr_limit_2_4g[regulation][bandwidth]
[rate_section][channel_index][RF90_PATH_A]);
- } else if (_rtl8812ae_eq_n_byte(pband, "5G", 2)) {
+ } else if (strcmp(pband, "5G") == 0) {
ret = _rtl8812ae_phy_get_chnl_idx_of_txpwr_lmt(hw,
BAND_ON_5G,
channel);
diff --git a/drivers/net/wireless/rndis_wlan.c b/drivers/net/wireless/rndis_wlan.c
index 63ce244..70841d13 100644
--- a/drivers/net/wireless/rndis_wlan.c
+++ b/drivers/net/wireless/rndis_wlan.c
@@ -694,8 +694,8 @@ static int rndis_query_oid(struct usbnet *dev, u32 oid, void *data, int *len)
struct rndis_query *get;
struct rndis_query_c *get_c;
} u;
- int ret, buflen;
- int resplen, respoffs, copylen;
+ int ret;
+ size_t buflen, resplen, respoffs, copylen;
buflen = *len + sizeof(*u.get);
if (buflen < CONTROL_BUFFER_SIZE)
@@ -730,22 +730,15 @@ static int rndis_query_oid(struct usbnet *dev, u32 oid, void *data, int *len)
if (respoffs > buflen) {
/* Device returned data offset outside buffer, error. */
- netdev_dbg(dev->net, "%s(%s): received invalid "
- "data offset: %d > %d\n", __func__,
- oid_to_string(oid), respoffs, buflen);
+ netdev_dbg(dev->net,
+ "%s(%s): received invalid data offset: %zu > %zu\n",
+ __func__, oid_to_string(oid), respoffs, buflen);
ret = -EINVAL;
goto exit_unlock;
}
- if ((resplen + respoffs) > buflen) {
- /* Device would have returned more data if buffer would
- * have been big enough. Copy just the bits that we got.
- */
- copylen = buflen - respoffs;
- } else {
- copylen = resplen;
- }
+ copylen = min(resplen, buflen - respoffs);
if (copylen > *len)
copylen = *len;
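
The rndis_query_oid() hunk switches the length bookkeeping to size_t and collapses the overflow handling into copylen = min(resplen, buflen - respoffs), after first rejecting respoffs > buflen. A self-contained userspace sketch of the same clamping (function and parameter names are illustrative):

#include <stddef.h>
#include <string.h>

static size_t copy_response(void *dst, size_t dst_len, const void *buf,
                            size_t buflen, size_t respoffs, size_t resplen)
{
        size_t copylen;

        if (respoffs > buflen)          /* offset itself is out of range */
                return 0;

        /* Never read past the end of the receive buffer, even if the
         * device claims a larger response than it delivered. */
        copylen = buflen - respoffs < resplen ? buflen - respoffs : resplen;
        if (copylen > dst_len)
                copylen = dst_len;

        memcpy(dst, (const char *)buf + respoffs, copylen);
        return copylen;
}

Using unsigned types matters here: with signed ints, device-controlled offset and length values could wrap the arithmetic and defeat the bounds checks.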
diff --git a/drivers/net/wireless/rsi/rsi_91x_coex.c b/drivers/net/wireless/rsi/rsi_91x_coex.c
index a0c5d02..7395359 100644
--- a/drivers/net/wireless/rsi/rsi_91x_coex.c
+++ b/drivers/net/wireless/rsi/rsi_91x_coex.c
@@ -160,6 +160,7 @@ int rsi_coex_attach(struct rsi_common *common)
rsi_coex_scheduler_thread,
"Coex-Tx-Thread")) {
rsi_dbg(ERR_ZONE, "%s: Unable to init tx thrd\n", __func__);
+ kfree(coex_cb);
return -EINVAL;
}
return 0;
diff --git a/drivers/net/wireless/wl3501_cs.c b/drivers/net/wireless/wl3501_cs.c
index 672f5d5..cb71b73 100644
--- a/drivers/net/wireless/wl3501_cs.c
+++ b/drivers/net/wireless/wl3501_cs.c
@@ -1328,7 +1328,7 @@ static netdev_tx_t wl3501_hard_start_xmit(struct sk_buff *skb,
} else {
++dev->stats.tx_packets;
dev->stats.tx_bytes += skb->len;
- kfree_skb(skb);
+ dev_kfree_skb_irq(skb);
if (this->tx_buffer_cnt < 2)
netif_stop_queue(dev);
diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index adfd21a..f315bdd 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -166,7 +166,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
- struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
+ struct gnttab_copy tx_copy_ops[2 * MAX_PENDING_REQS];
struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
/* passed to gnttab_[un]map_refs with pages under (un)mapping */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 26428db8..303d8eb 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -334,6 +334,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
struct xenvif_tx_cb {
u16 copy_pending_idx[XEN_NETBK_LEGACY_SLOTS_MAX + 1];
u8 copy_count;
+ u32 split_mask;
};
#define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
@@ -361,6 +362,8 @@ static inline struct sk_buff *xenvif_alloc_skb(unsigned int size)
struct sk_buff *skb =
alloc_skb(size + NET_SKB_PAD + NET_IP_ALIGN,
GFP_ATOMIC | __GFP_NOWARN);
+
+ BUILD_BUG_ON(sizeof(*XENVIF_TX_CB(skb)) > sizeof(skb->cb));
if (unlikely(skb == NULL))
return NULL;
@@ -396,11 +399,13 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
nr_slots = shinfo->nr_frags + 1;
copy_count(skb) = 0;
+ XENVIF_TX_CB(skb)->split_mask = 0;
/* Create copy ops for exactly data_len bytes into the skb head. */
__skb_put(skb, data_len);
while (data_len > 0) {
int amount = data_len > txp->size ? txp->size : data_len;
+ bool split = false;
cop->source.u.ref = txp->gref;
cop->source.domid = queue->vif->domid;
@@ -413,6 +418,13 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
- data_len);
+ /* Don't cross local page boundary! */
+ if (cop->dest.offset + amount > XEN_PAGE_SIZE) {
+ amount = XEN_PAGE_SIZE - cop->dest.offset;
+ XENVIF_TX_CB(skb)->split_mask |= 1U << copy_count(skb);
+ split = true;
+ }
+
cop->len = amount;
cop->flags = GNTCOPY_source_gref;
@@ -420,7 +432,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
pending_idx = queue->pending_ring[index];
callback_param(queue, pending_idx).ctx = NULL;
copy_pending_idx(skb, copy_count(skb)) = pending_idx;
- copy_count(skb)++;
+ if (!split)
+ copy_count(skb)++;
cop++;
data_len -= amount;
@@ -441,7 +454,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
nr_slots--;
} else {
/* The copy op partially covered the tx_request.
- * The remainder will be mapped.
+ * The remainder will be mapped or copied in the next
+ * iteration.
*/
txp->offset += amount;
txp->size -= amount;
@@ -539,6 +553,13 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
pending_idx = copy_pending_idx(skb, i);
newerr = (*gopp_copy)->status;
+
+ /* Split copies need to be handled together. */
+ if (XENVIF_TX_CB(skb)->split_mask & (1U << i)) {
+ (*gopp_copy)++;
+ if (!newerr)
+ newerr = (*gopp_copy)->status;
+ }
if (likely(!newerr)) {
/* The first frag might still have this slot mapped */
if (i < copy_count(skb) - 1 || !sharedslot)
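
The xen-netback change above splits any grant copy whose destination would cross a local XEN_PAGE_SIZE boundary into two ops that share one pending slot, with a bit in split_mask telling xenvif_tx_check_gop() to merge the status of both halves. A small stand-alone model of the splitting step (offsets and lengths are made up):

#include <stdio.h>

#define XEN_PAGE_SIZE 4096u

int main(void)
{
        /* One 200-byte copy landing at destination offset 4000: it must
         * not cross the 4 KiB page, so it becomes two ops sharing one slot. */
        unsigned int dest_off = 4000, remaining = 200;
        unsigned int slot = 0, split_mask = 0;

        while (remaining) {
                unsigned int amount = remaining;
                int split = 0;

                if (dest_off + amount > XEN_PAGE_SIZE) {
                        amount = XEN_PAGE_SIZE - dest_off;
                        split_mask |= 1u << slot;       /* completion must check two ops */
                        split = 1;
                }
                printf("slot %u: %u bytes at offset %u%s\n", slot, amount,
                       dest_off, split ? " (first half of a split copy)" : "");
                dest_off = (dest_off + amount) % XEN_PAGE_SIZE;
                remaining -= amount;
                if (!split)
                        slot++;         /* the second half reuses the same pending slot */
        }
        return 0;
}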
diff --git a/drivers/nfc/fdp/i2c.c b/drivers/nfc/fdp/i2c.c
index 051c43a..5f97dcf 100644
--- a/drivers/nfc/fdp/i2c.c
+++ b/drivers/nfc/fdp/i2c.c
@@ -249,6 +249,9 @@ static void fdp_nci_i2c_read_device_properties(struct device *dev,
len, sizeof(**fw_vsc_cfg),
GFP_KERNEL);
+ if (!*fw_vsc_cfg)
+ goto alloc_err;
+
r = device_property_read_u8_array(dev, FDP_DP_FW_VSC_CFG_NAME,
*fw_vsc_cfg, len);
@@ -262,6 +265,7 @@ static void fdp_nci_i2c_read_device_properties(struct device *dev,
*fw_vsc_cfg = NULL;
}
+alloc_err:
dev_dbg(dev, "Clock type: %d, clock frequency: %d, VSC: %s",
*clock_type, *clock_freq, *fw_vsc_cfg != NULL ? "yes" : "no");
}
diff --git a/drivers/nfc/pn533/usb.c b/drivers/nfc/pn533/usb.c
index 62ad26e..47d423c 100644
--- a/drivers/nfc/pn533/usb.c
+++ b/drivers/nfc/pn533/usb.c
@@ -175,6 +175,7 @@ static int pn533_usb_send_frame(struct pn533 *dev,
print_hex_dump_debug("PN533 TX: ", DUMP_PREFIX_NONE, 16, 1,
out->data, out->len, false);
+ arg.phy = phy;
init_completion(&arg.done);
cntx = phy->out_urb->context;
phy->out_urb->context = &arg;
diff --git a/drivers/nfc/st-nci/ndlc.c b/drivers/nfc/st-nci/ndlc.c
index e9dc313..3564e33 100644
--- a/drivers/nfc/st-nci/ndlc.c
+++ b/drivers/nfc/st-nci/ndlc.c
@@ -286,13 +286,15 @@ EXPORT_SYMBOL(ndlc_probe);
void ndlc_remove(struct llt_ndlc *ndlc)
{
- st_nci_remove(ndlc->ndev);
-
/* cancel timers */
del_timer_sync(&ndlc->t1_timer);
del_timer_sync(&ndlc->t2_timer);
ndlc->t2_active = false;
ndlc->t1_active = false;
+ /* cancel work */
+ cancel_work_sync(&ndlc->sm_work);
+
+ st_nci_remove(ndlc->ndev);
skb_queue_purge(&ndlc->rcv_q);
skb_queue_purge(&ndlc->send_q);
diff --git a/drivers/nfc/st-nci/se.c b/drivers/nfc/st-nci/se.c
index 04a2cea..57d09db 100644
--- a/drivers/nfc/st-nci/se.c
+++ b/drivers/nfc/st-nci/se.c
@@ -674,6 +674,12 @@ int st_nci_se_io(struct nci_dev *ndev, u32 se_idx,
ST_NCI_EVT_TRANSMIT_DATA, apdu,
apdu_length);
default:
+ /* Need to free cb_context here as at the moment we can't
+ * clearly indicate to the caller if the callback function
+ * would be called (and free it) or not. In both cases a
+ * negative value may be returned to the caller.
+ */
+ kfree(cb_context);
return -ENODEV;
}
}
diff --git a/drivers/nfc/st21nfca/se.c b/drivers/nfc/st21nfca/se.c
index d416365..6a1d3b2 100644
--- a/drivers/nfc/st21nfca/se.c
+++ b/drivers/nfc/st21nfca/se.c
@@ -236,6 +236,12 @@ int st21nfca_hci_se_io(struct nfc_hci_dev *hdev, u32 se_idx,
ST21NFCA_EVT_TRANSMIT_DATA,
apdu, apdu_length);
default:
+ /* Need to free cb_context here as at the moment we can't
+ * clearly indicate to the caller if the callback function
+ * would be called (and free it) or not. In both cases a
+ * negative value may be returned to the caller.
+ */
+ kfree(cb_context);
return -ENODEV;
}
}
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 672f53d..ef9d7a7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -853,16 +853,26 @@ static blk_status_t nvme_setup_discard(struct nvme_ns *ns, struct request *req,
range = page_address(ns->ctrl->discard_page);
}
- __rq_for_each_bio(bio, req) {
- u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
- u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
+ if (queue_max_discard_segments(req->q) == 1) {
+ u64 slba = nvme_sect_to_lba(ns, blk_rq_pos(req));
+ u32 nlb = blk_rq_sectors(req) >> (ns->lba_shift - 9);
- if (n < segments) {
- range[n].cattr = cpu_to_le32(0);
- range[n].nlb = cpu_to_le32(nlb);
- range[n].slba = cpu_to_le64(slba);
+ range[0].cattr = cpu_to_le32(0);
+ range[0].nlb = cpu_to_le32(nlb);
+ range[0].slba = cpu_to_le64(slba);
+ n = 1;
+ } else {
+ __rq_for_each_bio(bio, req) {
+ u64 slba = nvme_sect_to_lba(ns, bio->bi_iter.bi_sector);
+ u32 nlb = bio->bi_iter.bi_size >> ns->lba_shift;
+
+ if (n < segments) {
+ range[n].cattr = cpu_to_le32(0);
+ range[n].nlb = cpu_to_le32(nlb);
+ range[n].slba = cpu_to_le64(slba);
+ }
+ n++;
}
- n++;
}
if (WARN_ON_ONCE(n != segments)) {
@@ -1132,7 +1142,7 @@ u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode)
if (ns) {
if (ns->head->effects)
effects = le32_to_cpu(ns->head->effects->iocs[opcode]);
- if (ns->head->ids.csi == NVME_CAP_CSS_NVM)
+ if (ns->head->ids.csi == NVME_CSI_NVM)
effects |= nvme_known_nvm_effects(opcode);
if (effects & ~(NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC))
dev_warn_once(ctrl->device,
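
When the queue advertises only one discard segment, the request may still be a merge of several bios, so the nvme_setup_discard() fast path above builds a single range from blk_rq_pos()/blk_rq_sectors() instead of walking the bios. The nlb value relies on the block layer counting 512-byte sectors while the namespace counts LBAs of 1 << lba_shift bytes; a tiny sketch of that unit conversion (names assumed):

#include <stdio.h>

/* 512-byte sectors -> namespace LBAs of (1 << lba_shift) bytes. */
static unsigned int sectors_to_nlb(unsigned int sectors, unsigned int lba_shift)
{
        return sectors >> (lba_shift - 9);
}

int main(void)
{
        /* 4096-byte LBAs (lba_shift = 12): 64 sectors == 8 LBAs. */
        printf("%u\n", sectors_to_nlb(64, 12));
        return 0;
}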
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 0165e65c..6539332 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1280,7 +1280,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
else
nvme_poll_irqdisable(nvmeq);
- if (blk_mq_request_completed(req)) {
+ if (blk_mq_rq_state(req) != MQ_RQ_IN_FLIGHT) {
dev_warn(dev->ctrl.device,
"I/O %d QID %d timeout, completion polled\n",
req->tag, nvmeq->qid);
@@ -3378,6 +3378,8 @@ static const struct pci_device_id nvme_id_table[] = {
.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
{ PCI_DEVICE(0x2646, 0x501E), /* KINGSTON OM3PGP4xxxxQ OS21011 NVMe SSD */
.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ { PCI_DEVICE(0x1f40, 0x1202), /* Netac Technologies Co. NV3000 NVMe SSD */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
{ PCI_DEVICE(0x1f40, 0x5236), /* Netac Technologies Co. NV7000 NVMe SSD */
.driver_data = NVME_QUIRK_BOGUS_NID, },
{ PCI_DEVICE(0x1e4B, 0x1001), /* MAXIO MAP1001 */
diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index cfd0385..4c6d56d 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -756,8 +756,10 @@ static void __nvmet_req_complete(struct nvmet_req *req, u16 status)
void nvmet_req_complete(struct nvmet_req *req, u16 status)
{
+ struct nvmet_sq *sq = req->sq;
+
__nvmet_req_complete(req, status);
- percpu_ref_put(&req->sq->ref);
+ percpu_ref_put(&sq->ref);
}
EXPORT_SYMBOL_GPL(nvmet_req_complete);
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index c43bc5e..00a2a59 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -1685,8 +1685,10 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport,
else {
queue = nvmet_fc_alloc_target_queue(iod->assoc, 0,
be16_to_cpu(rqst->assoc_cmd.sqsize));
- if (!queue)
+ if (!queue) {
ret = VERR_QUEUE_ALLOC_FAIL;
+ nvmet_fc_tgt_a_put(iod->assoc);
+ }
}
}
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 38bab84..47c1487 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -766,31 +766,32 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
return ERR_PTR(rval);
}
- if (config->wp_gpio)
- nvmem->wp_gpio = config->wp_gpio;
- else if (!config->ignore_wp)
+ nvmem->id = rval;
+
+ nvmem->dev.type = &nvmem_provider_type;
+ nvmem->dev.bus = &nvmem_bus_type;
+ nvmem->dev.parent = config->dev;
+
+ device_initialize(&nvmem->dev);
+
+ if (!config->ignore_wp)
nvmem->wp_gpio = gpiod_get_optional(config->dev, "wp",
GPIOD_OUT_HIGH);
if (IS_ERR(nvmem->wp_gpio)) {
- ida_free(&nvmem_ida, nvmem->id);
rval = PTR_ERR(nvmem->wp_gpio);
- kfree(nvmem);
- return ERR_PTR(rval);
+ nvmem->wp_gpio = NULL;
+ goto err_put_device;
}
kref_init(&nvmem->refcnt);
INIT_LIST_HEAD(&nvmem->cells);
- nvmem->id = rval;
nvmem->owner = config->owner;
if (!nvmem->owner && config->dev->driver)
nvmem->owner = config->dev->driver->owner;
nvmem->stride = config->stride ?: 1;
nvmem->word_size = config->word_size ?: 1;
nvmem->size = config->size;
- nvmem->dev.type = &nvmem_provider_type;
- nvmem->dev.bus = &nvmem_bus_type;
- nvmem->dev.parent = config->dev;
nvmem->root_only = config->root_only;
nvmem->priv = config->priv;
nvmem->type = config->type;
@@ -805,18 +806,21 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
switch (config->id) {
case NVMEM_DEVID_NONE:
- dev_set_name(&nvmem->dev, "%s", config->name);
+ rval = dev_set_name(&nvmem->dev, "%s", config->name);
break;
case NVMEM_DEVID_AUTO:
- dev_set_name(&nvmem->dev, "%s%d", config->name, nvmem->id);
+ rval = dev_set_name(&nvmem->dev, "%s%d", config->name, nvmem->id);
break;
default:
- dev_set_name(&nvmem->dev, "%s%d",
+ rval = dev_set_name(&nvmem->dev, "%s%d",
config->name ? : "nvmem",
config->name ? config->id : nvmem->id);
break;
}
+ if (rval)
+ goto err_put_device;
+
nvmem->read_only = device_property_present(config->dev, "read-only") ||
config->read_only || !nvmem->reg_write;
@@ -824,28 +828,22 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
nvmem->dev.groups = nvmem_dev_groups;
#endif
- dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
-
- rval = device_register(&nvmem->dev);
- if (rval)
- goto err_put_device;
-
if (nvmem->nkeepout) {
rval = nvmem_validate_keepouts(nvmem);
if (rval)
- goto err_device_del;
+ goto err_put_device;
}
if (config->compat) {
rval = nvmem_sysfs_setup_compat(nvmem, config);
if (rval)
- goto err_device_del;
+ goto err_put_device;
}
if (config->cells) {
rval = nvmem_add_cells(nvmem, config->cells, config->ncells);
if (rval)
- goto err_teardown_compat;
+ goto err_remove_cells;
}
rval = nvmem_add_cells_from_table(nvmem);
@@ -856,17 +854,20 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
if (rval)
goto err_remove_cells;
+ dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name);
+
+ rval = device_add(&nvmem->dev);
+ if (rval)
+ goto err_remove_cells;
+
blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
return nvmem;
err_remove_cells:
nvmem_device_remove_all_cells(nvmem);
-err_teardown_compat:
if (config->compat)
nvmem_sysfs_remove_compat(nvmem, config);
-err_device_del:
- device_del(&nvmem->dev);
err_put_device:
put_device(&nvmem->dev);
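
The nvmem_register() rework above splits device_register() into device_initialize() early and device_add() late, so every failure after initialization can funnel into a single put_device() that drops the last reference and runs the release callback, and the dev_set_name() return value is now checked. A kernel-style sketch of the resulting error handling (example_dev and its assumed release callback are illustrative, and this is not compilable on its own):

static int example_register(struct example_dev *ed)
{
        int rv;

        device_initialize(&ed->dev);    /* from here on, put_device() cleans up */

        rv = dev_set_name(&ed->dev, "example%d", ed->id);
        if (rv)
                goto err_put;

        rv = device_add(&ed->dev);      /* publish only when fully set up */
        if (rv)
                goto err_put;

        return 0;

err_put:
        put_device(&ed->dev);           /* release() frees ed; no device_del() needed */
        return rv;
}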
diff --git a/drivers/nvmem/qcom-spmi-sdam.c b/drivers/nvmem/qcom-spmi-sdam.c
index 4fcb635..8499892 100644
--- a/drivers/nvmem/qcom-spmi-sdam.c
+++ b/drivers/nvmem/qcom-spmi-sdam.c
@@ -166,6 +166,7 @@ static const struct of_device_id sdam_match_table[] = {
{ .compatible = "qcom,spmi-sdam" },
{},
};
+MODULE_DEVICE_TABLE(of, sdam_match_table);
static struct platform_driver sdam_driver = {
.driver = {
diff --git a/drivers/of/address.c b/drivers/of/address.c
index 94f017d8..586fb94 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -963,8 +963,19 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
}
of_dma_range_parser_init(&parser, node);
- for_each_of_range(&parser, &range)
+ for_each_of_range(&parser, &range) {
+ if (range.cpu_addr == OF_BAD_ADDR) {
+ pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n",
+ range.bus_addr, node);
+ continue;
+ }
num_ranges++;
+ }
+
+ if (!num_ranges) {
+ ret = -EINVAL;
+ goto out;
+ }
r = kcalloc(num_ranges + 1, sizeof(*r), GFP_KERNEL);
if (!r) {
@@ -973,18 +984,16 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map)
}
/*
- * Record all info in the generic DMA ranges array for struct device.
+ * Record all info in the generic DMA ranges array for struct device,
+ * returning an error if we don't find any parsable ranges.
*/
*map = r;
of_dma_range_parser_init(&parser, node);
for_each_of_range(&parser, &range) {
pr_debug("dma_addr(%llx) cpu_addr(%llx) size(%llx)\n",
range.bus_addr, range.cpu_addr, range.size);
- if (range.cpu_addr == OF_BAD_ADDR) {
- pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n",
- range.bus_addr, node);
+ if (range.cpu_addr == OF_BAD_ADDR)
continue;
- }
r->cpu_start = range.cpu_addr;
r->dma_start = range.bus_addr;
r->size = range.size;
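
of_dma_get_range() now counts only the ranges that actually translate and returns -EINVAL when none do, so callers never receive an empty map; untranslatable entries are then skipped silently in the second pass. A userspace sketch of the count-validate-then-allocate shape (struct range and the bad-address sentinel are stand-ins):

#include <errno.h>
#include <stdlib.h>

struct range { unsigned long long cpu, bus, size; };

static int build_map(const struct range *in, int n, struct range **out)
{
        int i, usable = 0, k = 0;

        for (i = 0; i < n; i++)
                if (in[i].cpu != ~0ULL)         /* stand-in for OF_BAD_ADDR */
                        usable++;

        if (!usable)
                return -EINVAL;                 /* no parsable ranges: fail early */

        *out = calloc(usable + 1, sizeof(**out));       /* extra zeroed terminator entry */
        if (!*out)
                return -ENOMEM;

        for (i = 0; i < n; i++)
                if (in[i].cpu != ~0ULL)
                        (*out)[k++] = in[i];

        return 0;
}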
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 9da8835..9e949dd 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -47,9 +47,10 @@ static int __init early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
err = memblock_mark_nomap(base, size);
if (err)
memblock_free(base, size);
- kmemleak_ignore_phys(base);
}
+ kmemleak_ignore_phys(base);
+
return err;
}
diff --git a/drivers/opp/debugfs.c b/drivers/opp/debugfs.c
index b5f2f9f..9eb71f4 100644
--- a/drivers/opp/debugfs.c
+++ b/drivers/opp/debugfs.c
@@ -209,7 +209,7 @@ static void opp_migrate_dentry(struct opp_device *opp_dev,
dentry = debugfs_rename(rootdir, opp_dev->dentry, rootdir,
opp_table->dentry_name);
- if (!dentry) {
+ if (IS_ERR(dentry)) {
dev_err(dev, "%s: Failed to rename link from: %s to %s\n",
__func__, dev_name(opp_dev->dev), dev_name(dev));
return;
diff --git a/drivers/parisc/iosapic.c b/drivers/parisc/iosapic.c
index fd99735..93ea922 100644
--- a/drivers/parisc/iosapic.c
+++ b/drivers/parisc/iosapic.c
@@ -677,7 +677,7 @@ static int iosapic_set_affinity_irq(struct irq_data *d,
if (dest_cpu < 0)
return -1;
- cpumask_copy(irq_data_get_affinity_mask(d), cpumask_of(dest_cpu));
+ irq_data_update_affinity(d, cpumask_of(dest_cpu));
vi->txn_addr = txn_affinity_addr(d->irq, dest_cpu);
spin_lock_irqsave(&iosapic_lock, flags);
diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
index 3cef835..feafa37 100644
--- a/drivers/pci/bus.c
+++ b/drivers/pci/bus.c
@@ -76,6 +76,27 @@ struct resource *pci_bus_resource_n(const struct pci_bus *bus, int n)
}
EXPORT_SYMBOL_GPL(pci_bus_resource_n);
+void pci_bus_remove_resource(struct pci_bus *bus, struct resource *res)
+{
+ struct pci_bus_resource *bus_res, *tmp;
+ int i;
+
+ for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++) {
+ if (bus->resource[i] == res) {
+ bus->resource[i] = NULL;
+ return;
+ }
+ }
+
+ list_for_each_entry_safe(bus_res, tmp, &bus->resources, list) {
+ if (bus_res->res == res) {
+ list_del(&bus_res->list);
+ kfree(bus_res);
+ return;
+ }
+ }
+}
+
void pci_bus_remove_resources(struct pci_bus *bus)
{
int i;
diff --git a/drivers/pci/controller/pci-loongson.c b/drivers/pci/controller/pci-loongson.c
index 48169b1..e73e18a 100644
--- a/drivers/pci/controller/pci-loongson.c
+++ b/drivers/pci/controller/pci-loongson.c
@@ -13,9 +13,14 @@
#include "../pci.h"
/* Device IDs */
-#define DEV_PCIE_PORT_0 0x7a09
-#define DEV_PCIE_PORT_1 0x7a19
-#define DEV_PCIE_PORT_2 0x7a29
+#define DEV_LS2K_PCIE_PORT0 0x1a05
+#define DEV_LS7A_PCIE_PORT0 0x7a09
+#define DEV_LS7A_PCIE_PORT1 0x7a19
+#define DEV_LS7A_PCIE_PORT2 0x7a29
+#define DEV_LS7A_PCIE_PORT3 0x7a39
+#define DEV_LS7A_PCIE_PORT4 0x7a49
+#define DEV_LS7A_PCIE_PORT5 0x7a59
+#define DEV_LS7A_PCIE_PORT6 0x7a69
#define DEV_LS2K_APB 0x7a02
#define DEV_LS7A_CONF 0x7a10
@@ -38,11 +43,11 @@ static void bridge_class_quirk(struct pci_dev *dev)
dev->class = PCI_CLASS_BRIDGE_PCI << 8;
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
- DEV_PCIE_PORT_0, bridge_class_quirk);
+ DEV_LS7A_PCIE_PORT0, bridge_class_quirk);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
- DEV_PCIE_PORT_1, bridge_class_quirk);
+ DEV_LS7A_PCIE_PORT1, bridge_class_quirk);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
- DEV_PCIE_PORT_2, bridge_class_quirk);
+ DEV_LS7A_PCIE_PORT2, bridge_class_quirk);
static void system_bus_quirk(struct pci_dev *pdev)
{
@@ -60,37 +65,33 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
DEV_LS7A_LPC, system_bus_quirk);
-static void loongson_mrrs_quirk(struct pci_dev *dev)
+static void loongson_mrrs_quirk(struct pci_dev *pdev)
{
- struct pci_bus *bus = dev->bus;
- struct pci_dev *bridge;
- static const struct pci_device_id bridge_devids[] = {
- { PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_0) },
- { PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_1) },
- { PCI_VDEVICE(LOONGSON, DEV_PCIE_PORT_2) },
- { 0, },
- };
+ /*
+ * Some Loongson PCIe ports have h/w limitations of maximum read
+ * request size. They can't handle anything larger than this. So
+ * force this limit on any devices attached under these ports.
+ */
+ struct pci_host_bridge *bridge = pci_find_host_bridge(pdev->bus);
- /* look for the matching bridge */
- while (!pci_is_root_bus(bus)) {
- bridge = bus->self;
- bus = bus->parent;
- /*
- * Some Loongson PCIe ports have a h/w limitation of
- * 256 bytes maximum read request size. They can't handle
- * anything larger than this. So force this limit on
- * any devices attached under these ports.
- */
- if (pci_match_id(bridge_devids, bridge)) {
- if (pcie_get_readrq(dev) > 256) {
- pci_info(dev, "limiting MRRS to 256\n");
- pcie_set_readrq(dev, 256);
- }
- break;
- }
- }
+ bridge->no_inc_mrrs = 1;
}
-DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_LS2K_PCIE_PORT0, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_LS7A_PCIE_PORT0, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_LS7A_PCIE_PORT1, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_LS7A_PCIE_PORT2, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_LS7A_PCIE_PORT3, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_LS7A_PCIE_PORT4, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_LS7A_PCIE_PORT5, loongson_mrrs_quirk);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_LOONGSON,
+ DEV_LS7A_PCIE_PORT6, loongson_mrrs_quirk);
static void __iomem *cfg1_map(struct loongson_pci *priv, int bus,
unsigned int devfn, int where)
diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
index dafdc65..ef71c1a 100644
--- a/drivers/pci/iov.c
+++ b/drivers/pci/iov.c
@@ -14,7 +14,7 @@
#include <linux/delay.h>
#include "pci.h"
-#define VIRTFN_ID_LEN 16
+#define VIRTFN_ID_LEN 17 /* "virtfn%u\0" for 2^32 - 1 */
int pci_iov_virtfn_bus(struct pci_dev *dev, int vf_id)
{
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 2761ab8..f44c066 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -925,7 +925,7 @@ static int pci_pm_resume_noirq(struct device *dev)
pcie_pme_root_status_cleanup(pci_dev);
if (!skip_bus_pm && prev_state == PCI_D3cold)
- pci_bridge_wait_for_secondary_bus(pci_dev);
+ pci_bridge_wait_for_secondary_bus(pci_dev, "resume", PCI_RESET_WAIT);
if (pci_has_legacy_pm_support(pci_dev))
return 0;
@@ -1312,7 +1312,7 @@ static int pci_pm_runtime_resume(struct device *dev)
pci_pm_default_resume(pci_dev);
if (prev_state == PCI_D3cold)
- pci_bridge_wait_for_secondary_bus(pci_dev);
+ pci_bridge_wait_for_secondary_bus(pci_dev, "resume", PCI_RESET_WAIT);
if (pm && pm->runtime_resume)
error = pm->runtime_resume(dev);
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index a0c6a9ee..649df29 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -163,9 +163,6 @@ static int __init pcie_port_pm_setup(char *str)
}
__setup("pcie_port_pm=", pcie_port_pm_setup);
-/* Time to wait after a reset for device to become responsive */
-#define PCIE_RESET_READY_POLL_MS 60000
-
/**
* pci_bus_max_busnr - returns maximum PCI bus number of given bus' children
* @bus: pointer to PCI bus structure to search
@@ -1255,7 +1252,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
return -ENOTTY;
}
- if (delay > 1000)
+ if (delay > PCI_RESET_WAIT)
pci_info(dev, "not ready %dms after %s; waiting\n",
delay - 1, reset_type);
@@ -1264,7 +1261,7 @@ static int pci_dev_wait(struct pci_dev *dev, char *reset_type, int timeout)
pci_read_config_dword(dev, PCI_COMMAND, &id);
}
- if (delay > 1000)
+ if (delay > PCI_RESET_WAIT)
pci_info(dev, "ready %dms after %s\n", delay - 1,
reset_type);
@@ -4886,24 +4883,31 @@ static int pci_bus_max_d3cold_delay(const struct pci_bus *bus)
/**
* pci_bridge_wait_for_secondary_bus - Wait for secondary bus to be accessible
* @dev: PCI bridge
+ * @reset_type: reset type in human-readable form
+ * @timeout: maximum time to wait for devices on secondary bus (milliseconds)
*
* Handle necessary delays before access to the devices on the secondary
- * side of the bridge are permitted after D3cold to D0 transition.
+ * side of the bridge are permitted after D3cold to D0 transition
+ * or Conventional Reset.
*
* For PCIe this means the delays in PCIe 5.0 section 6.6.1. For
* conventional PCI it means Tpvrh + Trhfa specified in PCI 3.0 section
* 4.3.2.
+ *
+ * Return 0 on success or -ENOTTY if the first device on the secondary bus
+ * failed to become accessible.
*/
-void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
+int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
+ int timeout)
{
struct pci_dev *child;
int delay;
if (pci_dev_is_disconnected(dev))
- return;
+ return 0;
- if (!pci_is_bridge(dev) || !dev->bridge_d3)
- return;
+ if (!pci_is_bridge(dev))
+ return 0;
down_read(&pci_bus_sem);
@@ -4915,14 +4919,14 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
*/
if (!dev->subordinate || list_empty(&dev->subordinate->devices)) {
up_read(&pci_bus_sem);
- return;
+ return 0;
}
/* Take d3cold_delay requirements into account */
delay = pci_bus_max_d3cold_delay(dev->subordinate);
if (!delay) {
up_read(&pci_bus_sem);
- return;
+ return 0;
}
child = list_first_entry(&dev->subordinate->devices, struct pci_dev,
@@ -4931,14 +4935,12 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
/*
* Conventional PCI and PCI-X we need to wait Tpvrh + Trhfa before
- * accessing the device after reset (that is 1000 ms + 100 ms). In
- * practice this should not be needed because we don't do power
- * management for them (see pci_bridge_d3_possible()).
+ * accessing the device after reset (that is 1000 ms + 100 ms).
*/
if (!pci_is_pcie(dev)) {
pci_dbg(dev, "waiting %d ms for secondary bus\n", 1000 + delay);
msleep(1000 + delay);
- return;
+ return 0;
}
/*
@@ -4955,11 +4957,11 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
* configuration requests if we only wait for 100 ms (see
* https://bugzilla.kernel.org/show_bug.cgi?id=203885).
*
- * Therefore we wait for 100 ms and check for the device presence.
- * If it is still not present give it an additional 100 ms.
+ * Therefore we wait for 100 ms and check for the device presence
+ * until the timeout expires.
*/
if (!pcie_downstream_port(dev))
- return;
+ return 0;
if (pcie_get_speed_cap(dev) <= PCIE_SPEED_5_0GT) {
pci_dbg(dev, "waiting %d ms for downstream link\n", delay);
@@ -4970,14 +4972,11 @@ void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev)
if (!pcie_wait_for_link_delay(dev, true, delay)) {
/* Did not train, no need to wait any further */
pci_info(dev, "Data Link Layer Link Active not set in 1000 msec\n");
- return;
+ return -ENOTTY;
}
}
- if (!pci_device_is_present(child)) {
- pci_dbg(child, "waiting additional %d ms to become accessible\n", delay);
- msleep(delay);
- }
+ return pci_dev_wait(child, reset_type, timeout - delay);
}
void pci_reset_secondary_bus(struct pci_dev *dev)
@@ -4996,15 +4995,6 @@ void pci_reset_secondary_bus(struct pci_dev *dev)
ctrl &= ~PCI_BRIDGE_CTL_BUS_RESET;
pci_write_config_word(dev, PCI_BRIDGE_CONTROL, ctrl);
-
- /*
- * Trhfa for conventional PCI is 2^25 clock cycles.
- * Assuming a minimum 33MHz clock this results in a 1s
- * delay before we can consider subordinate devices to
- * be re-initialized. PCIe has some ways to shorten this,
- * but we don't make use of them yet.
- */
- ssleep(1);
}
void __weak pcibios_reset_secondary_bus(struct pci_dev *dev)
@@ -5023,7 +5013,8 @@ int pci_bridge_secondary_bus_reset(struct pci_dev *dev)
{
pcibios_reset_secondary_bus(dev);
- return pci_dev_wait(dev, "bus reset", PCIE_RESET_READY_POLL_MS);
+ return pci_bridge_wait_for_secondary_bus(dev, "bus reset",
+ PCIE_RESET_READY_POLL_MS);
}
EXPORT_SYMBOL_GPL(pci_bridge_secondary_bus_reset);
@@ -5970,6 +5961,7 @@ int pcie_set_readrq(struct pci_dev *dev, int rq)
{
u16 v;
int ret;
+ struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
if (rq < 128 || rq > 4096 || !is_power_of_2(rq))
return -EINVAL;
@@ -5988,6 +5980,15 @@ int pcie_set_readrq(struct pci_dev *dev, int rq)
v = (ffs(rq) - 8) << 12;
+ if (bridge->no_inc_mrrs) {
+ int max_mrrs = pcie_get_readrq(dev);
+
+ if (rq > max_mrrs) {
+ pci_info(dev, "can't set Max_Read_Request_Size to %d; max is %d\n", rq, max_mrrs);
+ return -EINVAL;
+ }
+ }
+
ret = pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
PCI_EXP_DEVCTL_READRQ, v);
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 1cce56c..72280e9 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -63,6 +63,19 @@ struct pci_cap_saved_state *pci_find_saved_ext_cap(struct pci_dev *dev,
#define PCI_PM_D3HOT_WAIT 10 /* msec */
#define PCI_PM_D3COLD_WAIT 100 /* msec */
+/*
+ * Following exit from Conventional Reset, devices must be ready within 1 sec
+ * (PCIe r6.0 sec 6.6.1). A D3cold to D0 transition implies a Conventional
+ * Reset (PCIe r6.0 sec 5.8).
+ */
+#define PCI_RESET_WAIT 1000 /* msec */
+/*
+ * Devices may extend the 1 sec period through Request Retry Status completions
+ * (PCIe r6.0 sec 2.3.1). The spec does not provide an upper limit, but 60 sec
+ * ought to be enough for any device to become responsive.
+ */
+#define PCIE_RESET_READY_POLL_MS 60000 /* msec */
+
/**
* struct pci_platform_pm_ops - Firmware PM callbacks
*
@@ -124,7 +137,8 @@ void pci_msi_init(struct pci_dev *dev);
void pci_msix_init(struct pci_dev *dev);
bool pci_bridge_d3_possible(struct pci_dev *dev);
void pci_bridge_d3_update(struct pci_dev *dev);
-void pci_bridge_wait_for_secondary_bus(struct pci_dev *dev);
+int pci_bridge_wait_for_secondary_bus(struct pci_dev *dev, char *reset_type,
+ int timeout);
static inline void pci_wakeup_event(struct pci_dev *dev)
{
@@ -347,53 +361,36 @@ struct pci_sriov {
* @dev: PCI device to set new error_state
* @new: the state we want dev to be in
*
- * Must be called with device_lock held.
+ * If the device is experiencing perm_failure, it has to remain in that state.
+ * Any other transition is allowed.
*
* Returns true if state has been changed to the requested state.
*/
static inline bool pci_dev_set_io_state(struct pci_dev *dev,
pci_channel_state_t new)
{
- bool changed = false;
+ pci_channel_state_t old;
- device_lock_assert(&dev->dev);
switch (new) {
case pci_channel_io_perm_failure:
- switch (dev->error_state) {
- case pci_channel_io_frozen:
- case pci_channel_io_normal:
- case pci_channel_io_perm_failure:
- changed = true;
- break;
- }
- break;
+ xchg(&dev->error_state, pci_channel_io_perm_failure);
+ return true;
case pci_channel_io_frozen:
- switch (dev->error_state) {
- case pci_channel_io_frozen:
- case pci_channel_io_normal:
- changed = true;
- break;
- }
- break;
+ old = cmpxchg(&dev->error_state, pci_channel_io_normal,
+ pci_channel_io_frozen);
+ return old != pci_channel_io_perm_failure;
case pci_channel_io_normal:
- switch (dev->error_state) {
- case pci_channel_io_frozen:
- case pci_channel_io_normal:
- changed = true;
- break;
- }
- break;
+ old = cmpxchg(&dev->error_state, pci_channel_io_frozen,
+ pci_channel_io_normal);
+ return old != pci_channel_io_perm_failure;
+ default:
+ return false;
}
- if (changed)
- dev->error_state = new;
- return changed;
}
static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
{
- device_lock(&dev->dev);
pci_dev_set_io_state(dev, pci_channel_io_perm_failure);
- device_unlock(&dev->dev);
return 0;
}
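
The pci_dev_set_io_state() rewrite above replaces the device_lock-protected switch with cmpxchg(): perm_failure is a sticky terminal state, frozen may only be entered from normal, and normal only from frozen, so no lock is needed. A userspace analogue using C11 atomics (enum and function names are illustrative):

#include <stdatomic.h>
#include <stdbool.h>

enum io_state { IO_NORMAL, IO_FROZEN, IO_PERM_FAILURE };

static bool set_io_state(_Atomic int *state, enum io_state new_state)
{
        int old;

        switch (new_state) {
        case IO_PERM_FAILURE:
                atomic_store(state, IO_PERM_FAILURE);   /* always allowed, sticky */
                return true;
        case IO_FROZEN:
                old = IO_NORMAL;
                atomic_compare_exchange_strong(state, &old, IO_FROZEN);
                return old != IO_PERM_FAILURE;  /* 'old' holds the observed value */
        case IO_NORMAL:
                old = IO_FROZEN;
                atomic_compare_exchange_strong(state, &old, IO_NORMAL);
                return old != IO_PERM_FAILURE;
        }
        return false;
}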
diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
index c556e7b..f21d64a 100644
--- a/drivers/pci/pcie/dpc.c
+++ b/drivers/pci/pcie/dpc.c
@@ -170,8 +170,8 @@ pci_ers_result_t dpc_reset_link(struct pci_dev *pdev)
pci_write_config_word(pdev, cap + PCI_EXP_DPC_STATUS,
PCI_EXP_DPC_STATUS_TRIGGER);
- if (!pcie_wait_for_link(pdev, true)) {
- pci_info(pdev, "Data Link Layer Link Active not set in 1000 msec\n");
+ if (pci_bridge_wait_for_secondary_bus(pdev, "DPC",
+ PCIE_RESET_READY_POLL_MS)) {
clear_bit(PCI_DPC_RECOVERED, &pdev->priv_flags);
ret = PCI_ERS_RESULT_DISCONNECT;
} else {
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index a531064..643a3b2 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -4823,6 +4823,26 @@ static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags)
PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
}
+/*
+ * Wangxun 10G/1G NICs have no ACS capability, and on multi-function
+ * devices, peer-to-peer transactions cannot be used between the functions.
+ * So add an ACS quirk for the devices below to isolate functions.
+ * SFxxx 1G NICs(em).
+ * RP1000/RP2000 10G NICs(sp).
+ */
+static int pci_quirk_wangxun_nic_acs(struct pci_dev *dev, u16 acs_flags)
+{
+ switch (dev->device) {
+ case 0x0100 ... 0x010F:
+ case 0x1001:
+ case 0x2001:
+ return pci_acs_ctrl_enabled(acs_flags,
+ PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ }
+
+ return false;
+}
+
static const struct pci_dev_acs_enabled {
u16 vendor;
u16 device;
@@ -4968,6 +4988,8 @@ static const struct pci_dev_acs_enabled {
{ PCI_VENDOR_ID_NXP, 0x8d9b, pci_quirk_nxp_rp_acs },
/* Zhaoxin Root/Downstream Ports */
{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
+ /* Wangxun nics */
+ { PCI_VENDOR_ID_WANGXUN, PCI_ANY_ID, pci_quirk_wangxun_nic_acs },
{ 0 }
};
@@ -5328,6 +5350,7 @@ static void quirk_no_flr(struct pci_dev *dev)
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x1487, quirk_no_flr);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x148c, quirk_no_flr);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x7901, quirk_no_flr);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr);
diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
index 2ce6369..16d291e 100644
--- a/drivers/pci/setup-bus.c
+++ b/drivers/pci/setup-bus.c
@@ -1878,12 +1878,67 @@ static void adjust_bridge_window(struct pci_dev *bridge, struct resource *res,
add_size = size - new_size;
pci_dbg(bridge, "bridge window %pR shrunken by %pa\n", res,
&add_size);
+ } else {
+ return;
}
res->end = res->start + new_size - 1;
remove_from_list(add_list, res);
}
+static void remove_dev_resource(struct resource *avail, struct pci_dev *dev,
+ struct resource *res)
+{
+ resource_size_t size, align, tmp;
+
+ size = resource_size(res);
+ if (!size)
+ return;
+
+ align = pci_resource_alignment(dev, res);
+ align = align ? ALIGN(avail->start, align) - avail->start : 0;
+ tmp = align + size;
+ avail->start = min(avail->start + tmp, avail->end + 1);
+}
+
+static void remove_dev_resources(struct pci_dev *dev, struct resource *io,
+ struct resource *mmio,
+ struct resource *mmio_pref)
+{
+ int i;
+
+ for (i = 0; i < PCI_NUM_RESOURCES; i++) {
+ struct resource *res = &dev->resource[i];
+
+ if (resource_type(res) == IORESOURCE_IO) {
+ remove_dev_resource(io, dev, res);
+ } else if (resource_type(res) == IORESOURCE_MEM) {
+
+ /*
+ * Make sure prefetchable memory is reduced from
+ * the correct resource. Specifically we put 32-bit
+ * prefetchable memory in non-prefetchable window
+			 * if there is a 64-bit prefetchable window.
+ *
+ * See comments in __pci_bus_size_bridges() for
+ * more information.
+ */
+ if ((res->flags & IORESOURCE_PREFETCH) &&
+ ((res->flags & IORESOURCE_MEM_64) ==
+ (mmio_pref->flags & IORESOURCE_MEM_64)))
+ remove_dev_resource(mmio_pref, dev, res);
+ else
+ remove_dev_resource(mmio, dev, res);
+ }
+ }
+}
+
+/*
+ * io, mmio and mmio_pref contain the total amount of bridge window space
+ * available. This includes the minimal space needed to cover all the
+ * existing devices on the bus and the possible extra space that can be
+ * shared with the bridges.
+ */
static void pci_bus_distribute_available_resources(struct pci_bus *bus,
struct list_head *add_list,
struct resource io,
@@ -1893,7 +1948,7 @@ static void pci_bus_distribute_available_resources(struct pci_bus *bus,
unsigned int normal_bridges = 0, hotplug_bridges = 0;
struct resource *io_res, *mmio_res, *mmio_pref_res;
struct pci_dev *dev, *bridge = bus->self;
- resource_size_t io_per_hp, mmio_per_hp, mmio_pref_per_hp, align;
+ resource_size_t io_per_b, mmio_per_b, mmio_pref_per_b, align;
io_res = &bridge->resource[PCI_BRIDGE_IO_WINDOW];
mmio_res = &bridge->resource[PCI_BRIDGE_MEM_WINDOW];
@@ -1937,94 +1992,88 @@ static void pci_bus_distribute_available_resources(struct pci_bus *bus,
normal_bridges++;
}
- /*
- * There is only one bridge on the bus so it gets all available
- * resources which it can then distribute to the possible hotplug
- * bridges below.
- */
- if (hotplug_bridges + normal_bridges == 1) {
- dev = list_first_entry(&bus->devices, struct pci_dev, bus_list);
- if (dev->subordinate)
- pci_bus_distribute_available_resources(dev->subordinate,
- add_list, io, mmio, mmio_pref);
+ if (!(hotplug_bridges + normal_bridges))
return;
+
+ /*
+ * Calculate the amount of space we can forward from "bus" to any
+ * downstream buses, i.e., the space left over after assigning the
+ * BARs and windows on "bus".
+ */
+ list_for_each_entry(dev, &bus->devices, bus_list) {
+ if (!dev->is_virtfn)
+ remove_dev_resources(dev, &io, &mmio, &mmio_pref);
}
- if (hotplug_bridges == 0)
- return;
-
/*
- * Calculate the total amount of extra resource space we can
- * pass to bridges below this one. This is basically the
- * extra space reduced by the minimal required space for the
- * non-hotplug bridges.
+ * If there is at least one hotplug bridge on this bus it gets all
+ * the extra resource space that was left after the reductions
+ * above.
+ *
+ * If there are no hotplug bridges the extra resource space is
+ * split between non-hotplug bridges. This is to allow possible
+ * hotplug bridges below them to get the extra space as well.
*/
+ if (hotplug_bridges) {
+ io_per_b = div64_ul(resource_size(&io), hotplug_bridges);
+ mmio_per_b = div64_ul(resource_size(&mmio), hotplug_bridges);
+ mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
+ hotplug_bridges);
+ } else {
+ io_per_b = div64_ul(resource_size(&io), normal_bridges);
+ mmio_per_b = div64_ul(resource_size(&mmio), normal_bridges);
+ mmio_pref_per_b = div64_ul(resource_size(&mmio_pref),
+ normal_bridges);
+ }
+
for_each_pci_bridge(dev, bus) {
- resource_size_t used_size;
struct resource *res;
-
- if (dev->is_hotplug_bridge)
- continue;
-
- /*
- * Reduce the available resource space by what the
- * bridge and devices below it occupy.
- */
- res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
- align = pci_resource_alignment(dev, res);
- align = align ? ALIGN(io.start, align) - io.start : 0;
- used_size = align + resource_size(res);
- if (!res->parent)
- io.start = min(io.start + used_size, io.end + 1);
-
- res = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
- align = pci_resource_alignment(dev, res);
- align = align ? ALIGN(mmio.start, align) - mmio.start : 0;
- used_size = align + resource_size(res);
- if (!res->parent)
- mmio.start = min(mmio.start + used_size, mmio.end + 1);
-
- res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
- align = pci_resource_alignment(dev, res);
- align = align ? ALIGN(mmio_pref.start, align) -
- mmio_pref.start : 0;
- used_size = align + resource_size(res);
- if (!res->parent)
- mmio_pref.start = min(mmio_pref.start + used_size,
- mmio_pref.end + 1);
- }
-
- io_per_hp = div64_ul(resource_size(&io), hotplug_bridges);
- mmio_per_hp = div64_ul(resource_size(&mmio), hotplug_bridges);
- mmio_pref_per_hp = div64_ul(resource_size(&mmio_pref),
- hotplug_bridges);
-
- /*
- * Go over devices on this bus and distribute the remaining
- * resource space between hotplug bridges.
- */
- for_each_pci_bridge(dev, bus) {
struct pci_bus *b;
b = dev->subordinate;
- if (!b || !dev->is_hotplug_bridge)
+ if (!b)
+ continue;
+ if (hotplug_bridges && !dev->is_hotplug_bridge)
continue;
+ res = &dev->resource[PCI_BRIDGE_IO_WINDOW];
+
/*
- * Distribute available extra resources equally between
- * hotplug-capable downstream ports taking alignment into
- * account.
+ * Make sure the split resource space is properly aligned
+ * for bridge windows (align it down to avoid going above
+ * what is available).
*/
- io.end = io.start + io_per_hp - 1;
- mmio.end = mmio.start + mmio_per_hp - 1;
- mmio_pref.end = mmio_pref.start + mmio_pref_per_hp - 1;
+ align = pci_resource_alignment(dev, res);
+ io.end = align ? io.start + ALIGN_DOWN(io_per_b, align) - 1
+ : io.start + io_per_b - 1;
+
+ /*
+		 * The x_per_b values hold the extra resource space that can
+		 * be added for each bridge, but the minimal space is already
+		 * reserved as well, so adjust x.start down accordingly to
+		 * cover the whole space.
+ */
+ io.start -= resource_size(res);
+
+ res = &dev->resource[PCI_BRIDGE_MEM_WINDOW];
+ align = pci_resource_alignment(dev, res);
+ mmio.end = align ? mmio.start + ALIGN_DOWN(mmio_per_b, align) - 1
+ : mmio.start + mmio_per_b - 1;
+ mmio.start -= resource_size(res);
+
+ res = &dev->resource[PCI_BRIDGE_PREF_MEM_WINDOW];
+ align = pci_resource_alignment(dev, res);
+ mmio_pref.end = align ? mmio_pref.start +
+ ALIGN_DOWN(mmio_pref_per_b, align) - 1
+ : mmio_pref.start + mmio_pref_per_b - 1;
+ mmio_pref.start -= resource_size(res);
pci_bus_distribute_available_resources(b, add_list, io, mmio,
mmio_pref);
- io.start += io_per_hp;
- mmio.start += mmio_per_hp;
- mmio_pref.start += mmio_pref_per_hp;
+ io.start += io.end + 1;
+ mmio.start += mmio.end + 1;
+ mmio_pref.start += mmio_pref.end + 1;
}
}
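
The reworked distribution above first subtracts from the total window space what every device on the bus already consumes (size plus the padding needed to align it) via remove_dev_resource(), and only then splits the remainder across the bridges. A stand-alone sketch of that subtraction step (addresses and the helper name are made up):

#include <stdint.h>
#include <stdio.h>

#define ALIGN_UP(x, a)  (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

/* Shrink an available window by one BAR, including alignment padding. */
static void consume(uint64_t *start, uint64_t end, uint64_t size, uint64_t align)
{
        uint64_t pad = align ? ALIGN_UP(*start, align) - *start : 0;
        uint64_t used = pad + size;

        *start = (*start + used < end + 1) ? *start + used : end + 1;
}

int main(void)
{
        uint64_t start = 0xc0001000, end = 0xc00fffff;

        consume(&start, end, 0x4000, 0x4000);   /* 16 KiB BAR, 16 KiB aligned */
        printf("remaining window starts at 0x%llx\n", (unsigned long long)start);
        return 0;
}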
diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
index 0b301f8..d021ef3 100644
--- a/drivers/pci/switch/switchtec.c
+++ b/drivers/pci/switch/switchtec.c
@@ -552,21 +552,20 @@ static ssize_t switchtec_dev_read(struct file *filp, char __user *data,
rc = copy_to_user(data, &stuser->return_code,
sizeof(stuser->return_code));
if (rc) {
- rc = -EFAULT;
- goto out;
+ mutex_unlock(&stdev->mrpc_mutex);
+ return -EFAULT;
}
data += sizeof(stuser->return_code);
rc = copy_to_user(data, &stuser->data,
size - sizeof(stuser->return_code));
if (rc) {
- rc = -EFAULT;
- goto out;
+ mutex_unlock(&stdev->mrpc_mutex);
+ return -EFAULT;
}
stuser_set_state(stuser, MRPC_IDLE);
-out:
mutex_unlock(&stdev->mrpc_mutex);
if (stuser->status == SWITCHTEC_MRPC_STATUS_DONE)
diff --git a/drivers/phy/phy-can-transceiver.c b/drivers/phy/phy-can-transceiver.c
index c2cb93b..4525d3f 100644
--- a/drivers/phy/phy-can-transceiver.c
+++ b/drivers/phy/phy-can-transceiver.c
@@ -87,6 +87,7 @@ static int can_transceiver_phy_probe(struct platform_device *pdev)
struct gpio_desc *standby_gpio;
struct gpio_desc *enable_gpio;
u32 max_bitrate = 0;
+ int err;
can_transceiver_phy = devm_kzalloc(dev, sizeof(struct can_transceiver_phy), GFP_KERNEL);
if (!can_transceiver_phy)
@@ -102,8 +103,8 @@ static int can_transceiver_phy_probe(struct platform_device *pdev)
return PTR_ERR(phy);
}
- device_property_read_u32(dev, "max-bitrate", &max_bitrate);
- if (!max_bitrate)
+ err = device_property_read_u32(dev, "max-bitrate", &max_bitrate);
+ if ((err != -EINVAL) && !max_bitrate)
dev_warn(dev, "Invalid value for transceiver max bitrate. Ignoring bitrate limit\n");
phy->attrs.max_link_rate = max_bitrate;
diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c
index a9687e0..eef8631 100644
--- a/drivers/phy/qualcomm/phy-qcom-qmp.c
+++ b/drivers/phy/qualcomm/phy-qcom-qmp.c
@@ -2919,6 +2919,7 @@ struct qcom_qmp {
struct regulator_bulk_data *vregs;
struct qmp_phy **phys;
+ struct qmp_phy *usb_phy;
struct mutex phy_mutex;
int init_count;
@@ -4554,7 +4555,7 @@ static int qcom_qmp_phy_com_init(struct qmp_phy *qphy)
struct qcom_qmp *qmp = qphy->qmp;
const struct qmp_phy_cfg *cfg = qphy->cfg;
void __iomem *serdes = qphy->serdes;
- void __iomem *pcs = qphy->pcs;
+ struct qmp_phy *usb_phy = qmp->usb_phy;
void __iomem *dp_com = qmp->dp_com;
int ret, i;
@@ -4620,13 +4621,13 @@ static int qcom_qmp_phy_com_init(struct qmp_phy *qphy)
qphy_setbits(serdes, cfg->regs[QPHY_COM_POWER_DOWN_CONTROL],
SW_PWRDN);
} else {
- if (cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL])
- qphy_setbits(pcs,
- cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL],
- cfg->pwrdn_ctrl);
+ if (usb_phy->cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL])
+ qphy_setbits(usb_phy->pcs,
+ usb_phy->cfg->regs[QPHY_PCS_POWER_DOWN_CONTROL],
+ usb_phy->cfg->pwrdn_ctrl);
else
- qphy_setbits(pcs, QPHY_POWER_DOWN_CONTROL,
- cfg->pwrdn_ctrl);
+ qphy_setbits(usb_phy->pcs, QPHY_POWER_DOWN_CONTROL,
+ usb_phy->cfg->pwrdn_ctrl);
}
mutex_unlock(&qmp->phy_mutex);
@@ -4984,7 +4985,7 @@ static void qcom_qmp_phy_disable_autonomous_mode(struct qmp_phy *qphy)
static int __maybe_unused qcom_qmp_phy_runtime_suspend(struct device *dev)
{
struct qcom_qmp *qmp = dev_get_drvdata(dev);
- struct qmp_phy *qphy = qmp->phys[0];
+ struct qmp_phy *qphy = qmp->usb_phy;
const struct qmp_phy_cfg *cfg = qphy->cfg;
dev_vdbg(dev, "Suspending QMP phy, mode:%d\n", qphy->mode);
@@ -5009,7 +5010,7 @@ static int __maybe_unused qcom_qmp_phy_runtime_suspend(struct device *dev)
static int __maybe_unused qcom_qmp_phy_runtime_resume(struct device *dev)
{
struct qcom_qmp *qmp = dev_get_drvdata(dev);
- struct qmp_phy *qphy = qmp->phys[0];
+ struct qmp_phy *qphy = qmp->usb_phy;
const struct qmp_phy_cfg *cfg = qphy->cfg;
int ret = 0;
@@ -5387,6 +5388,21 @@ static void qcom_qmp_reset_control_put(void *data)
reset_control_put(data);
}
+static void __iomem *qmp_usb_iomap(struct device *dev, struct device_node *np,
+ int index, bool exclusive)
+{
+ struct resource res;
+
+ if (!exclusive) {
+ if (of_address_to_resource(np, index, &res))
+ return IOMEM_ERR_PTR(-EINVAL);
+
+ return devm_ioremap(dev, res.start, resource_size(&res));
+ }
+
+ return devm_of_iomap(dev, np, index, NULL);
+}
+
static
int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
void __iomem *serdes, const struct qmp_phy_cfg *cfg)
@@ -5396,8 +5412,18 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
struct qmp_phy *qphy;
const struct phy_ops *ops;
char prop_name[MAX_PROP_NAME];
+ bool exclusive = true;
int ret;
+ /*
+ * FIXME: These bindings should be fixed to not rely on overlapping
+ * mappings for PCS.
+ */
+ if (of_device_is_compatible(dev->of_node, "qcom,sdx65-qmp-usb3-uni-phy"))
+ exclusive = false;
+ if (of_device_is_compatible(dev->of_node, "qcom,sm8350-qmp-usb3-uni-phy"))
+ exclusive = false;
+
qphy = devm_kzalloc(dev, sizeof(*qphy), GFP_KERNEL);
if (!qphy)
return -ENOMEM;
@@ -5410,17 +5436,17 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
* For dual lane PHYs: tx2 -> 3, rx2 -> 4, pcs_misc (optional) -> 5
* For single lane PHYs: pcs_misc (optional) -> 3.
*/
- qphy->tx = of_iomap(np, 0);
- if (!qphy->tx)
- return -ENOMEM;
+ qphy->tx = devm_of_iomap(dev, np, 0, NULL);
+ if (IS_ERR(qphy->tx))
+ return PTR_ERR(qphy->tx);
- qphy->rx = of_iomap(np, 1);
- if (!qphy->rx)
- return -ENOMEM;
+ qphy->rx = devm_of_iomap(dev, np, 1, NULL);
+ if (IS_ERR(qphy->rx))
+ return PTR_ERR(qphy->rx);
- qphy->pcs = of_iomap(np, 2);
- if (!qphy->pcs)
- return -ENOMEM;
+ qphy->pcs = qmp_usb_iomap(dev, np, 2, exclusive);
+ if (IS_ERR(qphy->pcs))
+ return PTR_ERR(qphy->pcs);
/*
* If this is a dual-lane PHY, then there should be registers for the
@@ -5429,9 +5455,9 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
* offset from the first lane.
*/
if (cfg->is_dual_lane_phy) {
- qphy->tx2 = of_iomap(np, 3);
- qphy->rx2 = of_iomap(np, 4);
- if (!qphy->tx2 || !qphy->rx2) {
+ qphy->tx2 = devm_of_iomap(dev, np, 3, NULL);
+ qphy->rx2 = devm_of_iomap(dev, np, 4, NULL);
+ if (IS_ERR(qphy->tx2) || IS_ERR(qphy->rx2)) {
dev_warn(dev,
"Underspecified device tree, falling back to legacy register regions\n");
@@ -5441,15 +5467,17 @@ int qcom_qmp_phy_create(struct device *dev, struct device_node *np, int id,
qphy->rx2 = qphy->rx + QMP_PHY_LEGACY_LANE_STRIDE;
} else {
- qphy->pcs_misc = of_iomap(np, 5);
+ qphy->pcs_misc = devm_of_iomap(dev, np, 5, NULL);
}
} else {
- qphy->pcs_misc = of_iomap(np, 3);
+ qphy->pcs_misc = devm_of_iomap(dev, np, 3, NULL);
}
- if (!qphy->pcs_misc)
+ if (IS_ERR(qphy->pcs_misc)) {
dev_vdbg(dev, "PHY pcs_misc-reg not used\n");
+ qphy->pcs_misc = NULL;
+ }
/*
* Get PHY's Pipe clock, if any. USB3 and PCIe are PIPE3
@@ -5740,7 +5768,9 @@ static int qcom_qmp_phy_probe(struct platform_device *pdev)
return -ENOMEM;
pm_runtime_set_active(dev);
- pm_runtime_enable(dev);
+ ret = devm_pm_runtime_enable(dev);
+ if (ret)
+ return ret;
/*
* Prevent runtime pm from being ON by default. Users can enable
* it using power/control in sysfs.
@@ -5765,6 +5795,9 @@ static int qcom_qmp_phy_probe(struct platform_device *pdev)
goto err_node_put;
}
+ if (cfg->type != PHY_TYPE_DP)
+ qmp->usb_phy = qmp->phys[id];
+
/*
* Register the pipe clock provided by phy.
* See function description to see details of this pipe clock.
@@ -5787,16 +5820,16 @@ static int qcom_qmp_phy_probe(struct platform_device *pdev)
id++;
}
+ if (!qmp->usb_phy)
+ return -EINVAL;
+
phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
if (!IS_ERR(phy_provider))
dev_info(dev, "Registered Qcom-QMP phy\n");
- else
- pm_runtime_disable(dev);
return PTR_ERR_OR_ZERO(phy_provider);
err_node_put:
- pm_runtime_disable(dev);
of_node_put(child);
return ret;
}
diff --git a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
index 4f569d9..c167b8c 100644
--- a/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
+++ b/drivers/phy/rockchip/phy-rockchip-inno-usb2.c
@@ -467,8 +467,10 @@ static int rockchip_usb2phy_power_on(struct phy *phy)
return ret;
ret = property_enable(base, &rport->port_cfg->phy_sus, false);
- if (ret)
+ if (ret) {
+ clk_disable_unprepare(rphy->clk480m);
return ret;
+ }
/* waiting for the utmi_clk to become stable */
usleep_range(1500, 2000);
diff --git a/drivers/phy/rockchip/phy-rockchip-typec.c b/drivers/phy/rockchip/phy-rockchip-typec.c
index d2bbdc9..0628214 100644
--- a/drivers/phy/rockchip/phy-rockchip-typec.c
+++ b/drivers/phy/rockchip/phy-rockchip-typec.c
@@ -808,9 +808,8 @@ static int tcphy_get_mode(struct rockchip_typec_phy *tcphy)
struct extcon_dev *edev = tcphy->extcon;
union extcon_property_value property;
unsigned int id;
- bool ufp, dp;
u8 mode;
- int ret;
+ int ret, ufp, dp;
if (!edev)
return MODE_DFP_USB;
@@ -821,10 +820,10 @@ static int tcphy_get_mode(struct rockchip_typec_phy *tcphy)
mode = MODE_DFP_USB;
id = EXTCON_USB_HOST;
- if (ufp) {
+ if (ufp > 0) {
mode = MODE_UFP_USB;
id = EXTCON_USB;
- } else if (dp) {
+ } else if (dp > 0) {
mode = MODE_DFP_DP;
id = EXTCON_DISP_DP;
diff --git a/drivers/phy/ti/Kconfig b/drivers/phy/ti/Kconfig
index 15a3bcf..b905902 100644
--- a/drivers/phy/ti/Kconfig
+++ b/drivers/phy/ti/Kconfig
@@ -23,7 +23,7 @@
config PHY_AM654_SERDES
tristate "TI AM654 SERDES support"
- depends on OF && ARCH_K3 || COMPILE_TEST
+ depends on OF && (ARCH_K3 || COMPILE_TEST)
depends on COMMON_CLK
select GENERIC_PHY
select MULTIPLEXER
@@ -35,7 +35,7 @@
config PHY_J721E_WIZ
tristate "TI J721E WIZ (SERDES Wrapper) support"
- depends on OF && ARCH_K3 || COMPILE_TEST
+ depends on OF && (ARCH_K3 || COMPILE_TEST)
depends on HAS_IOMEM && OF_ADDRESS
depends on COMMON_CLK
select GENERIC_PHY
diff --git a/drivers/pinctrl/aspeed/pinctrl-aspeed.c b/drivers/pinctrl/aspeed/pinctrl-aspeed.c
index 83d47ff..5a12fc7 100644
--- a/drivers/pinctrl/aspeed/pinctrl-aspeed.c
+++ b/drivers/pinctrl/aspeed/pinctrl-aspeed.c
@@ -122,7 +122,7 @@ static int aspeed_disable_sig(struct aspeed_pinmux_data *ctx,
int ret = 0;
if (!exprs)
- return true;
+ return -EINVAL;
while (*exprs && !ret) {
ret = aspeed_sig_expr_disable(ctx, *exprs);
diff --git a/drivers/pinctrl/bcm/pinctrl-bcm2835.c b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
index a293899..2c10086 100644
--- a/drivers/pinctrl/bcm/pinctrl-bcm2835.c
+++ b/drivers/pinctrl/bcm/pinctrl-bcm2835.c
@@ -356,8 +356,6 @@ static int bcm2835_of_gpio_ranges_fallback(struct gpio_chip *gc,
{
struct pinctrl_dev *pctldev = of_pinctrl_get(np);
- of_node_put(np);
-
if (!pctldev)
return 0;
diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
index 32807aa..cc64eda 100644
--- a/drivers/pinctrl/intel/pinctrl-intel.c
+++ b/drivers/pinctrl/intel/pinctrl-intel.c
@@ -1661,6 +1661,12 @@ const struct intel_pinctrl_soc_data *intel_pinctrl_get_soc_data(struct platform_
EXPORT_SYMBOL_GPL(intel_pinctrl_get_soc_data);
#ifdef CONFIG_PM_SLEEP
+static bool __intel_gpio_is_direct_irq(u32 value)
+{
+ return (value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
+ (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO);
+}
+
static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int pin)
{
const struct pin_desc *pd = pin_desc_get(pctrl->pctldev, pin);
@@ -1694,8 +1700,7 @@ static bool intel_pinctrl_should_save(struct intel_pinctrl *pctrl, unsigned int
* See https://bugzilla.kernel.org/show_bug.cgi?id=214749.
*/
value = readl(intel_get_padcfg(pctrl, pin, PADCFG0));
- if ((value & PADCFG0_GPIROUTIOXAPIC) && (value & PADCFG0_GPIOTXDIS) &&
- (__intel_gpio_get_gpio_mode(value) == PADCFG0_PMODE_GPIO))
+ if (__intel_gpio_is_direct_irq(value))
return true;
return false;
@@ -1825,7 +1830,12 @@ int intel_pinctrl_resume_noirq(struct device *dev)
for (i = 0; i < pctrl->soc->npins; i++) {
const struct pinctrl_pin_desc *desc = &pctrl->soc->pins[i];
- if (!intel_pinctrl_should_save(pctrl, desc->number))
+ if (!(intel_pinctrl_should_save(pctrl, desc->number) ||
+ /*
+ * If the firmware mangled the register contents too much,
+ * check the saved value for the Direct IRQ mode.
+ */
+ __intel_gpio_is_direct_irq(pads[i].padcfg0)))
continue;
intel_restore_padcfg(pctrl, desc->number, PADCFG0, pads[i].padcfg0);
diff --git a/drivers/pinctrl/mediatek/pinctrl-mt8195.c b/drivers/pinctrl/mediatek/pinctrl-mt8195.c
index a7500e1..c32884f 100644
--- a/drivers/pinctrl/mediatek/pinctrl-mt8195.c
+++ b/drivers/pinctrl/mediatek/pinctrl-mt8195.c
@@ -659,7 +659,7 @@ static const struct mtk_pin_field_calc mt8195_pin_drv_range[] = {
PIN_FIELD_BASE(10, 10, 4, 0x010, 0x10, 9, 3),
PIN_FIELD_BASE(11, 11, 4, 0x000, 0x10, 24, 3),
PIN_FIELD_BASE(12, 12, 4, 0x010, 0x10, 12, 3),
- PIN_FIELD_BASE(13, 13, 4, 0x010, 0x10, 27, 3),
+ PIN_FIELD_BASE(13, 13, 4, 0x000, 0x10, 27, 3),
PIN_FIELD_BASE(14, 14, 4, 0x010, 0x10, 15, 3),
PIN_FIELD_BASE(15, 15, 4, 0x010, 0x10, 0, 3),
PIN_FIELD_BASE(16, 16, 4, 0x010, 0x10, 18, 3),
@@ -708,7 +708,7 @@ static const struct mtk_pin_field_calc mt8195_pin_drv_range[] = {
PIN_FIELD_BASE(78, 78, 3, 0x000, 0x10, 15, 3),
PIN_FIELD_BASE(79, 79, 3, 0x000, 0x10, 18, 3),
PIN_FIELD_BASE(80, 80, 3, 0x000, 0x10, 21, 3),
- PIN_FIELD_BASE(81, 81, 3, 0x000, 0x10, 28, 3),
+ PIN_FIELD_BASE(81, 81, 3, 0x000, 0x10, 24, 3),
PIN_FIELD_BASE(82, 82, 3, 0x000, 0x10, 27, 3),
PIN_FIELD_BASE(83, 83, 3, 0x010, 0x10, 0, 3),
PIN_FIELD_BASE(84, 84, 3, 0x010, 0x10, 3, 3),
diff --git a/drivers/pinctrl/mediatek/pinctrl-paris.c b/drivers/pinctrl/mediatek/pinctrl-paris.c
index 02e2a25..0fa1c36 100644
--- a/drivers/pinctrl/mediatek/pinctrl-paris.c
+++ b/drivers/pinctrl/mediatek/pinctrl-paris.c
@@ -572,9 +572,9 @@ static int mtk_hw_get_value_wrap(struct mtk_pinctrl *hw, unsigned int gpio, int
mtk_hw_get_value_wrap(hw, gpio, PINCTRL_PIN_REG_DRV)
ssize_t mtk_pctrl_show_one_pin(struct mtk_pinctrl *hw,
- unsigned int gpio, char *buf, unsigned int bufLen)
+ unsigned int gpio, char *buf, unsigned int buf_len)
{
- int pinmux, pullup, pullen, len = 0, r1 = -1, r0 = -1;
+ int pinmux, pullup = 0, pullen = 0, len = 0, r1 = -1, r0 = -1;
const struct mtk_pin_desc *desc;
if (gpio >= hw->soc->npins)
@@ -608,7 +608,7 @@ ssize_t mtk_pctrl_show_one_pin(struct mtk_pinctrl *hw,
} else if (pullen != MTK_DISABLE && pullen != MTK_ENABLE) {
pullen = 0;
}
- len += scnprintf(buf + len, bufLen - len,
+ len += scnprintf(buf + len, buf_len - len,
"%03d: %1d%1d%1d%1d%02d%1d%1d%1d%1d",
gpio,
pinmux,
@@ -622,10 +622,10 @@ ssize_t mtk_pctrl_show_one_pin(struct mtk_pinctrl *hw,
pullup);
if (r1 != -1) {
- len += scnprintf(buf + len, bufLen - len, " (%1d %1d)\n",
+ len += scnprintf(buf + len, buf_len - len, " (%1d %1d)\n",
r1, r0);
} else {
- len += scnprintf(buf + len, bufLen - len, "\n");
+ len += scnprintf(buf + len, buf_len - len, "\n");
}
return len;
@@ -637,7 +637,7 @@ static void mtk_pctrl_dbg_show(struct pinctrl_dev *pctldev, struct seq_file *s,
unsigned int gpio)
{
struct mtk_pinctrl *hw = pinctrl_dev_get_drvdata(pctldev);
- char buf[PIN_DBG_BUF_SZ];
+ char buf[PIN_DBG_BUF_SZ] = { 0 };
(void)mtk_pctrl_show_one_pin(hw, gpio, buf, PIN_DBG_BUF_SZ);
diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
index 41136f6..e4a0d16 100644
--- a/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
+++ b/drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
@@ -104,12 +104,12 @@ static void npcm_gpio_set(struct gpio_chip *gc, void __iomem *reg,
unsigned long flags;
unsigned long val;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
val = ioread32(reg) | pinmask;
iowrite32(val, reg);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void npcm_gpio_clr(struct gpio_chip *gc, void __iomem *reg,
@@ -118,12 +118,12 @@ static void npcm_gpio_clr(struct gpio_chip *gc, void __iomem *reg,
unsigned long flags;
unsigned long val;
- spin_lock_irqsave(&gc->bgpio_lock, flags);
+ raw_spin_lock_irqsave(&gc->bgpio_lock, flags);
val = ioread32(reg) & ~pinmask;
iowrite32(val, reg);
- spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+ raw_spin_unlock_irqrestore(&gc->bgpio_lock, flags);
}
static void npcmgpio_dbg_show(struct seq_file *s, struct gpio_chip *chip)
diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
index 52d1fe5..0a77b84 100644
--- a/drivers/pinctrl/pinctrl-amd.c
+++ b/drivers/pinctrl/pinctrl-amd.c
@@ -863,32 +863,34 @@ static const struct pinconf_ops amd_pinconf_ops = {
.pin_config_group_set = amd_pinconf_group_set,
};
-static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
+static void amd_gpio_irq_init_pin(struct amd_gpio *gpio_dev, int pin)
{
- struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
+ const struct pin_desc *pd;
unsigned long flags;
u32 pin_reg, mask;
- int i;
mask = BIT(WAKE_CNTRL_OFF_S0I3) | BIT(WAKE_CNTRL_OFF_S3) |
BIT(INTERRUPT_MASK_OFF) | BIT(INTERRUPT_ENABLE_OFF) |
BIT(WAKE_CNTRL_OFF_S4);
- for (i = 0; i < desc->npins; i++) {
- int pin = desc->pins[i].number;
- const struct pin_desc *pd = pin_desc_get(gpio_dev->pctrl, pin);
+ pd = pin_desc_get(gpio_dev->pctrl, pin);
+ if (!pd)
+ return;
- if (!pd)
- continue;
+ raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+ pin_reg = readl(gpio_dev->base + pin * 4);
+ pin_reg &= ~mask;
+ writel(pin_reg, gpio_dev->base + pin * 4);
+ raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+}
- raw_spin_lock_irqsave(&gpio_dev->lock, flags);
+static void amd_gpio_irq_init(struct amd_gpio *gpio_dev)
+{
+ struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
+ int i;
- pin_reg = readl(gpio_dev->base + i * 4);
- pin_reg &= ~mask;
- writel(pin_reg, gpio_dev->base + i * 4);
-
- raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
- }
+ for (i = 0; i < desc->npins; i++)
+ amd_gpio_irq_init_pin(gpio_dev, i);
}
#ifdef CONFIG_PM_SLEEP
@@ -941,8 +943,10 @@ static int amd_gpio_resume(struct device *dev)
for (i = 0; i < desc->npins; i++) {
int pin = desc->pins[i].number;
- if (!amd_gpio_should_save(gpio_dev, pin))
+ if (!amd_gpio_should_save(gpio_dev, pin)) {
+ amd_gpio_irq_init_pin(gpio_dev, pin);
continue;
+ }
raw_spin_lock_irqsave(&gpio_dev->lock, flags);
gpio_dev->saved_regs[i] |= readl(gpio_dev->base + pin * 4) & PIN_IRQ_PENDING;
diff --git a/drivers/pinctrl/pinctrl-at91-pio4.c b/drivers/pinctrl/pinctrl-at91-pio4.c
index 03c32b2..e31f5d9 100644
--- a/drivers/pinctrl/pinctrl-at91-pio4.c
+++ b/drivers/pinctrl/pinctrl-at91-pio4.c
@@ -1126,8 +1126,8 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
pin_desc[i].number = i;
/* Pin naming convention: P(bank_name)(bank_pin_number). */
- pin_desc[i].name = kasprintf(GFP_KERNEL, "P%c%d",
- bank + 'A', line);
+ pin_desc[i].name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "P%c%d",
+ bank + 'A', line);
group->name = group_names[i] = pin_desc[i].name;
group->pin = pin_desc[i].number;
@@ -1184,7 +1184,6 @@ static int atmel_pinctrl_probe(struct platform_device *pdev)
dev_err(dev, "can't add the irq domain\n");
return -ENODEV;
}
- atmel_pioctrl->irq_domain->name = "atmel gpio";
for (i = 0; i < atmel_pioctrl->npins; i++) {
int irq = irq_create_mapping(atmel_pioctrl->irq_domain, i);
diff --git a/drivers/pinctrl/pinctrl-at91.c b/drivers/pinctrl/pinctrl-at91.c
index 6022496..3b0341c 100644
--- a/drivers/pinctrl/pinctrl-at91.c
+++ b/drivers/pinctrl/pinctrl-at91.c
@@ -1891,7 +1891,7 @@ static int at91_gpio_probe(struct platform_device *pdev)
}
for (i = 0; i < chip->ngpio; i++)
- names[i] = kasprintf(GFP_KERNEL, "pio%c%d", alias_idx + 'A', i);
+ names[i] = devm_kasprintf(&pdev->dev, GFP_KERNEL, "pio%c%d", alias_idx + 'A', i);
chip->names = (const char *const *)names;
diff --git a/drivers/pinctrl/pinctrl-ocelot.c b/drivers/pinctrl/pinctrl-ocelot.c
index 0a36ec8..b14f1b7 100644
--- a/drivers/pinctrl/pinctrl-ocelot.c
+++ b/drivers/pinctrl/pinctrl-ocelot.c
@@ -739,7 +739,7 @@ static int ocelot_pinmux_set_mux(struct pinctrl_dev *pctldev,
regmap_update_bits(info->map, REG_ALT(0, info, pin->pin),
BIT(p), f << p);
regmap_update_bits(info->map, REG_ALT(1, info, pin->pin),
- BIT(p), f << (p - 1));
+ BIT(p), (f >> 1) << p);
return 0;
}
diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
index 25ec0d2..a6f4aca 100644
--- a/drivers/pinctrl/pinctrl-rockchip.c
+++ b/drivers/pinctrl/pinctrl-rockchip.c
@@ -285,6 +285,7 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
{
struct rockchip_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
const struct rockchip_pin_group *grp;
+ struct device *dev = info->dev;
struct pinctrl_map *new_map;
struct device_node *parent;
int map_num = 1;
@@ -296,8 +297,7 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
*/
grp = pinctrl_name_to_group(info, np->name);
if (!grp) {
- dev_err(info->dev, "unable to find group for node %pOFn\n",
- np);
+ dev_err(dev, "unable to find group for node %pOFn\n", np);
return -EINVAL;
}
@@ -331,7 +331,7 @@ static int rockchip_dt_node_to_map(struct pinctrl_dev *pctldev,
new_map[i].data.configs.num_configs = grp->data[i].nconfigs;
}
- dev_dbg(pctldev->dev, "maps: function %s group %s num %d\n",
+ dev_dbg(dev, "maps: function %s group %s num %d\n",
(*map)->data.mux.function, (*map)->data.mux.group, map_num);
return 0;
@@ -758,19 +758,19 @@ static struct rockchip_mux_route_data rk3568_mux_route_data[] = {
RK_MUXROUTE_PMU(0, RK_PB5, 4, 0x0110, WRITE_MASK_VAL(3, 2, 1)), /* PWM1 IO mux M1 */
RK_MUXROUTE_PMU(0, RK_PC1, 1, 0x0110, WRITE_MASK_VAL(5, 4, 0)), /* PWM2 IO mux M0 */
RK_MUXROUTE_PMU(0, RK_PB6, 4, 0x0110, WRITE_MASK_VAL(5, 4, 1)), /* PWM2 IO mux M1 */
- RK_MUXROUTE_PMU(0, RK_PB3, 2, 0x0300, WRITE_MASK_VAL(0, 0, 0)), /* CAN0 IO mux M0 */
+ RK_MUXROUTE_GRF(0, RK_PB3, 2, 0x0300, WRITE_MASK_VAL(0, 0, 0)), /* CAN0 IO mux M0 */
RK_MUXROUTE_GRF(2, RK_PA1, 4, 0x0300, WRITE_MASK_VAL(0, 0, 1)), /* CAN0 IO mux M1 */
RK_MUXROUTE_GRF(1, RK_PA1, 3, 0x0300, WRITE_MASK_VAL(2, 2, 0)), /* CAN1 IO mux M0 */
RK_MUXROUTE_GRF(4, RK_PC3, 3, 0x0300, WRITE_MASK_VAL(2, 2, 1)), /* CAN1 IO mux M1 */
RK_MUXROUTE_GRF(4, RK_PB5, 3, 0x0300, WRITE_MASK_VAL(4, 4, 0)), /* CAN2 IO mux M0 */
RK_MUXROUTE_GRF(2, RK_PB2, 4, 0x0300, WRITE_MASK_VAL(4, 4, 1)), /* CAN2 IO mux M1 */
RK_MUXROUTE_GRF(4, RK_PC4, 1, 0x0300, WRITE_MASK_VAL(6, 6, 0)), /* HPDIN IO mux M0 */
- RK_MUXROUTE_PMU(0, RK_PC2, 2, 0x0300, WRITE_MASK_VAL(6, 6, 1)), /* HPDIN IO mux M1 */
+ RK_MUXROUTE_GRF(0, RK_PC2, 2, 0x0300, WRITE_MASK_VAL(6, 6, 1)), /* HPDIN IO mux M1 */
RK_MUXROUTE_GRF(3, RK_PB1, 3, 0x0300, WRITE_MASK_VAL(8, 8, 0)), /* GMAC1 IO mux M0 */
RK_MUXROUTE_GRF(4, RK_PA7, 3, 0x0300, WRITE_MASK_VAL(8, 8, 1)), /* GMAC1 IO mux M1 */
RK_MUXROUTE_GRF(4, RK_PD1, 1, 0x0300, WRITE_MASK_VAL(10, 10, 0)), /* HDMITX IO mux M0 */
- RK_MUXROUTE_PMU(0, RK_PC7, 1, 0x0300, WRITE_MASK_VAL(10, 10, 1)), /* HDMITX IO mux M1 */
- RK_MUXROUTE_PMU(0, RK_PB6, 1, 0x0300, WRITE_MASK_VAL(14, 14, 0)), /* I2C2 IO mux M0 */
+ RK_MUXROUTE_GRF(0, RK_PC7, 1, 0x0300, WRITE_MASK_VAL(10, 10, 1)), /* HDMITX IO mux M1 */
+ RK_MUXROUTE_GRF(0, RK_PB6, 1, 0x0300, WRITE_MASK_VAL(14, 14, 0)), /* I2C2 IO mux M0 */
RK_MUXROUTE_GRF(4, RK_PB4, 1, 0x0300, WRITE_MASK_VAL(14, 14, 1)), /* I2C2 IO mux M1 */
RK_MUXROUTE_GRF(1, RK_PA0, 1, 0x0304, WRITE_MASK_VAL(0, 0, 0)), /* I2C3 IO mux M0 */
RK_MUXROUTE_GRF(3, RK_PB6, 4, 0x0304, WRITE_MASK_VAL(0, 0, 1)), /* I2C3 IO mux M1 */
@@ -796,7 +796,7 @@ static struct rockchip_mux_route_data rk3568_mux_route_data[] = {
RK_MUXROUTE_GRF(4, RK_PC3, 1, 0x0308, WRITE_MASK_VAL(12, 12, 1)), /* PWM15 IO mux M1 */
RK_MUXROUTE_GRF(3, RK_PD2, 3, 0x0308, WRITE_MASK_VAL(14, 14, 0)), /* SDMMC2 IO mux M0 */
RK_MUXROUTE_GRF(3, RK_PA5, 5, 0x0308, WRITE_MASK_VAL(14, 14, 1)), /* SDMMC2 IO mux M1 */
- RK_MUXROUTE_PMU(0, RK_PB5, 2, 0x030c, WRITE_MASK_VAL(0, 0, 0)), /* SPI0 IO mux M0 */
+ RK_MUXROUTE_GRF(0, RK_PB5, 2, 0x030c, WRITE_MASK_VAL(0, 0, 0)), /* SPI0 IO mux M0 */
RK_MUXROUTE_GRF(2, RK_PD3, 3, 0x030c, WRITE_MASK_VAL(0, 0, 1)), /* SPI0 IO mux M1 */
RK_MUXROUTE_GRF(2, RK_PB5, 3, 0x030c, WRITE_MASK_VAL(2, 2, 0)), /* SPI1 IO mux M0 */
RK_MUXROUTE_GRF(3, RK_PC3, 3, 0x030c, WRITE_MASK_VAL(2, 2, 1)), /* SPI1 IO mux M1 */
@@ -805,8 +805,8 @@ static struct rockchip_mux_route_data rk3568_mux_route_data[] = {
RK_MUXROUTE_GRF(4, RK_PB3, 4, 0x030c, WRITE_MASK_VAL(6, 6, 0)), /* SPI3 IO mux M0 */
RK_MUXROUTE_GRF(4, RK_PC2, 2, 0x030c, WRITE_MASK_VAL(6, 6, 1)), /* SPI3 IO mux M1 */
RK_MUXROUTE_GRF(2, RK_PB4, 2, 0x030c, WRITE_MASK_VAL(8, 8, 0)), /* UART1 IO mux M0 */
- RK_MUXROUTE_PMU(0, RK_PD1, 1, 0x030c, WRITE_MASK_VAL(8, 8, 1)), /* UART1 IO mux M1 */
- RK_MUXROUTE_PMU(0, RK_PD1, 1, 0x030c, WRITE_MASK_VAL(10, 10, 0)), /* UART2 IO mux M0 */
+ RK_MUXROUTE_GRF(3, RK_PD6, 4, 0x030c, WRITE_MASK_VAL(8, 8, 1)), /* UART1 IO mux M1 */
+ RK_MUXROUTE_GRF(0, RK_PD1, 1, 0x030c, WRITE_MASK_VAL(10, 10, 0)), /* UART2 IO mux M0 */
RK_MUXROUTE_GRF(1, RK_PD5, 2, 0x030c, WRITE_MASK_VAL(10, 10, 1)), /* UART2 IO mux M1 */
RK_MUXROUTE_GRF(1, RK_PA1, 2, 0x030c, WRITE_MASK_VAL(12, 12, 0)), /* UART3 IO mux M0 */
RK_MUXROUTE_GRF(3, RK_PB7, 4, 0x030c, WRITE_MASK_VAL(12, 12, 1)), /* UART3 IO mux M1 */
@@ -836,13 +836,13 @@ static struct rockchip_mux_route_data rk3568_mux_route_data[] = {
RK_MUXROUTE_GRF(3, RK_PD6, 5, 0x0314, WRITE_MASK_VAL(1, 0, 1)), /* PDM IO mux M1 */
RK_MUXROUTE_GRF(4, RK_PA0, 4, 0x0314, WRITE_MASK_VAL(1, 0, 1)), /* PDM IO mux M1 */
RK_MUXROUTE_GRF(3, RK_PC4, 5, 0x0314, WRITE_MASK_VAL(1, 0, 2)), /* PDM IO mux M2 */
- RK_MUXROUTE_PMU(0, RK_PA5, 3, 0x0314, WRITE_MASK_VAL(3, 2, 0)), /* PCIE20 IO mux M0 */
+ RK_MUXROUTE_GRF(0, RK_PA5, 3, 0x0314, WRITE_MASK_VAL(3, 2, 0)), /* PCIE20 IO mux M0 */
RK_MUXROUTE_GRF(2, RK_PD0, 4, 0x0314, WRITE_MASK_VAL(3, 2, 1)), /* PCIE20 IO mux M1 */
RK_MUXROUTE_GRF(1, RK_PB0, 4, 0x0314, WRITE_MASK_VAL(3, 2, 2)), /* PCIE20 IO mux M2 */
- RK_MUXROUTE_PMU(0, RK_PA4, 3, 0x0314, WRITE_MASK_VAL(5, 4, 0)), /* PCIE30X1 IO mux M0 */
+ RK_MUXROUTE_GRF(0, RK_PA4, 3, 0x0314, WRITE_MASK_VAL(5, 4, 0)), /* PCIE30X1 IO mux M0 */
RK_MUXROUTE_GRF(2, RK_PD2, 4, 0x0314, WRITE_MASK_VAL(5, 4, 1)), /* PCIE30X1 IO mux M1 */
RK_MUXROUTE_GRF(1, RK_PA5, 4, 0x0314, WRITE_MASK_VAL(5, 4, 2)), /* PCIE30X1 IO mux M2 */
- RK_MUXROUTE_PMU(0, RK_PA6, 2, 0x0314, WRITE_MASK_VAL(7, 6, 0)), /* PCIE30X2 IO mux M0 */
+ RK_MUXROUTE_GRF(0, RK_PA6, 2, 0x0314, WRITE_MASK_VAL(7, 6, 0)), /* PCIE30X2 IO mux M0 */
RK_MUXROUTE_GRF(2, RK_PD4, 4, 0x0314, WRITE_MASK_VAL(7, 6, 1)), /* PCIE30X2 IO mux M1 */
RK_MUXROUTE_GRF(4, RK_PC2, 4, 0x0314, WRITE_MASK_VAL(7, 6, 2)), /* PCIE30X2 IO mux M2 */
};
@@ -927,20 +927,20 @@ static int rockchip_verify_mux(struct rockchip_pin_bank *bank,
int pin, int mux)
{
struct rockchip_pinctrl *info = bank->drvdata;
+ struct device *dev = info->dev;
int iomux_num = (pin / 8);
if (iomux_num > 3)
return -EINVAL;
if (bank->iomux[iomux_num].type & IOMUX_UNROUTED) {
- dev_err(info->dev, "pin %d is unrouted\n", pin);
+ dev_err(dev, "pin %d is unrouted\n", pin);
return -EINVAL;
}
if (bank->iomux[iomux_num].type & IOMUX_GPIO_ONLY) {
if (mux != RK_FUNC_GPIO) {
- dev_err(info->dev,
- "pin %d only supports a gpio mux\n", pin);
+ dev_err(dev, "pin %d only supports a gpio mux\n", pin);
return -ENOTSUPP;
}
}
@@ -964,6 +964,7 @@ static int rockchip_verify_mux(struct rockchip_pin_bank *bank,
static int rockchip_set_mux(struct rockchip_pin_bank *bank, int pin, int mux)
{
struct rockchip_pinctrl *info = bank->drvdata;
+ struct device *dev = info->dev;
int iomux_num = (pin / 8);
struct regmap *regmap;
int reg, ret, mask, mux_type;
@@ -977,8 +978,7 @@ static int rockchip_set_mux(struct rockchip_pin_bank *bank, int pin, int mux)
if (bank->iomux[iomux_num].type & IOMUX_GPIO_ONLY)
return 0;
- dev_dbg(info->dev, "setting mux of GPIO%d-%d to %d\n",
- bank->bank_num, pin, mux);
+ dev_dbg(dev, "setting mux of GPIO%d-%d to %d\n", bank->bank_num, pin, mux);
regmap = (bank->iomux[iomux_num].type & IOMUX_SOURCE_PMU)
? info->regmap_pmu : info->regmap_base;
@@ -1039,9 +1039,9 @@ static int rockchip_set_mux(struct rockchip_pin_bank *bank, int pin, int mux)
#define PX30_PULL_PINS_PER_REG 8
#define PX30_PULL_BANK_STRIDE 16
-static void px30_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int px30_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1061,6 +1061,8 @@ static void px30_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*reg += ((pin_num / PX30_PULL_PINS_PER_REG) * 4);
*bit = (pin_num % PX30_PULL_PINS_PER_REG);
*bit *= PX30_PULL_BITS_PER_PIN;
+
+ return 0;
}
#define PX30_DRV_PMU_OFFSET 0x20
@@ -1069,9 +1071,9 @@ static void px30_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
#define PX30_DRV_PINS_PER_REG 8
#define PX30_DRV_BANK_STRIDE 16
-static void px30_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int px30_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1091,6 +1093,8 @@ static void px30_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
*reg += ((pin_num / PX30_DRV_PINS_PER_REG) * 4);
*bit = (pin_num % PX30_DRV_PINS_PER_REG);
*bit *= PX30_DRV_BITS_PER_PIN;
+
+ return 0;
}
#define PX30_SCHMITT_PMU_OFFSET 0x38
@@ -1130,9 +1134,9 @@ static int px30_calc_schmitt_reg_and_bit(struct rockchip_pin_bank *bank,
#define RV1108_PULL_BITS_PER_PIN 2
#define RV1108_PULL_BANK_STRIDE 16
-static void rv1108_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rv1108_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1151,6 +1155,8 @@ static void rv1108_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*reg += ((pin_num / RV1108_PULL_PINS_PER_REG) * 4);
*bit = (pin_num % RV1108_PULL_PINS_PER_REG);
*bit *= RV1108_PULL_BITS_PER_PIN;
+
+ return 0;
}
#define RV1108_DRV_PMU_OFFSET 0x20
@@ -1159,9 +1165,9 @@ static void rv1108_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
#define RV1108_DRV_PINS_PER_REG 8
#define RV1108_DRV_BANK_STRIDE 16
-static void rv1108_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rv1108_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1181,6 +1187,8 @@ static void rv1108_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
*reg += ((pin_num / RV1108_DRV_PINS_PER_REG) * 4);
*bit = pin_num % RV1108_DRV_PINS_PER_REG;
*bit *= RV1108_DRV_BITS_PER_PIN;
+
+ return 0;
}
#define RV1108_SCHMITT_PMU_OFFSET 0x30
@@ -1237,9 +1245,9 @@ static int rk3308_calc_schmitt_reg_and_bit(struct rockchip_pin_bank *bank,
#define RK2928_PULL_PINS_PER_REG 16
#define RK2928_PULL_BANK_STRIDE 8
-static void rk2928_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk2928_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1249,13 +1257,15 @@ static void rk2928_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*reg += (pin_num / RK2928_PULL_PINS_PER_REG) * 4;
*bit = pin_num % RK2928_PULL_PINS_PER_REG;
+
+ return 0;
};
#define RK3128_PULL_OFFSET 0x118
-static void rk3128_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3128_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1265,6 +1275,8 @@ static void rk3128_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*reg += ((pin_num / RK2928_PULL_PINS_PER_REG) * 4);
*bit = pin_num % RK2928_PULL_PINS_PER_REG;
+
+ return 0;
}
#define RK3188_PULL_OFFSET 0x164
@@ -1273,9 +1285,9 @@ static void rk3128_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
#define RK3188_PULL_BANK_STRIDE 16
#define RK3188_PULL_PMU_OFFSET 0x64
-static void rk3188_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3188_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1305,12 +1317,14 @@ static void rk3188_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = 7 - (pin_num % RK3188_PULL_PINS_PER_REG);
*bit *= RK3188_PULL_BITS_PER_PIN;
}
+
+ return 0;
}
#define RK3288_PULL_OFFSET 0x140
-static void rk3288_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3288_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1334,6 +1348,8 @@ static void rk3288_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3188_PULL_PINS_PER_REG);
*bit *= RK3188_PULL_BITS_PER_PIN;
}
+
+ return 0;
}
#define RK3288_DRV_PMU_OFFSET 0x70
@@ -1342,9 +1358,9 @@ static void rk3288_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
#define RK3288_DRV_PINS_PER_REG 8
#define RK3288_DRV_BANK_STRIDE 16
-static void rk3288_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3288_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1368,13 +1384,15 @@ static void rk3288_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3288_DRV_PINS_PER_REG);
*bit *= RK3288_DRV_BITS_PER_PIN;
}
+
+ return 0;
}
#define RK3228_PULL_OFFSET 0x100
-static void rk3228_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3228_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1385,13 +1403,15 @@ static void rk3228_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3188_PULL_PINS_PER_REG);
*bit *= RK3188_PULL_BITS_PER_PIN;
+
+ return 0;
}
#define RK3228_DRV_GRF_OFFSET 0x200
-static void rk3228_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3228_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1402,13 +1422,15 @@ static void rk3228_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3288_DRV_PINS_PER_REG);
*bit *= RK3288_DRV_BITS_PER_PIN;
+
+ return 0;
}
#define RK3308_PULL_OFFSET 0xa0
-static void rk3308_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3308_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1419,13 +1441,15 @@ static void rk3308_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3188_PULL_PINS_PER_REG);
*bit *= RK3188_PULL_BITS_PER_PIN;
+
+ return 0;
}
#define RK3308_DRV_GRF_OFFSET 0x100
-static void rk3308_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3308_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1436,14 +1460,16 @@ static void rk3308_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3288_DRV_PINS_PER_REG);
*bit *= RK3288_DRV_BITS_PER_PIN;
+
+ return 0;
}
#define RK3368_PULL_GRF_OFFSET 0x100
#define RK3368_PULL_PMU_OFFSET 0x10
-static void rk3368_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3368_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1467,14 +1493,16 @@ static void rk3368_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3188_PULL_PINS_PER_REG);
*bit *= RK3188_PULL_BITS_PER_PIN;
}
+
+ return 0;
}
#define RK3368_DRV_PMU_OFFSET 0x20
#define RK3368_DRV_GRF_OFFSET 0x200
-static void rk3368_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3368_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1498,15 +1526,17 @@ static void rk3368_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3288_DRV_PINS_PER_REG);
*bit *= RK3288_DRV_BITS_PER_PIN;
}
+
+ return 0;
}
#define RK3399_PULL_GRF_OFFSET 0xe040
#define RK3399_PULL_PMU_OFFSET 0x40
#define RK3399_DRV_3BITS_PER_PIN 3
-static void rk3399_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3399_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1532,11 +1562,13 @@ static void rk3399_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3188_PULL_PINS_PER_REG);
*bit *= RK3188_PULL_BITS_PER_PIN;
}
+
+ return 0;
}
-static void rk3399_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3399_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
int drv_num = (pin_num / 8);
@@ -1553,6 +1585,8 @@ static void rk3399_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % 8) * 3;
else
*bit = (pin_num % 8) * 2;
+
+ return 0;
}
#define RK3568_PULL_PMU_OFFSET 0x20
@@ -1561,9 +1595,9 @@ static void rk3399_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
#define RK3568_PULL_PINS_PER_REG 8
#define RK3568_PULL_BANK_STRIDE 0x10
-static void rk3568_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3568_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1584,6 +1618,8 @@ static void rk3568_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3568_PULL_PINS_PER_REG);
*bit *= RK3568_PULL_BITS_PER_PIN;
}
+
+ return 0;
}
#define RK3568_DRV_PMU_OFFSET 0x70
@@ -1592,9 +1628,9 @@ static void rk3568_calc_pull_reg_and_bit(struct rockchip_pin_bank *bank,
#define RK3568_DRV_PINS_PER_REG 2
#define RK3568_DRV_BANK_STRIDE 0x40
-static void rk3568_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
- int pin_num, struct regmap **regmap,
- int *reg, u8 *bit)
+static int rk3568_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
+ int pin_num, struct regmap **regmap,
+ int *reg, u8 *bit)
{
struct rockchip_pinctrl *info = bank->drvdata;
@@ -1615,6 +1651,8 @@ static void rk3568_calc_drv_reg_and_bit(struct rockchip_pin_bank *bank,
*bit = (pin_num % RK3568_DRV_PINS_PER_REG);
*bit *= RK3568_DRV_BITS_PER_PIN;
}
+
+ return 0;
}
static int rockchip_perpin_drv_list[DRV_TYPE_MAX][8] = {
@@ -1630,13 +1668,16 @@ static int rockchip_get_drive_perpin(struct rockchip_pin_bank *bank,
{
struct rockchip_pinctrl *info = bank->drvdata;
struct rockchip_pin_ctrl *ctrl = info->ctrl;
+ struct device *dev = info->dev;
struct regmap *regmap;
int reg, ret;
u32 data, temp, rmask_bits;
u8 bit;
int drv_type = bank->drv[pin_num / 8].drv_type;
- ctrl->drv_calc_reg(bank, pin_num, &regmap, &reg, &bit);
+ ret = ctrl->drv_calc_reg(bank, pin_num, &regmap, &reg, &bit);
+ if (ret)
+ return ret;
switch (drv_type) {
case DRV_TYPE_IO_1V8_3V0_AUTO:
@@ -1675,7 +1716,7 @@ static int rockchip_get_drive_perpin(struct rockchip_pin_bank *bank,
bit -= 16;
break;
default:
- dev_err(info->dev, "unsupported bit: %d for pinctrl drive type: %d\n",
+ dev_err(dev, "unsupported bit: %d for pinctrl drive type: %d\n",
bit, drv_type);
return -EINVAL;
}
@@ -1687,8 +1728,7 @@ static int rockchip_get_drive_perpin(struct rockchip_pin_bank *bank,
rmask_bits = RK3288_DRV_BITS_PER_PIN;
break;
default:
- dev_err(info->dev, "unsupported pinctrl drive type: %d\n",
- drv_type);
+ dev_err(dev, "unsupported pinctrl drive type: %d\n", drv_type);
return -EINVAL;
}
@@ -1707,16 +1747,19 @@ static int rockchip_set_drive_perpin(struct rockchip_pin_bank *bank,
{
struct rockchip_pinctrl *info = bank->drvdata;
struct rockchip_pin_ctrl *ctrl = info->ctrl;
+ struct device *dev = info->dev;
struct regmap *regmap;
int reg, ret, i;
u32 data, rmask, rmask_bits, temp;
u8 bit;
int drv_type = bank->drv[pin_num / 8].drv_type;
- dev_dbg(info->dev, "setting drive of GPIO%d-%d to %d\n",
+ dev_dbg(dev, "setting drive of GPIO%d-%d to %d\n",
bank->bank_num, pin_num, strength);
- ctrl->drv_calc_reg(bank, pin_num, &regmap, &reg, &bit);
+ ret = ctrl->drv_calc_reg(bank, pin_num, &regmap, &reg, &bit);
+ if (ret)
+ return ret;
if (ctrl->type == RK3568) {
rmask_bits = RK3568_DRV_BITS_PER_PIN;
ret = (1 << (strength + 1)) - 1;
@@ -1735,8 +1778,7 @@ static int rockchip_set_drive_perpin(struct rockchip_pin_bank *bank,
}
if (ret < 0) {
- dev_err(info->dev, "unsupported driver strength %d\n",
- strength);
+ dev_err(dev, "unsupported driver strength %d\n", strength);
return ret;
}
@@ -1775,7 +1817,7 @@ static int rockchip_set_drive_perpin(struct rockchip_pin_bank *bank,
bit -= 16;
break;
default:
- dev_err(info->dev, "unsupported bit: %d for pinctrl drive type: %d\n",
+ dev_err(dev, "unsupported bit: %d for pinctrl drive type: %d\n",
bit, drv_type);
return -EINVAL;
}
@@ -1786,8 +1828,7 @@ static int rockchip_set_drive_perpin(struct rockchip_pin_bank *bank,
rmask_bits = RK3288_DRV_BITS_PER_PIN;
break;
default:
- dev_err(info->dev, "unsupported pinctrl drive type: %d\n",
- drv_type);
+ dev_err(dev, "unsupported pinctrl drive type: %d\n", drv_type);
return -EINVAL;
}
@@ -1821,6 +1862,7 @@ static int rockchip_get_pull(struct rockchip_pin_bank *bank, int pin_num)
{
struct rockchip_pinctrl *info = bank->drvdata;
struct rockchip_pin_ctrl *ctrl = info->ctrl;
+ struct device *dev = info->dev;
struct regmap *regmap;
int reg, ret, pull_type;
u8 bit;
@@ -1830,7 +1872,9 @@ static int rockchip_get_pull(struct rockchip_pin_bank *bank, int pin_num)
if (ctrl->type == RK3066B)
return PIN_CONFIG_BIAS_DISABLE;
- ctrl->pull_calc_reg(bank, pin_num, &regmap, &reg, &bit);
+ ret = ctrl->pull_calc_reg(bank, pin_num, &regmap, &reg, &bit);
+ if (ret)
+ return ret;
ret = regmap_read(regmap, reg, &data);
if (ret)
@@ -1849,13 +1893,22 @@ static int rockchip_get_pull(struct rockchip_pin_bank *bank, int pin_num)
case RK3308:
case RK3368:
case RK3399:
+ case RK3568:
pull_type = bank->pull_type[pin_num / 8];
data >>= bit;
data &= (1 << RK3188_PULL_BITS_PER_PIN) - 1;
+ /*
+ * In the TRM, pull-up being 1 for everything except the GPIO0_D3-D6,
+ * where that pull up value becomes 3.
+ */
+ if (ctrl->type == RK3568 && bank->bank_num == 0 && pin_num >= 27 && pin_num <= 30) {
+ if (data == 3)
+ data = 1;
+ }
return rockchip_pull_list[pull_type][data];
default:
- dev_err(info->dev, "unsupported pinctrl type\n");
+ dev_err(dev, "unsupported pinctrl type\n");
return -EINVAL;
};
}
@@ -1865,19 +1918,21 @@ static int rockchip_set_pull(struct rockchip_pin_bank *bank,
{
struct rockchip_pinctrl *info = bank->drvdata;
struct rockchip_pin_ctrl *ctrl = info->ctrl;
+ struct device *dev = info->dev;
struct regmap *regmap;
int reg, ret, i, pull_type;
u8 bit;
u32 data, rmask;
- dev_dbg(info->dev, "setting pull of GPIO%d-%d to %d\n",
- bank->bank_num, pin_num, pull);
+ dev_dbg(dev, "setting pull of GPIO%d-%d to %d\n", bank->bank_num, pin_num, pull);
/* rk3066b does support any pulls */
if (ctrl->type == RK3066B)
return pull ? -EINVAL : 0;
- ctrl->pull_calc_reg(bank, pin_num, &regmap, &reg, &bit);
+ ret = ctrl->pull_calc_reg(bank, pin_num, &regmap, &reg, &bit);
+ if (ret)
+ return ret;
switch (ctrl->type) {
case RK2928:
@@ -1905,7 +1960,7 @@ static int rockchip_set_pull(struct rockchip_pin_bank *bank,
}
}
/*
- * In the TRM, pull-up being 1 for everything except the GPIO0_D0-D6,
+ * In the TRM, pull-up being 1 for everything except the GPIO0_D3-D6,
* where that pull up value becomes 3.
*/
if (ctrl->type == RK3568 && bank->bank_num == 0 && pin_num >= 27 && pin_num <= 30) {
@@ -1914,8 +1969,7 @@ static int rockchip_set_pull(struct rockchip_pin_bank *bank,
}
if (ret < 0) {
- dev_err(info->dev, "unsupported pull setting %d\n",
- pull);
+ dev_err(dev, "unsupported pull setting %d\n", pull);
return ret;
}
@@ -1927,7 +1981,7 @@ static int rockchip_set_pull(struct rockchip_pin_bank *bank,
ret = regmap_update_bits(regmap, reg, rmask, data);
break;
default:
- dev_err(info->dev, "unsupported pinctrl type\n");
+ dev_err(dev, "unsupported pinctrl type\n");
return -EINVAL;
}
@@ -2018,12 +2072,13 @@ static int rockchip_set_schmitt(struct rockchip_pin_bank *bank,
{
struct rockchip_pinctrl *info = bank->drvdata;
struct rockchip_pin_ctrl *ctrl = info->ctrl;
+ struct device *dev = info->dev;
struct regmap *regmap;
int reg, ret;
u8 bit;
u32 data, rmask;
- dev_dbg(info->dev, "setting input schmitt of GPIO%d-%d to %d\n",
+ dev_dbg(dev, "setting input schmitt of GPIO%d-%d to %d\n",
bank->bank_num, pin_num, enable);
ret = ctrl->schmitt_calc_reg(bank, pin_num, &regmap, &reg, &bit);
@@ -2083,10 +2138,11 @@ static int rockchip_pmx_set(struct pinctrl_dev *pctldev, unsigned selector,
struct rockchip_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
const unsigned int *pins = info->groups[group].pins;
const struct rockchip_pin_config *data = info->groups[group].data;
+ struct device *dev = info->dev;
struct rockchip_pin_bank *bank;
int cnt, ret = 0;
- dev_dbg(info->dev, "enable function %s group %s\n",
+ dev_dbg(dev, "enable function %s group %s\n",
info->functions[selector].name, info->groups[group].name);
/*
@@ -2392,6 +2448,7 @@ static int rockchip_pinctrl_parse_groups(struct device_node *np,
struct rockchip_pinctrl *info,
u32 index)
{
+ struct device *dev = info->dev;
struct rockchip_pin_bank *bank;
int size;
const __be32 *list;
@@ -2399,7 +2456,7 @@ static int rockchip_pinctrl_parse_groups(struct device_node *np,
int i, j;
int ret;
- dev_dbg(info->dev, "group(%d): %pOFn\n", index, np);
+ dev_dbg(dev, "group(%d): %pOFn\n", index, np);
/* Initialise group */
grp->name = np->name;
@@ -2412,18 +2469,14 @@ static int rockchip_pinctrl_parse_groups(struct device_node *np,
/* we do not check return since it's safe node passed down */
size /= sizeof(*list);
if (!size || size % 4) {
- dev_err(info->dev, "wrong pins number or pins and configs should be by 4\n");
+ dev_err(dev, "wrong pins number or pins and configs should be by 4\n");
return -EINVAL;
}
grp->npins = size / 4;
- grp->pins = devm_kcalloc(info->dev, grp->npins, sizeof(unsigned int),
- GFP_KERNEL);
- grp->data = devm_kcalloc(info->dev,
- grp->npins,
- sizeof(struct rockchip_pin_config),
- GFP_KERNEL);
+ grp->pins = devm_kcalloc(dev, grp->npins, sizeof(*grp->pins), GFP_KERNEL);
+ grp->data = devm_kcalloc(dev, grp->npins, sizeof(*grp->data), GFP_KERNEL);
if (!grp->pins || !grp->data)
return -ENOMEM;
@@ -2446,6 +2499,7 @@ static int rockchip_pinctrl_parse_groups(struct device_node *np,
np_config = of_find_node_by_phandle(be32_to_cpup(phandle));
ret = pinconf_generic_parse_dt_config(np_config, NULL,
&grp->data[j].configs, &grp->data[j].nconfigs);
+ of_node_put(np_config);
if (ret)
return ret;
}
@@ -2457,6 +2511,7 @@ static int rockchip_pinctrl_parse_functions(struct device_node *np,
struct rockchip_pinctrl *info,
u32 index)
{
+ struct device *dev = info->dev;
struct device_node *child;
struct rockchip_pmx_func *func;
struct rockchip_pin_group *grp;
@@ -2464,7 +2519,7 @@ static int rockchip_pinctrl_parse_functions(struct device_node *np,
static u32 grp_index;
u32 i = 0;
- dev_dbg(info->dev, "parse function(%d): %pOFn\n", index, np);
+ dev_dbg(dev, "parse function(%d): %pOFn\n", index, np);
func = &info->functions[index];
@@ -2474,8 +2529,7 @@ static int rockchip_pinctrl_parse_functions(struct device_node *np,
if (func->ngroups <= 0)
return 0;
- func->groups = devm_kcalloc(info->dev,
- func->ngroups, sizeof(char *), GFP_KERNEL);
+ func->groups = devm_kcalloc(dev, func->ngroups, sizeof(*func->groups), GFP_KERNEL);
if (!func->groups)
return -ENOMEM;
@@ -2503,20 +2557,14 @@ static int rockchip_pinctrl_parse_dt(struct platform_device *pdev,
rockchip_pinctrl_child_count(info, np);
- dev_dbg(&pdev->dev, "nfunctions = %d\n", info->nfunctions);
- dev_dbg(&pdev->dev, "ngroups = %d\n", info->ngroups);
+ dev_dbg(dev, "nfunctions = %d\n", info->nfunctions);
+ dev_dbg(dev, "ngroups = %d\n", info->ngroups);
- info->functions = devm_kcalloc(dev,
- info->nfunctions,
- sizeof(struct rockchip_pmx_func),
- GFP_KERNEL);
+ info->functions = devm_kcalloc(dev, info->nfunctions, sizeof(*info->functions), GFP_KERNEL);
if (!info->functions)
return -ENOMEM;
- info->groups = devm_kcalloc(dev,
- info->ngroups,
- sizeof(struct rockchip_pin_group),
- GFP_KERNEL);
+ info->groups = devm_kcalloc(dev, info->ngroups, sizeof(*info->groups), GFP_KERNEL);
if (!info->groups)
return -ENOMEM;
@@ -2528,7 +2576,7 @@ static int rockchip_pinctrl_parse_dt(struct platform_device *pdev,
ret = rockchip_pinctrl_parse_functions(child, info, i++);
if (ret) {
- dev_err(&pdev->dev, "failed to parse function\n");
+ dev_err(dev, "failed to parse function\n");
of_node_put(child);
return ret;
}
@@ -2543,6 +2591,7 @@ static int rockchip_pinctrl_register(struct platform_device *pdev,
struct pinctrl_desc *ctrldesc = &info->pctl;
struct pinctrl_pin_desc *pindesc, *pdesc;
struct rockchip_pin_bank *pin_bank;
+ struct device *dev = &pdev->dev;
int pin, bank, ret;
int k;
@@ -2552,9 +2601,7 @@ static int rockchip_pinctrl_register(struct platform_device *pdev,
ctrldesc->pmxops = &rockchip_pmx_ops;
ctrldesc->confops = &rockchip_pinconf_ops;
- pindesc = devm_kcalloc(&pdev->dev,
- info->ctrl->nr_pins, sizeof(*pindesc),
- GFP_KERNEL);
+ pindesc = devm_kcalloc(dev, info->ctrl->nr_pins, sizeof(*pindesc), GFP_KERNEL);
if (!pindesc)
return -ENOMEM;
@@ -2579,9 +2626,9 @@ static int rockchip_pinctrl_register(struct platform_device *pdev,
if (ret)
return ret;
- info->pctl_dev = devm_pinctrl_register(&pdev->dev, ctrldesc, info);
+ info->pctl_dev = devm_pinctrl_register(dev, ctrldesc, info);
if (IS_ERR(info->pctl_dev)) {
- dev_err(&pdev->dev, "could not register pinctrl driver\n");
+ dev_err(dev, "could not register pinctrl driver\n");
return PTR_ERR(info->pctl_dev);
}
@@ -2595,8 +2642,9 @@ static struct rockchip_pin_ctrl *rockchip_pinctrl_get_soc_data(
struct rockchip_pinctrl *d,
struct platform_device *pdev)
{
+ struct device *dev = &pdev->dev;
+ struct device_node *node = dev->of_node;
const struct of_device_id *match;
- struct device_node *node = pdev->dev.of_node;
struct rockchip_pin_ctrl *ctrl;
struct rockchip_pin_bank *bank;
int grf_offs, pmu_offs, drv_grf_offs, drv_pmu_offs, i, j;
@@ -2648,7 +2696,7 @@ static struct rockchip_pin_ctrl *rockchip_pinctrl_get_soc_data(
drv_pmu_offs : drv_grf_offs;
}
- dev_dbg(d->dev, "bank %d, iomux %d has iom_offset 0x%x drv_offset 0x%x\n",
+ dev_dbg(dev, "bank %d, iomux %d has iom_offset 0x%x drv_offset 0x%x\n",
i, j, iom->offset, drv->offset);
/*
@@ -2757,8 +2805,8 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
{
struct rockchip_pinctrl *info;
struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node, *node;
struct rockchip_pin_ctrl *ctrl;
- struct device_node *np = pdev->dev.of_node, *node;
struct resource *res;
void __iomem *base;
int ret;
@@ -2795,8 +2843,8 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
rockchip_regmap_config.max_register = resource_size(res) - 4;
rockchip_regmap_config.name = "rockchip,pinctrl";
- info->regmap_base = devm_regmap_init_mmio(&pdev->dev, base,
- &rockchip_regmap_config);
+ info->regmap_base =
+ devm_regmap_init_mmio(dev, base, &rockchip_regmap_config);
/* to check for the old dt-bindings */
info->reg_size = resource_size(res);
@@ -2808,12 +2856,10 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
if (IS_ERR(base))
return PTR_ERR(base);
- rockchip_regmap_config.max_register =
- resource_size(res) - 4;
+ rockchip_regmap_config.max_register = resource_size(res) - 4;
rockchip_regmap_config.name = "rockchip,pinctrl-pull";
- info->regmap_pull = devm_regmap_init_mmio(&pdev->dev,
- base,
- &rockchip_regmap_config);
+ info->regmap_pull =
+ devm_regmap_init_mmio(dev, base, &rockchip_regmap_config);
}
}
@@ -2834,7 +2880,7 @@ static int rockchip_pinctrl_probe(struct platform_device *pdev)
ret = of_platform_populate(np, NULL, NULL, &pdev->dev);
if (ret) {
- dev_err(&pdev->dev, "failed to register gpio device\n");
+ dev_err(dev, "failed to register gpio device\n");
return ret;
}
diff --git a/drivers/pinctrl/pinctrl-rockchip.h b/drivers/pinctrl/pinctrl-rockchip.h
index 98a01a6..59116e1 100644
--- a/drivers/pinctrl/pinctrl-rockchip.h
+++ b/drivers/pinctrl/pinctrl-rockchip.h
@@ -230,10 +230,10 @@ struct rockchip_pin_ctrl {
struct rockchip_mux_route_data *iomux_routes;
u32 niomux_routes;
- void (*pull_calc_reg)(struct rockchip_pin_bank *bank,
+ int (*pull_calc_reg)(struct rockchip_pin_bank *bank,
int pin_num, struct regmap **regmap,
int *reg, u8 *bit);
- void (*drv_calc_reg)(struct rockchip_pin_bank *bank,
+ int (*drv_calc_reg)(struct rockchip_pin_bank *bank,
int pin_num, struct regmap **regmap,
int *reg, u8 *bit);
int (*schmitt_calc_reg)(struct rockchip_pin_bank *bank,
diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
index 414ee6b..9ad8f70 100644
--- a/drivers/pinctrl/pinctrl-single.c
+++ b/drivers/pinctrl/pinctrl-single.c
@@ -372,6 +372,8 @@ static int pcs_set_mux(struct pinctrl_dev *pctldev, unsigned fselector,
if (!pcs->fmask)
return 0;
function = pinmux_generic_get_function(pctldev, fselector);
+ if (!function)
+ return -EINVAL;
func = function->data;
if (!func)
return -EINVAL;
diff --git a/drivers/pinctrl/qcom/pinctrl-msm8976.c b/drivers/pinctrl/qcom/pinctrl-msm8976.c
index ec43edf..e11d845 100644
--- a/drivers/pinctrl/qcom/pinctrl-msm8976.c
+++ b/drivers/pinctrl/qcom/pinctrl-msm8976.c
@@ -733,7 +733,7 @@ static const char * const codec_int2_groups[] = {
"gpio74",
};
static const char * const wcss_bt_groups[] = {
- "gpio39", "gpio47", "gpio88",
+ "gpio39", "gpio47", "gpio48",
};
static const char * const sdc3_groups[] = {
"gpio39", "gpio40", "gpio41",
@@ -958,9 +958,9 @@ static const struct msm_pingroup msm8976_groups[] = {
PINGROUP(37, NA, NA, NA, qdss_tracedata_b, NA, NA, NA, NA, NA),
PINGROUP(38, NA, NA, NA, NA, NA, NA, NA, qdss_tracedata_b, NA),
PINGROUP(39, wcss_bt, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
- PINGROUP(40, wcss_wlan, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
- PINGROUP(41, wcss_wlan, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
- PINGROUP(42, wcss_wlan, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+ PINGROUP(40, wcss_wlan2, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+ PINGROUP(41, wcss_wlan1, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
+ PINGROUP(42, wcss_wlan0, sdc3, NA, qdss_tracedata_a, NA, NA, NA, NA, NA),
PINGROUP(43, wcss_wlan, sdc3, NA, NA, qdss_tracedata_a, NA, NA, NA, NA),
PINGROUP(44, wcss_wlan, sdc3, NA, NA, NA, NA, NA, NA, NA),
PINGROUP(45, wcss_fm, NA, qdss_tracectl_a, NA, NA, NA, NA, NA, NA),
diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
index d3fa8cf..abb12a5 100644
--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
+++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
@@ -1334,6 +1334,7 @@ static struct irq_domain *stm32_pctrl_get_irq_domain(struct device_node *np)
return ERR_PTR(-ENXIO);
domain = irq_find_host(parent);
+ of_node_put(parent);
if (!domain)
/* domain not registered yet */
return ERR_PTR(-EPROBE_DEFER);
diff --git a/drivers/platform/chrome/cros_ec_chardev.c b/drivers/platform/chrome/cros_ec_chardev.c
index 0de7c25..d6de5a2 100644
--- a/drivers/platform/chrome/cros_ec_chardev.c
+++ b/drivers/platform/chrome/cros_ec_chardev.c
@@ -284,7 +284,7 @@ static long cros_ec_chardev_ioctl_xcmd(struct cros_ec_dev *ec, void __user *arg)
u_cmd.insize > EC_MAX_MSG_BYTES)
return -EINVAL;
- s_cmd = kmalloc(sizeof(*s_cmd) + max(u_cmd.outsize, u_cmd.insize),
+ s_cmd = kzalloc(sizeof(*s_cmd) + max(u_cmd.outsize, u_cmd.insize),
GFP_KERNEL);
if (!s_cmd)
return -ENOMEM;
diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig
index f1ff003..6118682 100644
--- a/drivers/platform/x86/Kconfig
+++ b/drivers/platform/x86/Kconfig
@@ -171,6 +171,7 @@
config AMD_PMC
tristate "AMD SoC PMC driver"
depends on ACPI && PCI
+ select SERIO
help
The driver provides support for AMD Power Management Controller
primarily responsible for S2Idle transactions that are driven from
@@ -942,7 +943,8 @@
config MLX_PLATFORM
tristate "Mellanox Technologies platform support"
- depends on I2C && REGMAP
+ depends on I2C
+ select REGMAP
help
This option enables system support for the Mellanox Technologies
platform. The Mellanox systems provide data center networking
diff --git a/drivers/platform/x86/amd-pmc.c b/drivers/platform/x86/amd-pmc.c
index d3efd51..83fea28 100644
--- a/drivers/platform/x86/amd-pmc.c
+++ b/drivers/platform/x86/amd-pmc.c
@@ -20,6 +20,7 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
+#include <linux/serio.h>
#include <linux/suspend.h>
#include <linux/seq_file.h>
#include <linux/uaccess.h>
@@ -29,6 +30,10 @@
#define AMD_PMC_REGISTER_RESPONSE 0x980
#define AMD_PMC_REGISTER_ARGUMENT 0x9BC
+/* PMC Scratch Registers */
+#define AMD_PMC_SCRATCH_REG_CZN 0x94
+#define AMD_PMC_SCRATCH_REG_YC 0xD14
+
/* Base address of SMU for mapping physical address to virtual address */
#define AMD_PMC_SMU_INDEX_ADDRESS 0xB8
#define AMD_PMC_SMU_INDEX_DATA 0xBC
@@ -110,6 +115,11 @@ struct amd_pmc_dev {
u32 base_addr;
u32 cpu_id;
u32 active_ips;
+/* SMU version information */
+ u8 smu_program;
+ u8 major;
+ u8 minor;
+ u8 rev;
struct device *dev;
struct mutex lock; /* generic mutex lock */
#if IS_ENABLED(CONFIG_DEBUG_FS)
@@ -147,6 +157,51 @@ struct smu_metrics {
u64 timecondition_notmet_totaltime[SOC_SUBSYSTEM_IP_MAX];
} __packed;
+static int amd_pmc_get_smu_version(struct amd_pmc_dev *dev)
+{
+ int rc;
+ u32 val;
+
+ rc = amd_pmc_send_cmd(dev, 0, &val, SMU_MSG_GETSMUVERSION, 1);
+ if (rc)
+ return rc;
+
+ dev->smu_program = (val >> 24) & GENMASK(7, 0);
+ dev->major = (val >> 16) & GENMASK(7, 0);
+ dev->minor = (val >> 8) & GENMASK(7, 0);
+ dev->rev = (val >> 0) & GENMASK(7, 0);
+
+ dev_dbg(dev->dev, "SMU program %u version is %u.%u.%u\n",
+ dev->smu_program, dev->major, dev->minor, dev->rev);
+
+ return 0;
+}
+
+static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev,
+ struct seq_file *s)
+{
+ u32 val;
+
+ switch (pdev->cpu_id) {
+ case AMD_CPU_ID_CZN:
+ val = amd_pmc_reg_read(pdev, AMD_PMC_SCRATCH_REG_CZN);
+ break;
+ case AMD_CPU_ID_YC:
+ val = amd_pmc_reg_read(pdev, AMD_PMC_SCRATCH_REG_YC);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (dev)
+ dev_dbg(pdev->dev, "SMU idlemask s0i3: 0x%x\n", val);
+
+ if (s)
+ seq_printf(s, "SMU idlemask : 0x%x\n", val);
+
+ return 0;
+}
+
#ifdef CONFIG_DEBUG_FS
static int smu_fw_info_show(struct seq_file *s, void *unused)
{
@@ -201,6 +256,23 @@ static int s0ix_stats_show(struct seq_file *s, void *unused)
}
DEFINE_SHOW_ATTRIBUTE(s0ix_stats);
+static int amd_pmc_idlemask_show(struct seq_file *s, void *unused)
+{
+ struct amd_pmc_dev *dev = s->private;
+ int rc;
+
+ if (dev->major > 56 || (dev->major >= 55 && dev->minor >= 37)) {
+ rc = amd_pmc_idlemask_read(dev, NULL, s);
+ if (rc)
+ return rc;
+ } else {
+ seq_puts(s, "Unsupported SMU version for Idlemask\n");
+ }
+
+ return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(amd_pmc_idlemask);
+
static void amd_pmc_dbgfs_unregister(struct amd_pmc_dev *dev)
{
debugfs_remove_recursive(dev->dbgfs_dir);
@@ -213,6 +285,8 @@ static void amd_pmc_dbgfs_register(struct amd_pmc_dev *dev)
&smu_fw_info_fops);
debugfs_create_file("s0ix_stats", 0644, dev->dbgfs_dir, dev,
&s0ix_stats_fops);
+ debugfs_create_file("amd_pmc_idlemask", 0644, dev->dbgfs_dir, dev,
+ &amd_pmc_idlemask_fops);
}
#else
static inline void amd_pmc_dbgfs_register(struct amd_pmc_dev *dev)
@@ -339,16 +413,54 @@ static int amd_pmc_get_os_hint(struct amd_pmc_dev *dev)
return -EINVAL;
}
+static int amd_pmc_czn_wa_irq1(struct amd_pmc_dev *pdev)
+{
+ struct device *d;
+ int rc;
+
+ if (!pdev->major) {
+ rc = amd_pmc_get_smu_version(pdev);
+ if (rc)
+ return rc;
+ }
+
+ if (pdev->major > 64 || (pdev->major == 64 && pdev->minor > 65))
+ return 0;
+
+ d = bus_find_device_by_name(&serio_bus, NULL, "serio0");
+ if (!d)
+ return 0;
+ if (device_may_wakeup(d)) {
+ dev_info_once(d, "Disabling IRQ1 wakeup source to avoid platform firmware bug\n");
+ disable_irq_wake(1);
+ device_set_wakeup_enable(d, false);
+ }
+ put_device(d);
+
+ return 0;
+}
+
static int __maybe_unused amd_pmc_suspend(struct device *dev)
{
struct amd_pmc_dev *pdev = dev_get_drvdata(dev);
int rc;
u8 msg;
+ if (pdev->cpu_id == AMD_CPU_ID_CZN) {
+ int rc = amd_pmc_czn_wa_irq1(pdev);
+
+ if (rc) {
+ dev_err(pdev->dev, "failed to adjust keyboard wakeup: %d\n", rc);
+ return rc;
+ }
+ }
+
/* Reset and Start SMU logging - to monitor the s0i3 stats */
amd_pmc_send_cmd(pdev, 0, NULL, SMU_MSG_LOG_RESET, 0);
amd_pmc_send_cmd(pdev, 0, NULL, SMU_MSG_LOG_START, 0);
+ /* Dump the IdleMask before we send hint to SMU */
+ amd_pmc_idlemask_read(pdev, dev, NULL);
msg = amd_pmc_get_os_hint(pdev);
rc = amd_pmc_send_cmd(pdev, 1, NULL, msg, 0);
if (rc)
@@ -371,6 +483,9 @@ static int __maybe_unused amd_pmc_resume(struct device *dev)
if (rc)
dev_err(pdev->dev, "resume failed\n");
+ /* Dump the IdleMask to see the blockers */
+ amd_pmc_idlemask_read(pdev, dev, NULL);
+
return 0;
}
@@ -458,6 +573,7 @@ static int amd_pmc_probe(struct platform_device *pdev)
if (err)
dev_err(dev->dev, "SMU debugging info not supported on this platform\n");
+ amd_pmc_get_smu_version(dev);
platform_set_drvdata(pdev, dev);
amd_pmc_dbgfs_register(dev);
return 0;
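
For reference, the SMU version word read above packs four byte-wide fields: program in bits 31:24, major in 23:16, minor in 15:8 and rev in 7:0. A stand-alone sketch of the decode, using a made-up raw value:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t val = 0x00403742;              /* hypothetical raw response */
        uint8_t program = (val >> 24) & 0xff;
        uint8_t major   = (val >> 16) & 0xff;
        uint8_t minor   = (val >>  8) & 0xff;
        uint8_t rev     =  val        & 0xff;

        /* prints: SMU program 0 version 64.55.66 */
        printf("SMU program %u version %u.%u.%u\n", program, major, minor, rev);
        return 0;
    }
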
diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
index a81dc4b..4d7327b 100644
--- a/drivers/platform/x86/asus-nb-wmi.c
+++ b/drivers/platform/x86/asus-nb-wmi.c
@@ -521,6 +521,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
{ KE_KEY, 0x30, { KEY_VOLUMEUP } },
{ KE_KEY, 0x31, { KEY_VOLUMEDOWN } },
{ KE_KEY, 0x32, { KEY_MUTE } },
+ { KE_KEY, 0x33, { KEY_SCREENLOCK } },
{ KE_KEY, 0x35, { KEY_SCREENLOCK } },
{ KE_KEY, 0x40, { KEY_PREVIOUSSONG } },
{ KE_KEY, 0x41, { KEY_NEXTSONG } },
diff --git a/drivers/platform/x86/dell/dell-wmi-base.c b/drivers/platform/x86/dell/dell-wmi-base.c
index 089c125..b83d6fa 100644
--- a/drivers/platform/x86/dell/dell-wmi-base.c
+++ b/drivers/platform/x86/dell/dell-wmi-base.c
@@ -260,6 +260,9 @@ static const struct key_entry dell_wmi_keymap_type_0010[] = {
{ KE_KEY, 0x57, { KEY_BRIGHTNESSDOWN } },
{ KE_KEY, 0x58, { KEY_BRIGHTNESSUP } },
+ /*Speaker Mute*/
+ { KE_KEY, 0x109, { KEY_MUTE} },
+
/* Mic mute */
{ KE_KEY, 0x150, { KEY_MICMUTE } },
diff --git a/drivers/platform/x86/gigabyte-wmi.c b/drivers/platform/x86/gigabyte-wmi.c
index ebd15c1..0163e91 100644
--- a/drivers/platform/x86/gigabyte-wmi.c
+++ b/drivers/platform/x86/gigabyte-wmi.c
@@ -141,6 +141,7 @@ static u8 gigabyte_wmi_detect_sensor_usability(struct wmi_device *wdev)
static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M DS3H-CF"),
+ DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M DS3H WIFI-CF"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B450M S2H V2"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE AX V2"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550 AORUS ELITE"),
diff --git a/drivers/platform/x86/intel/int3472/Makefile b/drivers/platform/x86/intel/int3472/Makefile
index 2362e04..771e720 100644
--- a/drivers/platform/x86/intel/int3472/Makefile
+++ b/drivers/platform/x86/intel/int3472/Makefile
@@ -1,5 +1,4 @@
-obj-$(CONFIG_INTEL_SKL_INT3472) += intel_skl_int3472.o
-intel_skl_int3472-y := intel_skl_int3472_common.o \
- intel_skl_int3472_discrete.o \
- intel_skl_int3472_tps68470.o \
- intel_skl_int3472_clk_and_regulator.o
+obj-$(CONFIG_INTEL_SKL_INT3472) += intel_skl_int3472_discrete.o \
+ intel_skl_int3472_tps68470.o
+intel_skl_int3472_discrete-y := discrete.o clk_and_regulator.o common.o
+intel_skl_int3472_tps68470-y := tps68470.o common.o
diff --git a/drivers/platform/x86/intel/int3472/intel_skl_int3472_clk_and_regulator.c b/drivers/platform/x86/intel/int3472/clk_and_regulator.c
similarity index 97%
rename from drivers/platform/x86/intel/int3472/intel_skl_int3472_clk_and_regulator.c
rename to drivers/platform/x86/intel/int3472/clk_and_regulator.c
index 1700e75..28353ad 100644
--- a/drivers/platform/x86/intel/int3472/intel_skl_int3472_clk_and_regulator.c
+++ b/drivers/platform/x86/intel/int3472/clk_and_regulator.c
@@ -9,7 +9,7 @@
#include <linux/regulator/driver.h>
#include <linux/slab.h>
-#include "intel_skl_int3472_common.h"
+#include "common.h"
/*
* The regulators have to have .ops to be valid, but the only ops we actually
@@ -181,6 +181,9 @@ int skl_int3472_register_regulator(struct int3472_discrete_device *int3472,
return PTR_ERR(int3472->regulator.gpio);
}
+ /* Ensure the pin is in output mode and non-active state */
+ gpiod_direction_output(int3472->regulator.gpio, 0);
+
cfg.dev = &int3472->adev->dev;
cfg.init_data = &init_data;
cfg.ena_gpiod = int3472->regulator.gpio;
diff --git a/drivers/platform/x86/intel/int3472/common.c b/drivers/platform/x86/intel/int3472/common.c
new file mode 100644
index 0000000..350655a9
--- /dev/null
+++ b/drivers/platform/x86/intel/int3472/common.c
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Author: Dan Scally <djrscally@gmail.com> */
+
+#include <linux/acpi.h>
+#include <linux/slab.h>
+
+#include "common.h"
+
+union acpi_object *skl_int3472_get_acpi_buffer(struct acpi_device *adev, char *id)
+{
+ struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+ acpi_handle handle = adev->handle;
+ union acpi_object *obj;
+ acpi_status status;
+
+ status = acpi_evaluate_object(handle, id, NULL, &buffer);
+ if (ACPI_FAILURE(status))
+ return ERR_PTR(-ENODEV);
+
+ obj = buffer.pointer;
+ if (!obj)
+ return ERR_PTR(-ENODEV);
+
+ if (obj->type != ACPI_TYPE_BUFFER) {
+ acpi_handle_err(handle, "%s object is not an ACPI buffer\n", id);
+ kfree(obj);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return obj;
+}
+
+int skl_int3472_fill_cldb(struct acpi_device *adev, struct int3472_cldb *cldb)
+{
+ union acpi_object *obj;
+ int ret;
+
+ obj = skl_int3472_get_acpi_buffer(adev, "CLDB");
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
+
+ if (obj->buffer.length > sizeof(*cldb)) {
+ acpi_handle_err(adev->handle, "The CLDB buffer is too large\n");
+ ret = -EINVAL;
+ goto out_free_obj;
+ }
+
+ memcpy(cldb, obj->buffer.pointer, obj->buffer.length);
+ ret = 0;
+
+out_free_obj:
+ kfree(obj);
+ return ret;
+}
diff --git a/drivers/platform/x86/intel/int3472/intel_skl_int3472_common.h b/drivers/platform/x86/intel/int3472/common.h
similarity index 94%
rename from drivers/platform/x86/intel/int3472/intel_skl_int3472_common.h
rename to drivers/platform/x86/intel/int3472/common.h
index 714fde7..d14944e 100644
--- a/drivers/platform/x86/intel/int3472/intel_skl_int3472_common.h
+++ b/drivers/platform/x86/intel/int3472/common.h
@@ -105,9 +105,6 @@ struct int3472_discrete_device {
struct gpiod_lookup_table gpios;
};
-int skl_int3472_discrete_probe(struct platform_device *pdev);
-int skl_int3472_discrete_remove(struct platform_device *pdev);
-int skl_int3472_tps68470_probe(struct i2c_client *client);
union acpi_object *skl_int3472_get_acpi_buffer(struct acpi_device *adev,
char *id);
int skl_int3472_fill_cldb(struct acpi_device *adev, struct int3472_cldb *cldb);
diff --git a/drivers/platform/x86/intel/int3472/intel_skl_int3472_discrete.c b/drivers/platform/x86/intel/int3472/discrete.c
similarity index 91%
rename from drivers/platform/x86/intel/int3472/intel_skl_int3472_discrete.c
rename to drivers/platform/x86/intel/int3472/discrete.c
index e59d79c..401fa8f 100644
--- a/drivers/platform/x86/intel/int3472/intel_skl_int3472_discrete.c
+++ b/drivers/platform/x86/intel/int3472/discrete.c
@@ -14,7 +14,7 @@
#include <linux/platform_device.h>
#include <linux/uuid.h>
-#include "intel_skl_int3472_common.h"
+#include "common.h"
/*
* 79234640-9e10-4fea-a5c1-b5aa8b19756f
@@ -169,6 +169,8 @@ static int skl_int3472_map_gpio_to_clk(struct int3472_discrete_device *int3472,
return (PTR_ERR(gpio));
int3472->clock.ena_gpio = gpio;
+ /* Ensure the pin is in output mode and non-active state */
+ gpiod_direction_output(int3472->clock.ena_gpio, 0);
break;
case INT3472_GPIO_TYPE_PRIVACY_LED:
gpio = acpi_get_and_request_gpiod(path, pin, "int3472,privacy-led");
@@ -176,6 +178,8 @@ static int skl_int3472_map_gpio_to_clk(struct int3472_discrete_device *int3472,
return (PTR_ERR(gpio));
int3472->clock.led_gpio = gpio;
+ /* Ensure the pin is in output mode and non-active state */
+ gpiod_direction_output(int3472->clock.led_gpio, 0);
break;
default:
dev_err(int3472->dev, "Invalid GPIO type 0x%02x for clock\n", type);
@@ -332,7 +336,9 @@ static int skl_int3472_parse_crs(struct int3472_discrete_device *int3472)
return 0;
}
-int skl_int3472_discrete_probe(struct platform_device *pdev)
+static int skl_int3472_discrete_remove(struct platform_device *pdev);
+
+static int skl_int3472_discrete_probe(struct platform_device *pdev)
{
struct acpi_device *adev = ACPI_COMPANION(&pdev->dev);
struct int3472_discrete_device *int3472;
@@ -395,7 +401,7 @@ int skl_int3472_discrete_probe(struct platform_device *pdev)
return ret;
}
-int skl_int3472_discrete_remove(struct platform_device *pdev)
+static int skl_int3472_discrete_remove(struct platform_device *pdev)
{
struct int3472_discrete_device *int3472 = platform_get_drvdata(pdev);
@@ -411,3 +417,23 @@ int skl_int3472_discrete_remove(struct platform_device *pdev)
return 0;
}
+
+static const struct acpi_device_id int3472_device_id[] = {
+ { "INT3472", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(acpi, int3472_device_id);
+
+static struct platform_driver int3472_discrete = {
+ .driver = {
+ .name = "int3472-discrete",
+ .acpi_match_table = int3472_device_id,
+ },
+ .probe = skl_int3472_discrete_probe,
+ .remove = skl_int3472_discrete_remove,
+};
+module_platform_driver(int3472_discrete);
+
+MODULE_DESCRIPTION("Intel SkyLake INT3472 ACPI Discrete Device Driver");
+MODULE_AUTHOR("Daniel Scally <djrscally@gmail.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/platform/x86/intel/int3472/intel_skl_int3472_common.c b/drivers/platform/x86/intel/int3472/intel_skl_int3472_common.c
deleted file mode 100644
index 497e74f..0000000
--- a/drivers/platform/x86/intel/int3472/intel_skl_int3472_common.c
+++ /dev/null
@@ -1,106 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Author: Dan Scally <djrscally@gmail.com> */
-
-#include <linux/acpi.h>
-#include <linux/i2c.h>
-#include <linux/platform_device.h>
-#include <linux/slab.h>
-
-#include "intel_skl_int3472_common.h"
-
-union acpi_object *skl_int3472_get_acpi_buffer(struct acpi_device *adev, char *id)
-{
- struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
- acpi_handle handle = adev->handle;
- union acpi_object *obj;
- acpi_status status;
-
- status = acpi_evaluate_object(handle, id, NULL, &buffer);
- if (ACPI_FAILURE(status))
- return ERR_PTR(-ENODEV);
-
- obj = buffer.pointer;
- if (!obj)
- return ERR_PTR(-ENODEV);
-
- if (obj->type != ACPI_TYPE_BUFFER) {
- acpi_handle_err(handle, "%s object is not an ACPI buffer\n", id);
- kfree(obj);
- return ERR_PTR(-EINVAL);
- }
-
- return obj;
-}
-
-int skl_int3472_fill_cldb(struct acpi_device *adev, struct int3472_cldb *cldb)
-{
- union acpi_object *obj;
- int ret;
-
- obj = skl_int3472_get_acpi_buffer(adev, "CLDB");
- if (IS_ERR(obj))
- return PTR_ERR(obj);
-
- if (obj->buffer.length > sizeof(*cldb)) {
- acpi_handle_err(adev->handle, "The CLDB buffer is too large\n");
- ret = -EINVAL;
- goto out_free_obj;
- }
-
- memcpy(cldb, obj->buffer.pointer, obj->buffer.length);
- ret = 0;
-
-out_free_obj:
- kfree(obj);
- return ret;
-}
-
-static const struct acpi_device_id int3472_device_id[] = {
- { "INT3472", 0 },
- { }
-};
-MODULE_DEVICE_TABLE(acpi, int3472_device_id);
-
-static struct platform_driver int3472_discrete = {
- .driver = {
- .name = "int3472-discrete",
- .acpi_match_table = int3472_device_id,
- },
- .probe = skl_int3472_discrete_probe,
- .remove = skl_int3472_discrete_remove,
-};
-
-static struct i2c_driver int3472_tps68470 = {
- .driver = {
- .name = "int3472-tps68470",
- .acpi_match_table = int3472_device_id,
- },
- .probe_new = skl_int3472_tps68470_probe,
-};
-
-static int skl_int3472_init(void)
-{
- int ret;
-
- ret = platform_driver_register(&int3472_discrete);
- if (ret)
- return ret;
-
- ret = i2c_register_driver(THIS_MODULE, &int3472_tps68470);
- if (ret)
- platform_driver_unregister(&int3472_discrete);
-
- return ret;
-}
-module_init(skl_int3472_init);
-
-static void skl_int3472_exit(void)
-{
- platform_driver_unregister(&int3472_discrete);
- i2c_del_driver(&int3472_tps68470);
-}
-module_exit(skl_int3472_exit);
-
-MODULE_DESCRIPTION("Intel SkyLake INT3472 ACPI Device Driver");
-MODULE_AUTHOR("Daniel Scally <djrscally@gmail.com>");
-MODULE_LICENSE("GPL v2");
diff --git a/drivers/platform/x86/intel/int3472/intel_skl_int3472_tps68470.c b/drivers/platform/x86/intel/int3472/tps68470.c
similarity index 85%
rename from drivers/platform/x86/intel/int3472/intel_skl_int3472_tps68470.c
rename to drivers/platform/x86/intel/int3472/tps68470.c
index c05b4cf..fd3bef4 100644
--- a/drivers/platform/x86/intel/int3472/intel_skl_int3472_tps68470.c
+++ b/drivers/platform/x86/intel/int3472/tps68470.c
@@ -7,7 +7,7 @@
#include <linux/platform_device.h>
#include <linux/regmap.h>
-#include "intel_skl_int3472_common.h"
+#include "common.h"
#define DESIGNED_FOR_CHROMEOS 1
#define DESIGNED_FOR_WINDOWS 2
@@ -95,7 +95,7 @@ static int skl_int3472_tps68470_calc_type(struct acpi_device *adev)
return DESIGNED_FOR_WINDOWS;
}
-int skl_int3472_tps68470_probe(struct i2c_client *client)
+static int skl_int3472_tps68470_probe(struct i2c_client *client)
{
struct acpi_device *adev = ACPI_COMPANION(&client->dev);
struct regmap *regmap;
@@ -135,3 +135,22 @@ int skl_int3472_tps68470_probe(struct i2c_client *client)
return ret;
}
+
+static const struct acpi_device_id int3472_device_id[] = {
+ { "INT3472", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(acpi, int3472_device_id);
+
+static struct i2c_driver int3472_tps68470 = {
+ .driver = {
+ .name = "int3472-tps68470",
+ .acpi_match_table = int3472_device_id,
+ },
+ .probe_new = skl_int3472_tps68470_probe,
+};
+module_i2c_driver(int3472_tps68470);
+
+MODULE_DESCRIPTION("Intel SkyLake INT3472 ACPI TPS68470 Device Driver");
+MODULE_AUTHOR("Daniel Scally <djrscally@gmail.com>");
+MODULE_LICENSE("GPL v2");
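
The split above leaves each INT3472 driver in its own module, so the hand-rolled init/exit glue is replaced by module_platform_driver()/module_i2c_driver(). A minimal sketch of what those helpers stand for (hypothetical driver, not part of the patch):

    #include <linux/module.h>
    #include <linux/platform_device.h>

    static int demo_probe(struct platform_device *pdev)
    {
        return 0;
    }

    static struct platform_driver demo_driver = {
        .driver = { .name = "demo" },
        .probe  = demo_probe,
    };
    /* Expands to the module __init/__exit that only (un)register demo_driver. */
    module_platform_driver(demo_driver);

    MODULE_DESCRIPTION("Demo platform driver skeleton");
    MODULE_LICENSE("GPL");
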
diff --git a/drivers/platform/x86/intel/pmc/core.c b/drivers/platform/x86/intel/pmc/core.c
index ac19fcc..b8d67bc 100644
--- a/drivers/platform/x86/intel/pmc/core.c
+++ b/drivers/platform/x86/intel/pmc/core.c
@@ -958,7 +958,18 @@ static inline void pmc_core_reg_write(struct pmc_dev *pmcdev, int reg_offset,
static inline u64 pmc_core_adjust_slp_s0_step(struct pmc_dev *pmcdev, u32 value)
{
- return (u64)value * pmcdev->map->slp_s0_res_counter_step;
+ /*
+ * ADL PCH does not have the SLP_S0 counter and LPM Residency counters are
+ * used as a workaround which uses 30.5 usec tick. All other client
+ * programs have the legacy SLP_S0 residency counter that is using the 122
+ * usec tick.
+ */
+ const int lpm_adj_x2 = pmcdev->map->lpm_res_counter_step_x2;
+
+ if (pmcdev->map == &adl_reg_map)
+ return (u64)value * GET_X2_COUNTER((u64)lpm_adj_x2);
+ else
+ return (u64)value * pmcdev->map->slp_s0_res_counter_step;
}
static int set_etr3(struct pmc_dev *pmcdev)
diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c
index c4d9c45..76f0d04 100644
--- a/drivers/platform/x86/think-lmi.c
+++ b/drivers/platform/x86/think-lmi.c
@@ -504,7 +504,7 @@ static ssize_t display_name_show(struct kobject *kobj, struct kobj_attribute *at
static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf)
{
struct tlmi_attr_setting *setting = to_tlmi_attr_setting(kobj);
- char *item, *value;
+ char *item, *value, *p;
int ret;
ret = tlmi_setting(setting->index, &item, LENOVO_BIOS_SETTING_GUID);
@@ -514,10 +514,15 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a
/* validate and split from `item,value` -> `value` */
value = strpbrk(item, ",");
if (!value || value == item || !strlen(value + 1))
- return -EINVAL;
-
- ret = sysfs_emit(buf, "%s\n", value + 1);
+ ret = -EINVAL;
+ else {
+ /* On Workstations remove the Options part after the value */
+ p = strchrnul(value, ';');
+ *p = '\0';
+ ret = sysfs_emit(buf, "%s\n", value + 1);
+ }
kfree(item);
+
return ret;
}
@@ -525,12 +530,23 @@ static ssize_t possible_values_show(struct kobject *kobj, struct kobj_attribute
{
struct tlmi_attr_setting *setting = to_tlmi_attr_setting(kobj);
- if (!tlmi_priv.can_get_bios_selections)
- return -EOPNOTSUPP;
-
return sysfs_emit(buf, "%s\n", setting->possible_values);
}
+static ssize_t type_show(struct kobject *kobj, struct kobj_attribute *attr,
+ char *buf)
+{
+ struct tlmi_attr_setting *setting = to_tlmi_attr_setting(kobj);
+
+ if (setting->possible_values) {
+ /* Figure out what setting type is as BIOS does not return this */
+ if (strchr(setting->possible_values, ';'))
+ return sysfs_emit(buf, "enumeration\n");
+ }
+ /* Anything else is going to be a string */
+ return sysfs_emit(buf, "string\n");
+}
+
static ssize_t current_value_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
@@ -601,14 +617,30 @@ static struct kobj_attribute attr_possible_values = __ATTR_RO(possible_values);
static struct kobj_attribute attr_current_val = __ATTR_RW_MODE(current_value, 0600);
+static struct kobj_attribute attr_type = __ATTR_RO(type);
+
+static umode_t attr_is_visible(struct kobject *kobj,
+ struct attribute *attr, int n)
+{
+ struct tlmi_attr_setting *setting = to_tlmi_attr_setting(kobj);
+
+ /* We don't want to display possible_values attributes if not available */
+ if ((attr == &attr_possible_values.attr) && (!setting->possible_values))
+ return 0;
+
+ return attr->mode;
+}
+
static struct attribute *tlmi_attrs[] = {
&attr_displ_name.attr,
&attr_current_val.attr,
&attr_possible_values.attr,
+ &attr_type.attr,
NULL
};
static const struct attribute_group tlmi_attr_group = {
+ .is_visible = attr_is_visible,
.attrs = tlmi_attrs,
};
@@ -916,7 +948,35 @@ static int tlmi_analyze(void)
if (ret || !setting->possible_values)
pr_info("Error retrieving possible values for %d : %s\n",
i, setting->display_name);
+ } else {
+ /*
+ * Older Thinkstations don't support the bios_selections API.
+ * Instead they store this as a [Optional:Option1,Option2] section of the
+ * name string.
+ * Try and pull that out if it's available.
+ */
+ char *optitem, *optstart, *optend;
+
+ if (!tlmi_setting(setting->index, &optitem, LENOVO_BIOS_SETTING_GUID)) {
+ optstart = strstr(optitem, "[Optional:");
+ if (optstart) {
+ optstart += strlen("[Optional:");
+ optend = strstr(optstart, "]");
+ if (optend)
+ setting->possible_values =
+ kstrndup(optstart, optend - optstart,
+ GFP_KERNEL);
+ }
+ kfree(optitem);
+ }
}
+ /*
+ * firmware-attributes requires that possible_values are separated by ';' but
+ * Lenovo FW uses ','. Replace appropriately.
+ */
+ if (setting->possible_values)
+ strreplace(setting->possible_values, ',', ';');
+
kobject_init(&setting->kobj, &tlmi_attr_setting_ktype);
tlmi_priv.setting[i] = setting;
kfree(item);
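
The two string fixups above are easy to misread: strchrnul() returns a pointer to the first ';' or, when there is none, to the terminating NUL, so writing '\0' there either trims the Options tail or is a harmless no-op; strreplace() then converts the ',' separators used by Lenovo firmware into the ';' expected by the firmware-attributes class. A sketch with hypothetical arguments:

    #include <linux/string.h>

    static void demo_tlmi_fixup(char *value, char *possible_values)
    {
        char *p = strchrnul(value, ';');    /* points at ';' or the trailing NUL */

        *p = '\0';                          /* drop the Options tail, if any */

        if (possible_values)
            strreplace(possible_values, ',', ';');
    }
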
diff --git a/drivers/platform/x86/touchscreen_dmi.c b/drivers/platform/x86/touchscreen_dmi.c
index 3d07902..69ba2c5 100644
--- a/drivers/platform/x86/touchscreen_dmi.c
+++ b/drivers/platform/x86/touchscreen_dmi.c
@@ -255,6 +255,23 @@ static const struct ts_dmi_data connect_tablet9_data = {
.properties = connect_tablet9_props,
};
+static const struct property_entry csl_panther_tab_hd_props[] = {
+ PROPERTY_ENTRY_U32("touchscreen-min-x", 1),
+ PROPERTY_ENTRY_U32("touchscreen-min-y", 20),
+ PROPERTY_ENTRY_U32("touchscreen-size-x", 1980),
+ PROPERTY_ENTRY_U32("touchscreen-size-y", 1526),
+ PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"),
+ PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
+ PROPERTY_ENTRY_STRING("firmware-name", "gsl1680-csl-panther-tab-hd.fw"),
+ PROPERTY_ENTRY_U32("silead,max-fingers", 10),
+ { }
+};
+
+static const struct ts_dmi_data csl_panther_tab_hd_data = {
+ .acpi_name = "MSSL1680:00",
+ .properties = csl_panther_tab_hd_props,
+};
+
static const struct property_entry cube_iwork8_air_props[] = {
PROPERTY_ENTRY_U32("touchscreen-min-x", 1),
PROPERTY_ENTRY_U32("touchscreen-min-y", 3),
@@ -1057,6 +1074,15 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
},
},
{
+ /* Chuwi Vi8 (CWI501) */
+ .driver_data = (void *)&chuwi_vi8_data,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "i86"),
+ DMI_MATCH(DMI_BIOS_VERSION, "CHUWI.W86JLBNR01"),
+ },
+ },
+ {
/* Chuwi Vi8 (CWI506) */
.driver_data = (void *)&chuwi_vi8_data,
.matches = {
@@ -1101,6 +1127,14 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
},
},
{
+ /* CSL Panther Tab HD */
+ .driver_data = (void *)&csl_panther_tab_hd_data,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "CSL Computer GmbH & Co. KG"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "CSL Panther Tab HD"),
+ },
+ },
+ {
/* CUBE iwork8 Air */
.driver_data = (void *)&cube_iwork8_air_data,
.matches = {
diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
index 16c4876..ebb5ba7 100644
--- a/drivers/power/supply/bq24190_charger.c
+++ b/drivers/power/supply/bq24190_charger.c
@@ -446,11 +446,9 @@ static ssize_t bq24190_sysfs_show(struct device *dev,
if (!info)
return -EINVAL;
- ret = pm_runtime_get_sync(bdi->dev);
- if (ret < 0) {
- pm_runtime_put_noidle(bdi->dev);
+ ret = pm_runtime_resume_and_get(bdi->dev);
+ if (ret < 0)
return ret;
- }
ret = bq24190_read_mask(bdi, info->reg, info->mask, info->shift, &v);
if (ret)
@@ -481,11 +479,9 @@ static ssize_t bq24190_sysfs_store(struct device *dev,
if (ret < 0)
return ret;
- ret = pm_runtime_get_sync(bdi->dev);
- if (ret < 0) {
- pm_runtime_put_noidle(bdi->dev);
+ ret = pm_runtime_resume_and_get(bdi->dev);
+ if (ret < 0)
return ret;
- }
ret = bq24190_write_mask(bdi, info->reg, info->mask, info->shift, v);
if (ret)
@@ -504,10 +500,9 @@ static int bq24190_set_charge_mode(struct regulator_dev *dev, u8 val)
struct bq24190_dev_info *bdi = rdev_get_drvdata(dev);
int ret;
- ret = pm_runtime_get_sync(bdi->dev);
+ ret = pm_runtime_resume_and_get(bdi->dev);
if (ret < 0) {
dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", ret);
- pm_runtime_put_noidle(bdi->dev);
return ret;
}
@@ -537,10 +532,9 @@ static int bq24190_vbus_is_enabled(struct regulator_dev *dev)
int ret;
u8 val;
- ret = pm_runtime_get_sync(bdi->dev);
+ ret = pm_runtime_resume_and_get(bdi->dev);
if (ret < 0) {
dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", ret);
- pm_runtime_put_noidle(bdi->dev);
return ret;
}
@@ -1081,11 +1075,9 @@ static int bq24190_charger_get_property(struct power_supply *psy,
dev_dbg(bdi->dev, "prop: %d\n", psp);
- ret = pm_runtime_get_sync(bdi->dev);
- if (ret < 0) {
- pm_runtime_put_noidle(bdi->dev);
+ ret = pm_runtime_resume_and_get(bdi->dev);
+ if (ret < 0)
return ret;
- }
switch (psp) {
case POWER_SUPPLY_PROP_CHARGE_TYPE:
@@ -1155,11 +1147,9 @@ static int bq24190_charger_set_property(struct power_supply *psy,
dev_dbg(bdi->dev, "prop: %d\n", psp);
- ret = pm_runtime_get_sync(bdi->dev);
- if (ret < 0) {
- pm_runtime_put_noidle(bdi->dev);
+ ret = pm_runtime_resume_and_get(bdi->dev);
+ if (ret < 0)
return ret;
- }
switch (psp) {
case POWER_SUPPLY_PROP_ONLINE:
@@ -1418,11 +1408,9 @@ static int bq24190_battery_get_property(struct power_supply *psy,
dev_warn(bdi->dev, "warning: /sys/class/power_supply/bq24190-battery is deprecated\n");
dev_dbg(bdi->dev, "prop: %d\n", psp);
- ret = pm_runtime_get_sync(bdi->dev);
- if (ret < 0) {
- pm_runtime_put_noidle(bdi->dev);
+ ret = pm_runtime_resume_and_get(bdi->dev);
+ if (ret < 0)
return ret;
- }
switch (psp) {
case POWER_SUPPLY_PROP_STATUS:
@@ -1466,11 +1454,9 @@ static int bq24190_battery_set_property(struct power_supply *psy,
dev_warn(bdi->dev, "warning: /sys/class/power_supply/bq24190-battery is deprecated\n");
dev_dbg(bdi->dev, "prop: %d\n", psp);
- ret = pm_runtime_get_sync(bdi->dev);
- if (ret < 0) {
- pm_runtime_put_noidle(bdi->dev);
+ ret = pm_runtime_resume_and_get(bdi->dev);
+ if (ret < 0)
return ret;
- }
switch (psp) {
case POWER_SUPPLY_PROP_ONLINE:
@@ -1624,10 +1610,9 @@ static irqreturn_t bq24190_irq_handler_thread(int irq, void *data)
int error;
bdi->irq_event = true;
- error = pm_runtime_get_sync(bdi->dev);
+ error = pm_runtime_resume_and_get(bdi->dev);
if (error < 0) {
dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
- pm_runtime_put_noidle(bdi->dev);
return IRQ_NONE;
}
bq24190_check_status(bdi);
@@ -1847,11 +1832,10 @@ static int bq24190_remove(struct i2c_client *client)
struct bq24190_dev_info *bdi = i2c_get_clientdata(client);
int error;
- error = pm_runtime_get_sync(bdi->dev);
- if (error < 0) {
+ cancel_delayed_work_sync(&bdi->input_current_limit_work);
+ error = pm_runtime_resume_and_get(bdi->dev);
+ if (error < 0)
dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
- pm_runtime_put_noidle(bdi->dev);
- }
bq24190_register_reset(bdi);
if (bdi->battery)
@@ -1900,11 +1884,9 @@ static __maybe_unused int bq24190_pm_suspend(struct device *dev)
struct bq24190_dev_info *bdi = i2c_get_clientdata(client);
int error;
- error = pm_runtime_get_sync(bdi->dev);
- if (error < 0) {
+ error = pm_runtime_resume_and_get(bdi->dev);
+ if (error < 0)
dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
- pm_runtime_put_noidle(bdi->dev);
- }
bq24190_register_reset(bdi);
@@ -1925,11 +1907,9 @@ static __maybe_unused int bq24190_pm_resume(struct device *dev)
bdi->f_reg = 0;
bdi->ss_reg = BQ24190_REG_SS_VBUS_STAT_MASK; /* impossible state */
- error = pm_runtime_get_sync(bdi->dev);
- if (error < 0) {
+ error = pm_runtime_resume_and_get(bdi->dev);
+ if (error < 0)
dev_warn(bdi->dev, "pm_runtime_get failed: %i\n", error);
- pm_runtime_put_noidle(bdi->dev);
- }
bq24190_register_reset(bdi);
bq24190_set_config(bdi);
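
The recurring change in this file swaps the pm_runtime_get_sync()/pm_runtime_put_noidle() error dance for pm_runtime_resume_and_get(), which already drops the usage count when the resume fails. A sketch of the resulting call pattern (hypothetical caller):

    #include <linux/pm_runtime.h>

    static int demo_read(struct device *dev)
    {
        int ret;

        ret = pm_runtime_resume_and_get(dev);
        if (ret < 0)
            return ret;                 /* usage count already balanced */

        /* ... talk to the hardware ... */

        pm_runtime_mark_last_busy(dev);
        pm_runtime_put_autosuspend(dev);
        return 0;
    }
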
diff --git a/drivers/power/supply/da9150-charger.c b/drivers/power/supply/da9150-charger.c
index f9314cc..6b987da 100644
--- a/drivers/power/supply/da9150-charger.c
+++ b/drivers/power/supply/da9150-charger.c
@@ -662,6 +662,7 @@ static int da9150_charger_remove(struct platform_device *pdev)
if (!IS_ERR_OR_NULL(charger->usb_phy))
usb_unregister_notifier(charger->usb_phy, &charger->otg_nb);
+ cancel_work_sync(&charger->otg_work);
power_supply_unregister(charger->battery);
power_supply_unregister(charger->usb);
diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
index 3f9c60c..8161fad 100644
--- a/drivers/power/supply/power_supply_core.c
+++ b/drivers/power/supply/power_supply_core.c
@@ -1014,87 +1014,6 @@ static void psy_unregister_thermal(struct power_supply *psy)
thermal_zone_device_unregister(psy->tzd);
}
-/* thermal cooling device callbacks */
-static int ps_get_max_charge_cntl_limit(struct thermal_cooling_device *tcd,
- unsigned long *state)
-{
- struct power_supply *psy;
- union power_supply_propval val;
- int ret;
-
- psy = tcd->devdata;
- ret = power_supply_get_property(psy,
- POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT_MAX, &val);
- if (ret)
- return ret;
-
- *state = val.intval;
-
- return ret;
-}
-
-static int ps_get_cur_charge_cntl_limit(struct thermal_cooling_device *tcd,
- unsigned long *state)
-{
- struct power_supply *psy;
- union power_supply_propval val;
- int ret;
-
- psy = tcd->devdata;
- ret = power_supply_get_property(psy,
- POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT, &val);
- if (ret)
- return ret;
-
- *state = val.intval;
-
- return ret;
-}
-
-static int ps_set_cur_charge_cntl_limit(struct thermal_cooling_device *tcd,
- unsigned long state)
-{
- struct power_supply *psy;
- union power_supply_propval val;
- int ret;
-
- psy = tcd->devdata;
- val.intval = state;
- ret = psy->desc->set_property(psy,
- POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT, &val);
-
- return ret;
-}
-
-static const struct thermal_cooling_device_ops psy_tcd_ops = {
- .get_max_state = ps_get_max_charge_cntl_limit,
- .get_cur_state = ps_get_cur_charge_cntl_limit,
- .set_cur_state = ps_set_cur_charge_cntl_limit,
-};
-
-static int psy_register_cooler(struct power_supply *psy)
-{
- int i;
-
- /* Register for cooling device if psy can control charging */
- for (i = 0; i < psy->desc->num_properties; i++) {
- if (psy->desc->properties[i] ==
- POWER_SUPPLY_PROP_CHARGE_CONTROL_LIMIT) {
- psy->tcd = thermal_cooling_device_register(
- (char *)psy->desc->name,
- psy, &psy_tcd_ops);
- return PTR_ERR_OR_ZERO(psy->tcd);
- }
- }
- return 0;
-}
-
-static void psy_unregister_cooler(struct power_supply *psy)
-{
- if (IS_ERR_OR_NULL(psy->tcd))
- return;
- thermal_cooling_device_unregister(psy->tcd);
-}
#else
static int psy_register_thermal(struct power_supply *psy)
{
@@ -1104,15 +1023,6 @@ static int psy_register_thermal(struct power_supply *psy)
static void psy_unregister_thermal(struct power_supply *psy)
{
}
-
-static int psy_register_cooler(struct power_supply *psy)
-{
- return 0;
-}
-
-static void psy_unregister_cooler(struct power_supply *psy)
-{
-}
#endif
static struct power_supply *__must_check
@@ -1188,10 +1098,6 @@ __power_supply_register(struct device *parent,
if (rc)
goto register_thermal_failed;
- rc = psy_register_cooler(psy);
- if (rc)
- goto register_cooler_failed;
-
rc = power_supply_create_triggers(psy);
if (rc)
goto create_triggers_failed;
@@ -1221,8 +1127,6 @@ __power_supply_register(struct device *parent,
add_hwmon_sysfs_failed:
power_supply_remove_triggers(psy);
create_triggers_failed:
- psy_unregister_cooler(psy);
-register_cooler_failed:
psy_unregister_thermal(psy);
register_thermal_failed:
wakeup_init_failed:
@@ -1374,7 +1278,6 @@ void power_supply_unregister(struct power_supply *psy)
sysfs_remove_link(&psy->dev.kobj, "powers");
power_supply_remove_hwmon_sysfs(psy);
power_supply_remove_triggers(psy);
- psy_unregister_cooler(psy);
psy_unregister_thermal(psy);
device_init_wakeup(&psy->dev, false);
device_unregister(&psy->dev);
diff --git a/drivers/powercap/powercap_sys.c b/drivers/powercap/powercap_sys.c
index f0654a9..ff736b0 100644
--- a/drivers/powercap/powercap_sys.c
+++ b/drivers/powercap/powercap_sys.c
@@ -529,9 +529,6 @@ struct powercap_zone *powercap_register_zone(
power_zone->name = kstrdup(name, GFP_KERNEL);
if (!power_zone->name)
goto err_name_alloc;
- dev_set_name(&power_zone->dev, "%s:%x",
- dev_name(power_zone->dev.parent),
- power_zone->id);
power_zone->constraints = kcalloc(nr_constraints,
sizeof(*power_zone->constraints),
GFP_KERNEL);
@@ -554,9 +551,16 @@ struct powercap_zone *powercap_register_zone(
power_zone->dev_attr_groups[0] = &power_zone->dev_zone_attr_group;
power_zone->dev_attr_groups[1] = NULL;
power_zone->dev.groups = power_zone->dev_attr_groups;
+ dev_set_name(&power_zone->dev, "%s:%x",
+ dev_name(power_zone->dev.parent),
+ power_zone->id);
result = device_register(&power_zone->dev);
- if (result)
- goto err_dev_ret;
+ if (result) {
+ put_device(&power_zone->dev);
+ mutex_unlock(&control_type->lock);
+
+ return ERR_PTR(result);
+ }
control_type->nr_zones++;
mutex_unlock(&control_type->lock);
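
The powercap fix above follows the usual driver-core rule: once device_register() has been attempted, the embedded kobject owns the allocation, so a failed registration must be unwound with put_device() and the memory freed from the release callback, never with a direct kfree(). A minimal sketch with a hypothetical container:

    #include <linux/device.h>
    #include <linux/err.h>
    #include <linux/slab.h>

    struct demo_zone {
        struct device dev;
    };

    static void demo_zone_release(struct device *dev)
    {
        kfree(container_of(dev, struct demo_zone, dev));
    }

    static struct demo_zone *demo_zone_create(struct device *parent)
    {
        struct demo_zone *zone;
        int ret;

        zone = kzalloc(sizeof(*zone), GFP_KERNEL);
        if (!zone)
            return ERR_PTR(-ENOMEM);

        zone->dev.parent = parent;
        zone->dev.release = demo_zone_release;
        dev_set_name(&zone->dev, "demo-zone");

        ret = device_register(&zone->dev);
        if (ret) {
            put_device(&zone->dev);     /* release() frees zone; no kfree() here */
            return ERR_PTR(ret);
        }

        return zone;
    }
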
diff --git a/drivers/ptp/ptp_qoriq.c b/drivers/ptp/ptp_qoriq.c
index 08f4cf0..8fa9772 100644
--- a/drivers/ptp/ptp_qoriq.c
+++ b/drivers/ptp/ptp_qoriq.c
@@ -601,7 +601,7 @@ static int ptp_qoriq_probe(struct platform_device *dev)
return 0;
no_clock:
- iounmap(ptp_qoriq->base);
+ iounmap(base);
no_ioremap:
release_resource(ptp_qoriq->rsrc);
no_resource:
diff --git a/drivers/pwm/pwm-cros-ec.c b/drivers/pwm/pwm-cros-ec.c
index 5e29d9c..adfd03c 100644
--- a/drivers/pwm/pwm-cros-ec.c
+++ b/drivers/pwm/pwm-cros-ec.c
@@ -157,6 +157,7 @@ static void cros_ec_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
state->enabled = (ret > 0);
state->period = EC_PWM_MAX_DUTY;
+ state->polarity = PWM_POLARITY_NORMAL;
/*
* Note that "disabled" and "duty cycle == 0" are treated the same. If
diff --git a/drivers/pwm/pwm-sifive.c b/drivers/pwm/pwm-sifive.c
index 07e9fc5..41a6fc4 100644
--- a/drivers/pwm/pwm-sifive.c
+++ b/drivers/pwm/pwm-sifive.c
@@ -41,7 +41,7 @@
struct pwm_sifive_ddata {
struct pwm_chip chip;
- struct mutex lock; /* lock to protect user_count */
+ struct mutex lock; /* lock to protect user_count and approx_period */
struct notifier_block notifier;
struct clk *clk;
void __iomem *regs;
@@ -76,6 +76,7 @@ static void pwm_sifive_free(struct pwm_chip *chip, struct pwm_device *pwm)
mutex_unlock(&ddata->lock);
}
+/* Called holding ddata->lock */
static void pwm_sifive_update_clock(struct pwm_sifive_ddata *ddata,
unsigned long rate)
{
@@ -163,7 +164,6 @@ static int pwm_sifive_apply(struct pwm_chip *chip, struct pwm_device *pwm,
return ret;
}
- mutex_lock(&ddata->lock);
cur_state = pwm->state;
enabled = cur_state.enabled;
@@ -182,14 +182,23 @@ static int pwm_sifive_apply(struct pwm_chip *chip, struct pwm_device *pwm,
/* The hardware cannot generate a 100% duty cycle */
frac = min(frac, (1U << PWM_SIFIVE_CMPWIDTH) - 1);
+ mutex_lock(&ddata->lock);
if (state->period != ddata->approx_period) {
- if (ddata->user_count != 1) {
+ /*
+ * Don't let a 2nd user change the period underneath the 1st user.
+ * However if ddata->approx_period == 0 this is the first time we set
+ * any period, so let whoever gets here first set the period so other
+ * users who agree on the period won't fail.
+ */
+ if (ddata->user_count != 1 && ddata->approx_period) {
+ mutex_unlock(&ddata->lock);
ret = -EBUSY;
goto exit;
}
ddata->approx_period = state->period;
pwm_sifive_update_clock(ddata, clk_get_rate(ddata->clk));
}
+ mutex_unlock(&ddata->lock);
writel(frac, ddata->regs + PWM_SIFIVE_PWMCMP(pwm->hwpwm));
@@ -198,7 +207,6 @@ static int pwm_sifive_apply(struct pwm_chip *chip, struct pwm_device *pwm,
exit:
clk_disable(ddata->clk);
- mutex_unlock(&ddata->lock);
return ret;
}
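
The pwm-sifive change narrows the lock to the shared state it actually protects and lets the very first apply establish the period for everyone; later users may only reuse it. A sketch of that guard with hypothetical names:

    #include <linux/errno.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    struct demo_pwm {
        struct mutex lock;          /* protects user_count and approx_period */
        unsigned int user_count;
        u64 approx_period;
    };

    static int demo_set_period(struct demo_pwm *ddata, u64 period)
    {
        int ret = 0;

        mutex_lock(&ddata->lock);
        if (period != ddata->approx_period) {
            if (ddata->user_count != 1 && ddata->approx_period) {
                ret = -EBUSY;       /* another user relies on the current period */
            } else {
                ddata->approx_period = period;
                /* ... reprogram the clock divider here ... */
            }
        }
        mutex_unlock(&ddata->lock);

        return ret;
    }
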
diff --git a/drivers/pwm/pwm-sprd.c b/drivers/pwm/pwm-sprd.c
index 7004f55..869e696 100644
--- a/drivers/pwm/pwm-sprd.c
+++ b/drivers/pwm/pwm-sprd.c
@@ -109,6 +109,7 @@ static void sprd_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
duty = val & SPRD_PWM_DUTY_MSK;
tmp = (prescale + 1) * NSEC_PER_SEC * duty;
state->duty_cycle = DIV_ROUND_CLOSEST_ULL(tmp, chn->clk_rate);
+ state->polarity = PWM_POLARITY_NORMAL;
/* Disable PWM clocks if the PWM channel is not in enable state. */
if (!state->enabled)
diff --git a/drivers/pwm/pwm-stm32-lp.c b/drivers/pwm/pwm-stm32-lp.c
index 3115abb..61a1c87c 100644
--- a/drivers/pwm/pwm-stm32-lp.c
+++ b/drivers/pwm/pwm-stm32-lp.c
@@ -127,7 +127,7 @@ static int stm32_pwm_lp_apply(struct pwm_chip *chip, struct pwm_device *pwm,
/* ensure CMP & ARR registers are properly written */
ret = regmap_read_poll_timeout(priv->regmap, STM32_LPTIM_ISR, val,
- (val & STM32_LPTIM_CMPOK_ARROK),
+ (val & STM32_LPTIM_CMPOK_ARROK) == STM32_LPTIM_CMPOK_ARROK,
100, 1000);
if (ret) {
dev_err(priv->chip.dev, "ARR/CMP registers write issue\n");
diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
index 3eae3aa..cd10880 100644
--- a/drivers/regulator/core.c
+++ b/drivers/regulator/core.c
@@ -1521,6 +1521,27 @@ static int set_machine_constraints(struct regulator_dev *rdev)
}
}
+ /*
+ * If there is no mechanism for controlling the regulator then
+ * flag it as always_on so we don't end up duplicating checks
+ * for this so much. Note that we could control the state of
+ * a supply to control the output on a regulator that has no
+ * direct control.
+ */
+ if (!rdev->ena_pin && !ops->enable) {
+ if (rdev->supply_name && !rdev->supply)
+ return -EPROBE_DEFER;
+
+ if (rdev->supply)
+ rdev->constraints->always_on =
+ rdev->supply->rdev->constraints->always_on;
+ else
+ rdev->constraints->always_on = true;
+ }
+
+ if (rdev->desc->off_on_delay)
+ rdev->last_off = ktime_get_boottime();
+
/* If the constraints say the regulator should be on at this point
* and we have control then make sure it is enabled.
*/
@@ -1554,8 +1575,6 @@ static int set_machine_constraints(struct regulator_dev *rdev)
if (rdev->constraints->always_on)
rdev->use_count++;
- } else if (rdev->desc->off_on_delay) {
- rdev->last_off = ktime_get();
}
print_constraints(rdev);
@@ -2610,7 +2629,7 @@ static int _regulator_do_enable(struct regulator_dev *rdev)
* this regulator was disabled.
*/
ktime_t end = ktime_add_us(rdev->last_off, rdev->desc->off_on_delay);
- s64 remaining = ktime_us_delta(end, ktime_get());
+ s64 remaining = ktime_us_delta(end, ktime_get_boottime());
if (remaining > 0)
_regulator_enable_delay(remaining);
@@ -2849,7 +2868,7 @@ static int _regulator_do_disable(struct regulator_dev *rdev)
}
if (rdev->desc->off_on_delay)
- rdev->last_off = ktime_get();
+ rdev->last_off = ktime_get_boottime();
trace_regulator_disable_complete(rdev_get_name(rdev));
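
The switch from ktime_get() to ktime_get_boottime() matters because CLOCK_MONOTONIC does not advance across suspend, so a regulator switched off just before suspend could be re-enabled "too early" after resume; CLOCK_BOOTTIME keeps counting, so off_on_delay is honoured in elapsed wall time. A sketch of the wait (hypothetical helper):

    #include <linux/delay.h>
    #include <linux/ktime.h>

    static void demo_wait_off_on_delay(ktime_t last_off, unsigned int delay_us)
    {
        ktime_t end = ktime_add_us(last_off, delay_us);
        s64 remaining = ktime_us_delta(end, ktime_get_boottime());

        if (remaining > 0)
            usleep_range(remaining, remaining + 100);
    }
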
diff --git a/drivers/regulator/fixed.c b/drivers/regulator/fixed.c
index 599ad20..fb16345 100644
--- a/drivers/regulator/fixed.c
+++ b/drivers/regulator/fixed.c
@@ -215,7 +215,7 @@ static int reg_fixed_voltage_probe(struct platform_device *pdev)
drvdata->enable_clock = devm_clk_get(dev, NULL);
if (IS_ERR(drvdata->enable_clock)) {
dev_err(dev, "Can't get enable-clock from devicetree\n");
- return -ENOENT;
+ return PTR_ERR(drvdata->enable_clock);
}
} else if (drvtype && drvtype->has_performance_state) {
drvdata->desc.ops = &fixed_voltage_domain_ops;
diff --git a/drivers/regulator/max77802-regulator.c b/drivers/regulator/max77802-regulator.c
index 21e0eb0..befe5f3 100644
--- a/drivers/regulator/max77802-regulator.c
+++ b/drivers/regulator/max77802-regulator.c
@@ -94,9 +94,11 @@ static int max77802_set_suspend_disable(struct regulator_dev *rdev)
{
unsigned int val = MAX77802_OFF_PWRREQ;
struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
- int id = rdev_get_id(rdev);
+ unsigned int id = rdev_get_id(rdev);
int shift = max77802_get_opmode_shift(id);
+ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
+ return -EINVAL;
max77802->opmode[id] = val;
return regmap_update_bits(rdev->regmap, rdev->desc->enable_reg,
rdev->desc->enable_mask, val << shift);
@@ -110,7 +112,7 @@ static int max77802_set_suspend_disable(struct regulator_dev *rdev)
static int max77802_set_mode(struct regulator_dev *rdev, unsigned int mode)
{
struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
- int id = rdev_get_id(rdev);
+ unsigned int id = rdev_get_id(rdev);
unsigned int val;
int shift = max77802_get_opmode_shift(id);
@@ -127,6 +129,9 @@ static int max77802_set_mode(struct regulator_dev *rdev, unsigned int mode)
return -EINVAL;
}
+ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
+ return -EINVAL;
+
max77802->opmode[id] = val;
return regmap_update_bits(rdev->regmap, rdev->desc->enable_reg,
rdev->desc->enable_mask, val << shift);
@@ -135,8 +140,10 @@ static int max77802_set_mode(struct regulator_dev *rdev, unsigned int mode)
static unsigned max77802_get_mode(struct regulator_dev *rdev)
{
struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
- int id = rdev_get_id(rdev);
+ unsigned int id = rdev_get_id(rdev);
+ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
+ return -EINVAL;
return max77802_map_mode(max77802->opmode[id]);
}
@@ -160,10 +167,13 @@ static int max77802_set_suspend_mode(struct regulator_dev *rdev,
unsigned int mode)
{
struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
- int id = rdev_get_id(rdev);
+ unsigned int id = rdev_get_id(rdev);
unsigned int val;
int shift = max77802_get_opmode_shift(id);
+ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
+ return -EINVAL;
+
/*
* If the regulator has been disabled for suspend
* then is invalid to try setting a suspend mode.
@@ -209,9 +219,11 @@ static int max77802_set_suspend_mode(struct regulator_dev *rdev,
static int max77802_enable(struct regulator_dev *rdev)
{
struct max77802_regulator_prv *max77802 = rdev_get_drvdata(rdev);
- int id = rdev_get_id(rdev);
+ unsigned int id = rdev_get_id(rdev);
int shift = max77802_get_opmode_shift(id);
+ if (WARN_ON_ONCE(id >= ARRAY_SIZE(max77802->opmode)))
+ return -EINVAL;
if (max77802->opmode[id] == MAX77802_OFF_PWRREQ)
max77802->opmode[id] = MAX77802_OPMODE_NORMAL;
@@ -495,7 +507,7 @@ static int max77802_pmic_probe(struct platform_device *pdev)
for (i = 0; i < MAX77802_REG_MAX; i++) {
struct regulator_dev *rdev;
- int id = regulators[i].id;
+ unsigned int id = regulators[i].id;
int shift = max77802_get_opmode_shift(id);
int ret;
@@ -513,10 +525,12 @@ static int max77802_pmic_probe(struct platform_device *pdev)
* the hardware reports OFF as the regulator operating mode.
* Default to operating mode NORMAL in that case.
*/
- if (val == MAX77802_STATUS_OFF)
- max77802->opmode[id] = MAX77802_OPMODE_NORMAL;
- else
- max77802->opmode[id] = val;
+ if (id < ARRAY_SIZE(max77802->opmode)) {
+ if (val == MAX77802_STATUS_OFF)
+ max77802->opmode[id] = MAX77802_OPMODE_NORMAL;
+ else
+ max77802->opmode[id] = val;
+ }
rdev = devm_regulator_register(&pdev->dev,
&regulators[i], &config);
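
The max77802 hunks all add the same defensive check: the regulator id comes from static driver data, so an out-of-range value is a driver bug, and the code warns once and bails out instead of indexing past opmode[]. A minimal sketch of the pattern (hypothetical sizes and names):

    #include <linux/bug.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>

    #define DEMO_REG_MAX 10

    struct demo_priv {
        unsigned int opmode[DEMO_REG_MAX];
    };

    static int demo_set_opmode(struct demo_priv *priv, unsigned int id,
                               unsigned int val)
    {
        if (WARN_ON_ONCE(id >= ARRAY_SIZE(priv->opmode)))
            return -EINVAL;

        priv->opmode[id] = val;
        return 0;
    }
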
diff --git a/drivers/regulator/s5m8767.c b/drivers/regulator/s5m8767.c
index 35269f9..754c6fc 100644
--- a/drivers/regulator/s5m8767.c
+++ b/drivers/regulator/s5m8767.c
@@ -923,10 +923,14 @@ static int s5m8767_pmic_probe(struct platform_device *pdev)
for (i = 0; i < pdata->num_regulators; i++) {
const struct sec_voltage_desc *desc;
- int id = pdata->regulators[i].id;
+ unsigned int id = pdata->regulators[i].id;
int enable_reg, enable_val;
struct regulator_dev *rdev;
+ BUILD_BUG_ON(ARRAY_SIZE(regulators) != ARRAY_SIZE(reg_voltage_map));
+ if (WARN_ON_ONCE(id >= ARRAY_SIZE(regulators)))
+ continue;
+
desc = reg_voltage_map[id];
if (desc) {
regulators[id].n_voltages =
diff --git a/drivers/remoteproc/mtk_scp_ipi.c b/drivers/remoteproc/mtk_scp_ipi.c
index 6dc955e..968128b 100644
--- a/drivers/remoteproc/mtk_scp_ipi.c
+++ b/drivers/remoteproc/mtk_scp_ipi.c
@@ -164,21 +164,21 @@ int scp_ipi_send(struct mtk_scp *scp, u32 id, void *buf, unsigned int len,
WARN_ON(len > sizeof(send_obj->share_buf)) || WARN_ON(!buf))
return -EINVAL;
- mutex_lock(&scp->send_lock);
-
ret = clk_prepare_enable(scp->clk);
if (ret) {
dev_err(scp->dev, "failed to enable clock\n");
- goto unlock_mutex;
+ return ret;
}
+ mutex_lock(&scp->send_lock);
+
/* Wait until SCP receives the last command */
timeout = jiffies + msecs_to_jiffies(2000);
do {
if (time_after(jiffies, timeout)) {
dev_err(scp->dev, "%s: IPI timeout!\n", __func__);
ret = -ETIMEDOUT;
- goto clock_disable;
+ goto unlock_mutex;
}
} while (readl(scp->reg_base + scp->data->host_to_scp_reg));
@@ -205,10 +205,9 @@ int scp_ipi_send(struct mtk_scp *scp, u32 id, void *buf, unsigned int len,
ret = 0;
}
-clock_disable:
- clk_disable_unprepare(scp->clk);
unlock_mutex:
mutex_unlock(&scp->send_lock);
+ clk_disable_unprepare(scp->clk);
return ret;
}
diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
index ca1c738..93eefef 100644
--- a/drivers/remoteproc/qcom_q6v5_mss.c
+++ b/drivers/remoteproc/qcom_q6v5_mss.c
@@ -17,6 +17,7 @@
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
@@ -192,6 +193,9 @@ struct q6v5 {
size_t mba_size;
size_t dp_size;
+ phys_addr_t mdata_phys;
+ size_t mdata_size;
+
phys_addr_t mpss_phys;
phys_addr_t mpss_reloc;
size_t mpss_size;
@@ -832,15 +836,35 @@ static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw)
if (IS_ERR(metadata))
return PTR_ERR(metadata);
- ptr = dma_alloc_attrs(qproc->dev, size, &phys, GFP_KERNEL, dma_attrs);
- if (!ptr) {
- kfree(metadata);
- dev_err(qproc->dev, "failed to allocate mdt buffer\n");
- return -ENOMEM;
+ if (qproc->mdata_phys) {
+ if (size > qproc->mdata_size) {
+ ret = -EINVAL;
+ dev_err(qproc->dev, "metadata size outside memory range\n");
+ goto free_metadata;
+ }
+
+ phys = qproc->mdata_phys;
+ ptr = memremap(qproc->mdata_phys, size, MEMREMAP_WC);
+ if (!ptr) {
+ ret = -EBUSY;
+ dev_err(qproc->dev, "unable to map memory region: %pa+%zx\n",
+ &qproc->mdata_phys, size);
+ goto free_metadata;
+ }
+ } else {
+ ptr = dma_alloc_attrs(qproc->dev, size, &phys, GFP_KERNEL, dma_attrs);
+ if (!ptr) {
+ ret = -ENOMEM;
+ dev_err(qproc->dev, "failed to allocate mdt buffer\n");
+ goto free_metadata;
+ }
}
memcpy(ptr, metadata, size);
+ if (qproc->mdata_phys)
+ memunmap(ptr);
+
/* Hypervisor mapping to access metadata by modem */
mdata_perm = BIT(QCOM_SCM_VMID_HLOS);
ret = q6v5_xfer_mem_ownership(qproc, &mdata_perm, false, true,
@@ -869,7 +893,9 @@ static int q6v5_mpss_init_image(struct q6v5 *qproc, const struct firmware *fw)
"mdt buffer not reclaimed system may become unstable\n");
free_dma_attrs:
- dma_free_attrs(qproc->dev, size, ptr, phys, dma_attrs);
+ if (!qproc->mdata_phys)
+ dma_free_attrs(qproc->dev, size, ptr, phys, dma_attrs);
+free_metadata:
kfree(metadata);
return ret < 0 ? ret : 0;
@@ -1615,6 +1641,7 @@ static int q6v5_init_reset(struct q6v5 *qproc)
static int q6v5_alloc_memory_region(struct q6v5 *qproc)
{
struct device_node *child;
+ struct reserved_mem *rmem;
struct device_node *node;
struct resource r;
int ret;
@@ -1661,6 +1688,26 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
qproc->mpss_phys = qproc->mpss_reloc = r.start;
qproc->mpss_size = resource_size(&r);
+ if (!child) {
+ node = of_parse_phandle(qproc->dev->of_node, "memory-region", 2);
+ } else {
+ child = of_get_child_by_name(qproc->dev->of_node, "metadata");
+ node = of_parse_phandle(child, "memory-region", 0);
+ of_node_put(child);
+ }
+
+ if (!node)
+ return 0;
+
+ rmem = of_reserved_mem_lookup(node);
+ if (!rmem) {
+ dev_err(qproc->dev, "unable to resolve metadata region\n");
+ return -EINVAL;
+ }
+
+ qproc->mdata_phys = rmem->base;
+ qproc->mdata_size = rmem->size;
+
return 0;
}
diff --git a/drivers/reset/reset-uniphier-glue.c b/drivers/reset/reset-uniphier-glue.c
index 027990b..7493e96 100644
--- a/drivers/reset/reset-uniphier-glue.c
+++ b/drivers/reset/reset-uniphier-glue.c
@@ -23,7 +23,7 @@ struct uniphier_glue_reset_soc_data {
struct uniphier_glue_reset_priv {
struct clk_bulk_data clk[MAX_CLKS];
- struct reset_control *rst[MAX_RSTS];
+ struct reset_control_bulk_data rst[MAX_RSTS];
struct reset_simple_data rdata;
const struct uniphier_glue_reset_soc_data *data;
};
@@ -33,9 +33,7 @@ static int uniphier_glue_reset_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct uniphier_glue_reset_priv *priv;
struct resource *res;
- resource_size_t size;
- const char *name;
- int i, ret, nr;
+ int i, ret;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
@@ -47,7 +45,6 @@ static int uniphier_glue_reset_probe(struct platform_device *pdev)
return -EINVAL;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- size = resource_size(res);
priv->rdata.membase = devm_ioremap_resource(dev, res);
if (IS_ERR(priv->rdata.membase))
return PTR_ERR(priv->rdata.membase);
@@ -58,26 +55,24 @@ static int uniphier_glue_reset_probe(struct platform_device *pdev)
if (ret)
return ret;
- for (i = 0; i < priv->data->nrsts; i++) {
- name = priv->data->reset_names[i];
- priv->rst[i] = devm_reset_control_get_shared(dev, name);
- if (IS_ERR(priv->rst[i]))
- return PTR_ERR(priv->rst[i]);
- }
+ for (i = 0; i < priv->data->nrsts; i++)
+ priv->rst[i].id = priv->data->reset_names[i];
+ ret = devm_reset_control_bulk_get_shared(dev, priv->data->nrsts,
+ priv->rst);
+ if (ret)
+ return ret;
ret = clk_bulk_prepare_enable(priv->data->nclks, priv->clk);
if (ret)
return ret;
- for (nr = 0; nr < priv->data->nrsts; nr++) {
- ret = reset_control_deassert(priv->rst[nr]);
- if (ret)
- goto out_rst_assert;
- }
+ ret = reset_control_bulk_deassert(priv->data->nrsts, priv->rst);
+ if (ret)
+ goto out_clk_disable;
spin_lock_init(&priv->rdata.lock);
priv->rdata.rcdev.owner = THIS_MODULE;
- priv->rdata.rcdev.nr_resets = size * BITS_PER_BYTE;
+ priv->rdata.rcdev.nr_resets = resource_size(res) * BITS_PER_BYTE;
priv->rdata.rcdev.ops = &reset_simple_ops;
priv->rdata.rcdev.of_node = dev->of_node;
priv->rdata.active_low = true;
@@ -91,9 +86,9 @@ static int uniphier_glue_reset_probe(struct platform_device *pdev)
return 0;
out_rst_assert:
- while (nr--)
- reset_control_assert(priv->rst[nr]);
+ reset_control_bulk_assert(priv->data->nrsts, priv->rst);
+out_clk_disable:
clk_bulk_disable_unprepare(priv->data->nclks, priv->clk);
return ret;
@@ -102,10 +97,8 @@ static int uniphier_glue_reset_probe(struct platform_device *pdev)
static int uniphier_glue_reset_remove(struct platform_device *pdev)
{
struct uniphier_glue_reset_priv *priv = platform_get_drvdata(pdev);
- int i;
- for (i = 0; i < priv->data->nrsts; i++)
- reset_control_assert(priv->rst[i]);
+ reset_control_bulk_assert(priv->data->nrsts, priv->rst);
clk_bulk_disable_unprepare(priv->data->nclks, priv->clk);
diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
index 13c3137..fd4c2f0 100644
--- a/drivers/rpmsg/qcom_glink_native.c
+++ b/drivers/rpmsg/qcom_glink_native.c
@@ -929,6 +929,7 @@ static void qcom_glink_handle_intent(struct qcom_glink *glink,
spin_unlock_irqrestore(&glink->idr_lock, flags);
if (!channel) {
dev_err(glink->dev, "intents for non-existing channel\n");
+ qcom_glink_rx_advance(glink, ALIGN(msglen, 8));
return;
}
diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
index 3ee17c4..f49ab45 100644
--- a/drivers/rtc/interface.c
+++ b/drivers/rtc/interface.c
@@ -392,7 +392,7 @@ int rtc_read_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
return err;
if (!rtc->ops) {
err = -ENODEV;
- } else if (!test_bit(RTC_FEATURE_ALARM, rtc->features) || !rtc->ops->read_alarm) {
+ } else if (!test_bit(RTC_FEATURE_ALARM, rtc->features)) {
err = -EINVAL;
} else {
memset(alarm, 0, sizeof(struct rtc_wkalrm));
diff --git a/drivers/rtc/rtc-pm8xxx.c b/drivers/rtc/rtc-pm8xxx.c
index 29a1c65..b1fb870 100644
--- a/drivers/rtc/rtc-pm8xxx.c
+++ b/drivers/rtc/rtc-pm8xxx.c
@@ -220,7 +220,6 @@ static int pm8xxx_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
{
int rc, i;
u8 value[NUM_8_BIT_RTC_REGS];
- unsigned int ctrl_reg;
unsigned long secs, irq_flags;
struct pm8xxx_rtc *rtc_dd = dev_get_drvdata(dev);
const struct pm8xxx_rtc_regs *regs = rtc_dd->regs;
@@ -232,6 +231,11 @@ static int pm8xxx_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
secs >>= 8;
}
+ rc = regmap_update_bits(rtc_dd->regmap, regs->alarm_ctrl,
+ regs->alarm_en, 0);
+ if (rc)
+ return rc;
+
spin_lock_irqsave(&rtc_dd->ctrl_reg_lock, irq_flags);
rc = regmap_bulk_write(rtc_dd->regmap, regs->alarm_rw, value,
@@ -241,19 +245,11 @@ static int pm8xxx_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *alarm)
goto rtc_rw_fail;
}
- rc = regmap_read(rtc_dd->regmap, regs->alarm_ctrl, &ctrl_reg);
- if (rc)
- goto rtc_rw_fail;
-
- if (alarm->enabled)
- ctrl_reg |= regs->alarm_en;
- else
- ctrl_reg &= ~regs->alarm_en;
-
- rc = regmap_write(rtc_dd->regmap, regs->alarm_ctrl, ctrl_reg);
- if (rc) {
- dev_err(dev, "Write to RTC alarm control register failed\n");
- goto rtc_rw_fail;
+ if (alarm->enabled) {
+ rc = regmap_update_bits(rtc_dd->regmap, regs->alarm_ctrl,
+ regs->alarm_en, regs->alarm_en);
+ if (rc)
+ goto rtc_rw_fail;
}
dev_dbg(dev, "Alarm Set for h:m:s=%ptRt, y-m-d=%ptRdr\n",
diff --git a/drivers/rtc/rtc-sun6i.c b/drivers/rtc/rtc-sun6i.c
index c551ebf..536bd02 100644
--- a/drivers/rtc/rtc-sun6i.c
+++ b/drivers/rtc/rtc-sun6i.c
@@ -128,7 +128,6 @@ struct sun6i_rtc_clk_data {
unsigned int fixed_prescaler : 16;
unsigned int has_prescaler : 1;
unsigned int has_out_clk : 1;
- unsigned int export_iosc : 1;
unsigned int has_losc_en : 1;
unsigned int has_auto_swt : 1;
};
@@ -260,10 +259,8 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
/* Yes, I know, this is ugly. */
sun6i_rtc = rtc;
- /* Only read IOSC name from device tree if it is exported */
- if (rtc->data->export_iosc)
- of_property_read_string_index(node, "clock-output-names", 2,
- &iosc_name);
+ of_property_read_string_index(node, "clock-output-names", 2,
+ &iosc_name);
rtc->int_osc = clk_hw_register_fixed_rate_with_accuracy(NULL,
iosc_name,
@@ -304,13 +301,10 @@ static void __init sun6i_rtc_clk_init(struct device_node *node,
goto err_register;
}
- clk_data->num = 2;
+ clk_data->num = 3;
clk_data->hws[0] = &rtc->hw;
clk_data->hws[1] = __clk_get_hw(rtc->ext_losc);
- if (rtc->data->export_iosc) {
- clk_data->hws[2] = rtc->int_osc;
- clk_data->num = 3;
- }
+ clk_data->hws[2] = rtc->int_osc;
of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
return;
@@ -350,7 +344,6 @@ static const struct sun6i_rtc_clk_data sun8i_h3_rtc_data = {
.fixed_prescaler = 32,
.has_prescaler = 1,
.has_out_clk = 1,
- .export_iosc = 1,
};
static void __init sun8i_h3_rtc_clk_init(struct device_node *node)
@@ -368,7 +361,6 @@ static const struct sun6i_rtc_clk_data sun50i_h6_rtc_data = {
.fixed_prescaler = 32,
.has_prescaler = 1,
.has_out_clk = 1,
- .export_iosc = 1,
.has_losc_en = 1,
.has_auto_swt = 1,
};
diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c
index 57dfc92..56ab74a 100644
--- a/drivers/s390/block/dasd_eckd.c
+++ b/drivers/s390/block/dasd_eckd.c
@@ -6729,8 +6729,10 @@ dasd_eckd_init(void)
return -ENOMEM;
dasd_vol_info_req = kmalloc(sizeof(*dasd_vol_info_req),
GFP_KERNEL | GFP_DMA);
- if (!dasd_vol_info_req)
+ if (!dasd_vol_info_req) {
+ kfree(dasd_reserve_req);
return -ENOMEM;
+ }
pe_handler_worker = kmalloc(sizeof(*pe_handler_worker),
GFP_KERNEL | GFP_DMA);
if (!pe_handler_worker) {
diff --git a/drivers/s390/crypto/vfio_ap_drv.c b/drivers/s390/crypto/vfio_ap_drv.c
index 4d2556b..5196c9ac 100644
--- a/drivers/s390/crypto/vfio_ap_drv.c
+++ b/drivers/s390/crypto/vfio_ap_drv.c
@@ -86,8 +86,9 @@ static struct ap_driver vfio_ap_drv = {
static void vfio_ap_matrix_dev_release(struct device *dev)
{
- struct ap_matrix_dev *matrix_dev = dev_get_drvdata(dev);
+ struct ap_matrix_dev *matrix_dev;
+ matrix_dev = container_of(dev, struct ap_matrix_dev, device);
kfree(matrix_dev);
}
diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
index c6b63ea..ce48f34 100644
--- a/drivers/scsi/aic94xx/aic94xx_task.c
+++ b/drivers/scsi/aic94xx/aic94xx_task.c
@@ -50,6 +50,9 @@ static int asd_map_scatterlist(struct sas_task *task,
dma_addr_t dma = dma_map_single(&asd_ha->pcidev->dev, p,
task->total_xfer_len,
task->data_dir);
+ if (dma_mapping_error(&asd_ha->pcidev->dev, dma))
+ return -ENOMEM;
+
sg_arr[0].bus_addr = cpu_to_le64((u64)dma);
sg_arr[0].size = cpu_to_le32(task->total_xfer_len);
sg_arr[0].flags |= ASD_SG_EL_LIST_EOL;
diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
index 1d9be77..a9c4a5e 100644
--- a/drivers/scsi/device_handler/scsi_dh_alua.c
+++ b/drivers/scsi/device_handler/scsi_dh_alua.c
@@ -1117,10 +1117,12 @@ static int alua_activate(struct scsi_device *sdev,
rcu_read_unlock();
mutex_unlock(&h->init_mutex);
- if (alua_rtpg_queue(pg, sdev, qdata, true))
+ if (alua_rtpg_queue(pg, sdev, qdata, true)) {
fn = NULL;
- else
+ } else {
+ kfree(qdata);
err = SCSI_DH_DEV_OFFLINED;
+ }
kref_put(&pg->kref, release_port_group);
out:
if (fn)
diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
index 9515c45..7d93783 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
@@ -1414,7 +1414,7 @@ static void hisi_sas_refresh_port_id(struct hisi_hba *hisi_hba)
device->linkrate = phy->sas_phy.linkrate;
hisi_hba->hw->setup_itct(hisi_hba, sas_dev);
- } else
+ } else if (!port->port_attached)
port->id = 0xff;
}
}
diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
index fa22cb7..9515ab6 100644
--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
@@ -2424,8 +2424,7 @@ static int interrupt_preinit_v3_hw(struct hisi_hba *hisi_hba)
hisi_hba->cq_nvecs = vectors - BASE_VECTORS_V3_HW;
shost->nr_hw_queues = hisi_hba->cq_nvecs;
- devm_add_action(&pdev->dev, hisi_sas_v3_free_vectors, pdev);
- return 0;
+ return devm_add_action(&pdev->dev, hisi_sas_v3_free_vectors, pdev);
}
static int interrupt_init_v3_hw(struct hisi_hba *hisi_hba)
diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
index 0165dad..7dc42d0 100644
--- a/drivers/scsi/hosts.c
+++ b/drivers/scsi/hosts.c
@@ -180,6 +180,7 @@ void scsi_remove_host(struct Scsi_Host *shost)
scsi_forget_host(shost);
mutex_unlock(&shost->scan_mutex);
scsi_proc_host_rm(shost);
+ scsi_proc_hostdir_rm(shost->hostt);
spin_lock_irqsave(shost->host_lock, flags);
if (scsi_host_set_state(shost, SHOST_DEL))
@@ -321,8 +322,6 @@ static void scsi_host_dev_release(struct device *dev)
struct Scsi_Host *shost = dev_to_shost(dev);
struct device *parent = dev->parent;
- scsi_proc_hostdir_rm(shost->hostt);
-
/* Wait for functions invoked through call_rcu(&scmd->rcu, ...) */
rcu_barrier();
diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index cf7988d..8aa5c22 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -5848,7 +5848,7 @@ static int hpsa_scsi_host_alloc(struct ctlr_info *h)
{
struct Scsi_Host *sh;
- sh = scsi_host_alloc(&hpsa_driver_template, sizeof(h));
+ sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info));
if (sh == NULL) {
dev_err(&h->pdev->dev, "scsi_host_alloc failed\n");
return -ENOMEM;
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
index 04fb7fc..e5e3843 100644
--- a/drivers/scsi/ipr.c
+++ b/drivers/scsi/ipr.c
@@ -1516,23 +1516,22 @@ static void ipr_process_ccn(struct ipr_cmnd *ipr_cmd)
}
/**
- * strip_and_pad_whitespace - Strip and pad trailing whitespace.
- * @i: index into buffer
- * @buf: string to modify
+ * strip_whitespace - Strip trailing whitespace.
+ * @i: size of buffer
+ * @buf: string to modify
*
- * This function will strip all trailing whitespace, pad the end
- * of the string with a single space, and NULL terminate the string.
+ * This function will strip all trailing whitespace and
+ * NUL terminate the string.
*
- * Return value:
- * new length of string
**/
-static int strip_and_pad_whitespace(int i, char *buf)
+static void strip_whitespace(int i, char *buf)
{
+ if (i < 1)
+ return;
+ i--;
while (i && buf[i] == ' ')
i--;
- buf[i+1] = ' ';
- buf[i+2] = '\0';
- return i + 2;
+ buf[i+1] = '\0';
}
/**
@@ -1547,19 +1546,21 @@ static int strip_and_pad_whitespace(int i, char *buf)
static void ipr_log_vpd_compact(char *prefix, struct ipr_hostrcb *hostrcb,
struct ipr_vpd *vpd)
{
- char buffer[IPR_VENDOR_ID_LEN + IPR_PROD_ID_LEN + IPR_SERIAL_NUM_LEN + 3];
- int i = 0;
+ char vendor_id[IPR_VENDOR_ID_LEN + 1];
+ char product_id[IPR_PROD_ID_LEN + 1];
+ char sn[IPR_SERIAL_NUM_LEN + 1];
- memcpy(buffer, vpd->vpids.vendor_id, IPR_VENDOR_ID_LEN);
- i = strip_and_pad_whitespace(IPR_VENDOR_ID_LEN - 1, buffer);
+ memcpy(vendor_id, vpd->vpids.vendor_id, IPR_VENDOR_ID_LEN);
+ strip_whitespace(IPR_VENDOR_ID_LEN, vendor_id);
- memcpy(&buffer[i], vpd->vpids.product_id, IPR_PROD_ID_LEN);
- i = strip_and_pad_whitespace(i + IPR_PROD_ID_LEN - 1, buffer);
+ memcpy(product_id, vpd->vpids.product_id, IPR_PROD_ID_LEN);
+ strip_whitespace(IPR_PROD_ID_LEN, product_id);
- memcpy(&buffer[i], vpd->sn, IPR_SERIAL_NUM_LEN);
- buffer[IPR_SERIAL_NUM_LEN + i] = '\0';
+ memcpy(sn, vpd->sn, IPR_SERIAL_NUM_LEN);
+ strip_whitespace(IPR_SERIAL_NUM_LEN, sn);
- ipr_hcam_err(hostrcb, "%s VPID/SN: %s\n", prefix, buffer);
+ ipr_hcam_err(hostrcb, "%s VPID/SN: %s %s %s\n", prefix,
+ vendor_id, product_id, sn);
}
/**
diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index 4d2f330..fe705b8 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -767,13 +767,12 @@ static int iscsi_sw_tcp_conn_set_param(struct iscsi_cls_conn *cls_conn,
iscsi_set_param(cls_conn, param, buf, buflen);
break;
case ISCSI_PARAM_DATADGST_EN:
- iscsi_set_param(cls_conn, param, buf, buflen);
-
mutex_lock(&tcp_sw_conn->sock_lock);
if (!tcp_sw_conn->sock) {
mutex_unlock(&tcp_sw_conn->sock_lock);
return -ENOTCONN;
}
+ iscsi_set_param(cls_conn, param, buf, buflen);
tcp_sw_conn->sendpage = conn->datadgst_en ?
sock_no_sendpage : tcp_sw_conn->sock->ops->sendpage;
mutex_unlock(&tcp_sw_conn->sock_lock);
@@ -848,7 +847,7 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
enum iscsi_host_param param, char *buf)
{
struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(shost);
- struct iscsi_session *session = tcp_sw_host->session;
+ struct iscsi_session *session;
struct iscsi_conn *conn;
struct iscsi_tcp_conn *tcp_conn;
struct iscsi_sw_tcp_conn *tcp_sw_conn;
@@ -858,6 +857,7 @@ static int iscsi_sw_tcp_host_get_param(struct Scsi_Host *shost,
switch (param) {
case ISCSI_HOST_PARAM_IPADDRESS:
+ session = tcp_sw_host->session;
if (!session)
return -ENOTCONN;
@@ -958,11 +958,13 @@ iscsi_sw_tcp_session_create(struct iscsi_endpoint *ep, uint16_t cmds_max,
if (!cls_session)
goto remove_host;
session = cls_session->dd_data;
- tcp_sw_host = iscsi_host_priv(shost);
- tcp_sw_host->session = session;
if (iscsi_tcp_r2tpool_alloc(session))
goto remove_session;
+
+ /* We are now fully set up, so expose the session to sysfs. */
+ tcp_sw_host = iscsi_host_priv(shost);
+ tcp_sw_host->session = session;
return cls_session;
remove_session:
@@ -982,10 +984,17 @@ static void iscsi_sw_tcp_session_destroy(struct iscsi_cls_session *cls_session)
if (WARN_ON_ONCE(session->leadconn))
return;
- iscsi_tcp_r2tpool_free(cls_session->dd_data);
- iscsi_session_teardown(cls_session);
-
+ iscsi_session_remove(cls_session);
+ /*
+ * Our get_host_param needs to access the session, so remove the
+ * host from sysfs before freeing the session to make sure userspace
+ * is no longer accessing the callout.
+ */
iscsi_host_remove(shost, false);
+
+ iscsi_tcp_r2tpool_free(cls_session->dd_data);
+
+ iscsi_session_free(cls_session);
iscsi_host_free(shost);
}
diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
index 73d2355..d422e8f 100644
--- a/drivers/scsi/libiscsi.c
+++ b/drivers/scsi/libiscsi.c
@@ -3024,17 +3024,32 @@ iscsi_session_setup(struct iscsi_transport *iscsit, struct Scsi_Host *shost,
}
EXPORT_SYMBOL_GPL(iscsi_session_setup);
-/**
- * iscsi_session_teardown - destroy session, host, and cls_session
- * @cls_session: iscsi session
+/*
+ * iscsi_session_remove - Remove session from iSCSI class.
*/
-void iscsi_session_teardown(struct iscsi_cls_session *cls_session)
+void iscsi_session_remove(struct iscsi_cls_session *cls_session)
{
struct iscsi_session *session = cls_session->dd_data;
- struct module *owner = cls_session->transport->owner;
struct Scsi_Host *shost = session->host;
iscsi_remove_session(cls_session);
+ /*
+ * host removal only has to wait for its children to be removed from
+ * sysfs, and iscsi_tcp needs to do iscsi_host_remove before freeing
+ * the session, so drop the session count here.
+ */
+ iscsi_host_dec_session_cnt(shost);
+}
+EXPORT_SYMBOL_GPL(iscsi_session_remove);
+
+/**
+ * iscsi_session_free - Free iscsi session and its resources
+ * @cls_session: iscsi session
+ */
+void iscsi_session_free(struct iscsi_cls_session *cls_session)
+{
+ struct iscsi_session *session = cls_session->dd_data;
+ struct module *owner = cls_session->transport->owner;
iscsi_pool_free(&session->cmdpool);
kfree(session->password);
@@ -3052,10 +3067,19 @@ void iscsi_session_teardown(struct iscsi_cls_session *cls_session)
kfree(session->discovery_parent_type);
iscsi_free_session(cls_session);
-
- iscsi_host_dec_session_cnt(shost);
module_put(owner);
}
+EXPORT_SYMBOL_GPL(iscsi_session_free);
+
+/**
+ * iscsi_session_teardown - destroy session and cls_session
+ * @cls_session: iscsi session
+ */
+void iscsi_session_teardown(struct iscsi_cls_session *cls_session)
+{
+ iscsi_session_remove(cls_session);
+ iscsi_session_free(cls_session);
+}
EXPORT_SYMBOL_GPL(iscsi_session_teardown);
/**
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index 855817f..f79299f 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -7056,6 +7056,8 @@ lpfc_sli4_cgn_params_read(struct lpfc_hba *phba)
/* Find out if the FW has a new set of congestion parameters. */
len = sizeof(struct lpfc_cgn_param);
pdata = kzalloc(len, GFP_KERNEL);
+ if (!pdata)
+ return -ENOMEM;
ret = lpfc_read_object(phba, (char *)LPFC_PORT_CFG_NAME,
pdata, len);
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index df3b190..30bc723 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -21066,6 +21066,7 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
struct lpfc_mbx_wr_object *wr_object;
LPFC_MBOXQ_t *mbox;
int rc = 0, i = 0;
+ int mbox_status = 0;
uint32_t shdr_status, shdr_add_status, shdr_add_status_2;
uint32_t shdr_change_status = 0, shdr_csf = 0;
uint32_t mbox_tmo;
@@ -21111,11 +21112,15 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
wr_object->u.request.bde_count = i;
bf_set(lpfc_wr_object_write_length, &wr_object->u.request, written);
if (!phba->sli4_hba.intr_enable)
- rc = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
+ mbox_status = lpfc_sli_issue_mbox(phba, mbox, MBX_POLL);
else {
mbox_tmo = lpfc_mbox_tmo_val(phba, mbox);
- rc = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
+ mbox_status = lpfc_sli_issue_mbox_wait(phba, mbox, mbox_tmo);
}
+
+ /* The mbox status needs to be maintained to detect MBOX_TIMEOUT. */
+ rc = mbox_status;
+
/* The IOCTL status is embedded in the mailbox subheader. */
shdr_status = bf_get(lpfc_mbox_hdr_status,
&wr_object->header.cfg_shdr.response);
@@ -21130,10 +21135,6 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
&wr_object->u.response);
}
- if (!phba->sli4_hba.intr_enable)
- mempool_free(mbox, phba->mbox_mem_pool);
- else if (rc != MBX_TIMEOUT)
- mempool_free(mbox, phba->mbox_mem_pool);
if (shdr_status || shdr_add_status || shdr_add_status_2 || rc) {
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"3025 Write Object mailbox failed with "
@@ -21151,6 +21152,12 @@ lpfc_wr_object(struct lpfc_hba *phba, struct list_head *dmabuf_list,
lpfc_log_fw_write_cmpl(phba, shdr_status, shdr_add_status,
shdr_add_status_2, shdr_change_status,
shdr_csf);
+
+ if (!phba->sli4_hba.intr_enable)
+ mempool_free(mbox, phba->mbox_mem_pool);
+ else if (mbox_status != MBX_TIMEOUT)
+ mempool_free(mbox, phba->mbox_mem_pool);
+
return rc;
}
@@ -22159,20 +22166,20 @@ lpfc_get_io_buf_from_private_pool(struct lpfc_hba *phba,
static struct lpfc_io_buf *
lpfc_get_io_buf_from_expedite_pool(struct lpfc_hba *phba)
{
- struct lpfc_io_buf *lpfc_ncmd;
+ struct lpfc_io_buf *lpfc_ncmd = NULL, *iter;
struct lpfc_io_buf *lpfc_ncmd_next;
unsigned long iflag;
struct lpfc_epd_pool *epd_pool;
epd_pool = &phba->epd_pool;
- lpfc_ncmd = NULL;
spin_lock_irqsave(&epd_pool->lock, iflag);
if (epd_pool->count > 0) {
- list_for_each_entry_safe(lpfc_ncmd, lpfc_ncmd_next,
+ list_for_each_entry_safe(iter, lpfc_ncmd_next,
&epd_pool->list, list) {
- list_del(&lpfc_ncmd->list);
+ list_del(&iter->list);
epd_pool->count--;
+ lpfc_ncmd = iter;
break;
}
}
@@ -22369,10 +22376,6 @@ lpfc_read_object(struct lpfc_hba *phba, char *rdobject, uint32_t *datap,
struct lpfc_dmabuf *pcmd;
u32 rd_object_name[LPFC_MBX_OBJECT_NAME_LEN_DW] = {0};
- /* sanity check on queue memory */
- if (!datap)
- return -ENODEV;
-
mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!mbox)
return -ENOMEM;
diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
index 650210d..02d7ab1 100644
--- a/drivers/scsi/megaraid/megaraid_sas.h
+++ b/drivers/scsi/megaraid/megaraid_sas.h
@@ -1517,6 +1517,8 @@ struct megasas_ctrl_info {
#define MEGASAS_MAX_LD_IDS (MEGASAS_MAX_LD_CHANNELS * \
MEGASAS_MAX_DEV_PER_CHANNEL)
+#define MEGASAS_MAX_SUPPORTED_LD_IDS 240
+
#define MEGASAS_MAX_SECTORS (2*1024)
#define MEGASAS_MAX_SECTORS_IEEE (2*128)
#define MEGASAS_DBG_LVL 1
diff --git a/drivers/scsi/megaraid/megaraid_sas_fp.c b/drivers/scsi/megaraid/megaraid_sas_fp.c
index 83f69c3..ec10d35 100644
--- a/drivers/scsi/megaraid/megaraid_sas_fp.c
+++ b/drivers/scsi/megaraid/megaraid_sas_fp.c
@@ -358,7 +358,7 @@ u8 MR_ValidateMapInfo(struct megasas_instance *instance, u64 map_id)
ld = MR_TargetIdToLdGet(i, drv_map);
/* For non existing VDs, iterate to next VD*/
- if (ld >= (MAX_LOGICAL_DRIVES_EXT - 1))
+ if (ld >= MEGASAS_MAX_SUPPORTED_LD_IDS)
continue;
raid = MR_LdRaidGet(ld, drv_map);
diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
index 0568378..c254254 100644
--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
+++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
@@ -4737,7 +4737,7 @@ int megasas_task_abort_fusion(struct scsi_cmnd *scmd)
devhandle = megasas_get_tm_devhandle(scmd->device);
if (devhandle == (u16)ULONG_MAX) {
- ret = SUCCESS;
+ ret = FAILED;
sdev_printk(KERN_INFO, scmd->device,
"task abort issued for invalid devhandle\n");
mutex_unlock(&instance->reset_mutex);
@@ -4807,7 +4807,7 @@ int megasas_reset_target_fusion(struct scsi_cmnd *scmd)
devhandle = megasas_get_tm_devhandle(scmd->device);
if (devhandle == (u16)ULONG_MAX) {
- ret = SUCCESS;
+ ret = FAILED;
sdev_printk(KERN_INFO, scmd->device,
"target reset issued for invalid devhandle\n");
mutex_unlock(&instance->reset_mutex);
diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
index 766c3a5..5aa4ae0 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
@@ -2990,19 +2990,25 @@ static int
_base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
{
struct sysinfo s;
+ u64 coherent_dma_mask, dma_mask;
- if (ioc->is_mcpu_endpoint ||
- sizeof(dma_addr_t) == 4 || ioc->use_32bit_dma ||
- dma_get_required_mask(&pdev->dev) <= DMA_BIT_MASK(32))
+ if (ioc->is_mcpu_endpoint || sizeof(dma_addr_t) == 4) {
ioc->dma_mask = 32;
+ coherent_dma_mask = dma_mask = DMA_BIT_MASK(32);
/* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */
- else if (ioc->hba_mpi_version_belonged > MPI2_VERSION)
+ } else if (ioc->hba_mpi_version_belonged > MPI2_VERSION) {
ioc->dma_mask = 63;
- else
+ coherent_dma_mask = dma_mask = DMA_BIT_MASK(63);
+ } else {
ioc->dma_mask = 64;
+ coherent_dma_mask = dma_mask = DMA_BIT_MASK(64);
+ }
- if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(ioc->dma_mask)) ||
- dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(ioc->dma_mask)))
+ if (ioc->use_32bit_dma)
+ coherent_dma_mask = DMA_BIT_MASK(32);
+
+ if (dma_set_mask(&pdev->dev, dma_mask) ||
+ dma_set_coherent_mask(&pdev->dev, coherent_dma_mask))
return -ENODEV;
if (ioc->dma_mask > 32) {
@@ -5682,6 +5688,9 @@ _base_release_memory_pools(struct MPT3SAS_ADAPTER *ioc)
}
dma_pool_destroy(ioc->pcie_sgl_dma_pool);
}
+ kfree(ioc->pcie_sg_lookup);
+ ioc->pcie_sg_lookup = NULL;
+
if (ioc->config_page) {
dexitprintk(ioc,
ioc_info(ioc, "config_page(0x%p): free\n",
@@ -6447,11 +6456,6 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
else if (rc == -EAGAIN)
goto try_32bit_dma;
total_sz += sense_sz;
- ioc_info(ioc,
- "sense pool(0x%p)- dma(0x%llx): depth(%d),"
- "element_size(%d), pool_size(%d kB)\n",
- ioc->sense, (unsigned long long)ioc->sense_dma, ioc->scsiio_depth,
- SCSI_SENSE_BUFFERSIZE, sz / 1024);
/* reply pool, 4 byte align */
sz = ioc->reply_free_queue_depth * ioc->reply_sz;
rc = _base_allocate_reply_pool(ioc, sz);
diff --git a/drivers/scsi/mpt3sas/mpt3sas_transport.c b/drivers/scsi/mpt3sas/mpt3sas_transport.c
index e5ecd6a..e8a4750 100644
--- a/drivers/scsi/mpt3sas/mpt3sas_transport.c
+++ b/drivers/scsi/mpt3sas/mpt3sas_transport.c
@@ -785,7 +785,7 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
goto out_fail;
}
port = sas_port_alloc_num(sas_node->parent_dev);
- if ((sas_port_add(port))) {
+ if (!port || (sas_port_add(port))) {
ioc_err(ioc, "failure at %s:%d/%s()!\n",
__FILE__, __LINE__, __func__);
goto out_fail;
@@ -824,6 +824,12 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
mpt3sas_port->remote_identify.sas_address;
}
+ if (!rphy) {
+ ioc_err(ioc, "failure at %s:%d/%s()!\n",
+ __FILE__, __LINE__, __func__);
+ goto out_delete_port;
+ }
+
rphy->identify = mpt3sas_port->remote_identify;
if ((sas_rphy_add(rphy))) {
@@ -831,6 +837,7 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
__FILE__, __LINE__, __func__);
sas_rphy_free(rphy);
rphy = NULL;
+ goto out_delete_port;
}
if (mpt3sas_port->remote_identify.device_type == SAS_END_DEVICE) {
@@ -857,7 +864,10 @@ mpt3sas_transport_port_add(struct MPT3SAS_ADAPTER *ioc, u16 handle,
rphy_to_expander_device(rphy), hba_port->port_id);
return mpt3sas_port;
- out_fail:
+out_delete_port:
+ sas_port_delete(port);
+
+out_fail:
list_for_each_entry_safe(mpt3sas_phy, next, &mpt3sas_port->phy_list,
port_siblings)
list_del(&mpt3sas_phy->port_siblings);
diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
index 3650f16..c7c5c013 100644
--- a/drivers/scsi/qla2xxx/qla_bsg.c
+++ b/drivers/scsi/qla2xxx/qla_bsg.c
@@ -278,8 +278,8 @@ qla2x00_process_els(struct bsg_job *bsg_job)
const char *type;
int req_sg_cnt, rsp_sg_cnt;
int rval = (DID_ERROR << 16);
- uint16_t nextlid = 0;
uint32_t els_cmd = 0;
+ int qla_port_allocated = 0;
if (bsg_request->msgcode == FC_BSG_RPT_ELS) {
rport = fc_bsg_to_rport(bsg_job);
@@ -329,9 +329,9 @@ qla2x00_process_els(struct bsg_job *bsg_job)
/* make sure the rport is logged in,
* if not perform fabric login
*/
- if (qla2x00_fabric_login(vha, fcport, &nextlid)) {
+ if (atomic_read(&fcport->state) != FCS_ONLINE) {
ql_dbg(ql_dbg_user, vha, 0x7003,
- "Failed to login port %06X for ELS passthru.\n",
+ "Port %06X is not online for ELS passthru.\n",
fcport->d_id.b24);
rval = -EIO;
goto done;
@@ -348,6 +348,7 @@ qla2x00_process_els(struct bsg_job *bsg_job)
goto done;
}
+ qla_port_allocated = 1;
/* Initialize all required fields of fcport */
fcport->vha = vha;
fcport->d_id.b.al_pa =
@@ -432,7 +433,7 @@ qla2x00_process_els(struct bsg_job *bsg_job)
goto done_free_fcport;
done_free_fcport:
- if (bsg_request->msgcode != FC_BSG_RPT_ELS)
+ if (qla_port_allocated)
qla2x00_free_fcport(fcport);
done:
return rval;
diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index 307ffdfe..5b499b0 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -655,7 +655,7 @@ enum {
struct iocb_resource {
u8 res_type;
- u8 pad;
+ u8 exch_cnt;
u16 iocb_cnt;
};
@@ -3707,6 +3707,10 @@ struct qla_fw_resources {
u16 iocbs_limit;
u16 iocbs_qp_limit;
u16 iocbs_used;
+ u16 exch_total;
+ u16 exch_limit;
+ u16 exch_used;
+ u16 pad;
};
#define QLA_IOCB_PCT_LIMIT 95
diff --git a/drivers/scsi/qla2xxx/qla_dfs.c b/drivers/scsi/qla2xxx/qla_dfs.c
index 85bd0e4..8f6f56c 100644
--- a/drivers/scsi/qla2xxx/qla_dfs.c
+++ b/drivers/scsi/qla2xxx/qla_dfs.c
@@ -235,7 +235,7 @@ qla_dfs_fw_resource_cnt_show(struct seq_file *s, void *unused)
uint16_t mb[MAX_IOCB_MB_REG];
int rc;
struct qla_hw_data *ha = vha->hw;
- u16 iocbs_used, i;
+ u16 iocbs_used, i, exch_used;
rc = qla24xx_res_count_wait(vha, mb, SIZEOF_IOCB_MB_REG);
if (rc != QLA_SUCCESS) {
@@ -263,13 +263,19 @@ qla_dfs_fw_resource_cnt_show(struct seq_file *s, void *unused)
if (ql2xenforce_iocb_limit) {
/* lock is not required. It's an estimate. */
iocbs_used = ha->base_qpair->fwres.iocbs_used;
+ exch_used = ha->base_qpair->fwres.exch_used;
for (i = 0; i < ha->max_qpairs; i++) {
- if (ha->queue_pair_map[i])
+ if (ha->queue_pair_map[i]) {
iocbs_used += ha->queue_pair_map[i]->fwres.iocbs_used;
+ exch_used += ha->queue_pair_map[i]->fwres.exch_used;
+ }
}
seq_printf(s, "Driver: estimate iocb used [%d] high water limit [%d]\n",
iocbs_used, ha->base_qpair->fwres.iocbs_limit);
+
+ seq_printf(s, "estimate exchange used[%d] high water limit [%d] n",
+ exch_used, ha->base_qpair->fwres.exch_limit);
}
return 0;
diff --git a/drivers/scsi/qla2xxx/qla_edif.c b/drivers/scsi/qla2xxx/qla_edif.c
index 8e92374..f81cf85 100644
--- a/drivers/scsi/qla2xxx/qla_edif.c
+++ b/drivers/scsi/qla2xxx/qla_edif.c
@@ -2898,6 +2898,13 @@ qla28xx_start_scsi_edif(srb_t *sp)
tot_dsds = nseg;
req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
+
+ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
+ sp->iores.exch_cnt = 1;
+ sp->iores.iocb_cnt = req_cnt;
+ if (qla_get_fw_resources(sp->qpair, &sp->iores))
+ goto queuing_error;
+
if (req->cnt < (req_cnt + 2)) {
cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr :
rd_reg_dword(req->req_q_out);
@@ -3089,6 +3096,7 @@ qla28xx_start_scsi_edif(srb_t *sp)
mempool_free(sp->u.scmd.ct6_ctx, ha->ctx_mempool);
sp->u.scmd.ct6_ctx = NULL;
}
+ qla_put_fw_resources(sp->qpair, &sp->iores);
spin_unlock_irqrestore(lock, flags);
return QLA_FUNCTION_FAILED;
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 30798ab..ded027f 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -128,12 +128,14 @@ static void qla24xx_abort_iocb_timeout(void *data)
sp->cmd_sp)) {
qpair->req->outstanding_cmds[handle] = NULL;
cmdsp_found = 1;
+ qla_put_fw_resources(qpair, &sp->cmd_sp->iores);
}
/* removing the abort */
if (qpair->req->outstanding_cmds[handle] == sp) {
qpair->req->outstanding_cmds[handle] = NULL;
sp_found = 1;
+ qla_put_fw_resources(qpair, &sp->iores);
break;
}
}
@@ -2002,6 +2004,7 @@ qla2x00_tmf_iocb_timeout(void *data)
for (h = 1; h < sp->qpair->req->num_outstanding_cmds; h++) {
if (sp->qpair->req->outstanding_cmds[h] == sp) {
sp->qpair->req->outstanding_cmds[h] = NULL;
+ qla_put_fw_resources(sp->qpair, &sp->iores);
break;
}
}
@@ -2075,7 +2078,6 @@ qla2x00_async_tm_cmd(fc_port_t *fcport, uint32_t flags, uint32_t lun,
done_free_sp:
/* ref: INIT */
kref_put(&sp->cmd_kref, qla2x00_sp_release);
- fcport->flags &= ~FCF_ASYNC_SENT;
done:
return rval;
}
@@ -3945,6 +3947,12 @@ void qla_init_iocb_limit(scsi_qla_host_t *vha)
ha->base_qpair->fwres.iocbs_limit = limit;
ha->base_qpair->fwres.iocbs_qp_limit = limit / num_qps;
ha->base_qpair->fwres.iocbs_used = 0;
+
+ ha->base_qpair->fwres.exch_total = ha->orig_fw_xcb_count;
+ ha->base_qpair->fwres.exch_limit = (ha->orig_fw_xcb_count *
+ QLA_IOCB_PCT_LIMIT) / 100;
+ ha->base_qpair->fwres.exch_used = 0;
+
for (i = 0; i < ha->max_qpairs; i++) {
if (ha->queue_pair_map[i]) {
ha->queue_pair_map[i]->fwres.iocbs_total =
@@ -3953,6 +3961,10 @@ void qla_init_iocb_limit(scsi_qla_host_t *vha)
ha->queue_pair_map[i]->fwres.iocbs_qp_limit =
limit / num_qps;
ha->queue_pair_map[i]->fwres.iocbs_used = 0;
+ ha->queue_pair_map[i]->fwres.exch_total = ha->orig_fw_xcb_count;
+ ha->queue_pair_map[i]->fwres.exch_limit =
+ (ha->orig_fw_xcb_count * QLA_IOCB_PCT_LIMIT) / 100;
+ ha->queue_pair_map[i]->fwres.exch_used = 0;
}
}
}
diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h
index 5185dc5..b0ee307 100644
--- a/drivers/scsi/qla2xxx/qla_inline.h
+++ b/drivers/scsi/qla2xxx/qla_inline.h
@@ -380,24 +380,26 @@ qla2xxx_get_fc4_priority(struct scsi_qla_host *vha)
enum {
RESOURCE_NONE,
- RESOURCE_INI,
+ RESOURCE_IOCB = BIT_0,
+ RESOURCE_EXCH = BIT_1, /* exchange */
+ RESOURCE_FORCE = BIT_2,
};
static inline int
-qla_get_iocbs(struct qla_qpair *qp, struct iocb_resource *iores)
+qla_get_fw_resources(struct qla_qpair *qp, struct iocb_resource *iores)
{
u16 iocbs_used, i;
+ u16 exch_used;
struct qla_hw_data *ha = qp->vha->hw;
if (!ql2xenforce_iocb_limit) {
iores->res_type = RESOURCE_NONE;
return 0;
}
+ if (iores->res_type & RESOURCE_FORCE)
+ goto force;
- if ((iores->iocb_cnt + qp->fwres.iocbs_used) < qp->fwres.iocbs_qp_limit) {
- qp->fwres.iocbs_used += iores->iocb_cnt;
- return 0;
- } else {
+ if ((iores->iocb_cnt + qp->fwres.iocbs_used) >= qp->fwres.iocbs_qp_limit) {
/* no need to acquire qpair lock. It's just rough calculation */
iocbs_used = ha->base_qpair->fwres.iocbs_used;
for (i = 0; i < ha->max_qpairs; i++) {
@@ -405,30 +407,49 @@ qla_get_iocbs(struct qla_qpair *qp, struct iocb_resource *iores)
iocbs_used += ha->queue_pair_map[i]->fwres.iocbs_used;
}
- if ((iores->iocb_cnt + iocbs_used) < qp->fwres.iocbs_limit) {
- qp->fwres.iocbs_used += iores->iocb_cnt;
- return 0;
- } else {
+ if ((iores->iocb_cnt + iocbs_used) >= qp->fwres.iocbs_limit) {
iores->res_type = RESOURCE_NONE;
return -ENOSPC;
}
}
+
+ if (iores->res_type & RESOURCE_EXCH) {
+ exch_used = ha->base_qpair->fwres.exch_used;
+ for (i = 0; i < ha->max_qpairs; i++) {
+ if (ha->queue_pair_map[i])
+ exch_used += ha->queue_pair_map[i]->fwres.exch_used;
+ }
+
+ if ((exch_used + iores->exch_cnt) >= qp->fwres.exch_limit) {
+ iores->res_type = RESOURCE_NONE;
+ return -ENOSPC;
+ }
+ }
+force:
+ qp->fwres.iocbs_used += iores->iocb_cnt;
+ qp->fwres.exch_used += iores->exch_cnt;
+ return 0;
}
static inline void
-qla_put_iocbs(struct qla_qpair *qp, struct iocb_resource *iores)
+qla_put_fw_resources(struct qla_qpair *qp, struct iocb_resource *iores)
{
- switch (iores->res_type) {
- case RESOURCE_NONE:
- break;
- default:
+ if (iores->res_type & RESOURCE_IOCB) {
if (qp->fwres.iocbs_used >= iores->iocb_cnt) {
qp->fwres.iocbs_used -= iores->iocb_cnt;
} else {
- // should not happen
+ /* should not happen */
qp->fwres.iocbs_used = 0;
}
- break;
+ }
+
+ if (iores->res_type & RESOURCE_EXCH) {
+ if (qp->fwres.exch_used >= iores->exch_cnt) {
+ qp->fwres.exch_used -= iores->exch_cnt;
+ } else {
+ /* should not happen */
+ qp->fwres.exch_used = 0;
+ }
}
iores->res_type = RESOURCE_NONE;
}
diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
index 42ce4e1..4f48f09 100644
--- a/drivers/scsi/qla2xxx/qla_iocb.c
+++ b/drivers/scsi/qla2xxx/qla_iocb.c
@@ -1589,9 +1589,10 @@ qla24xx_start_scsi(srb_t *sp)
tot_dsds = nseg;
req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
- sp->iores.res_type = RESOURCE_INI;
+ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
+ sp->iores.exch_cnt = 1;
sp->iores.iocb_cnt = req_cnt;
- if (qla_get_iocbs(sp->qpair, &sp->iores))
+ if (qla_get_fw_resources(sp->qpair, &sp->iores))
goto queuing_error;
if (req->cnt < (req_cnt + 2)) {
@@ -1678,7 +1679,7 @@ qla24xx_start_scsi(srb_t *sp)
if (tot_dsds)
scsi_dma_unmap(cmd);
- qla_put_iocbs(sp->qpair, &sp->iores);
+ qla_put_fw_resources(sp->qpair, &sp->iores);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
return QLA_FUNCTION_FAILED;
@@ -1793,9 +1794,10 @@ qla24xx_dif_start_scsi(srb_t *sp)
tot_prot_dsds = nseg;
tot_dsds += nseg;
- sp->iores.res_type = RESOURCE_INI;
+ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
+ sp->iores.exch_cnt = 1;
sp->iores.iocb_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
- if (qla_get_iocbs(sp->qpair, &sp->iores))
+ if (qla_get_fw_resources(sp->qpair, &sp->iores))
goto queuing_error;
if (req->cnt < (req_cnt + 2)) {
@@ -1883,7 +1885,7 @@ qla24xx_dif_start_scsi(srb_t *sp)
}
/* Cleanup will be performed by the caller (queuecommand) */
- qla_put_iocbs(sp->qpair, &sp->iores);
+ qla_put_fw_resources(sp->qpair, &sp->iores);
spin_unlock_irqrestore(&ha->hardware_lock, flags);
return QLA_FUNCTION_FAILED;
@@ -1952,9 +1954,10 @@ qla2xxx_start_scsi_mq(srb_t *sp)
tot_dsds = nseg;
req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
- sp->iores.res_type = RESOURCE_INI;
+ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
+ sp->iores.exch_cnt = 1;
sp->iores.iocb_cnt = req_cnt;
- if (qla_get_iocbs(sp->qpair, &sp->iores))
+ if (qla_get_fw_resources(sp->qpair, &sp->iores))
goto queuing_error;
if (req->cnt < (req_cnt + 2)) {
@@ -2041,7 +2044,7 @@ qla2xxx_start_scsi_mq(srb_t *sp)
if (tot_dsds)
scsi_dma_unmap(cmd);
- qla_put_iocbs(sp->qpair, &sp->iores);
+ qla_put_fw_resources(sp->qpair, &sp->iores);
spin_unlock_irqrestore(&qpair->qp_lock, flags);
return QLA_FUNCTION_FAILED;
@@ -2171,9 +2174,10 @@ qla2xxx_dif_start_scsi_mq(srb_t *sp)
tot_prot_dsds = nseg;
tot_dsds += nseg;
- sp->iores.res_type = RESOURCE_INI;
+ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
+ sp->iores.exch_cnt = 1;
sp->iores.iocb_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
- if (qla_get_iocbs(sp->qpair, &sp->iores))
+ if (qla_get_fw_resources(sp->qpair, &sp->iores))
goto queuing_error;
if (req->cnt < (req_cnt + 2)) {
@@ -2260,7 +2264,7 @@ qla2xxx_dif_start_scsi_mq(srb_t *sp)
}
/* Cleanup will be performed by the caller (queuecommand) */
- qla_put_iocbs(sp->qpair, &sp->iores);
+ qla_put_fw_resources(sp->qpair, &sp->iores);
spin_unlock_irqrestore(&qpair->qp_lock, flags);
return QLA_FUNCTION_FAILED;
@@ -3813,6 +3817,65 @@ qla24xx_prlo_iocb(srb_t *sp, struct logio_entry_24xx *logio)
logio->vp_index = sp->fcport->vha->vp_idx;
}
+int qla_get_iocbs_resource(struct srb *sp)
+{
+ bool get_exch;
+ bool push_it_through = false;
+
+ if (!ql2xenforce_iocb_limit) {
+ sp->iores.res_type = RESOURCE_NONE;
+ return 0;
+ }
+ sp->iores.res_type = RESOURCE_NONE;
+
+ switch (sp->type) {
+ case SRB_TM_CMD:
+ case SRB_PRLI_CMD:
+ case SRB_ADISC_CMD:
+ push_it_through = true;
+ fallthrough;
+ case SRB_LOGIN_CMD:
+ case SRB_ELS_CMD_RPT:
+ case SRB_ELS_CMD_HST:
+ case SRB_ELS_CMD_HST_NOLOGIN:
+ case SRB_CT_CMD:
+ case SRB_NVME_LS:
+ case SRB_ELS_DCMD:
+ get_exch = true;
+ break;
+
+ case SRB_FXIOCB_DCMD:
+ case SRB_FXIOCB_BCMD:
+ sp->iores.res_type = RESOURCE_NONE;
+ return 0;
+
+ case SRB_SA_UPDATE:
+ case SRB_SA_REPLACE:
+ case SRB_MB_IOCB:
+ case SRB_ABT_CMD:
+ case SRB_NACK_PLOGI:
+ case SRB_NACK_PRLI:
+ case SRB_NACK_LOGO:
+ case SRB_LOGOUT_CMD:
+ case SRB_CTRL_VP:
+ push_it_through = true;
+ fallthrough;
+ default:
+ get_exch = false;
+ }
+
+ sp->iores.res_type |= RESOURCE_IOCB;
+ sp->iores.iocb_cnt = 1;
+ if (get_exch) {
+ sp->iores.res_type |= RESOURCE_EXCH;
+ sp->iores.exch_cnt = 1;
+ }
+ if (push_it_through)
+ sp->iores.res_type |= RESOURCE_FORCE;
+
+ return qla_get_fw_resources(sp->qpair, &sp->iores);
+}
+
int
qla2x00_start_sp(srb_t *sp)
{
@@ -3827,6 +3890,12 @@ qla2x00_start_sp(srb_t *sp)
return -EIO;
spin_lock_irqsave(qp->qp_lock_ptr, flags);
+ rval = qla_get_iocbs_resource(sp);
+ if (rval) {
+ spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
+ return -EAGAIN;
+ }
+
pkt = __qla2x00_alloc_iocbs(sp->qpair, sp);
if (!pkt) {
rval = EAGAIN;
@@ -3927,6 +3996,8 @@ qla2x00_start_sp(srb_t *sp)
wmb();
qla2x00_start_iocbs(vha, qp->req);
done:
+ if (rval)
+ qla_put_fw_resources(sp->qpair, &sp->iores);
spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
return rval;
}
diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
index 59f5918..6659599 100644
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -1897,6 +1897,8 @@ qla2x00_get_sp_from_handle(scsi_qla_host_t *vha, const char *func,
}
req->outstanding_cmds[index] = NULL;
+
+ qla_put_fw_resources(sp->qpair, &sp->iores);
return sp;
}
@@ -3184,7 +3186,7 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
}
return;
}
- qla_put_iocbs(sp->qpair, &sp->iores);
+ qla_put_fw_resources(sp->qpair, &sp->iores);
if (sp->cmd_type != TYPE_SRB) {
req->outstanding_cmds[handle] = NULL;
@@ -3349,8 +3351,6 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
"Dropped frame(s) detected (0x%x of 0x%x bytes).\n",
resid, scsi_bufflen(cp));
- vha->interface_err_cnt++;
-
res = DID_ERROR << 16 | lscsi_status;
goto check_scsi_status;
}
@@ -3605,7 +3605,6 @@ qla2x00_error_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, sts_entry_t *pkt)
default:
sp = qla2x00_get_sp_from_handle(vha, func, req, pkt);
if (sp) {
- qla_put_iocbs(sp->qpair, &sp->iores);
sp->done(sp, res);
return 0;
}
diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
index 3e167dc..98edab6 100644
--- a/drivers/scsi/qla2xxx/qla_nvme.c
+++ b/drivers/scsi/qla2xxx/qla_nvme.c
@@ -165,18 +165,6 @@ static void qla_nvme_release_fcp_cmd_kref(struct kref *kref)
qla2xxx_rel_qpair_sp(sp->qpair, sp);
}
-static void qla_nvme_ls_unmap(struct srb *sp, struct nvmefc_ls_req *fd)
-{
- if (sp->flags & SRB_DMA_VALID) {
- struct srb_iocb *nvme = &sp->u.iocb_cmd;
- struct qla_hw_data *ha = sp->fcport->vha->hw;
-
- dma_unmap_single(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
- fd->rqstlen, DMA_TO_DEVICE);
- sp->flags &= ~SRB_DMA_VALID;
- }
-}
-
static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
{
struct srb *sp = container_of(kref, struct srb, cmd_kref);
@@ -194,7 +182,6 @@ static void qla_nvme_release_ls_cmd_kref(struct kref *kref)
fd = priv->fd;
- qla_nvme_ls_unmap(sp, fd);
fd->done(fd, priv->comp_status);
out:
qla2x00_rel_sp(sp);
@@ -360,13 +347,10 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
nvme->u.nvme.rsp_len = fd->rsplen;
nvme->u.nvme.rsp_dma = fd->rspdma;
nvme->u.nvme.timeout_sec = fd->timeout;
- nvme->u.nvme.cmd_dma = dma_map_single(&ha->pdev->dev, fd->rqstaddr,
- fd->rqstlen, DMA_TO_DEVICE);
+ nvme->u.nvme.cmd_dma = fd->rqstdma;
dma_sync_single_for_device(&ha->pdev->dev, nvme->u.nvme.cmd_dma,
fd->rqstlen, DMA_TO_DEVICE);
- sp->flags |= SRB_DMA_VALID;
-
rval = qla2x00_start_sp(sp);
if (rval != QLA_SUCCESS) {
ql_log(ql_log_warn, vha, 0x700e,
@@ -374,7 +358,6 @@ static int qla_nvme_ls_req(struct nvme_fc_local_port *lport,
wake_up(&sp->nvme_ls_waitq);
sp->priv = NULL;
priv->sp = NULL;
- qla_nvme_ls_unmap(sp, fd);
qla2x00_rel_sp(sp);
return rval;
}
@@ -438,13 +421,24 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
goto queuing_error;
}
req_cnt = qla24xx_calc_iocbs(vha, tot_dsds);
+
+ sp->iores.res_type = RESOURCE_IOCB | RESOURCE_EXCH;
+ sp->iores.exch_cnt = 1;
+ sp->iores.iocb_cnt = req_cnt;
+ if (qla_get_fw_resources(sp->qpair, &sp->iores)) {
+ rval = -EBUSY;
+ goto queuing_error;
+ }
+
if (req->cnt < (req_cnt + 2)) {
if (IS_SHADOW_REG_CAPABLE(ha)) {
cnt = *req->out_ptr;
} else {
cnt = rd_reg_dword_relaxed(req->req_q_out);
- if (qla2x00_check_reg16_for_disconnect(vha, cnt))
+ if (qla2x00_check_reg16_for_disconnect(vha, cnt)) {
+ rval = -EBUSY;
goto queuing_error;
+ }
}
if (req->ring_index < cnt)
@@ -589,6 +583,8 @@ static inline int qla2x00_start_nvme_mq(srb_t *sp)
wrt_reg_dword(req->req_q_in, req->ring_index);
queuing_error:
+ if (rval)
+ qla_put_fw_resources(sp->qpair, &sp->iores);
spin_unlock_irqrestore(&qpair->qp_lock, flags);
return rval;
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index 05d8272..2efe313 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -1845,6 +1845,17 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res)
for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) {
sp = req->outstanding_cmds[cnt];
if (sp) {
+ /*
+ * perform lockless completion during driver unload
+ */
+ if (qla2x00_chip_is_down(vha)) {
+ req->outstanding_cmds[cnt] = NULL;
+ spin_unlock_irqrestore(qp->qp_lock_ptr, flags);
+ sp->done(sp, res);
+ spin_lock_irqsave(qp->qp_lock_ptr, flags);
+ continue;
+ }
+
switch (sp->cmd_type) {
case TYPE_SRB:
qla2x00_abort_srb(qp, sp, res, &flags);
@@ -3562,6 +3573,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
probe_failed:
qla_enode_stop(base_vha);
qla_edb_stop(base_vha);
+ vfree(base_vha->scan.l);
if (base_vha->gnl.l) {
dma_free_coherent(&ha->pdev->dev, base_vha->gnl.size,
base_vha->gnl.l, base_vha->gnl.ldma);
@@ -7069,9 +7081,12 @@ qla2x00_do_dpc(void *data)
}
}
loop_resync_check:
- if (test_and_clear_bit(LOOP_RESYNC_NEEDED,
+ if (!qla2x00_reset_active(base_vha) &&
+ test_and_clear_bit(LOOP_RESYNC_NEEDED,
&base_vha->dpc_flags)) {
-
+ /*
+ * Allow abort_isp to complete before moving on to scanning.
+ */
ql_dbg(ql_dbg_dpc, base_vha, 0x400f,
"Loop resync scheduled.\n");
@@ -7422,7 +7437,7 @@ qla2x00_timer(struct timer_list *t)
/* if the loop has been down for 4 minutes, reinit adapter */
if (atomic_dec_and_test(&vha->loop_down_timer) != 0) {
- if (!(vha->device_flags & DFLG_NO_CABLE)) {
+ if (!(vha->device_flags & DFLG_NO_CABLE) && !vha->vp_idx) {
ql_log(ql_log_warn, vha, 0x6009,
"Loop down - aborting ISP.\n");
diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
index c708045..bd110a9 100644
--- a/drivers/scsi/scsi_devinfo.c
+++ b/drivers/scsi/scsi_devinfo.c
@@ -233,6 +233,7 @@ static struct {
{"SGI", "RAID5", "*", BLIST_SPARSELUN},
{"SGI", "TP9100", "*", BLIST_REPORTLUN2},
{"SGI", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
+ {"SKhynix", "H28U74301AMR", NULL, BLIST_SKIP_VPD_PAGES},
{"IBM", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
{"SUN", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
{"DELL", "Universal Xport", "*", BLIST_NO_ULD_ATTACH},
diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 9466474f..86c10ed 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -1206,8 +1206,7 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
* that no LUN is present, so don't add sdev in these cases.
* Two specific examples are:
* 1) NetApp targets: return PQ=1, PDT=0x1f
- * 2) IBM/2145 targets: return PQ=1, PDT=0
- * 3) USB UFI: returns PDT=0x1f, with the PQ bits being "reserved"
+ * 2) USB UFI: returns PDT=0x1f, with the PQ bits being "reserved"
* in the UFI 1.0 spec (we cannot rely on reserved bits).
*
* References:
@@ -1221,8 +1220,8 @@ static int scsi_probe_and_add_lun(struct scsi_target *starget,
* PDT=00h Direct-access device (floppy)
* PDT=1Fh none (no FDD connected to the requested logical unit)
*/
- if (((result[0] >> 5) == 1 ||
- (starget->pdt_1f_for_no_lun && (result[0] & 0x1f) == 0x1f)) &&
+ if (((result[0] >> 5) == 1 || starget->pdt_1f_for_no_lun) &&
+ (result[0] & 0x1f) == 0x1f &&
!scsi_is_wlun(lun)) {
SCSI_LOG_SCAN_BUS(3, sdev_printk(KERN_INFO, sdev,
"scsi scan: peripheral device type"
diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
index cc39cbe..4d23e5a 100644
--- a/drivers/scsi/scsi_transport_iscsi.c
+++ b/drivers/scsi/scsi_transport_iscsi.c
@@ -1679,6 +1679,13 @@ static const char *iscsi_session_state_name(int state)
return name;
}
+static char *iscsi_session_target_state_name[] = {
+ [ISCSI_SESSION_TARGET_UNBOUND] = "UNBOUND",
+ [ISCSI_SESSION_TARGET_ALLOCATED] = "ALLOCATED",
+ [ISCSI_SESSION_TARGET_SCANNED] = "SCANNED",
+ [ISCSI_SESSION_TARGET_UNBINDING] = "UNBINDING",
+};
+
int iscsi_session_chkready(struct iscsi_cls_session *session)
{
int err;
@@ -1807,9 +1814,13 @@ static int iscsi_user_scan_session(struct device *dev, void *data)
if ((scan_data->channel == SCAN_WILD_CARD ||
scan_data->channel == 0) &&
(scan_data->id == SCAN_WILD_CARD ||
- scan_data->id == id))
+ scan_data->id == id)) {
scsi_scan_target(&session->dev, 0, id,
scan_data->lun, scan_data->rescan);
+ spin_lock_irqsave(&session->lock, flags);
+ session->target_state = ISCSI_SESSION_TARGET_SCANNED;
+ spin_unlock_irqrestore(&session->lock, flags);
+ }
}
user_scan_exit:
@@ -1998,31 +2009,41 @@ static void __iscsi_unbind_session(struct work_struct *work)
struct iscsi_cls_host *ihost = shost->shost_data;
unsigned long flags;
unsigned int target_id;
+ bool remove_target = true;
ISCSI_DBG_TRANS_SESSION(session, "Unbinding session\n");
/* Prevent new scans and make sure scanning is not in progress */
mutex_lock(&ihost->mutex);
spin_lock_irqsave(&session->lock, flags);
- if (session->target_id == ISCSI_MAX_TARGET) {
+ if (session->target_state == ISCSI_SESSION_TARGET_ALLOCATED) {
+ remove_target = false;
+ } else if (session->target_state != ISCSI_SESSION_TARGET_SCANNED) {
spin_unlock_irqrestore(&session->lock, flags);
mutex_unlock(&ihost->mutex);
- goto unbind_session_exit;
+ ISCSI_DBG_TRANS_SESSION(session,
+ "Skipping target unbinding: Session is unbound/unbinding.\n");
+ return;
}
+ session->target_state = ISCSI_SESSION_TARGET_UNBINDING;
target_id = session->target_id;
session->target_id = ISCSI_MAX_TARGET;
spin_unlock_irqrestore(&session->lock, flags);
mutex_unlock(&ihost->mutex);
- scsi_remove_target(&session->dev);
+ if (remove_target)
+ scsi_remove_target(&session->dev);
if (session->ida_used)
ida_simple_remove(&iscsi_sess_ida, target_id);
-unbind_session_exit:
iscsi_session_event(session, ISCSI_KEVENT_UNBIND_SESSION);
ISCSI_DBG_TRANS_SESSION(session, "Completed target removal\n");
+
+ spin_lock_irqsave(&session->lock, flags);
+ session->target_state = ISCSI_SESSION_TARGET_UNBOUND;
+ spin_unlock_irqrestore(&session->lock, flags);
}
static void __iscsi_destroy_session(struct work_struct *work)
@@ -2091,6 +2112,9 @@ int iscsi_add_session(struct iscsi_cls_session *session, unsigned int target_id)
session->ida_used = true;
} else
session->target_id = target_id;
+ spin_lock_irqsave(&session->lock, flags);
+ session->target_state = ISCSI_SESSION_TARGET_ALLOCATED;
+ spin_unlock_irqrestore(&session->lock, flags);
dev_set_name(&session->dev, "session%u", session->sid);
err = device_add(&session->dev);
@@ -4392,6 +4416,19 @@ iscsi_session_attr(discovery_parent_idx, ISCSI_PARAM_DISCOVERY_PARENT_IDX, 0);
iscsi_session_attr(discovery_parent_type, ISCSI_PARAM_DISCOVERY_PARENT_TYPE, 0);
static ssize_t
+show_priv_session_target_state(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct iscsi_cls_session *session = iscsi_dev_to_session(dev->parent);
+
+ return sysfs_emit(buf, "%s\n",
+ iscsi_session_target_state_name[session->target_state]);
+}
+
+static ISCSI_CLASS_ATTR(priv_sess, target_state, S_IRUGO,
+ show_priv_session_target_state, NULL);
+
+static ssize_t
show_priv_session_state(struct device *dev, struct device_attribute *attr,
char *buf)
{
@@ -4493,6 +4530,7 @@ static struct attribute *iscsi_session_attrs[] = {
&dev_attr_sess_boot_target.attr,
&dev_attr_priv_sess_recovery_tmo.attr,
&dev_attr_priv_sess_state.attr,
+ &dev_attr_priv_sess_target_state.attr,
&dev_attr_priv_sess_creator.attr,
&dev_attr_sess_chap_out_idx.attr,
&dev_attr_sess_chap_in_idx.attr,
@@ -4606,6 +4644,8 @@ static umode_t iscsi_session_attr_is_visible(struct kobject *kobj,
return S_IRUGO | S_IWUSR;
else if (attr == &dev_attr_priv_sess_state.attr)
return S_IRUGO;
+ else if (attr == &dev_attr_priv_sess_target_state.attr)
+ return S_IRUGO;
else if (attr == &dev_attr_priv_sess_creator.attr)
return S_IRUGO;
else if (attr == &dev_attr_priv_sess_target_id.attr)
diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
index 0a1734f..1707d6d 100644
--- a/drivers/scsi/ses.c
+++ b/drivers/scsi/ses.c
@@ -433,8 +433,8 @@ int ses_match_host(struct enclosure_device *edev, void *data)
}
#endif /* 0 */
-static void ses_process_descriptor(struct enclosure_component *ecomp,
- unsigned char *desc)
+static int ses_process_descriptor(struct enclosure_component *ecomp,
+ unsigned char *desc, int max_desc_len)
{
int eip = desc[0] & 0x10;
int invalid = desc[0] & 0x80;
@@ -445,22 +445,32 @@ static void ses_process_descriptor(struct enclosure_component *ecomp,
unsigned char *d;
if (invalid)
- return;
+ return 0;
switch (proto) {
case SCSI_PROTOCOL_FCP:
if (eip) {
+ if (max_desc_len <= 7)
+ return 1;
d = desc + 4;
slot = d[3];
}
break;
case SCSI_PROTOCOL_SAS:
+
if (eip) {
+ if (max_desc_len <= 27)
+ return 1;
d = desc + 4;
slot = d[3];
d = desc + 8;
- } else
+ } else {
+ if (max_desc_len <= 23)
+ return 1;
d = desc + 4;
+ }
+
+
/* only take the phy0 addr */
addr = (u64)d[12] << 56 |
(u64)d[13] << 48 |
@@ -477,6 +487,8 @@ static void ses_process_descriptor(struct enclosure_component *ecomp,
}
ecomp->slot = slot;
scomp->addr = addr;
+
+ return 0;
}
struct efd {
@@ -549,7 +561,7 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
/* skip past overall descriptor */
desc_ptr += len + 4;
}
- if (ses_dev->page10)
+ if (ses_dev->page10 && ses_dev->page10_len > 9)
addl_desc_ptr = ses_dev->page10 + 8;
type_ptr = ses_dev->page1_types;
components = 0;
@@ -557,17 +569,22 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
for (j = 0; j < type_ptr[1]; j++) {
char *name = NULL;
struct enclosure_component *ecomp;
+ int max_desc_len;
if (desc_ptr) {
- if (desc_ptr >= buf + page7_len) {
+ if (desc_ptr + 3 >= buf + page7_len) {
desc_ptr = NULL;
} else {
len = (desc_ptr[2] << 8) + desc_ptr[3];
desc_ptr += 4;
- /* Add trailing zero - pushes into
- * reserved space */
- desc_ptr[len] = '\0';
- name = desc_ptr;
+ if (desc_ptr + len > buf + page7_len)
+ desc_ptr = NULL;
+ else {
+ /* Add trailing zero - pushes into
+ * reserved space */
+ desc_ptr[len] = '\0';
+ name = desc_ptr;
+ }
}
}
if (type_ptr[0] == ENCLOSURE_COMPONENT_DEVICE ||
@@ -583,10 +600,14 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
ecomp = &edev->component[components++];
if (!IS_ERR(ecomp)) {
- if (addl_desc_ptr)
- ses_process_descriptor(
- ecomp,
- addl_desc_ptr);
+ if (addl_desc_ptr) {
+ max_desc_len = ses_dev->page10_len -
+ (addl_desc_ptr - ses_dev->page10);
+ if (ses_process_descriptor(ecomp,
+ addl_desc_ptr,
+ max_desc_len))
+ addl_desc_ptr = NULL;
+ }
if (create)
enclosure_component_register(
ecomp);
@@ -603,9 +624,11 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
/* these elements are optional */
type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_TARGET_PORT ||
type_ptr[0] == ENCLOSURE_COMPONENT_SCSI_INITIATOR_PORT ||
- type_ptr[0] == ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS))
+ type_ptr[0] == ENCLOSURE_COMPONENT_CONTROLLER_ELECTRONICS)) {
addl_desc_ptr += addl_desc_ptr[1] + 2;
-
+ if (addl_desc_ptr + 1 >= ses_dev->page10 + ses_dev->page10_len)
+ addl_desc_ptr = NULL;
+ }
}
}
kfree(buf);
@@ -704,6 +727,12 @@ static int ses_intf_add(struct device *cdev,
type_ptr[0] == ENCLOSURE_COMPONENT_ARRAY_DEVICE)
components += type_ptr[1];
}
+
+ if (components == 0) {
+ sdev_printk(KERN_WARNING, sdev, "enclosure has no enumerated components\n");
+ goto err_free;
+ }
+
ses_dev->page1 = buf;
ses_dev->page1_len = len;
buf = NULL;
@@ -827,7 +856,8 @@ static void ses_intf_remove_enclosure(struct scsi_device *sdev)
kfree(ses_dev->page2);
kfree(ses_dev);
- kfree(edev->component[0].scratch);
+ if (edev->components)
+ kfree(edev->component[0].scratch);
put_device(&edev->edev);
enclosure_unregister(edev);
diff --git a/drivers/scsi/snic/snic_debugfs.c b/drivers/scsi/snic/snic_debugfs.c
index 5e0faeb..76baa4f 100644
--- a/drivers/scsi/snic/snic_debugfs.c
+++ b/drivers/scsi/snic/snic_debugfs.c
@@ -451,6 +451,6 @@ void snic_trc_debugfs_init(void)
void
snic_trc_debugfs_term(void)
{
- debugfs_remove(debugfs_lookup(TRC_FILE, snic_glob->trc_root));
- debugfs_remove(debugfs_lookup(TRC_ENABLE_FILE, snic_glob->trc_root));
+ debugfs_lookup_and_remove(TRC_FILE, snic_glob->trc_root);
+ debugfs_lookup_and_remove(TRC_ENABLE_FILE, snic_glob->trc_root);
}
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 6110dfd..83a3d9f 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1051,6 +1051,22 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
}
/*
+ * Check for "Operating parameters have changed"
+ * due to Hyper-V changing the VHD/VHDX BlockSize
+ * when adding/removing a differencing disk. This
+ * causes discard_granularity to change, so do a
+ * rescan to pick up the new granularity. We don't
+ * want scsi_report_sense() to output a message
+ * that a sysadmin wouldn't know what to do with.
+ */
+ if ((asc == 0x3f) && (ascq != 0x03) &&
+ (ascq != 0x0e)) {
+ process_err_fn = storvsc_device_scan;
+ set_host_byte(scmnd, DID_REQUEUE);
+ goto do_work;
+ }
+
+ /*
* Otherwise, let upper layer deal with the
* error when sense message is present
*/
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 0b06223..bc2b167 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -1185,12 +1185,14 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
* clock scaling is in progress
*/
ufshcd_scsi_block_requests(hba);
+ mutex_lock(&hba->wb_mutex);
down_write(&hba->clk_scaling_lock);
if (!hba->clk_scaling.is_allowed ||
ufshcd_wait_for_doorbell_clr(hba, DOORBELL_CLR_TOUT_US)) {
ret = -EBUSY;
up_write(&hba->clk_scaling_lock);
+ mutex_unlock(&hba->wb_mutex);
ufshcd_scsi_unblock_requests(hba);
goto out;
}
@@ -1202,12 +1204,15 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba)
return ret;
}
-static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)
+static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, int err, bool scale_up)
{
- if (writelock)
- up_write(&hba->clk_scaling_lock);
- else
- up_read(&hba->clk_scaling_lock);
+ up_write(&hba->clk_scaling_lock);
+
+ /* Enable Write Booster if we have scaled up else disable it */
+ ufshcd_wb_toggle(hba, scale_up);
+
+ mutex_unlock(&hba->wb_mutex);
+
ufshcd_scsi_unblock_requests(hba);
ufshcd_release(hba);
}
@@ -1224,7 +1229,6 @@ static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, bool writelock)
static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
{
int ret = 0;
- bool is_writelock = true;
ret = ufshcd_clock_scaling_prepare(hba);
if (ret)
@@ -1253,13 +1257,8 @@ static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
}
}
- /* Enable Write Booster if we have scaled up else disable it */
- downgrade_write(&hba->clk_scaling_lock);
- is_writelock = false;
- ufshcd_wb_toggle(hba, scale_up);
-
out_unprepare:
- ufshcd_clock_scaling_unprepare(hba, is_writelock);
+ ufshcd_clock_scaling_unprepare(hba, ret, scale_up);
return ret;
}
@@ -5919,9 +5918,11 @@ static void ufshcd_force_error_recovery(struct ufs_hba *hba)
static void ufshcd_clk_scaling_allow(struct ufs_hba *hba, bool allow)
{
+ mutex_lock(&hba->wb_mutex);
down_write(&hba->clk_scaling_lock);
hba->clk_scaling.is_allowed = allow;
up_write(&hba->clk_scaling_lock);
+ mutex_unlock(&hba->wb_mutex);
}
static void ufshcd_clk_scaling_suspend(struct ufs_hba *hba, bool suspend)
@@ -9480,6 +9481,7 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
/* Initialize mutex for exception event control */
mutex_init(&hba->ee_ctrl_mutex);
+ mutex_init(&hba->wb_mutex);
init_rwsem(&hba->clk_scaling_lock);
ufshcd_init_clk_gating(hba);
@@ -9732,5 +9734,6 @@ module_exit(ufshcd_core_exit);
MODULE_AUTHOR("Santosh Yaragnavi <santosh.sy@samsung.com>");
MODULE_AUTHOR("Vinayak Holikatti <h.vinayak@samsung.com>");
MODULE_DESCRIPTION("Generic UFS host controller driver Core");
+MODULE_SOFTDEP("pre: governor_simpleondemand");
MODULE_LICENSE("GPL");
MODULE_VERSION(UFSHCD_DRIVER_VERSION);
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index d470a52f..c8513cc 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -763,6 +763,7 @@ struct ufs_hba_monitor {
* @urgent_bkops_lvl: keeps track of urgent bkops level for device
* @is_urgent_bkops_lvl_checked: keeps track if the urgent bkops level for
* device is known or not.
+ * @wb_mutex: used to serialize devfreq and sysfs write booster toggling
* @scsi_block_reqs_cnt: reference counting for scsi block requests
* @crypto_capabilities: Content of crypto capabilities register (0x100)
* @crypto_cap_array: Array of crypto capabilities
@@ -892,6 +893,7 @@ struct ufs_hba {
enum bkops_status urgent_bkops_lvl;
bool is_urgent_bkops_lvl_checked;
+ struct mutex wb_mutex;
struct rw_semaphore clk_scaling_lock;
unsigned char desc_size[QUERY_DESC_IDN_MAX];
atomic_t scsi_block_reqs_cnt;
diff --git a/drivers/sh/intc/chip.c b/drivers/sh/intc/chip.c
index 358df75..828d81e 100644
--- a/drivers/sh/intc/chip.c
+++ b/drivers/sh/intc/chip.c
@@ -72,7 +72,7 @@ static int intc_set_affinity(struct irq_data *data,
if (!cpumask_intersects(cpumask, cpu_online_mask))
return -1;
- cpumask_copy(irq_data_get_affinity_mask(data), cpumask);
+ irq_data_update_affinity(data, cpumask);
return IRQ_SET_MASK_OK_NOCOPY;
}
diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c
index 28144c6..32ed9dc 100644
--- a/drivers/soc/imx/soc-imx8m.c
+++ b/drivers/soc/imx/soc-imx8m.c
@@ -66,8 +66,8 @@ static u32 __init imx8mq_soc_revision(void)
ocotp_base = of_iomap(np, 0);
WARN_ON(!ocotp_base);
clk = of_clk_get_by_name(np, NULL);
- if (!clk) {
- WARN_ON(!clk);
+ if (IS_ERR(clk)) {
+ WARN_ON(IS_ERR(clk));
return 0;
}
diff --git a/drivers/soc/qcom/apr.c b/drivers/soc/qcom/apr.c
index 173427b..8fd823b 100644
--- a/drivers/soc/qcom/apr.c
+++ b/drivers/soc/qcom/apr.c
@@ -317,9 +317,10 @@ static int apr_add_device(struct device *dev, struct device_node *np,
goto out;
}
+ /* Protection domain is optional; it does not exist on older platforms */
ret = of_property_read_string_index(np, "qcom,protection-domain",
1, &adev->service_path);
- if (ret < 0) {
+ if (ret < 0 && ret != -EINVAL) {
dev_err(dev, "Failed to read second value of qcom,protection-domain\n");
goto out;
}
diff --git a/drivers/soc/qcom/cpr.c b/drivers/soc/qcom/cpr.c
index 84dd934..e61cff3d 100644
--- a/drivers/soc/qcom/cpr.c
+++ b/drivers/soc/qcom/cpr.c
@@ -1710,12 +1710,16 @@ static int cpr_probe(struct platform_device *pdev)
ret = of_genpd_add_provider_simple(dev->of_node, &drv->pd);
if (ret)
- return ret;
+ goto err_remove_genpd;
platform_set_drvdata(pdev, drv);
cpr_debugfs_init(drv);
return 0;
+
+err_remove_genpd:
+ pm_genpd_remove(&drv->pd);
+ return ret;
}
static int cpr_remove(struct platform_device *pdev)
diff --git a/drivers/soundwire/bus_type.c b/drivers/soundwire/bus_type.c
index 04b3529..963498d 100644
--- a/drivers/soundwire/bus_type.c
+++ b/drivers/soundwire/bus_type.c
@@ -105,20 +105,19 @@ static int sdw_drv_probe(struct device *dev)
if (ret)
return ret;
- mutex_lock(&slave->sdw_dev_lock);
-
ret = drv->probe(slave, id);
if (ret) {
name = drv->name;
if (!name)
name = drv->driver.name;
- mutex_unlock(&slave->sdw_dev_lock);
dev_err(dev, "Probe of %s failed: %d\n", name, ret);
dev_pm_domain_detach(dev, false);
return ret;
}
+ mutex_lock(&slave->sdw_dev_lock);
+
/* device is probed so let's read the properties now */
if (drv->ops && drv->ops->read_prop)
drv->ops->read_prop(slave);
@@ -167,14 +166,12 @@ static int sdw_drv_remove(struct device *dev)
int ret = 0;
mutex_lock(&slave->sdw_dev_lock);
-
slave->probed = false;
+ mutex_unlock(&slave->sdw_dev_lock);
if (drv->remove)
ret = drv->remove(slave);
- mutex_unlock(&slave->sdw_dev_lock);
-
dev_pm_domain_detach(dev, false);
return ret;
diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c
index 18d2f9b..7b340f3 100644
--- a/drivers/soundwire/cadence_master.c
+++ b/drivers/soundwire/cadence_master.c
@@ -127,7 +127,8 @@ MODULE_PARM_DESC(cdns_mcp_int_mask, "Cadence MCP IntMask");
#define CDNS_MCP_CMD_BASE 0x80
#define CDNS_MCP_RESP_BASE 0x80
-#define CDNS_MCP_CMD_LEN 0x20
+/* FIFO can hold 8 commands */
+#define CDNS_MCP_CMD_LEN 8
#define CDNS_MCP_CMD_WORD_LEN 0x4
#define CDNS_MCP_CMD_SSP_TAG BIT(31)
@@ -555,6 +556,29 @@ cdns_fill_msg_resp(struct sdw_cdns *cdns,
return SDW_CMD_OK;
}
+static void cdns_read_response(struct sdw_cdns *cdns)
+{
+ u32 num_resp, cmd_base;
+ int i;
+
+ /* RX_FIFO_AVAIL can be 2 entries more than the FIFO size */
+ BUILD_BUG_ON(ARRAY_SIZE(cdns->response_buf) < CDNS_MCP_CMD_LEN + 2);
+
+ num_resp = cdns_readl(cdns, CDNS_MCP_FIFOSTAT);
+ num_resp &= CDNS_MCP_RX_FIFO_AVAIL;
+ if (num_resp > ARRAY_SIZE(cdns->response_buf)) {
+ dev_warn(cdns->dev, "RX AVAIL %d too long\n", num_resp);
+ num_resp = ARRAY_SIZE(cdns->response_buf);
+ }
+
+ cmd_base = CDNS_MCP_CMD_BASE;
+
+ for (i = 0; i < num_resp; i++) {
+ cdns->response_buf[i] = cdns_readl(cdns, cmd_base);
+ cmd_base += CDNS_MCP_CMD_WORD_LEN;
+ }
+}
+
static enum sdw_command_response
_cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd,
int offset, int count, bool defer)
@@ -596,6 +620,10 @@ _cdns_xfer_msg(struct sdw_cdns *cdns, struct sdw_msg *msg, int cmd,
dev_err(cdns->dev, "IO transfer timed out, cmd %d device %d addr %x len %d\n",
cmd, msg->dev_num, msg->addr, msg->len);
msg->len = 0;
+
+ /* Drain anything in the RX_FIFO */
+ cdns_read_response(cdns);
+
return SDW_CMD_TIMEOUT;
}
@@ -764,22 +792,6 @@ EXPORT_SYMBOL(cdns_reset_page_addr);
* IRQ handling
*/
-static void cdns_read_response(struct sdw_cdns *cdns)
-{
- u32 num_resp, cmd_base;
- int i;
-
- num_resp = cdns_readl(cdns, CDNS_MCP_FIFOSTAT);
- num_resp &= CDNS_MCP_RX_FIFO_AVAIL;
-
- cmd_base = CDNS_MCP_CMD_BASE;
-
- for (i = 0; i < num_resp; i++) {
- cdns->response_buf[i] = cdns_readl(cdns, cmd_base);
- cmd_base += CDNS_MCP_CMD_WORD_LEN;
- }
-}
-
static int cdns_update_slave_status(struct sdw_cdns *cdns,
u64 slave_intstat)
{
diff --git a/drivers/soundwire/cadence_master.h b/drivers/soundwire/cadence_master.h
index e587aed..e437a60 100644
--- a/drivers/soundwire/cadence_master.h
+++ b/drivers/soundwire/cadence_master.h
@@ -8,6 +8,12 @@
#define SDW_CADENCE_GSYNC_KHZ 4 /* 4 kHz */
#define SDW_CADENCE_GSYNC_HZ (SDW_CADENCE_GSYNC_KHZ * 1000)
+/*
+ * The Cadence IP supports up to 32 entries in the FIFO, though implementations
+ * can configure the IP to have a smaller FIFO.
+ */
+#define CDNS_MCP_IP_MAX_CMD_LEN 32
+
/**
* struct sdw_cdns_pdi: PDI (Physical Data Interface) instance
*
@@ -119,7 +125,12 @@ struct sdw_cdns {
struct sdw_bus bus;
unsigned int instance;
- u32 response_buf[0x80];
+ /*
+ * The datasheet says the RX FIFO AVAIL can be 2 entries more
+ * than the FIFO capacity, so allow for this.
+ */
+ u32 response_buf[CDNS_MCP_IP_MAX_CMD_LEN + 2];
+
struct completion tx_complete;
struct sdw_defer *defer;
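
The cadence_master changes correct the command FIFO depth (8 entries, not 0x20), size response_buf for the documented worst case (the IP maximum of 32 entries plus the 2 extra that RX_FIFO_AVAIL may report), and clamp the hardware-reported count before the copy loop. The defensive-read pattern, reduced to a sketch with hypothetical register offsets and mask:

#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/types.h>

#define EXAMPLE_FIFO_DEPTH	32
static u32 response_buf[EXAMPLE_FIFO_DEPTH + 2];	/* AVAIL may exceed depth by 2 */

static void example_drain_fifo(void __iomem *base)
{
	u32 avail = readl(base + 0x7c) & 0x3f;	/* hypothetical FIFOSTAT offset/mask */
	unsigned int i;

	if (avail > ARRAY_SIZE(response_buf))
		avail = ARRAY_SIZE(response_buf);	/* never index past the buffer */

	for (i = 0; i < avail; i++)
		response_buf[i] = readl(base + 0x80 + i * 4);
}
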
diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
index 83e352b..4fc2323 100644
--- a/drivers/spi/Kconfig
+++ b/drivers/spi/Kconfig
@@ -272,7 +272,6 @@
tristate "Baikal-T1 SPI driver for DW SPI core"
depends on MIPS_BAIKAL_T1 || COMPILE_TEST
select MULTIPLEXER
- select MUX_MMIO
help
Baikal-T1 SoC is equipped with three DW APB SSI-based MMIO SPI
controllers. Two of them are pretty much normal: with IRQ, DMA,
diff --git a/drivers/spi/spi-bcm63xx-hsspi.c b/drivers/spi/spi-bcm63xx-hsspi.c
index b871fd8..02f56fc 100644
--- a/drivers/spi/spi-bcm63xx-hsspi.c
+++ b/drivers/spi/spi-bcm63xx-hsspi.c
@@ -163,6 +163,7 @@ static int bcm63xx_hsspi_do_txrx(struct spi_device *spi, struct spi_transfer *t)
int step_size = HSSPI_BUFFER_LEN;
const u8 *tx = t->tx_buf;
u8 *rx = t->rx_buf;
+ u32 val = 0;
bcm63xx_hsspi_set_clk(bs, spi, t->speed_hz);
bcm63xx_hsspi_set_cs(bs, spi->chip_select, true);
@@ -178,11 +179,16 @@ static int bcm63xx_hsspi_do_txrx(struct spi_device *spi, struct spi_transfer *t)
step_size -= HSSPI_OPCODE_LEN;
if ((opcode == HSSPI_OP_READ && t->rx_nbits == SPI_NBITS_DUAL) ||
- (opcode == HSSPI_OP_WRITE && t->tx_nbits == SPI_NBITS_DUAL))
+ (opcode == HSSPI_OP_WRITE && t->tx_nbits == SPI_NBITS_DUAL)) {
opcode |= HSSPI_OP_MULTIBIT;
- __raw_writel(1 << MODE_CTRL_MULTIDATA_WR_SIZE_SHIFT |
- 1 << MODE_CTRL_MULTIDATA_RD_SIZE_SHIFT | 0xff,
+ if (t->rx_nbits == SPI_NBITS_DUAL)
+ val |= 1 << MODE_CTRL_MULTIDATA_RD_SIZE_SHIFT;
+ if (t->tx_nbits == SPI_NBITS_DUAL)
+ val |= 1 << MODE_CTRL_MULTIDATA_WR_SIZE_SHIFT;
+ }
+
+ __raw_writel(val | 0xff,
bs->regs + HSSPI_PROFILE_MODE_CTRL_REG(chip_select));
while (pending > 0) {
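
The spi-bcm63xx-hsspi hunk programs the RD and WR multibit widths per direction; previously both were forced to dual whenever either rx_nbits or tx_nbits was dual, misconfiguring the single-bit direction. The conditional bit construction, as a sketch (the MODE_CTRL_* shifts are the driver's own definitions):

/* Set each direction's multibit width only if that direction is dual. */
static u32 example_mode_bits(u8 tx_nbits, u8 rx_nbits)
{
	u32 val = 0;

	if (rx_nbits == SPI_NBITS_DUAL)
		val |= 1 << MODE_CTRL_MULTIDATA_RD_SIZE_SHIFT;
	if (tx_nbits == SPI_NBITS_DUAL)
		val |= 1 << MODE_CTRL_MULTIDATA_WR_SIZE_SHIFT;

	return val;
}
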
diff --git a/drivers/spi/spi-dw-core.c b/drivers/spi/spi-dw-core.c
index a305074..59e22c6 100644
--- a/drivers/spi/spi-dw-core.c
+++ b/drivers/spi/spi-dw-core.c
@@ -357,7 +357,7 @@ static void dw_spi_irq_setup(struct dw_spi *dws)
* will be adjusted at the final stage of the IRQ-based SPI transfer
* execution so not to lose the leftover of the incoming data.
*/
- level = min_t(u16, dws->fifo_len / 2, dws->tx_len);
+ level = min_t(unsigned int, dws->fifo_len / 2, dws->tx_len);
dw_writel(dws, DW_SPI_TXFTLR, level);
dw_writel(dws, DW_SPI_RXFTLR, level - 1);
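
The spi-dw-core one-liner matters because min_t() casts both operands to the named type before comparing: with u16, a tx_len above 0xffff is silently truncated and the FIFO threshold is computed from the wrong value. A small illustration:

#include <linux/minmax.h>
#include <linux/types.h>

/* min_t() casts both arguments to the given type first. */
unsigned int fifo_half = 16;
unsigned int tx_len = 0x10008;				/* larger than 0xffff */

u16 bad  = min_t(u16, fifo_half, tx_len);		/* tx_len truncates to 8, result 8 */
u32 good = min_t(unsigned int, fifo_half, tx_len);	/* result 16, as intended */
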
diff --git a/drivers/spi/spi-synquacer.c b/drivers/spi/spi-synquacer.c
index 47cbe73..dc188f9 100644
--- a/drivers/spi/spi-synquacer.c
+++ b/drivers/spi/spi-synquacer.c
@@ -472,10 +472,9 @@ static int synquacer_spi_transfer_one(struct spi_master *master,
read_fifo(sspi);
}
- if (status < 0) {
- dev_err(sspi->dev, "failed to transfer. status: 0x%x\n",
- status);
- return status;
+ if (status == 0) {
+ dev_err(sspi->dev, "failed to transfer. Timeout.\n");
+ return -ETIMEDOUT;
}
return 0;
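
In spi-synquacer, status appears to come from a wait_for_completion_timeout()-style wait, which returns 0 on timeout and the remaining time otherwise, so the old "status < 0" test could never fire; the fix reports -ETIMEDOUT on the zero case instead. A sketch of the intended check (hypothetical helper, assumed 100 ms budget):

#include <linux/completion.h>
#include <linux/device.h>
#include <linux/jiffies.h>

static int example_wait_done(struct device *dev, struct completion *done)
{
	unsigned long left = wait_for_completion_timeout(done, msecs_to_jiffies(100));

	if (!left) {		/* 0 means the timeout expired */
		dev_err(dev, "transfer timed out\n");
		return -ETIMEDOUT;
	}
	return 0;		/* > 0 is the remaining time, i.e. success */
}
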
diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
index d233e24..922d778 100644
--- a/drivers/spi/spidev.c
+++ b/drivers/spi/spidev.c
@@ -592,7 +592,6 @@ static int spidev_open(struct inode *inode, struct file *filp)
if (!spidev->tx_buffer) {
spidev->tx_buffer = kmalloc(bufsiz, GFP_KERNEL);
if (!spidev->tx_buffer) {
- dev_dbg(&spidev->spi->dev, "open/ENOMEM\n");
status = -ENOMEM;
goto err_find_dev;
}
@@ -601,7 +600,6 @@ static int spidev_open(struct inode *inode, struct file *filp)
if (!spidev->rx_buffer) {
spidev->rx_buffer = kmalloc(bufsiz, GFP_KERNEL);
if (!spidev->rx_buffer) {
- dev_dbg(&spidev->spi->dev, "open/ENOMEM\n");
status = -ENOMEM;
goto err_alloc_rx_buf;
}
diff --git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c
index b6abd37..edd20a0 100644
--- a/drivers/staging/emxx_udc/emxx_udc.c
+++ b/drivers/staging/emxx_udc/emxx_udc.c
@@ -2590,10 +2590,15 @@ static int nbu2ss_ep_queue(struct usb_ep *_ep,
req->unaligned = false;
if (req->unaligned) {
- if (!ep->virt_buf)
+ if (!ep->virt_buf) {
ep->virt_buf = dma_alloc_coherent(udc->dev, PAGE_SIZE,
&ep->phys_buf,
GFP_ATOMIC | GFP_DMA);
+ if (!ep->virt_buf) {
+ spin_unlock_irqrestore(&udc->lock, flags);
+ return -ENOMEM;
+ }
+ }
if (ep->epnum > 0) {
if (ep->direct == USB_DIR_IN)
memcpy(ep->virt_buf, req->req.buf,
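
The emxx_udc fix adds the missing failure check on dma_alloc_coherent(); because the allocation happens with the udc spinlock held, the error path must drop the lock before returning -ENOMEM. The pattern, sketched with hypothetical structs mirroring the driver's:

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/spinlock.h>

struct example_ep { void *virt_buf; dma_addr_t phys_buf; };
struct example_udc { spinlock_t lock; struct device *dev; };

static int example_queue(struct example_udc *udc, struct example_ep *ep)
{
	unsigned long flags;

	spin_lock_irqsave(&udc->lock, flags);

	if (!ep->virt_buf) {
		ep->virt_buf = dma_alloc_coherent(udc->dev, PAGE_SIZE,
						  &ep->phys_buf, GFP_ATOMIC);
		if (!ep->virt_buf) {
			/* Drop the lock taken above before bailing out. */
			spin_unlock_irqrestore(&udc->lock, flags);
			return -ENOMEM;
		}
	}

	/* ... queue the request ... */
	spin_unlock_irqrestore(&udc->lock, flags);
	return 0;
}
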
diff --git a/drivers/staging/mt7621-dts/gbpc1.dts b/drivers/staging/mt7621-dts/gbpc1.dts
index 02fd9be..cf5d6e9 100644
--- a/drivers/staging/mt7621-dts/gbpc1.dts
+++ b/drivers/staging/mt7621-dts/gbpc1.dts
@@ -19,7 +19,7 @@ chosen {
bootargs = "console=ttyS0,57600";
};
- palmbus: palmbus@1E000000 {
+ palmbus: palmbus@1e000000 {
i2c@900 {
status = "okay";
};
diff --git a/drivers/staging/mt7621-dts/mt7621.dtsi b/drivers/staging/mt7621-dts/mt7621.dtsi
index 59a9ce2..04c4d6e 100644
--- a/drivers/staging/mt7621-dts/mt7621.dtsi
+++ b/drivers/staging/mt7621-dts/mt7621.dtsi
@@ -47,10 +47,10 @@ mmc_fixed_1v8_io: fixedregulator@1 {
regulator-always-on;
};
- palmbus: palmbus@1E000000 {
+ palmbus: palmbus@1e000000 {
compatible = "palmbus";
- reg = <0x1E000000 0x100000>;
- ranges = <0x0 0x1E000000 0x0FFFFF>;
+ reg = <0x1e000000 0x100000>;
+ ranges = <0x0 0x1e000000 0x0fffff>;
#address-cells = <1>;
#size-cells = <1>;
@@ -301,11 +301,11 @@ rstctrl: rstctrl {
#reset-cells = <1>;
};
- sdhci: sdhci@1E130000 {
+ sdhci: sdhci@1e130000 {
status = "disabled";
compatible = "mediatek,mt7620-mmc";
- reg = <0x1E130000 0x4000>;
+ reg = <0x1e130000 0x4000>;
bus-width = <4>;
max-frequency = <48000000>;
@@ -327,7 +327,7 @@ sdhci: sdhci@1E130000 {
interrupts = <GIC_SHARED 20 IRQ_TYPE_LEVEL_HIGH>;
};
- xhci: xhci@1E1C0000 {
+ xhci: xhci@1e1c0000 {
status = "okay";
compatible = "mediatek,mt8173-xhci";
diff --git a/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c b/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c
index 756d8db..cd1c4f6 100644
--- a/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c
+++ b/drivers/staging/rtl8192e/rtl8192e/rtl_dm.c
@@ -185,7 +185,6 @@ static void _rtl92e_dm_init_fsync(struct net_device *dev);
static void _rtl92e_dm_deinit_fsync(struct net_device *dev);
static void _rtl92e_dm_check_txrateandretrycount(struct net_device *dev);
-static void _rtl92e_dm_check_ac_dc_power(struct net_device *dev);
static void _rtl92e_dm_check_fsync(struct net_device *dev);
static void _rtl92e_dm_check_rf_ctrl_gpio(void *data);
static void _rtl92e_dm_fsync_timer_callback(struct timer_list *t);
@@ -238,8 +237,6 @@ void rtl92e_dm_watchdog(struct net_device *dev)
if (priv->being_init_adapter)
return;
- _rtl92e_dm_check_ac_dc_power(dev);
-
_rtl92e_dm_check_txrateandretrycount(dev);
_rtl92e_dm_check_edca_turbo(dev);
@@ -257,30 +254,6 @@ void rtl92e_dm_watchdog(struct net_device *dev)
_rtl92e_dm_cts_to_self(dev);
}
-static void _rtl92e_dm_check_ac_dc_power(struct net_device *dev)
-{
- struct r8192_priv *priv = rtllib_priv(dev);
- static const char ac_dc_script[] = "/etc/acpi/wireless-rtl-ac-dc-power.sh";
- char *argv[] = {(char *)ac_dc_script, DRV_NAME, NULL};
- static char *envp[] = {"HOME=/",
- "TERM=linux",
- "PATH=/usr/bin:/bin",
- NULL};
-
- if (priv->ResetProgress == RESET_TYPE_SILENT) {
- RT_TRACE((COMP_INIT | COMP_POWER | COMP_RF),
- "GPIOChangeRFWorkItemCallBack(): Silent Reset!!!!!!!\n");
- return;
- }
-
- if (priv->rtllib->state != RTLLIB_LINKED)
- return;
- call_usermodehelper(ac_dc_script, argv, envp, UMH_WAIT_PROC);
-
- return;
-};
-
-
void rtl92e_init_adaptive_rate(struct net_device *dev)
{
@@ -1800,10 +1773,6 @@ static void _rtl92e_dm_check_rf_ctrl_gpio(void *data)
u8 tmp1byte;
enum rt_rf_power_state eRfPowerStateToSet;
bool bActuallySet = false;
- char *argv[3];
- static const char RadioPowerPath[] = "/etc/acpi/events/RadioPower.sh";
- static char *envp[] = {"HOME=/", "TERM=linux", "PATH=/usr/bin:/bin",
- NULL};
bActuallySet = false;
@@ -1835,14 +1804,6 @@ static void _rtl92e_dm_check_rf_ctrl_gpio(void *data)
mdelay(1000);
priv->bHwRfOffAction = 1;
rtl92e_set_rf_state(dev, eRfPowerStateToSet, RF_CHANGE_BY_HW);
- if (priv->bHwRadioOff)
- argv[1] = "RFOFF";
- else
- argv[1] = "RFON";
-
- argv[0] = (char *)RadioPowerPath;
- argv[2] = NULL;
- call_usermodehelper(RadioPowerPath, argv, envp, UMH_WAIT_PROC);
}
}
diff --git a/drivers/staging/rtl8723bs/core/rtw_ap.c b/drivers/staging/rtl8723bs/core/rtw_ap.c
index 6064dd6..674592e 100644
--- a/drivers/staging/rtl8723bs/core/rtw_ap.c
+++ b/drivers/staging/rtl8723bs/core/rtw_ap.c
@@ -891,7 +891,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
&ie_len,
(pbss_network->ie_length - _BEACON_IE_OFFSET_)
);
- if (p != NULL) {
+ if (p) {
memcpy(supportRate, p + 2, ie_len);
supportRateNum = ie_len;
}
@@ -903,7 +903,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
&ie_len,
pbss_network->ie_length - _BEACON_IE_OFFSET_
);
- if (p != NULL) {
+ if (p) {
memcpy(supportRate + supportRateNum, p + 2, ie_len);
supportRateNum += ie_len;
}
@@ -991,7 +991,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
break;
}
- if ((p == NULL) || (ie_len == 0))
+ if (!p || ie_len == 0)
break;
}
@@ -1021,7 +1021,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
break;
}
- if ((p == NULL) || (ie_len == 0))
+ if (!p || ie_len == 0)
break;
}
}
@@ -1145,7 +1145,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
psta = rtw_get_stainfo(&padapter->stapriv, pbss_network->mac_address);
if (!psta) {
psta = rtw_alloc_stainfo(&padapter->stapriv, pbss_network->mac_address);
- if (psta == NULL)
+ if (!psta)
return _FAIL;
}
@@ -1275,7 +1275,7 @@ u8 rtw_ap_set_pairwise_key(struct adapter *padapter, struct sta_info *psta)
}
psetstakey_para = rtw_zmalloc(sizeof(struct set_stakey_parm));
- if (psetstakey_para == NULL) {
+ if (!psetstakey_para) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1311,12 +1311,12 @@ static int rtw_ap_set_key(
int res = _SUCCESS;
pcmd = rtw_zmalloc(sizeof(struct cmd_obj));
- if (pcmd == NULL) {
+ if (!pcmd) {
res = _FAIL;
goto exit;
}
psetkeyparm = rtw_zmalloc(sizeof(struct setkey_parm));
- if (psetkeyparm == NULL) {
+ if (!psetkeyparm) {
kfree(pcmd);
res = _FAIL;
goto exit;
@@ -1474,11 +1474,11 @@ static void update_bcn_wps_ie(struct adapter *padapter)
&wps_ielen
);
- if (pwps_ie == NULL || wps_ielen == 0)
+ if (!pwps_ie || wps_ielen == 0)
return;
pwps_ie_src = pmlmepriv->wps_beacon_ie;
- if (pwps_ie_src == NULL)
+ if (!pwps_ie_src)
return;
wps_offset = (uint)(pwps_ie - ie);
diff --git a/drivers/staging/rtl8723bs/core/rtw_cmd.c b/drivers/staging/rtl8723bs/core/rtw_cmd.c
index 93e3a4c..5f4f603 100644
--- a/drivers/staging/rtl8723bs/core/rtw_cmd.c
+++ b/drivers/staging/rtl8723bs/core/rtw_cmd.c
@@ -251,7 +251,7 @@ int _rtw_enqueue_cmd(struct __queue *queue, struct cmd_obj *obj)
{
unsigned long irqL;
- if (obj == NULL)
+ if (!obj)
goto exit;
/* spin_lock_bh(&queue->lock); */
@@ -319,7 +319,7 @@ int rtw_enqueue_cmd(struct cmd_priv *pcmdpriv, struct cmd_obj *cmd_obj)
int res = _FAIL;
struct adapter *padapter = pcmdpriv->padapter;
- if (cmd_obj == NULL)
+ if (!cmd_obj)
goto exit;
cmd_obj->padapter = padapter;
@@ -484,7 +484,7 @@ int rtw_cmd_thread(void *context)
/* call callback function for post-processed */
if (pcmd->cmdcode < ARRAY_SIZE(rtw_cmd_callback)) {
pcmd_callback = rtw_cmd_callback[pcmd->cmdcode].callback;
- if (pcmd_callback == NULL) {
+ if (!pcmd_callback) {
rtw_free_cmd_obj(pcmd);
} else {
/* todo: !!! fill rsp_buf to pcmd->rsp if (pcmd->rsp!= NULL) */
@@ -503,7 +503,7 @@ int rtw_cmd_thread(void *context)
/* free all cmd_obj resources */
do {
pcmd = rtw_dequeue_cmd(pcmdpriv);
- if (pcmd == NULL) {
+ if (!pcmd) {
rtw_unregister_cmd_alive(padapter);
break;
}
@@ -542,11 +542,11 @@ u8 rtw_sitesurvey_cmd(struct adapter *padapter, struct ndis_802_11_ssid *ssid,
rtw_lps_ctrl_wk_cmd(padapter, LPS_CTRL_SCAN, 1);
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL)
+ if (!ph2c)
return _FAIL;
psurveyPara = rtw_zmalloc(sizeof(struct sitesurvey_parm));
- if (psurveyPara == NULL) {
+ if (!psurveyPara) {
kfree(ph2c);
return _FAIL;
}
@@ -604,13 +604,13 @@ u8 rtw_setdatarate_cmd(struct adapter *padapter, u8 *rateset)
u8 res = _SUCCESS;
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pbsetdataratepara = rtw_zmalloc(sizeof(struct setdatarate_parm));
- if (pbsetdataratepara == NULL) {
+ if (!pbsetdataratepara) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -640,7 +640,7 @@ u8 rtw_createbss_cmd(struct adapter *padapter)
u8 res = _SUCCESS;
pcmd = rtw_zmalloc(sizeof(struct cmd_obj));
- if (pcmd == NULL) {
+ if (!pcmd) {
res = _FAIL;
goto exit;
}
@@ -673,7 +673,7 @@ int rtw_startbss_cmd(struct adapter *padapter, int flags)
} else {
/* need enqueue, prepare cmd_obj and enqueue */
pcmd = rtw_zmalloc(sizeof(struct cmd_obj));
- if (pcmd == NULL) {
+ if (!pcmd) {
res = _FAIL;
goto exit;
}
@@ -725,7 +725,7 @@ u8 rtw_joinbss_cmd(struct adapter *padapter, struct wlan_network *pnetwork)
u8 *ptmp = NULL;
pcmd = rtw_zmalloc(sizeof(struct cmd_obj));
- if (pcmd == NULL) {
+ if (!pcmd) {
res = _FAIL;
goto exit;
}
@@ -837,7 +837,7 @@ u8 rtw_disassoc_cmd(struct adapter *padapter, u32 deauth_timeout_ms, bool enqueu
/* prepare cmd parameter */
param = rtw_zmalloc(sizeof(*param));
- if (param == NULL) {
+ if (!param) {
res = _FAIL;
goto exit;
}
@@ -846,7 +846,7 @@ u8 rtw_disassoc_cmd(struct adapter *padapter, u32 deauth_timeout_ms, bool enqueu
if (enqueue) {
/* need enqueue, prepare cmd_obj and enqueue */
cmdobj = rtw_zmalloc(sizeof(*cmdobj));
- if (cmdobj == NULL) {
+ if (!cmdobj) {
res = _FAIL;
kfree(param);
goto exit;
@@ -874,7 +874,7 @@ u8 rtw_setopmode_cmd(struct adapter *padapter, enum ndis_802_11_network_infrast
psetop = rtw_zmalloc(sizeof(struct setopmode_parm));
- if (psetop == NULL) {
+ if (!psetop) {
res = _FAIL;
goto exit;
}
@@ -882,7 +882,7 @@ u8 rtw_setopmode_cmd(struct adapter *padapter, enum ndis_802_11_network_infrast
if (enqueue) {
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
kfree(psetop);
res = _FAIL;
goto exit;
@@ -910,7 +910,7 @@ u8 rtw_setstakey_cmd(struct adapter *padapter, struct sta_info *sta, u8 unicast_
u8 res = _SUCCESS;
psetstakey_para = rtw_zmalloc(sizeof(struct set_stakey_parm));
- if (psetstakey_para == NULL) {
+ if (!psetstakey_para) {
res = _FAIL;
goto exit;
}
@@ -932,14 +932,14 @@ u8 rtw_setstakey_cmd(struct adapter *padapter, struct sta_info *sta, u8 unicast_
if (enqueue) {
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
kfree(psetstakey_para);
res = _FAIL;
goto exit;
}
psetstakey_rsp = rtw_zmalloc(sizeof(struct set_stakey_rsp));
- if (psetstakey_rsp == NULL) {
+ if (!psetstakey_rsp) {
kfree(ph2c);
kfree(psetstakey_para);
res = _FAIL;
@@ -977,20 +977,20 @@ u8 rtw_clearstakey_cmd(struct adapter *padapter, struct sta_info *sta, u8 enqueu
}
} else {
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
psetstakey_para = rtw_zmalloc(sizeof(struct set_stakey_parm));
- if (psetstakey_para == NULL) {
+ if (!psetstakey_para) {
kfree(ph2c);
res = _FAIL;
goto exit;
}
psetstakey_rsp = rtw_zmalloc(sizeof(struct set_stakey_rsp));
- if (psetstakey_rsp == NULL) {
+ if (!psetstakey_rsp) {
kfree(ph2c);
kfree(psetstakey_para);
res = _FAIL;
@@ -1022,13 +1022,13 @@ u8 rtw_addbareq_cmd(struct adapter *padapter, u8 tid, u8 *addr)
u8 res = _SUCCESS;
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
paddbareq_parm = rtw_zmalloc(sizeof(struct addBaReq_parm));
- if (paddbareq_parm == NULL) {
+ if (!paddbareq_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1054,13 +1054,13 @@ u8 rtw_reset_securitypriv_cmd(struct adapter *padapter)
u8 res = _SUCCESS;
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1089,13 +1089,13 @@ u8 rtw_free_assoc_resources_cmd(struct adapter *padapter)
u8 res = _SUCCESS;
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1125,13 +1125,13 @@ u8 rtw_dynamic_chk_wk_cmd(struct adapter *padapter)
/* only primary padapter does this cmd */
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1173,7 +1173,7 @@ u8 rtw_set_chplan_cmd(struct adapter *padapter, u8 chplan, u8 enqueue, u8 swconf
/* prepare cmd parameter */
setChannelPlan_param = rtw_zmalloc(sizeof(struct SetChannelPlan_param));
- if (setChannelPlan_param == NULL) {
+ if (!setChannelPlan_param) {
res = _FAIL;
goto exit;
}
@@ -1182,7 +1182,7 @@ u8 rtw_set_chplan_cmd(struct adapter *padapter, u8 chplan, u8 enqueue, u8 swconf
if (enqueue) {
/* need enqueue, prepare cmd_obj and enqueue */
pcmdobj = rtw_zmalloc(sizeof(struct cmd_obj));
- if (pcmdobj == NULL) {
+ if (!pcmdobj) {
kfree(setChannelPlan_param);
res = _FAIL;
goto exit;
@@ -1432,13 +1432,13 @@ u8 rtw_lps_ctrl_wk_cmd(struct adapter *padapter, u8 lps_ctrl_type, u8 enqueue)
if (enqueue) {
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1474,13 +1474,13 @@ u8 rtw_dm_in_lps_wk_cmd(struct adapter *padapter)
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1540,13 +1540,13 @@ u8 rtw_dm_ra_mask_wk_cmd(struct adapter *padapter, u8 *psta)
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1575,13 +1575,13 @@ u8 rtw_ps_cmd(struct adapter *padapter)
u8 res = _SUCCESS;
ppscmd = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ppscmd == NULL) {
+ if (!ppscmd) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ppscmd);
res = _FAIL;
goto exit;
@@ -1647,13 +1647,13 @@ u8 rtw_chk_hi_queue_cmd(struct adapter *padapter)
u8 res = _SUCCESS;
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1741,13 +1741,13 @@ u8 rtw_c2h_packet_wk_cmd(struct adapter *padapter, u8 *pbuf, u16 length)
u8 res = _SUCCESS;
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1776,13 +1776,13 @@ u8 rtw_c2h_wk_cmd(struct adapter *padapter, u8 *c2h_evt)
u8 res = _SUCCESS;
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
pdrvextra_cmd_parm = rtw_zmalloc(sizeof(struct drvextra_cmd_parm));
- if (pdrvextra_cmd_parm == NULL) {
+ if (!pdrvextra_cmd_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -1957,7 +1957,7 @@ void rtw_createbss_cmd_callback(struct adapter *padapter, struct cmd_obj *pcmd)
struct wlan_bssid_ex *pnetwork = (struct wlan_bssid_ex *)pcmd->parmbuf;
struct wlan_network *tgt_network = &(pmlmepriv->cur_network);
- if (pcmd->parmbuf == NULL)
+ if (!pcmd->parmbuf)
goto exit;
if (pcmd->res != H2C_SUCCESS)
@@ -1980,9 +1980,9 @@ void rtw_createbss_cmd_callback(struct adapter *padapter, struct cmd_obj *pcmd)
} else {
pwlan = rtw_alloc_network(pmlmepriv);
spin_lock_bh(&(pmlmepriv->scanned_queue.lock));
- if (pwlan == NULL) {
+ if (!pwlan) {
pwlan = rtw_get_oldest_wlan_network(&pmlmepriv->scanned_queue);
- if (pwlan == NULL) {
+ if (!pwlan) {
spin_unlock_bh(&(pmlmepriv->scanned_queue.lock));
goto createbss_cmd_fail;
}
diff --git a/drivers/staging/rtl8723bs/core/rtw_ioctl_set.c b/drivers/staging/rtl8723bs/core/rtw_ioctl_set.c
index 5cfde71..8c11daf 100644
--- a/drivers/staging/rtl8723bs/core/rtw_ioctl_set.c
+++ b/drivers/staging/rtl8723bs/core/rtw_ioctl_set.c
@@ -370,7 +370,7 @@ u8 rtw_set_802_11_bssid_list_scan(struct adapter *padapter, struct ndis_802_11_s
struct mlme_priv *pmlmepriv = &padapter->mlmepriv;
u8 res = true;
- if (padapter == NULL) {
+ if (!padapter) {
res = false;
goto exit;
}
@@ -481,7 +481,7 @@ u16 rtw_get_cur_max_rate(struct adapter *adapter)
return 0;
psta = rtw_get_stainfo(&adapter->stapriv, get_bssid(pmlmepriv));
- if (psta == NULL)
+ if (!psta)
return 0;
short_GI = query_ra_short_GI(psta);
diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme.c b/drivers/staging/rtl8723bs/core/rtw_mlme.c
index 952c3e1..26c4004 100644
--- a/drivers/staging/rtl8723bs/core/rtw_mlme.c
+++ b/drivers/staging/rtl8723bs/core/rtw_mlme.c
@@ -439,7 +439,7 @@ struct wlan_network *rtw_get_oldest_wlan_network(struct __queue *scanned_queue)
pwlan = list_entry(plist, struct wlan_network, list);
if (!pwlan->fixed) {
- if (oldest == NULL || time_after(oldest->last_scanned, pwlan->last_scanned))
+ if (!oldest || time_after(oldest->last_scanned, pwlan->last_scanned))
oldest = pwlan;
}
}
@@ -542,7 +542,7 @@ void rtw_update_scanned_network(struct adapter *adapter, struct wlan_bssid_ex *t
/* TODO: don't select network in the same ess as oldest if it's new enough*/
}
- if (oldest == NULL || time_after(oldest->last_scanned, pnetwork->last_scanned))
+ if (!oldest || time_after(oldest->last_scanned, pnetwork->last_scanned))
oldest = pnetwork;
}
@@ -1816,7 +1816,7 @@ static int rtw_check_join_candidate(struct mlme_priv *mlme
goto exit;
}
- if (*candidate == NULL || (*candidate)->network.rssi < competitor->network.rssi) {
+ if (!*candidate || (*candidate)->network.rssi < competitor->network.rssi) {
*candidate = competitor;
updated = true;
}
diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
index 1a4b4c7..e923f30 100644
--- a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
+++ b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
@@ -742,11 +742,11 @@ unsigned int OnAuth(struct adapter *padapter, union recv_frame *precv_frame)
}
pstat = rtw_get_stainfo(pstapriv, sa);
- if (pstat == NULL) {
+ if (!pstat) {
/* allocate a new one */
pstat = rtw_alloc_stainfo(pstapriv, sa);
- if (pstat == NULL) {
+ if (!pstat) {
status = WLAN_STATUS_AP_UNABLE_TO_HANDLE_NEW_STA;
goto auth_fail;
}
@@ -814,7 +814,7 @@ unsigned int OnAuth(struct adapter *padapter, union recv_frame *precv_frame)
p = rtw_get_ie(pframe + WLAN_HDR_A3_LEN + 4 + _AUTH_IE_OFFSET_, WLAN_EID_CHALLENGE, (int *)&ie_len,
len - WLAN_HDR_A3_LEN - _AUTH_IE_OFFSET_ - 4);
- if ((p == NULL) || (ie_len <= 0)) {
+ if (!p || ie_len <= 0) {
status = WLAN_STATUS_CHALLENGE_FAIL;
goto auth_fail;
}
@@ -1034,7 +1034,7 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame)
/* check if the supported rate is ok */
p = rtw_get_ie(pframe + WLAN_HDR_A3_LEN + ie_offset, WLAN_EID_SUPP_RATES, &ie_len, pkt_len - WLAN_HDR_A3_LEN - ie_offset);
- if (p == NULL) {
+ if (!p) {
/* use our own rate set as statoin used */
/* memcpy(supportRate, AP_BSSRATE, AP_BSSRATE_LEN); */
/* supportRateNum = AP_BSSRATE_LEN; */
@@ -1047,7 +1047,7 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame)
p = rtw_get_ie(pframe + WLAN_HDR_A3_LEN + ie_offset, WLAN_EID_EXT_SUPP_RATES, &ie_len,
pkt_len - WLAN_HDR_A3_LEN - ie_offset);
- if (p != NULL) {
+ if (p) {
if (supportRateNum <= sizeof(supportRate)) {
memcpy(supportRate+supportRateNum, p+2, ie_len);
@@ -1294,7 +1294,7 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame)
/* get a unique AID */
if (pstat->aid == 0) {
for (pstat->aid = 1; pstat->aid <= NUM_STA; pstat->aid++)
- if (pstapriv->sta_aid[pstat->aid - 1] == NULL)
+ if (!pstapriv->sta_aid[pstat->aid - 1])
break;
/* if (pstat->aid > NUM_STA) { */
@@ -1940,7 +1940,7 @@ static struct xmit_frame *_alloc_mgtxmitframe(struct xmit_priv *pxmitpriv, bool
goto exit;
pxmitbuf = rtw_alloc_xmitbuf_ext(pxmitpriv);
- if (pxmitbuf == NULL) {
+ if (!pxmitbuf) {
rtw_free_xmitframe(pxmitpriv, pmgntframe);
pmgntframe = NULL;
goto exit;
@@ -2293,7 +2293,7 @@ void issue_probersp(struct adapter *padapter, unsigned char *da, u8 is_valid_p2p
struct wlan_bssid_ex *cur_network = &(pmlmeinfo->network);
unsigned int rate_len;
- if (da == NULL)
+ if (!da)
return;
pmgntframe = alloc_mgtxmitframe(pxmitpriv);
@@ -2617,7 +2617,7 @@ void issue_auth(struct adapter *padapter, struct sta_info *psta, unsigned short
__le16 le_tmp;
pmgntframe = alloc_mgtxmitframe(pxmitpriv);
- if (pmgntframe == NULL)
+ if (!pmgntframe)
return;
/* update attribute */
@@ -2748,7 +2748,7 @@ void issue_asocrsp(struct adapter *padapter, unsigned short status, struct sta_i
__le16 lestatus, le_tmp;
pmgntframe = alloc_mgtxmitframe(pxmitpriv);
- if (pmgntframe == NULL)
+ if (!pmgntframe)
return;
/* update attribute */
@@ -2836,7 +2836,7 @@ void issue_asocrsp(struct adapter *padapter, unsigned short status, struct sta_i
break;
}
- if ((pbuf == NULL) || (ie_len == 0)) {
+ if (!pbuf || ie_len == 0) {
break;
}
}
@@ -2880,7 +2880,7 @@ void issue_assocreq(struct adapter *padapter)
u8 vs_ie_length = 0;
pmgntframe = alloc_mgtxmitframe(pxmitpriv);
- if (pmgntframe == NULL)
+ if (!pmgntframe)
goto exit;
/* update attribute */
@@ -3057,7 +3057,7 @@ static int _issue_nulldata(struct adapter *padapter, unsigned char *da,
pmlmeinfo = &(pmlmeext->mlmext_info);
pmgntframe = alloc_mgtxmitframe(pxmitpriv);
- if (pmgntframe == NULL)
+ if (!pmgntframe)
goto exit;
/* update attribute */
@@ -3196,7 +3196,7 @@ static int _issue_qos_nulldata(struct adapter *padapter, unsigned char *da,
struct mlme_ext_info *pmlmeinfo = &(pmlmeext->mlmext_info);
pmgntframe = alloc_mgtxmitframe(pxmitpriv);
- if (pmgntframe == NULL)
+ if (!pmgntframe)
goto exit;
/* update attribute */
@@ -3309,7 +3309,7 @@ static int _issue_deauth(struct adapter *padapter, unsigned char *da,
__le16 le_tmp;
pmgntframe = alloc_mgtxmitframe(pxmitpriv);
- if (pmgntframe == NULL) {
+ if (!pmgntframe) {
goto exit;
}
@@ -3635,7 +3635,7 @@ static void issue_action_BSSCoexistPacket(struct adapter *padapter)
action = ACT_PUBLIC_BSSCOEXIST;
pmgntframe = alloc_mgtxmitframe(pxmitpriv);
- if (pmgntframe == NULL) {
+ if (!pmgntframe) {
return;
}
@@ -3702,7 +3702,7 @@ static void issue_action_BSSCoexistPacket(struct adapter *padapter)
pbss_network = (struct wlan_bssid_ex *)&pnetwork->network;
p = rtw_get_ie(pbss_network->ies + _FIXED_IE_LENGTH_, WLAN_EID_HT_CAPABILITY, &len, pbss_network->ie_length - _FIXED_IE_LENGTH_);
- if ((p == NULL) || (len == 0)) {/* non-HT */
+ if (!p || len == 0) {/* non-HT */
if (pbss_network->configuration.ds_config <= 0)
continue;
@@ -3765,7 +3765,7 @@ unsigned int send_delba(struct adapter *padapter, u8 initiator, u8 *addr)
return _SUCCESS;
psta = rtw_get_stainfo(pstapriv, addr);
- if (psta == NULL)
+ if (!psta)
return _SUCCESS;
if (initiator == 0) {/* recipient */
@@ -4637,13 +4637,13 @@ void report_del_sta_event(struct adapter *padapter, unsigned char *MacAddr, unsi
struct cmd_priv *pcmdpriv = &padapter->cmdpriv;
pcmd_obj = rtw_zmalloc(sizeof(struct cmd_obj));
- if (pcmd_obj == NULL) {
+ if (!pcmd_obj) {
return;
}
cmdsz = (sizeof(struct stadel_event) + sizeof(struct C2HEvent_Header));
pevtcmd = rtw_zmalloc(cmdsz);
- if (pevtcmd == NULL) {
+ if (!pevtcmd) {
kfree(pcmd_obj);
return;
}
@@ -4689,12 +4689,12 @@ void report_add_sta_event(struct adapter *padapter, unsigned char *MacAddr, int
struct cmd_priv *pcmdpriv = &padapter->cmdpriv;
pcmd_obj = rtw_zmalloc(sizeof(struct cmd_obj));
- if (pcmd_obj == NULL)
+ if (!pcmd_obj)
return;
cmdsz = (sizeof(struct stassoc_event) + sizeof(struct C2HEvent_Header));
pevtcmd = rtw_zmalloc(cmdsz);
- if (pevtcmd == NULL) {
+ if (!pevtcmd) {
kfree(pcmd_obj);
return;
}
@@ -5143,12 +5143,12 @@ void survey_timer_hdl(struct timer_list *t)
}
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
goto exit_survey_timer_hdl;
}
psurveyPara = rtw_zmalloc(sizeof(struct sitesurvey_parm));
- if (psurveyPara == NULL) {
+ if (!psurveyPara) {
kfree(ph2c);
goto exit_survey_timer_hdl;
}
@@ -5777,7 +5777,7 @@ u8 chk_bmc_sleepq_cmd(struct adapter *padapter)
u8 res = _SUCCESS;
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
@@ -5801,13 +5801,13 @@ u8 set_tx_beacon_cmd(struct adapter *padapter)
int len_diff = 0;
ph2c = rtw_zmalloc(sizeof(struct cmd_obj));
- if (ph2c == NULL) {
+ if (!ph2c) {
res = _FAIL;
goto exit;
}
ptxBeacon_parm = rtw_zmalloc(sizeof(struct Tx_Beacon_param));
- if (ptxBeacon_parm == NULL) {
+ if (!ptxBeacon_parm) {
kfree(ph2c);
res = _FAIL;
goto exit;
@@ -5867,7 +5867,7 @@ u8 mlme_evt_hdl(struct adapter *padapter, unsigned char *pbuf)
void (*event_callback)(struct adapter *dev, u8 *pbuf);
struct evt_priv *pevt_priv = &(padapter->evtpriv);
- if (pbuf == NULL)
+ if (!pbuf)
goto _abort_event_;
peventbuf = (uint *)pbuf;
diff --git a/drivers/staging/rtl8723bs/core/rtw_security.c b/drivers/staging/rtl8723bs/core/rtw_security.c
index b050bf6..ac73141 100644
--- a/drivers/staging/rtl8723bs/core/rtw_security.c
+++ b/drivers/staging/rtl8723bs/core/rtw_security.c
@@ -51,7 +51,7 @@ void rtw_wep_encrypt(struct adapter *padapter, u8 *pxmitframe)
struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
struct arc4_ctx *ctx = &psecuritypriv->xmit_arc4_ctx;
- if (((struct xmit_frame *)pxmitframe)->buf_addr == NULL)
+ if (!((struct xmit_frame *)pxmitframe)->buf_addr)
return;
hw_hdr_offset = TXDESC_OFFSET;
@@ -476,7 +476,7 @@ u32 rtw_tkip_encrypt(struct adapter *padapter, u8 *pxmitframe)
struct arc4_ctx *ctx = &psecuritypriv->xmit_arc4_ctx;
u32 res = _SUCCESS;
- if (((struct xmit_frame *)pxmitframe)->buf_addr == NULL)
+ if (!((struct xmit_frame *)pxmitframe)->buf_addr)
return _FAIL;
hw_hdr_offset = TXDESC_OFFSET;
@@ -1043,7 +1043,7 @@ u32 rtw_aes_encrypt(struct adapter *padapter, u8 *pxmitframe)
u32 res = _SUCCESS;
- if (((struct xmit_frame *)pxmitframe)->buf_addr == NULL)
+ if (!((struct xmit_frame *)pxmitframe)->buf_addr)
return _FAIL;
hw_hdr_offset = TXDESC_OFFSET;
diff --git a/drivers/staging/rtl8723bs/include/rtw_security.h b/drivers/staging/rtl8723bs/include/rtw_security.h
index a68b738..7587fa8 100644
--- a/drivers/staging/rtl8723bs/include/rtw_security.h
+++ b/drivers/staging/rtl8723bs/include/rtw_security.h
@@ -107,13 +107,13 @@ struct security_priv {
u32 dot118021XGrpPrivacy; /* This specify the privacy algthm. used for Grp key */
u32 dot118021XGrpKeyid; /* key id used for Grp Key (tx key index) */
- union Keytype dot118021XGrpKey[BIP_MAX_KEYID]; /* 802.1x Group Key, for inx0 and inx1 */
- union Keytype dot118021XGrptxmickey[BIP_MAX_KEYID];
- union Keytype dot118021XGrprxmickey[BIP_MAX_KEYID];
+ union Keytype dot118021XGrpKey[BIP_MAX_KEYID + 1]; /* 802.1x Group Key, for inx0 and inx1 */
+ union Keytype dot118021XGrptxmickey[BIP_MAX_KEYID + 1];
+ union Keytype dot118021XGrprxmickey[BIP_MAX_KEYID + 1];
union pn48 dot11Grptxpn; /* PN48 used for Grp Key xmit. */
union pn48 dot11Grprxpn; /* PN48 used for Grp Key recv. */
u32 dot11wBIPKeyid; /* key id used for BIP Key (tx key index) */
- union Keytype dot11wBIPKey[6]; /* BIP Key, for index4 and index5 */
+ union Keytype dot11wBIPKey[BIP_MAX_KEYID + 1]; /* BIP Key, for index4 and index5 */
union pn48 dot11wBIPtxpn; /* PN48 used for Grp Key xmit. */
union pn48 dot11wBIPrxpn; /* PN48 used for Grp Key recv. */
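
The rtw_security.h change sizes the group and BIP key arrays by the maximum key id plus one: valid ids run 0..BIP_MAX_KEYID, so arrays of BIP_MAX_KEYID elements were one short for the highest id, and the index check added later in the ioctl_cfg80211.c hunks (crypt.idx > max_idx) depends on this sizing to stay in bounds. A minimal sketch of the sizing rule, with a hypothetical id limit:

#include <linux/types.h>

#define EXAMPLE_MAX_KEYID	5		/* valid ids: 0..5 */

/* Size by max index + 1; an array of EXAMPLE_MAX_KEYID entries would overflow at id 5. */
static u32 example_keys[EXAMPLE_MAX_KEYID + 1];

static int example_store_key(u8 idx, u32 key)
{
	if (idx > EXAMPLE_MAX_KEYID)
		return -EINVAL;			/* reject before indexing */
	example_keys[idx] = key;
	return 0;
}
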
diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
index 499ac3a..b33424a 100644
--- a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
+++ b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
@@ -113,13 +113,10 @@ static struct ieee80211_supported_band *rtw_spt_band_alloc(
struct ieee80211_supported_band *spt_band = NULL;
int n_channels, n_bitrates;
- if (band == NL80211_BAND_2GHZ)
- {
+ if (band == NL80211_BAND_2GHZ) {
n_channels = RTW_2G_CHANNELS_NUM;
n_bitrates = RTW_G_RATES_NUM;
- }
- else
- {
+ } else {
goto exit;
}
@@ -135,8 +132,7 @@ static struct ieee80211_supported_band *rtw_spt_band_alloc(
spt_band->n_channels = n_channels;
spt_band->n_bitrates = n_bitrates;
- if (band == NL80211_BAND_2GHZ)
- {
+ if (band == NL80211_BAND_2GHZ) {
rtw_2g_channels_init(spt_band->channels);
rtw_2g_rates_init(spt_band->bitrates);
}
@@ -235,8 +231,7 @@ struct cfg80211_bss *rtw_cfg80211_inform_bss(struct adapter *padapter, struct wl
{
u16 wapi_len = 0;
- if (rtw_get_wapi_ie(pnetwork->network.ies, pnetwork->network.ie_length, NULL, &wapi_len) > 0)
- {
+ if (rtw_get_wapi_ie(pnetwork->network.ies, pnetwork->network.ie_length, NULL, &wapi_len) > 0) {
if (wapi_len > 0)
goto exit;
}
@@ -244,8 +239,7 @@ struct cfg80211_bss *rtw_cfg80211_inform_bss(struct adapter *padapter, struct wl
/* To reduce PBC Overlap rate */
/* spin_lock_bh(&pwdev_priv->scan_req_lock); */
- if (adapter_wdev_data(padapter)->scan_request)
- {
+ if (adapter_wdev_data(padapter)->scan_request) {
u8 *psr = NULL, sr = 0;
struct ndis_802_11_ssid *pssid = &pnetwork->network.ssid;
struct cfg80211_scan_request *request = adapter_wdev_data(padapter)->scan_request;
@@ -258,14 +252,12 @@ struct cfg80211_bss *rtw_cfg80211_inform_bss(struct adapter *padapter, struct wl
if (wpsie && wpsielen > 0)
psr = rtw_get_wps_attr_content(wpsie, wpsielen, WPS_ATTR_SELECTED_REGISTRAR, (u8 *)(&sr), NULL);
- if (sr != 0)
- {
- if (request->n_ssids == 1 && request->n_channels == 1) /* it means under processing WPS */
- {
+ if (sr != 0) {
+ /* it means under processing WPS */
+ if (request->n_ssids == 1 && request->n_channels == 1) {
if (ssids[0].ssid_len != 0 &&
(pssid->ssid_length != ssids[0].ssid_len ||
- memcmp(pssid->ssid, ssids[0].ssid, ssids[0].ssid_len)))
- {
+ memcmp(pssid->ssid, ssids[0].ssid, ssids[0].ssid_len))) {
if (psr)
*psr = 0; /* clear sr */
}
@@ -358,7 +350,7 @@ int rtw_cfg80211_check_bss(struct adapter *padapter)
bss = cfg80211_get_bss(padapter->rtw_wdev->wiphy, notify_channel,
pnetwork->mac_address, pnetwork->ssid.ssid,
pnetwork->ssid.ssid_length,
- WLAN_CAPABILITY_ESS, WLAN_CAPABILITY_ESS);
+ IEEE80211_BSS_TYPE_ANY, IEEE80211_PRIVACY_ANY);
cfg80211_put_bss(padapter->rtw_wdev->wiphy, bss);
@@ -375,23 +367,18 @@ void rtw_cfg80211_ibss_indicate_connect(struct adapter *padapter)
struct ieee80211_channel *chan;
if (pwdev->iftype != NL80211_IFTYPE_ADHOC)
- {
return;
- }
if (!rtw_cfg80211_check_bss(padapter)) {
struct wlan_bssid_ex *pnetwork = &(padapter->mlmeextpriv.mlmext_info.network);
struct wlan_network *scanned = pmlmepriv->cur_network_scanned;
- if (check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) == true)
- {
+ if (check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) == true) {
memcpy(&cur_network->network, pnetwork, sizeof(struct wlan_bssid_ex));
rtw_cfg80211_inform_bss(padapter, cur_network);
- }
- else
- {
- if (scanned == NULL) {
+ } else {
+ if (!scanned) {
rtw_warn_on(1);
return;
}
@@ -432,7 +419,7 @@ void rtw_cfg80211_indicate_connect(struct adapter *padapter)
struct wlan_bssid_ex *pnetwork = &(padapter->mlmeextpriv.mlmext_info.network);
struct wlan_network *scanned = pmlmepriv->cur_network_scanned;
- if (scanned == NULL) {
+ if (!scanned) {
rtw_warn_on(1);
goto check_bss;
}
@@ -473,9 +460,7 @@ void rtw_cfg80211_indicate_connect(struct adapter *padapter)
roam_info.resp_ie_len =
pmlmepriv->assoc_rsp_len-sizeof(struct ieee80211_hdr_3addr)-6;
cfg80211_roamed(padapter->pnetdev, &roam_info, GFP_ATOMIC);
- }
- else
- {
+ } else {
cfg80211_connect_result(padapter->pnetdev, cur_network->network.mac_address
, pmlmepriv->assoc_req+sizeof(struct ieee80211_hdr_3addr)+2
, pmlmepriv->assoc_req_len-sizeof(struct ieee80211_hdr_3addr)-2
@@ -527,51 +512,41 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
param->u.crypt.err = 0;
param->u.crypt.alg[IEEE_CRYPT_ALG_NAME_LEN - 1] = '\0';
- if (param_len != sizeof(struct ieee_param) + param->u.crypt.key_len)
- {
+ if (param_len != sizeof(struct ieee_param) + param->u.crypt.key_len) {
ret = -EINVAL;
goto exit;
}
if (param->sta_addr[0] == 0xff && param->sta_addr[1] == 0xff &&
param->sta_addr[2] == 0xff && param->sta_addr[3] == 0xff &&
- param->sta_addr[4] == 0xff && param->sta_addr[5] == 0xff)
- {
- if (param->u.crypt.idx >= WEP_KEYS)
- {
+ param->sta_addr[4] == 0xff && param->sta_addr[5] == 0xff) {
+ if (param->u.crypt.idx >= WEP_KEYS) {
ret = -EINVAL;
goto exit;
}
- }
- else
- {
+ } else {
psta = rtw_get_stainfo(pstapriv, param->sta_addr);
if (!psta)
/* ret = -EINVAL; */
goto exit;
}
- if (strcmp(param->u.crypt.alg, "none") == 0 && (psta == NULL))
+ if (strcmp(param->u.crypt.alg, "none") == 0 && !psta)
goto exit;
- if (strcmp(param->u.crypt.alg, "WEP") == 0 && (psta == NULL))
- {
+ if (strcmp(param->u.crypt.alg, "WEP") == 0 && !psta) {
wep_key_idx = param->u.crypt.idx;
wep_key_len = param->u.crypt.key_len;
- if ((wep_key_idx >= WEP_KEYS) || (wep_key_len <= 0))
- {
+ if ((wep_key_idx >= WEP_KEYS) || (wep_key_len <= 0)) {
ret = -EINVAL;
goto exit;
}
if (wep_key_len > 0)
- {
wep_key_len = wep_key_len <= 5 ? 5 : 13;
- }
- if (psecuritypriv->bWepDefaultKeyIdxSet == 0)
- {
+ if (psecuritypriv->bWepDefaultKeyIdxSet == 0) {
/* wep default key has not been set, so use this key index as default key. */
psecuritypriv->dot11AuthAlgrthm = dot11AuthAlgrthm_Auto;
@@ -579,8 +554,7 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
psecuritypriv->dot11PrivacyAlgrthm = _WEP40_;
psecuritypriv->dot118021XGrpPrivacy = _WEP40_;
- if (wep_key_len == 13)
- {
+ if (wep_key_len == 13) {
psecuritypriv->dot11PrivacyAlgrthm = _WEP104_;
psecuritypriv->dot118021XGrpPrivacy = _WEP104_;
}
@@ -598,24 +572,18 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
}
-
- if (!psta && check_fwstate(pmlmepriv, WIFI_AP_STATE)) /* group key */
- {
- if (param->u.crypt.set_tx == 0) /* group key */
- {
- if (strcmp(param->u.crypt.alg, "WEP") == 0)
- {
+ /* group key */
+ if (!psta && check_fwstate(pmlmepriv, WIFI_AP_STATE)) {
+ /* group key */
+ if (param->u.crypt.set_tx == 0) {
+ if (strcmp(param->u.crypt.alg, "WEP") == 0) {
memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
psecuritypriv->dot118021XGrpPrivacy = _WEP40_;
if (param->u.crypt.key_len == 13)
- {
psecuritypriv->dot118021XGrpPrivacy = _WEP104_;
- }
- }
- else if (strcmp(param->u.crypt.alg, "TKIP") == 0)
- {
+ } else if (strcmp(param->u.crypt.alg, "TKIP") == 0) {
psecuritypriv->dot118021XGrpPrivacy = _TKIP_;
memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
@@ -627,15 +595,11 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
psecuritypriv->busetkipkey = true;
- }
- else if (strcmp(param->u.crypt.alg, "CCMP") == 0)
- {
+ } else if (strcmp(param->u.crypt.alg, "CCMP") == 0) {
psecuritypriv->dot118021XGrpPrivacy = _AES_;
memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
- }
- else
- {
+ } else {
psecuritypriv->dot118021XGrpPrivacy = _NO_PRIVACY_;
}
@@ -648,8 +612,7 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
rtw_ap_set_group_key(padapter, param->u.crypt.key, psecuritypriv->dot118021XGrpPrivacy, param->u.crypt.idx);
pbcmc_sta = rtw_get_bcmc_stainfo(padapter);
- if (pbcmc_sta)
- {
+ if (pbcmc_sta) {
pbcmc_sta->ieee8021x_blocked = false;
pbcmc_sta->dot118021XPrivacy = psecuritypriv->dot118021XGrpPrivacy;/* rx will use bmc_sta's dot118021XPrivacy */
}
@@ -660,24 +623,16 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
}
- if (psecuritypriv->dot11AuthAlgrthm == dot11AuthAlgrthm_8021X && psta) /* psk/802_1x */
- {
- if (check_fwstate(pmlmepriv, WIFI_AP_STATE))
- {
- if (param->u.crypt.set_tx == 1) /* pairwise key */
- {
+ if (psecuritypriv->dot11AuthAlgrthm == dot11AuthAlgrthm_8021X && psta) { /* psk/802_1x */
+ if (check_fwstate(pmlmepriv, WIFI_AP_STATE)) {
+ if (param->u.crypt.set_tx == 1) { /* pairwise key */
memcpy(psta->dot118021x_UncstKey.skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
- if (strcmp(param->u.crypt.alg, "WEP") == 0)
- {
+ if (strcmp(param->u.crypt.alg, "WEP") == 0) {
psta->dot118021XPrivacy = _WEP40_;
if (param->u.crypt.key_len == 13)
- {
psta->dot118021XPrivacy = _WEP104_;
- }
- }
- else if (strcmp(param->u.crypt.alg, "TKIP") == 0)
- {
+ } else if (strcmp(param->u.crypt.alg, "TKIP") == 0) {
psta->dot118021XPrivacy = _TKIP_;
/* DEBUG_ERR("set key length :param->u.crypt.key_len =%d\n", param->u.crypt.key_len); */
@@ -687,14 +642,10 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
psecuritypriv->busetkipkey = true;
- }
- else if (strcmp(param->u.crypt.alg, "CCMP") == 0)
- {
+ } else if (strcmp(param->u.crypt.alg, "CCMP") == 0) {
psta->dot118021XPrivacy = _AES_;
- }
- else
- {
+ } else {
psta->dot118021XPrivacy = _NO_PRIVACY_;
}
@@ -704,21 +655,14 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
psta->bpairwise_key_installed = true;
- }
- else/* group key??? */
- {
- if (strcmp(param->u.crypt.alg, "WEP") == 0)
- {
+ } else { /* group key??? */
+ if (strcmp(param->u.crypt.alg, "WEP") == 0) {
memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
psecuritypriv->dot118021XGrpPrivacy = _WEP40_;
if (param->u.crypt.key_len == 13)
- {
psecuritypriv->dot118021XGrpPrivacy = _WEP104_;
- }
- }
- else if (strcmp(param->u.crypt.alg, "TKIP") == 0)
- {
+ } else if (strcmp(param->u.crypt.alg, "TKIP") == 0) {
psecuritypriv->dot118021XGrpPrivacy = _TKIP_;
memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
@@ -730,15 +674,11 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
psecuritypriv->busetkipkey = true;
- }
- else if (strcmp(param->u.crypt.alg, "CCMP") == 0)
- {
+ } else if (strcmp(param->u.crypt.alg, "CCMP") == 0) {
psecuritypriv->dot118021XGrpPrivacy = _AES_;
memcpy(grpkey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
- }
- else
- {
+ } else {
psecuritypriv->dot118021XGrpPrivacy = _NO_PRIVACY_;
}
@@ -751,8 +691,7 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
rtw_ap_set_group_key(padapter, param->u.crypt.key, psecuritypriv->dot118021XGrpPrivacy, param->u.crypt.idx);
pbcmc_sta = rtw_get_bcmc_stainfo(padapter);
- if (pbcmc_sta)
- {
+ if (pbcmc_sta) {
pbcmc_sta->ieee8021x_blocked = false;
pbcmc_sta->dot118021XPrivacy = psecuritypriv->dot118021XGrpPrivacy;/* rx will use bmc_sta's dot118021XPrivacy */
}
@@ -772,6 +711,7 @@ static int rtw_cfg80211_ap_set_encryption(struct net_device *dev, struct ieee_pa
static int rtw_cfg80211_set_encryption(struct net_device *dev, struct ieee_param *param, u32 param_len)
{
int ret = 0;
+ u8 max_idx;
u32 wep_key_idx, wep_key_len;
struct adapter *padapter = rtw_netdev_priv(dev);
struct mlme_priv *pmlmepriv = &padapter->mlmepriv;
@@ -780,43 +720,39 @@ static int rtw_cfg80211_set_encryption(struct net_device *dev, struct ieee_param
param->u.crypt.err = 0;
param->u.crypt.alg[IEEE_CRYPT_ALG_NAME_LEN - 1] = '\0';
- if (param_len < (u32) ((u8 *) param->u.crypt.key - (u8 *) param) + param->u.crypt.key_len)
- {
+ if (param_len < (u32) ((u8 *) param->u.crypt.key - (u8 *) param) + param->u.crypt.key_len) {
ret = -EINVAL;
goto exit;
}
- if (param->sta_addr[0] == 0xff && param->sta_addr[1] == 0xff &&
- param->sta_addr[2] == 0xff && param->sta_addr[3] == 0xff &&
- param->sta_addr[4] == 0xff && param->sta_addr[5] == 0xff)
- {
- if (param->u.crypt.idx >= WEP_KEYS
- || param->u.crypt.idx >= BIP_MAX_KEYID
- )
- {
- ret = -EINVAL;
- goto exit;
- }
- } else {
- {
+ if (param->sta_addr[0] != 0xff || param->sta_addr[1] != 0xff ||
+ param->sta_addr[2] != 0xff || param->sta_addr[3] != 0xff ||
+ param->sta_addr[4] != 0xff || param->sta_addr[5] != 0xff) {
ret = -EINVAL;
goto exit;
}
- }
if (strcmp(param->u.crypt.alg, "WEP") == 0)
- {
+ max_idx = WEP_KEYS - 1;
+ else
+ max_idx = BIP_MAX_KEYID;
+
+ if (param->u.crypt.idx > max_idx) {
+ netdev_err(dev, "Error crypt.idx %d > %d\n", param->u.crypt.idx, max_idx);
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ if (strcmp(param->u.crypt.alg, "WEP") == 0) {
wep_key_idx = param->u.crypt.idx;
wep_key_len = param->u.crypt.key_len;
- if ((wep_key_idx >= WEP_KEYS) || (wep_key_len <= 0))
- {
+ if (wep_key_len <= 0) {
ret = -EINVAL;
goto exit;
}
- if (psecuritypriv->bWepDefaultKeyIdxSet == 0)
- {
+ if (psecuritypriv->bWepDefaultKeyIdxSet == 0) {
/* wep default key has not been set, so use this key index as default key. */
wep_key_len = wep_key_len <= 5 ? 5 : 13;
@@ -825,8 +761,7 @@ static int rtw_cfg80211_set_encryption(struct net_device *dev, struct ieee_param
psecuritypriv->dot11PrivacyAlgrthm = _WEP40_;
psecuritypriv->dot118021XGrpPrivacy = _WEP40_;
- if (wep_key_len == 13)
- {
+ if (wep_key_len == 13) {
psecuritypriv->dot11PrivacyAlgrthm = _WEP104_;
psecuritypriv->dot118021XGrpPrivacy = _WEP104_;
}
@@ -843,13 +778,11 @@ static int rtw_cfg80211_set_encryption(struct net_device *dev, struct ieee_param
goto exit;
}
- if (padapter->securitypriv.dot11AuthAlgrthm == dot11AuthAlgrthm_8021X) /* 802_1x */
- {
+ if (padapter->securitypriv.dot11AuthAlgrthm == dot11AuthAlgrthm_8021X) { /* 802_1x */
struct sta_info *psta, *pbcmc_sta;
struct sta_priv *pstapriv = &padapter->stapriv;
- if (check_fwstate(pmlmepriv, WIFI_STATION_STATE | WIFI_MP_STATE) == true) /* sta mode */
- {
+ if (check_fwstate(pmlmepriv, WIFI_STATION_STATE | WIFI_MP_STATE) == true) { /* sta mode */
psta = rtw_get_stainfo(pstapriv, get_bssid(pmlmepriv));
if (psta) {
/* Jeff: don't disable ieee8021x_blocked while clearing key */
@@ -858,18 +791,15 @@ static int rtw_cfg80211_set_encryption(struct net_device *dev, struct ieee_param
if ((padapter->securitypriv.ndisencryptstatus == Ndis802_11Encryption2Enabled) ||
- (padapter->securitypriv.ndisencryptstatus == Ndis802_11Encryption3Enabled))
- {
+ (padapter->securitypriv.ndisencryptstatus == Ndis802_11Encryption3Enabled)) {
psta->dot118021XPrivacy = padapter->securitypriv.dot11PrivacyAlgrthm;
}
- if (param->u.crypt.set_tx == 1)/* pairwise key */
- {
+ if (param->u.crypt.set_tx == 1) { /* pairwise key */
memcpy(psta->dot118021x_UncstKey.skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
- if (strcmp(param->u.crypt.alg, "TKIP") == 0)/* set mic key */
- {
+ if (strcmp(param->u.crypt.alg, "TKIP") == 0) { /* set mic key */
/* DEBUG_ERR(("\nset key length :param->u.crypt.key_len =%d\n", param->u.crypt.key_len)); */
memcpy(psta->dot11tkiptxmickey.skey, &(param->u.crypt.key[16]), 8);
memcpy(psta->dot11tkiprxmickey.skey, &(param->u.crypt.key[24]), 8);
@@ -879,11 +809,8 @@ static int rtw_cfg80211_set_encryption(struct net_device *dev, struct ieee_param
}
rtw_setstakey_cmd(padapter, psta, true, true);
- }
- else/* group key */
- {
- if (strcmp(param->u.crypt.alg, "TKIP") == 0 || strcmp(param->u.crypt.alg, "CCMP") == 0)
- {
+ } else { /* group key */
+ if (strcmp(param->u.crypt.alg, "TKIP") == 0 || strcmp(param->u.crypt.alg, "CCMP") == 0) {
memcpy(padapter->securitypriv.dot118021XGrpKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
memcpy(padapter->securitypriv.dot118021XGrptxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[16]), 8);
memcpy(padapter->securitypriv.dot118021XGrprxmickey[param->u.crypt.idx].skey, &(param->u.crypt.key[24]), 8);
@@ -891,9 +818,7 @@ static int rtw_cfg80211_set_encryption(struct net_device *dev, struct ieee_param
padapter->securitypriv.dot118021XGrpKeyid = param->u.crypt.idx;
rtw_set_key(padapter, &padapter->securitypriv, param->u.crypt.idx, 1, true);
- }
- else if (strcmp(param->u.crypt.alg, "BIP") == 0)
- {
+ } else if (strcmp(param->u.crypt.alg, "BIP") == 0) {
/* save the IGTK key, length 16 bytes */
memcpy(padapter->securitypriv.dot11wBIPKey[param->u.crypt.idx].skey, param->u.crypt.key, (param->u.crypt.key_len > 16 ? 16 : param->u.crypt.key_len));
/*
@@ -907,25 +832,19 @@ static int rtw_cfg80211_set_encryption(struct net_device *dev, struct ieee_param
}
pbcmc_sta = rtw_get_bcmc_stainfo(padapter);
- if (pbcmc_sta == NULL)
- {
+ if (!pbcmc_sta) {
/* DEBUG_ERR(("Set OID_802_11_ADD_KEY: bcmc stainfo is null\n")); */
- }
- else
- {
+ } else {
/* Jeff: don't disable ieee8021x_blocked while clearing key */
if (strcmp(param->u.crypt.alg, "none") != 0)
pbcmc_sta->ieee8021x_blocked = false;
if ((padapter->securitypriv.ndisencryptstatus == Ndis802_11Encryption2Enabled) ||
- (padapter->securitypriv.ndisencryptstatus == Ndis802_11Encryption3Enabled))
- {
+ (padapter->securitypriv.ndisencryptstatus == Ndis802_11Encryption3Enabled)) {
pbcmc_sta->dot118021XPrivacy = padapter->securitypriv.dot11PrivacyAlgrthm;
}
}
- }
- else if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE)) /* adhoc mode */
- {
+ } else if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE)) { /* adhoc mode */
}
}
@@ -947,7 +866,7 @@ static int cfg80211_rtw_add_key(struct wiphy *wiphy, struct net_device *ndev,
param_len = sizeof(struct ieee_param) + params->key_len;
param = rtw_malloc(param_len);
- if (param == NULL)
+ if (!param)
return -1;
memset(param, 0, param_len);
@@ -983,39 +902,29 @@ static int cfg80211_rtw_add_key(struct wiphy *wiphy, struct net_device *ndev,
if (!mac_addr || is_broadcast_ether_addr(mac_addr))
- {
param->u.crypt.set_tx = 0; /* for wpa/wpa2 group key */
- } else {
+ else
param->u.crypt.set_tx = 1; /* for wpa/wpa2 pairwise key */
- }
param->u.crypt.idx = key_index;
if (params->seq_len && params->seq)
- {
memcpy(param->u.crypt.seq, (u8 *)params->seq, params->seq_len);
- }
- if (params->key_len && params->key)
- {
+ if (params->key_len && params->key) {
param->u.crypt.key_len = params->key_len;
memcpy(param->u.crypt.key, (u8 *)params->key, params->key_len);
}
- if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true)
- {
+ if (check_fwstate(pmlmepriv, WIFI_STATION_STATE) == true) {
ret = rtw_cfg80211_set_encryption(ndev, param, param_len);
- }
- else if (check_fwstate(pmlmepriv, WIFI_AP_STATE) == true)
- {
+ } else if (check_fwstate(pmlmepriv, WIFI_AP_STATE) == true) {
if (mac_addr)
memcpy(param->sta_addr, (void *)mac_addr, ETH_ALEN);
ret = rtw_cfg80211_ap_set_encryption(ndev, param, param_len);
- }
- else if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE) == true
- || check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) == true)
- {
+ } else if (check_fwstate(pmlmepriv, WIFI_ADHOC_STATE) == true
+ || check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) == true) {
ret = rtw_cfg80211_set_encryption(ndev, param, param_len);
}
@@ -1041,8 +950,7 @@ static int cfg80211_rtw_del_key(struct wiphy *wiphy, struct net_device *ndev,
struct adapter *padapter = rtw_netdev_priv(ndev);
struct security_priv *psecuritypriv = &padapter->securitypriv;
- if (key_index == psecuritypriv->dot11PrivacyKeyIndex)
- {
+ if (key_index == psecuritypriv->dot11PrivacyKeyIndex) {
/* clear the flag of wep default key set. */
psecuritypriv->bWepDefaultKeyIdxSet = 0;
}
@@ -1058,16 +966,14 @@ static int cfg80211_rtw_set_default_key(struct wiphy *wiphy,
struct adapter *padapter = rtw_netdev_priv(ndev);
struct security_priv *psecuritypriv = &padapter->securitypriv;
- if ((key_index < WEP_KEYS) && ((psecuritypriv->dot11PrivacyAlgrthm == _WEP40_) || (psecuritypriv->dot11PrivacyAlgrthm == _WEP104_))) /* set wep default key */
- {
+ if ((key_index < WEP_KEYS) && ((psecuritypriv->dot11PrivacyAlgrthm == _WEP40_) || (psecuritypriv->dot11PrivacyAlgrthm == _WEP104_))) { /* set wep default key */
psecuritypriv->ndisencryptstatus = Ndis802_11Encryption1Enabled;
psecuritypriv->dot11PrivacyKeyIndex = key_index;
psecuritypriv->dot11PrivacyAlgrthm = _WEP40_;
psecuritypriv->dot118021XGrpPrivacy = _WEP40_;
- if (psecuritypriv->dot11DefKeylen[key_index] == 13)
- {
+ if (psecuritypriv->dot11DefKeylen[key_index] == 13) {
psecuritypriv->dot11PrivacyAlgrthm = _WEP104_;
psecuritypriv->dot118021XGrpPrivacy = _WEP104_;
}
@@ -1098,16 +1004,14 @@ static int cfg80211_rtw_get_station(struct wiphy *wiphy,
}
psta = rtw_get_stainfo(pstapriv, (u8 *)mac);
- if (psta == NULL) {
+ if (!psta) {
ret = -ENOENT;
goto exit;
}
/* for infra./P2PClient mode */
if (check_fwstate(pmlmepriv, WIFI_STATION_STATE)
- && check_fwstate(pmlmepriv, _FW_LINKED)
- )
- {
+ && check_fwstate(pmlmepriv, _FW_LINKED)) {
struct wlan_network *cur_network = &(pmlmepriv->cur_network);
if (memcmp((u8 *)mac, cur_network->network.mac_address, ETH_ALEN)) {
@@ -1133,9 +1037,7 @@ static int cfg80211_rtw_get_station(struct wiphy *wiphy,
if ((check_fwstate(pmlmepriv, WIFI_ADHOC_STATE)
|| check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE)
|| check_fwstate(pmlmepriv, WIFI_AP_STATE))
- && check_fwstate(pmlmepriv, _FW_LINKED)
- )
- {
+ && check_fwstate(pmlmepriv, _FW_LINKED)) {
/* TODO: should acquire station info... */
}
@@ -1155,8 +1057,7 @@ static int cfg80211_rtw_change_iface(struct wiphy *wiphy,
struct mlme_ext_priv *pmlmeext = &(padapter->mlmeextpriv);
int ret = 0;
- if (adapter_to_dvobj(padapter)->processing_dev_remove == true)
- {
+ if (adapter_to_dvobj(padapter)->processing_dev_remove == true) {
ret = -EPERM;
goto exit;
}
@@ -1175,8 +1076,7 @@ static int cfg80211_rtw_change_iface(struct wiphy *wiphy,
old_type = rtw_wdev->iftype;
- if (old_type != type)
- {
+ if (old_type != type) {
pmlmeext->action_public_rxseq = 0xffff;
pmlmeext->action_public_dialog_token = 0xff;
}
@@ -1198,8 +1098,7 @@ static int cfg80211_rtw_change_iface(struct wiphy *wiphy,
rtw_wdev->iftype = type;
- if (rtw_set_802_11_infrastructure_mode(padapter, networkType) == false)
- {
+ if (rtw_set_802_11_infrastructure_mode(padapter, networkType) == false) {
rtw_wdev->iftype = old_type;
ret = -EPERM;
goto exit;
@@ -1239,8 +1138,8 @@ void rtw_cfg80211_unlink_bss(struct adapter *padapter, struct wlan_network *pnet
bss = cfg80211_get_bss(wiphy, NULL/*notify_channel*/,
select_network->mac_address, select_network->ssid.ssid,
- select_network->ssid.ssid_length, 0/*WLAN_CAPABILITY_ESS*/,
- 0/*WLAN_CAPABILITY_ESS*/);
+ select_network->ssid.ssid_length, IEEE80211_BSS_TYPE_ANY,
+ IEEE80211_PRIVACY_ANY);
if (bss) {
cfg80211_unlink_bss(wiphy, bss);
@@ -1264,9 +1163,7 @@ void rtw_cfg80211_surveydone_event_callback(struct adapter *padapter)
/* report network only if the current channel set contains the channel to which this network belongs */
if (rtw_ch_set_search_ch(padapter->mlmeextpriv.channel_set, pnetwork->network.configuration.ds_config) >= 0
- && true == rtw_validate_ssid(&(pnetwork->network.ssid))
- )
- {
+ && true == rtw_validate_ssid(&(pnetwork->network.ssid))) {
/* ev =translate_scan(padapter, a, pnetwork, ev, stop); */
rtw_cfg80211_inform_bss(padapter, pnetwork);
}
@@ -1283,13 +1180,10 @@ static int rtw_cfg80211_set_probe_req_wpsp2pie(struct adapter *padapter, char *b
u8 *wps_ie;
struct mlme_priv *pmlmepriv = &(padapter->mlmepriv);
- if (len > 0)
- {
+ if (len > 0) {
wps_ie = rtw_get_wps_ie(buf, len, NULL, &wps_ielen);
- if (wps_ie)
- {
- if (pmlmepriv->wps_probe_req_ie)
- {
+ if (wps_ie) {
+ if (pmlmepriv->wps_probe_req_ie) {
pmlmepriv->wps_probe_req_ie_len = 0;
kfree(pmlmepriv->wps_probe_req_ie);
pmlmepriv->wps_probe_req_ie = NULL;
@@ -1327,7 +1221,7 @@ static int cfg80211_rtw_scan(struct wiphy *wiphy
struct rtw_wdev_priv *pwdev_priv;
struct mlme_priv *pmlmepriv;
- if (ndev == NULL) {
+ if (!ndev) {
ret = -EINVAL;
goto exit;
}
@@ -1341,10 +1235,8 @@ static int cfg80211_rtw_scan(struct wiphy *wiphy
pwdev_priv->scan_request = request;
spin_unlock_bh(&pwdev_priv->scan_req_lock);
- if (check_fwstate(pmlmepriv, WIFI_AP_STATE) == true)
- {
- if (check_fwstate(pmlmepriv, WIFI_UNDER_WPS|_FW_UNDER_SURVEY|_FW_UNDER_LINKING) == true)
- {
+ if (check_fwstate(pmlmepriv, WIFI_AP_STATE) == true) {
+ if (check_fwstate(pmlmepriv, WIFI_UNDER_WPS|_FW_UNDER_SURVEY|_FW_UNDER_LINKING) == true) {
need_indicate_scan_done = true;
goto check_need_indicate_scan_done;
}
@@ -1367,15 +1259,13 @@ static int cfg80211_rtw_scan(struct wiphy *wiphy
goto check_need_indicate_scan_done;
}
- if (pmlmepriv->LinkDetectInfo.bBusyTraffic == true)
- {
+ if (pmlmepriv->LinkDetectInfo.bBusyTraffic == true) {
static unsigned long lastscantime = 0;
unsigned long passtime;
passtime = jiffies_to_msecs(jiffies - lastscantime);
lastscantime = jiffies;
- if (passtime > 12000)
- {
+ if (passtime > 12000) {
need_indicate_scan_done = true;
goto check_need_indicate_scan_done;
}
@@ -1414,9 +1304,7 @@ static int cfg80211_rtw_scan(struct wiphy *wiphy
} else if (request->n_channels <= 4) {
for (j = request->n_channels - 1; j >= 0; j--)
for (i = 0; i < survey_times; i++)
- {
memcpy(&ch[j*survey_times+i], &ch[j], sizeof(struct rtw_ieee80211_channel));
- }
_status = rtw_sitesurvey_cmd(padapter, ssid, RTW_SSID_SCAN_AMOUNT, ch, survey_times * request->n_channels);
} else {
_status = rtw_sitesurvey_cmd(padapter, ssid, RTW_SSID_SCAN_AMOUNT, NULL, 0);
@@ -1425,14 +1313,11 @@ static int cfg80211_rtw_scan(struct wiphy *wiphy
if (_status == false)
- {
ret = -1;
- }
check_need_indicate_scan_done:
kfree(ssid);
- if (need_indicate_scan_done)
- {
+ if (need_indicate_scan_done) {
rtw_cfg80211_surveydone_event_callback(padapter);
rtw_cfg80211_indicate_scan_done(padapter, false);
}
@@ -1458,9 +1343,7 @@ static int rtw_cfg80211_set_wpa_version(struct security_priv *psecuritypriv, u32
if (wpa_version & (NL80211_WPA_VERSION_1 | NL80211_WPA_VERSION_2))
- {
psecuritypriv->ndisauthtype = Ndis802_11AuthModeWPAPSK;
- }
return 0;
@@ -1571,7 +1454,7 @@ static int rtw_cfg80211_set_wpa_ie(struct adapter *padapter, u8 *pie, size_t iel
u8 *pwpa, *pwpa2;
u8 null_addr[] = {0, 0, 0, 0, 0, 0};
- if (pie == NULL || !ielen) {
+ if (!pie || !ielen) {
/* Treat this as normal case, but need to clear WIFI_UNDER_WPS */
_clr_fwstate_(&padapter->mlmepriv, WIFI_UNDER_WPS);
goto exit;
@@ -1583,7 +1466,7 @@ static int rtw_cfg80211_set_wpa_ie(struct adapter *padapter, u8 *pie, size_t iel
}
buf = rtw_zmalloc(ielen);
- if (buf == NULL) {
+ if (!buf) {
ret = -ENOMEM;
goto exit;
}
@@ -1619,8 +1502,7 @@ static int rtw_cfg80211_set_wpa_ie(struct adapter *padapter, u8 *pie, size_t iel
if (pairwise_cipher == 0)
pairwise_cipher = WPA_CIPHER_NONE;
- switch (group_cipher)
- {
+ switch (group_cipher) {
case WPA_CIPHER_NONE:
padapter->securitypriv.dot118021XGrpPrivacy = _NO_PRIVACY_;
padapter->securitypriv.ndisencryptstatus = Ndis802_11EncryptionDisabled;
@@ -1643,8 +1525,7 @@ static int rtw_cfg80211_set_wpa_ie(struct adapter *padapter, u8 *pie, size_t iel
break;
}
- switch (pairwise_cipher)
- {
+ switch (pairwise_cipher) {
case WPA_CIPHER_NONE:
padapter->securitypriv.dot11PrivacyAlgrthm = _NO_PRIVACY_;
padapter->securitypriv.ndisencryptstatus = Ndis802_11EncryptionDisabled;
@@ -1765,8 +1646,7 @@ static int cfg80211_rtw_leave_ibss(struct wiphy *wiphy, struct net_device *ndev)
rtw_wdev->iftype = NL80211_IFTYPE_STATION;
- if (rtw_set_802_11_infrastructure_mode(padapter, Ndis802_11Infrastructure) == false)
- {
+ if (rtw_set_802_11_infrastructure_mode(padapter, Ndis802_11Infrastructure) == false) {
rtw_wdev->iftype = old_type;
ret = -EPERM;
goto leave_ibss;
@@ -1826,9 +1706,8 @@ static int cfg80211_rtw_connect(struct wiphy *wiphy, struct net_device *ndev,
ret = -EBUSY;
goto exit;
}
- if (check_fwstate(pmlmepriv, _FW_UNDER_SURVEY) == true) {
+ if (check_fwstate(pmlmepriv, _FW_UNDER_SURVEY) == true)
rtw_scan_abort(padapter);
- }
psecuritypriv->ndisencryptstatus = Ndis802_11EncryptionDisabled;
psecuritypriv->dot11PrivacyAlgrthm = _NO_PRIVACY_;
@@ -1873,7 +1752,7 @@ static int cfg80211_rtw_connect(struct wiphy *wiphy, struct net_device *ndev,
wep_key_len = wep_key_len <= 5 ? 5 : 13;
wep_total_len = wep_key_len + FIELD_OFFSET(struct ndis_802_11_wep, key_material);
pwep = rtw_malloc(wep_total_len);
- if (pwep == NULL) {
+ if (!pwep) {
ret = -ENOMEM;
goto exit;
}
@@ -2321,9 +2200,8 @@ static int rtw_cfg80211_add_monitor_if(struct adapter *padapter, char *name, str
mon_ndev->ieee80211_ptr = mon_wdev;
ret = cfg80211_register_netdevice(mon_ndev);
- if (ret) {
+ if (ret)
goto out;
- }
*ndev = pwdev_priv->pmon_ndev = mon_ndev;
memcpy(pwdev_priv->ifname_mon, name, IFNAMSIZ+1);
@@ -2436,11 +2314,10 @@ static int rtw_add_beacon(struct adapter *adapter, const u8 *head, size_t head_l
rtw_ies_remove_ie(pbuf, &len, _BEACON_IE_OFFSET_, WLAN_EID_VENDOR_SPECIFIC, P2P_OUI, 4);
rtw_ies_remove_ie(pbuf, &len, _BEACON_IE_OFFSET_, WLAN_EID_VENDOR_SPECIFIC, WFD_OUI, 4);
- if (rtw_check_beacon_data(adapter, pbuf, len) == _SUCCESS) {
+ if (rtw_check_beacon_data(adapter, pbuf, len) == _SUCCESS)
ret = 0;
- } else {
+ else
ret = -EINVAL;
- }
kfree(pbuf);
@@ -2708,7 +2585,7 @@ static int cfg80211_rtw_mgmt_tx(struct wiphy *wiphy,
struct adapter *padapter;
struct rtw_wdev_priv *pwdev_priv;
- if (ndev == NULL) {
+ if (!ndev) {
ret = -EINVAL;
goto exit;
}
diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
index 295121c..0d2cb3e 100644
--- a/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
+++ b/drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
@@ -60,6 +60,7 @@ static int wpa_set_auth_algs(struct net_device *dev, u32 value)
static int wpa_set_encryption(struct net_device *dev, struct ieee_param *param, u32 param_len)
{
int ret = 0;
+ u8 max_idx;
u32 wep_key_idx, wep_key_len, wep_total_len;
struct ndis_802_11_wep *pwep = NULL;
struct adapter *padapter = rtw_netdev_priv(dev);
@@ -74,19 +75,22 @@ static int wpa_set_encryption(struct net_device *dev, struct ieee_param *param,
goto exit;
}
- if (param->sta_addr[0] == 0xff && param->sta_addr[1] == 0xff &&
- param->sta_addr[2] == 0xff && param->sta_addr[3] == 0xff &&
- param->sta_addr[4] == 0xff && param->sta_addr[5] == 0xff) {
- if (param->u.crypt.idx >= WEP_KEYS ||
- param->u.crypt.idx >= BIP_MAX_KEYID) {
- ret = -EINVAL;
- goto exit;
- }
- } else {
- {
- ret = -EINVAL;
- goto exit;
- }
+ if (param->sta_addr[0] != 0xff || param->sta_addr[1] != 0xff ||
+ param->sta_addr[2] != 0xff || param->sta_addr[3] != 0xff ||
+ param->sta_addr[4] != 0xff || param->sta_addr[5] != 0xff) {
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ if (strcmp(param->u.crypt.alg, "WEP") == 0)
+ max_idx = WEP_KEYS - 1;
+ else
+ max_idx = BIP_MAX_KEYID;
+
+ if (param->u.crypt.idx > max_idx) {
+ netdev_err(dev, "Error crypt.idx %d > %d\n", param->u.crypt.idx, max_idx);
+ ret = -EINVAL;
+ goto exit;
}
if (strcmp(param->u.crypt.alg, "WEP") == 0) {
@@ -98,9 +102,6 @@ static int wpa_set_encryption(struct net_device *dev, struct ieee_param *param,
wep_key_idx = param->u.crypt.idx;
wep_key_len = param->u.crypt.key_len;
- if (wep_key_idx > WEP_KEYS)
- return -EINVAL;
-
if (wep_key_len > 0) {
wep_key_len = wep_key_len <= 5 ? 5 : 13;
wep_total_len = wep_key_len + FIELD_OFFSET(struct ndis_802_11_wep, key_material);
@@ -153,7 +154,7 @@ static int wpa_set_encryption(struct net_device *dev, struct ieee_param *param,
if (check_fwstate(pmlmepriv, WIFI_STATION_STATE | WIFI_MP_STATE) == true) { /* sta mode */
psta = rtw_get_stainfo(pstapriv, get_bssid(pmlmepriv));
- if (psta == NULL) {
+ if (!psta) {
/* DEBUG_ERR(("Set wpa_set_encryption: Obtain Sta_info fail\n")); */
} else {
/* Jeff: don't disable ieee8021x_blocked while clearing key */
@@ -206,7 +207,7 @@ static int wpa_set_encryption(struct net_device *dev, struct ieee_param *param,
}
pbcmc_sta = rtw_get_bcmc_stainfo(padapter);
- if (pbcmc_sta == NULL) {
+ if (!pbcmc_sta) {
/* DEBUG_ERR(("Set OID_802_11_ADD_KEY: bcmc stainfo is null\n")); */
} else {
/* Jeff: don't disable ieee8021x_blocked while clearing key */
@@ -236,9 +237,9 @@ static int rtw_set_wpa_ie(struct adapter *padapter, char *pie, unsigned short ie
int ret = 0;
u8 null_addr[] = {0, 0, 0, 0, 0, 0};
- if ((ielen > MAX_WPA_IE_LEN) || (pie == NULL)) {
+ if (ielen > MAX_WPA_IE_LEN || !pie) {
_clr_fwstate_(&padapter->mlmepriv, WIFI_UNDER_WPS);
- if (pie == NULL)
+ if (!pie)
return ret;
else
return -EINVAL;
@@ -246,7 +247,7 @@ static int rtw_set_wpa_ie(struct adapter *padapter, char *pie, unsigned short ie
if (ielen) {
buf = rtw_zmalloc(ielen);
- if (buf == NULL) {
+ if (!buf) {
ret = -ENOMEM;
goto exit;
}
@@ -491,7 +492,7 @@ static int wpa_supplicant_ioctl(struct net_device *dev, struct iw_point *p)
return -EINVAL;
param = rtw_malloc(p->length);
- if (param == NULL)
+ if (!param)
return -ENOMEM;
if (copy_from_user(param, p->pointer, p->length)) {
@@ -571,7 +572,7 @@ static int rtw_set_encryption(struct net_device *dev, struct ieee_param *param,
goto exit;
}
- if (strcmp(param->u.crypt.alg, "none") == 0 && (psta == NULL)) {
+ if (strcmp(param->u.crypt.alg, "none") == 0 && !psta) {
/* todo:clear default encryption keys */
psecuritypriv->dot11AuthAlgrthm = dot11AuthAlgrthm_Open;
@@ -583,7 +584,7 @@ static int rtw_set_encryption(struct net_device *dev, struct ieee_param *param,
}
- if (strcmp(param->u.crypt.alg, "WEP") == 0 && (psta == NULL)) {
+ if (strcmp(param->u.crypt.alg, "WEP") == 0 && !psta) {
wep_key_idx = param->u.crypt.idx;
wep_key_len = param->u.crypt.key_len;
@@ -1227,7 +1228,7 @@ static int rtw_hostapd_ioctl(struct net_device *dev, struct iw_point *p)
return -EINVAL;
param = rtw_malloc(p->length);
- if (param == NULL)
+ if (!param)
return -ENOMEM;
if (copy_from_user(param, p->pointer, p->length)) {
diff --git a/drivers/staging/rtl8723bs/os_dep/os_intfs.c b/drivers/staging/rtl8723bs/os_dep/os_intfs.c
index 23f4f70..279347b 100644
--- a/drivers/staging/rtl8723bs/os_dep/os_intfs.c
+++ b/drivers/staging/rtl8723bs/os_dep/os_intfs.c
@@ -488,7 +488,7 @@ void rtw_unregister_netdevs(struct dvobj_priv *dvobj)
padapter = dvobj->padapters;
- if (padapter == NULL)
+ if (!padapter)
return;
pnetdev = padapter->pnetdev;
@@ -594,7 +594,7 @@ struct dvobj_priv *devobj_init(void)
struct dvobj_priv *pdvobj = NULL;
pdvobj = rtw_zmalloc(sizeof(*pdvobj));
- if (pdvobj == NULL)
+ if (!pdvobj)
return NULL;
mutex_init(&pdvobj->hw_init_mutex);
diff --git a/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h b/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
index 81db7fb..97893bb 100644
--- a/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
+++ b/drivers/staging/vc04_services/include/linux/raspberrypi/vchiq.h
@@ -82,7 +82,7 @@ struct vchiq_service_params_kernel {
struct vchiq_instance;
-extern enum vchiq_status vchiq_initialise(struct vchiq_instance **pinstance);
+extern int vchiq_initialise(struct vchiq_instance **pinstance);
extern enum vchiq_status vchiq_shutdown(struct vchiq_instance *instance);
extern enum vchiq_status vchiq_connect(struct vchiq_instance *instance);
extern enum vchiq_status vchiq_open_service(struct vchiq_instance *instance,
diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
index e8e39a1..69f342e 100644
--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
+++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.h
@@ -152,10 +152,10 @@ extern struct vchiq_arm_state*
vchiq_platform_get_arm_state(struct vchiq_state *state);
-extern enum vchiq_status
+extern int
vchiq_use_internal(struct vchiq_state *state, struct vchiq_service *service,
enum USE_TYPE_E use_type);
-extern enum vchiq_status
+extern int
vchiq_release_internal(struct vchiq_state *state,
struct vchiq_service *service);
diff --git a/drivers/target/iscsi/iscsi_target_parameters.c b/drivers/target/iscsi/iscsi_target_parameters.c
index 6bc3aaf..62004e3 100644
--- a/drivers/target/iscsi/iscsi_target_parameters.c
+++ b/drivers/target/iscsi/iscsi_target_parameters.c
@@ -1262,18 +1262,20 @@ static struct iscsi_param *iscsi_check_key(
return param;
if (!(param->phase & phase)) {
- pr_err("Key \"%s\" may not be negotiated during ",
- param->name);
+ char *phase_name;
+
switch (phase) {
case PHASE_SECURITY:
- pr_debug("Security phase.\n");
+ phase_name = "Security";
break;
case PHASE_OPERATIONAL:
- pr_debug("Operational phase.\n");
+ phase_name = "Operational";
break;
default:
- pr_debug("Unknown phase.\n");
+ phase_name = "Unknown";
}
+ pr_err("Key \"%s\" may not be negotiated during %s phase.\n",
+ param->name, phase_name);
return NULL;
}
diff --git a/drivers/target/target_core_file.c b/drivers/target/target_core_file.c
index ef4a8e1..0148607 100644
--- a/drivers/target/target_core_file.c
+++ b/drivers/target/target_core_file.c
@@ -332,7 +332,7 @@ static int fd_do_rw(struct se_cmd *cmd, struct file *fd,
len += sg->length;
}
- iov_iter_bvec(&iter, READ, bvec, sgl_nents, len);
+ iov_iter_bvec(&iter, is_write, bvec, sgl_nents, len);
if (is_write)
ret = vfs_iter_write(fd, &iter, &pos, 0);
else
@@ -469,7 +469,7 @@ fd_execute_write_same(struct se_cmd *cmd)
len += se_dev->dev_attrib.block_size;
}
- iov_iter_bvec(&iter, READ, bvec, nolb, len);
+ iov_iter_bvec(&iter, WRITE, bvec, nolb, len);
ret = vfs_iter_write(fd_dev->fd_file, &iter, &pos, 0);
kfree(bvec);
diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
index bac1114..2b95b45 100644
--- a/drivers/target/target_core_tmr.c
+++ b/drivers/target/target_core_tmr.c
@@ -73,8 +73,8 @@ static bool __target_check_io_state(struct se_cmd *se_cmd,
{
struct se_session *sess = se_cmd->se_sess;
- assert_spin_locked(&sess->sess_cmd_lock);
- WARN_ON_ONCE(!irqs_disabled());
+ lockdep_assert_held(&sess->sess_cmd_lock);
+
/*
* If command already reached CMD_T_COMPLETE state within
* target_complete_cmd() or CMD_T_FABRIC_STOP due to shutdown,
diff --git a/drivers/tee/amdtee/core.c b/drivers/tee/amdtee/core.c
index 297dc62..372d647 100644
--- a/drivers/tee/amdtee/core.c
+++ b/drivers/tee/amdtee/core.c
@@ -267,35 +267,34 @@ int amdtee_open_session(struct tee_context *ctx,
goto out;
}
+ /* Open session with loaded TA */
+ handle_open_session(arg, &session_info, param);
+ if (arg->ret != TEEC_SUCCESS) {
+ pr_err("open_session failed %d\n", arg->ret);
+ handle_unload_ta(ta_handle);
+ kref_put(&sess->refcount, destroy_session);
+ goto out;
+ }
+
/* Find an empty session index for the given TA */
spin_lock(&sess->lock);
i = find_first_zero_bit(sess->sess_mask, TEE_NUM_SESSIONS);
- if (i < TEE_NUM_SESSIONS)
+ if (i < TEE_NUM_SESSIONS) {
+ sess->session_info[i] = session_info;
+ set_session_id(ta_handle, i, &arg->session);
set_bit(i, sess->sess_mask);
+ }
spin_unlock(&sess->lock);
if (i >= TEE_NUM_SESSIONS) {
pr_err("reached maximum session count %d\n", TEE_NUM_SESSIONS);
+ handle_close_session(ta_handle, session_info);
handle_unload_ta(ta_handle);
kref_put(&sess->refcount, destroy_session);
rc = -ENOMEM;
goto out;
}
- /* Open session with loaded TA */
- handle_open_session(arg, &session_info, param);
- if (arg->ret != TEEC_SUCCESS) {
- pr_err("open_session failed %d\n", arg->ret);
- spin_lock(&sess->lock);
- clear_bit(i, sess->sess_mask);
- spin_unlock(&sess->lock);
- handle_unload_ta(ta_handle);
- kref_put(&sess->refcount, destroy_session);
- goto out;
- }
-
- sess->session_info[i] = session_info;
- set_session_id(ta_handle, i, &arg->session);
out:
free_pages((u64)ta, get_order(ta_size));
return rc;
diff --git a/drivers/thermal/gov_fair_share.c b/drivers/thermal/gov_fair_share.c
index 1e5abf4..a4c3079 100644
--- a/drivers/thermal/gov_fair_share.c
+++ b/drivers/thermal/gov_fair_share.c
@@ -25,10 +25,10 @@ static int get_trip_level(struct thermal_zone_device *tz)
int trip_temp;
enum thermal_trip_type trip_type;
- if (tz->trips == 0 || !tz->ops->get_trip_temp)
+ if (tz->num_trips == 0 || !tz->ops->get_trip_temp)
return 0;
- for (count = 0; count < tz->trips; count++) {
+ for (count = 0; count < tz->num_trips; count++) {
tz->ops->get_trip_temp(tz, count, &trip_temp);
if (tz->temperature < trip_temp)
break;
@@ -49,11 +49,7 @@ static int get_trip_level(struct thermal_zone_device *tz)
static long get_target_state(struct thermal_zone_device *tz,
struct thermal_cooling_device *cdev, int percentage, int level)
{
- unsigned long max_state;
-
- cdev->ops->get_max_state(cdev, &max_state);
-
- return (long)(percentage * level * max_state) / (100 * tz->trips);
+ return (long)(percentage * level * cdev->max_state) / (100 * tz->num_trips);
}
/**
diff --git a/drivers/thermal/gov_power_allocator.c b/drivers/thermal/gov_power_allocator.c
index 13e3757..1d50524 100644
--- a/drivers/thermal/gov_power_allocator.c
+++ b/drivers/thermal/gov_power_allocator.c
@@ -527,7 +527,7 @@ static void get_governor_trips(struct thermal_zone_device *tz,
last_active = INVALID_TRIP;
last_passive = INVALID_TRIP;
- for (i = 0; i < tz->trips; i++) {
+ for (i = 0; i < tz->num_trips; i++) {
enum thermal_trip_type type;
int ret;
@@ -668,7 +668,7 @@ static int power_allocator_bind(struct thermal_zone_device *tz)
get_governor_trips(tz, params);
- if (tz->trips > 0) {
+ if (tz->num_trips > 0) {
ret = tz->ops->get_trip_temp(tz,
params->trip_max_desired_temperature,
&control_temp);
diff --git a/drivers/thermal/hisi_thermal.c b/drivers/thermal/hisi_thermal.c
index 9a21ac0..29ff1e6 100644
--- a/drivers/thermal/hisi_thermal.c
+++ b/drivers/thermal/hisi_thermal.c
@@ -435,10 +435,6 @@ static int hi3660_thermal_probe(struct hisi_thermal_data *data)
data->sensor[0].irq_name = "tsensor_a73";
data->sensor[0].data = data;
- data->sensor[1].id = HI3660_LITTLE_SENSOR;
- data->sensor[1].irq_name = "tsensor_a53";
- data->sensor[1].data = data;
-
return 0;
}
diff --git a/drivers/thermal/intel/Kconfig b/drivers/thermal/intel/Kconfig
index c83ea5d..e0d65e4 100644
--- a/drivers/thermal/intel/Kconfig
+++ b/drivers/thermal/intel/Kconfig
@@ -64,7 +64,8 @@
config INTEL_BXT_PMIC_THERMAL
tristate "Intel Broxton PMIC thermal driver"
- depends on X86 && INTEL_SOC_PMIC_BXTWC && REGMAP
+ depends on X86 && INTEL_SOC_PMIC_BXTWC
+ select REGMAP
help
Select this driver for Intel Broxton PMIC with ADC channels monitoring
system temperature measurements and alerts.
diff --git a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
index 62c0aa5..0a4eaa3 100644
--- a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
+++ b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.c
@@ -44,11 +44,13 @@ static int int340x_thermal_get_trip_temp(struct thermal_zone_device *zone,
int trip, int *temp)
{
struct int34x_thermal_zone *d = zone->devdata;
- int i;
+ int i, ret = 0;
if (d->override_ops && d->override_ops->get_trip_temp)
return d->override_ops->get_trip_temp(zone, trip, temp);
+ mutex_lock(&d->trip_mutex);
+
if (trip < d->aux_trip_nr)
*temp = d->aux_trips[trip];
else if (trip == d->crt_trip_id)
@@ -66,10 +68,12 @@ static int int340x_thermal_get_trip_temp(struct thermal_zone_device *zone,
}
}
if (i == INT340X_THERMAL_MAX_ACT_TRIP_COUNT)
- return -EINVAL;
+ ret = -EINVAL;
}
- return 0;
+ mutex_unlock(&d->trip_mutex);
+
+ return ret;
}
static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone,
@@ -77,11 +81,13 @@ static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone,
enum thermal_trip_type *type)
{
struct int34x_thermal_zone *d = zone->devdata;
- int i;
+ int i, ret = 0;
if (d->override_ops && d->override_ops->get_trip_type)
return d->override_ops->get_trip_type(zone, trip, type);
+ mutex_lock(&d->trip_mutex);
+
if (trip < d->aux_trip_nr)
*type = THERMAL_TRIP_PASSIVE;
else if (trip == d->crt_trip_id)
@@ -99,10 +105,12 @@ static int int340x_thermal_get_trip_type(struct thermal_zone_device *zone,
}
}
if (i == INT340X_THERMAL_MAX_ACT_TRIP_COUNT)
- return -EINVAL;
+ ret = -EINVAL;
}
- return 0;
+ mutex_unlock(&d->trip_mutex);
+
+ return ret;
}
static int int340x_thermal_set_trip_temp(struct thermal_zone_device *zone,
@@ -180,6 +188,8 @@ int int340x_thermal_read_trips(struct int34x_thermal_zone *int34x_zone)
int trip_cnt = int34x_zone->aux_trip_nr;
int i;
+ mutex_lock(&int34x_zone->trip_mutex);
+
int34x_zone->crt_trip_id = -1;
if (!int340x_thermal_get_trip_config(int34x_zone->adev->handle, "_CRT",
&int34x_zone->crt_temp))
@@ -207,6 +217,8 @@ int int340x_thermal_read_trips(struct int34x_thermal_zone *int34x_zone)
int34x_zone->act_trips[i].valid = true;
}
+ mutex_unlock(&int34x_zone->trip_mutex);
+
return trip_cnt;
}
EXPORT_SYMBOL_GPL(int340x_thermal_read_trips);
@@ -230,6 +242,8 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev,
if (!int34x_thermal_zone)
return ERR_PTR(-ENOMEM);
+ mutex_init(&int34x_thermal_zone->trip_mutex);
+
int34x_thermal_zone->adev = adev;
int34x_thermal_zone->override_ops = override_ops;
@@ -281,6 +295,7 @@ struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *adev,
acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table);
kfree(int34x_thermal_zone->aux_trips);
err_trip_alloc:
+ mutex_destroy(&int34x_thermal_zone->trip_mutex);
kfree(int34x_thermal_zone);
return ERR_PTR(ret);
}
@@ -292,6 +307,7 @@ void int340x_thermal_zone_remove(struct int34x_thermal_zone
thermal_zone_device_unregister(int34x_thermal_zone->zone);
acpi_lpat_free_conversion_table(int34x_thermal_zone->lpat_table);
kfree(int34x_thermal_zone->aux_trips);
+ mutex_destroy(&int34x_thermal_zone->trip_mutex);
kfree(int34x_thermal_zone);
}
EXPORT_SYMBOL_GPL(int340x_thermal_zone_remove);
diff --git a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.h b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.h
index 3b4971d..8f9872a 100644
--- a/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.h
+++ b/drivers/thermal/intel/int340x_thermal/int340x_thermal_zone.h
@@ -32,6 +32,7 @@ struct int34x_thermal_zone {
struct thermal_zone_device_ops *override_ops;
void *priv_data;
struct acpi_lpat_conversion_table *lpat_table;
+ struct mutex trip_mutex;
};
struct int34x_thermal_zone *int340x_thermal_zone_add(struct acpi_device *,
diff --git a/drivers/thermal/intel/intel_pch_thermal.c b/drivers/thermal/intel/intel_pch_thermal.c
index 527c91f..768c660 100644
--- a/drivers/thermal/intel/intel_pch_thermal.c
+++ b/drivers/thermal/intel/intel_pch_thermal.c
@@ -29,6 +29,7 @@
#define PCH_THERMAL_DID_CNL_LP 0x02F9 /* CNL-LP PCH */
#define PCH_THERMAL_DID_CML_H 0X06F9 /* CML-H PCH */
#define PCH_THERMAL_DID_LWB 0xA1B1 /* Lewisburg PCH */
+#define PCH_THERMAL_DID_WBG 0x8D24 /* Wellsburg PCH */
/* Wildcat Point-LP PCH Thermal registers */
#define WPT_TEMP 0x0000 /* Temperature */
@@ -345,6 +346,7 @@ enum board_ids {
board_cnl,
board_cml,
board_lwb,
+ board_wbg,
};
static const struct board_info {
@@ -375,6 +377,10 @@ static const struct board_info {
.name = "pch_lewisburg",
.ops = &pch_dev_ops_wpt,
},
+ [board_wbg] = {
+ .name = "pch_wellsburg",
+ .ops = &pch_dev_ops_wpt,
+ },
};
static int intel_pch_thermal_probe(struct pci_dev *pdev,
@@ -490,6 +496,8 @@ static const struct pci_device_id intel_pch_thermal_id[] = {
.driver_data = board_cml, },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCH_THERMAL_DID_LWB),
.driver_data = board_lwb, },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCH_THERMAL_DID_WBG),
+ .driver_data = board_wbg, },
{ 0, },
};
MODULE_DEVICE_TABLE(pci, intel_pch_thermal_id);
diff --git a/drivers/thermal/intel/intel_powerclamp.c b/drivers/thermal/intel/intel_powerclamp.c
index 5b19e2d..08ea6cd 100644
--- a/drivers/thermal/intel/intel_powerclamp.c
+++ b/drivers/thermal/intel/intel_powerclamp.c
@@ -57,6 +57,7 @@
static unsigned int target_mwait;
static struct dentry *debug_dir;
+static bool poll_pkg_cstate_enable;
/* user selected target */
static unsigned int set_target_ratio;
@@ -262,6 +263,9 @@ static unsigned int get_compensation(int ratio)
{
unsigned int comp = 0;
+ if (!poll_pkg_cstate_enable)
+ return 0;
+
/* we only use compensation if all adjacent ones are good */
if (ratio == 1 &&
cal_data[ratio].confidence >= CONFIDENCE_OK &&
@@ -534,7 +538,8 @@ static int start_power_clamp(void)
control_cpu = cpumask_first(cpu_online_mask);
clamping = true;
- schedule_delayed_work(&poll_pkg_cstate_work, 0);
+ if (poll_pkg_cstate_enable)
+ schedule_delayed_work(&poll_pkg_cstate_work, 0);
/* start one kthread worker per online cpu */
for_each_online_cpu(cpu) {
@@ -603,11 +608,15 @@ static int powerclamp_get_max_state(struct thermal_cooling_device *cdev,
static int powerclamp_get_cur_state(struct thermal_cooling_device *cdev,
unsigned long *state)
{
- if (true == clamping)
- *state = pkg_cstate_ratio_cur;
- else
+ if (clamping) {
+ if (poll_pkg_cstate_enable)
+ *state = pkg_cstate_ratio_cur;
+ else
+ *state = set_target_ratio;
+ } else {
/* to save power, do not poll idle ratio while not clamping */
*state = -1; /* indicates invalid state */
+ }
return 0;
}
@@ -732,6 +741,9 @@ static int __init powerclamp_init(void)
goto exit_unregister;
}
+ if (topology_max_packages() == 1 && topology_max_die_per_package() == 1)
+ poll_pkg_cstate_enable = true;
+
cooling_dev = thermal_cooling_device_register("intel_powerclamp", NULL,
&powerclamp_cooling_ops);
if (IS_ERR(cooling_dev)) {
diff --git a/drivers/thermal/intel/intel_quark_dts_thermal.c b/drivers/thermal/intel/intel_quark_dts_thermal.c
index 3eafc6b..b43fbd5 100644
--- a/drivers/thermal/intel/intel_quark_dts_thermal.c
+++ b/drivers/thermal/intel/intel_quark_dts_thermal.c
@@ -415,22 +415,14 @@ MODULE_DEVICE_TABLE(x86cpu, qrk_thermal_ids);
static int __init intel_quark_thermal_init(void)
{
- int err = 0;
-
if (!x86_match_cpu(qrk_thermal_ids) || !iosf_mbi_available())
return -ENODEV;
soc_dts = alloc_soc_dts();
- if (IS_ERR(soc_dts)) {
- err = PTR_ERR(soc_dts);
- goto err_free;
- }
+ if (IS_ERR(soc_dts))
+ return PTR_ERR(soc_dts);
return 0;
-
-err_free:
- free_soc_dts(soc_dts);
- return err;
}
static void __exit intel_quark_thermal_exit(void)
diff --git a/drivers/thermal/intel/intel_soc_dts_iosf.c b/drivers/thermal/intel/intel_soc_dts_iosf.c
index 342b0bb..8651ff1 100644
--- a/drivers/thermal/intel/intel_soc_dts_iosf.c
+++ b/drivers/thermal/intel/intel_soc_dts_iosf.c
@@ -405,7 +405,7 @@ struct intel_soc_dts_sensors *intel_soc_dts_iosf_init(
{
struct intel_soc_dts_sensors *sensors;
bool notification;
- u32 tj_max;
+ int tj_max;
int ret;
int i;
diff --git a/drivers/thermal/qcom/tsens-v0_1.c b/drivers/thermal/qcom/tsens-v0_1.c
index 327f372..8d03672 100644
--- a/drivers/thermal/qcom/tsens-v0_1.c
+++ b/drivers/thermal/qcom/tsens-v0_1.c
@@ -285,7 +285,7 @@ static int calibrate_8939(struct tsens_priv *priv)
u32 p1[10], p2[10];
int mode = 0;
u32 *qfprom_cdata;
- u32 cdata[6];
+ u32 cdata[4];
qfprom_cdata = (u32 *)qfprom_read(priv->dev, "calib");
if (IS_ERR(qfprom_cdata))
@@ -296,8 +296,6 @@ static int calibrate_8939(struct tsens_priv *priv)
cdata[1] = qfprom_cdata[13];
cdata[2] = qfprom_cdata[0];
cdata[3] = qfprom_cdata[1];
- cdata[4] = qfprom_cdata[22];
- cdata[5] = qfprom_cdata[21];
mode = (cdata[0] & MSM8939_CAL_SEL_MASK) >> MSM8939_CAL_SEL_SHIFT;
dev_dbg(priv->dev, "calibration mode is %d\n", mode);
@@ -314,8 +312,6 @@ static int calibrate_8939(struct tsens_priv *priv)
p2[6] = (cdata[2] & MSM8939_S6_P2_MASK) >> MSM8939_S6_P2_SHIFT;
p2[7] = (cdata[3] & MSM8939_S7_P2_MASK) >> MSM8939_S7_P2_SHIFT;
p2[8] = (cdata[3] & MSM8939_S8_P2_MASK) >> MSM8939_S8_P2_SHIFT;
- p2[9] = (cdata[4] & MSM8939_S9_P2_MASK_0_4) >> MSM8939_S9_P2_SHIFT_0_4;
- p2[9] |= ((cdata[5] & MSM8939_S9_P2_MASK_5) >> MSM8939_S9_P2_SHIFT_5) << 5;
for (i = 0; i < priv->num_sensors; i++)
p2[i] = (base1 + p2[i]) << 2;
fallthrough;
@@ -331,7 +327,6 @@ static int calibrate_8939(struct tsens_priv *priv)
p1[6] = (cdata[2] & MSM8939_S6_P1_MASK) >> MSM8939_S6_P1_SHIFT;
p1[7] = (cdata[3] & MSM8939_S7_P1_MASK) >> MSM8939_S7_P1_SHIFT;
p1[8] = (cdata[3] & MSM8939_S8_P1_MASK) >> MSM8939_S8_P1_SHIFT;
- p1[9] = (cdata[4] & MSM8939_S9_P1_MASK) >> MSM8939_S9_P1_SHIFT;
for (i = 0; i < priv->num_sensors; i++)
p1[i] = ((base0) + p1[i]) << 2;
break;
@@ -534,6 +529,21 @@ static int calibrate_9607(struct tsens_priv *priv)
return 0;
}
+static int __init init_8939(struct tsens_priv *priv) {
+ priv->sensor[0].slope = 2911;
+ priv->sensor[1].slope = 2789;
+ priv->sensor[2].slope = 2906;
+ priv->sensor[3].slope = 2763;
+ priv->sensor[4].slope = 2922;
+ priv->sensor[5].slope = 2867;
+ priv->sensor[6].slope = 2833;
+ priv->sensor[7].slope = 2838;
+ priv->sensor[8].slope = 2840;
+ /* priv->sensor[9].slope = 2852; */
+
+ return init_common(priv);
+}
+
/* v0.1: 8916, 8939, 8974, 9607 */
static struct tsens_features tsens_v0_1_feat = {
@@ -596,15 +606,15 @@ struct tsens_plat_data data_8916 = {
};
static const struct tsens_ops ops_8939 = {
- .init = init_common,
+ .init = init_8939,
.calibrate = calibrate_8939,
.get_temp = get_temp_common,
};
struct tsens_plat_data data_8939 = {
- .num_sensors = 10,
+ .num_sensors = 9,
.ops = &ops_8939,
- .hw_ids = (unsigned int []){ 0, 1, 2, 3, 5, 6, 7, 8, 9, 10 },
+ .hw_ids = (unsigned int []){ 0, 1, 2, 3, 5, 6, 7, 8, 9, /* 10 */ },
.feat = &tsens_v0_1_feat,
.fields = tsens_v0_1_regfields,
diff --git a/drivers/thermal/qcom/tsens-v1.c b/drivers/thermal/qcom/tsens-v1.c
index 573e261..faa4576 100644
--- a/drivers/thermal/qcom/tsens-v1.c
+++ b/drivers/thermal/qcom/tsens-v1.c
@@ -78,11 +78,6 @@
#define MSM8976_CAL_SEL_MASK 0x3
-#define MSM8976_CAL_DEGC_PT1 30
-#define MSM8976_CAL_DEGC_PT2 120
-#define MSM8976_SLOPE_FACTOR 1000
-#define MSM8976_SLOPE_DEFAULT 3200
-
/* eeprom layout data for qcs404/405 (v1) */
#define BASE0_MASK 0x000007f8
#define BASE1_MASK 0x0007f800
@@ -142,30 +137,6 @@
#define CAL_SEL_MASK 7
#define CAL_SEL_SHIFT 0
-static void compute_intercept_slope_8976(struct tsens_priv *priv,
- u32 *p1, u32 *p2, u32 mode)
-{
- int i;
-
- priv->sensor[0].slope = 3313;
- priv->sensor[1].slope = 3275;
- priv->sensor[2].slope = 3320;
- priv->sensor[3].slope = 3246;
- priv->sensor[4].slope = 3279;
- priv->sensor[5].slope = 3257;
- priv->sensor[6].slope = 3234;
- priv->sensor[7].slope = 3269;
- priv->sensor[8].slope = 3255;
- priv->sensor[9].slope = 3239;
- priv->sensor[10].slope = 3286;
-
- for (i = 0; i < priv->num_sensors; i++) {
- priv->sensor[i].offset = (p1[i] * MSM8976_SLOPE_FACTOR) -
- (MSM8976_CAL_DEGC_PT1 *
- priv->sensor[i].slope);
- }
-}
-
static int calibrate_v1(struct tsens_priv *priv)
{
u32 base0 = 0, base1 = 0;
@@ -291,7 +262,7 @@ static int calibrate_8976(struct tsens_priv *priv)
break;
}
- compute_intercept_slope_8976(priv, p1, p2, mode);
+ compute_intercept_slope(priv, p1, p2, mode);
kfree(qfprom_cdata);
return 0;
@@ -362,6 +333,22 @@ static const struct reg_field tsens_v1_regfields[MAX_REGFIELDS] = {
[TRDY] = REG_FIELD(TM_TRDY_OFF, 0, 0),
};
+static int __init init_8956(struct tsens_priv *priv) {
+ priv->sensor[0].slope = 3313;
+ priv->sensor[1].slope = 3275;
+ priv->sensor[2].slope = 3320;
+ priv->sensor[3].slope = 3246;
+ priv->sensor[4].slope = 3279;
+ priv->sensor[5].slope = 3257;
+ priv->sensor[6].slope = 3234;
+ priv->sensor[7].slope = 3269;
+ priv->sensor[8].slope = 3255;
+ priv->sensor[9].slope = 3239;
+ priv->sensor[10].slope = 3286;
+
+ return init_common(priv);
+}
+
static const struct tsens_ops ops_generic_v1 = {
.init = init_common,
.calibrate = calibrate_v1,
@@ -374,13 +361,25 @@ struct tsens_plat_data data_tsens_v1 = {
.fields = tsens_v1_regfields,
};
+static const struct tsens_ops ops_8956 = {
+ .init = init_8956,
+ .calibrate = calibrate_8976,
+ .get_temp = get_temp_tsens_valid,
+};
+
+struct tsens_plat_data data_8956 = {
+ .num_sensors = 11,
+ .ops = &ops_8956,
+ .feat = &tsens_v1_feat,
+ .fields = tsens_v1_regfields,
+};
+
static const struct tsens_ops ops_8976 = {
.init = init_common,
.calibrate = calibrate_8976,
.get_temp = get_temp_tsens_valid,
};
-/* Valid for both MSM8956 and MSM8976. */
struct tsens_plat_data data_8976 = {
.num_sensors = 11,
.ops = &ops_8976,
diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
index 99a8d9f..926cd8b 100644
--- a/drivers/thermal/qcom/tsens.c
+++ b/drivers/thermal/qcom/tsens.c
@@ -979,6 +979,12 @@ static const struct of_device_id tsens_table[] = {
.compatible = "qcom,msm8939-tsens",
.data = &data_8939,
}, {
+ .compatible = "qcom,msm8956-tsens",
+ .data = &data_8956,
+ }, {
+ .compatible = "qcom,msm8960-tsens",
+ .data = &data_8960,
+ }, {
.compatible = "qcom,msm8974-tsens",
.data = &data_8974,
}, {
diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h
index 1471a2c..c2e5aee 100644
--- a/drivers/thermal/qcom/tsens.h
+++ b/drivers/thermal/qcom/tsens.h
@@ -590,7 +590,7 @@ extern struct tsens_plat_data data_8960;
extern struct tsens_plat_data data_8916, data_8939, data_8974, data_9607;
/* TSENS v1 targets */
-extern struct tsens_plat_data data_tsens_v1, data_8976;
+extern struct tsens_plat_data data_tsens_v1, data_8976, data_8956;
/* TSENS v2 targets */
extern struct tsens_plat_data data_8996, data_tsens_v2;
diff --git a/drivers/thermal/tegra/tegra30-tsensor.c b/drivers/thermal/tegra/tegra30-tsensor.c
index 9b6b693..0588668 100644
--- a/drivers/thermal/tegra/tegra30-tsensor.c
+++ b/drivers/thermal/tegra/tegra30-tsensor.c
@@ -316,7 +316,7 @@ static void tegra_tsensor_get_hw_channel_trips(struct thermal_zone_device *tzd,
*hot_trip = 85000;
*crit_trip = 90000;
- for (i = 0; i < tzd->trips; i++) {
+ for (i = 0; i < tzd->num_trips; i++) {
enum thermal_trip_type type;
int trip_temp;
diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
index 38082fd..052e8e8 100644
--- a/drivers/thermal/thermal_core.c
+++ b/drivers/thermal/thermal_core.c
@@ -503,7 +503,7 @@ void thermal_zone_device_update(struct thermal_zone_device *tz,
tz->notify_event = event;
- for (count = 0; count < tz->trips; count++)
+ for (count = 0; count < tz->num_trips; count++)
handle_thermal_trip(tz, count);
}
EXPORT_SYMBOL_GPL(thermal_zone_device_update);
@@ -625,10 +625,9 @@ int thermal_zone_bind_cooling_device(struct thermal_zone_device *tz,
struct thermal_instance *pos;
struct thermal_zone_device *pos1;
struct thermal_cooling_device *pos2;
- unsigned long max_state;
- int result, ret;
+ int result;
- if (trip >= tz->trips || trip < 0)
+ if (trip >= tz->num_trips || trip < 0)
return -EINVAL;
list_for_each_entry(pos1, &thermal_tz_list, node) {
@@ -643,15 +642,11 @@ int thermal_zone_bind_cooling_device(struct thermal_zone_device *tz,
if (tz != pos1 || cdev != pos2)
return -EINVAL;
- ret = cdev->ops->get_max_state(cdev, &max_state);
- if (ret)
- return ret;
-
/* lower default 0, upper default max_state */
lower = lower == THERMAL_NO_LIMIT ? 0 : lower;
- upper = upper == THERMAL_NO_LIMIT ? max_state : upper;
+ upper = upper == THERMAL_NO_LIMIT ? cdev->max_state : upper;
- if (lower > upper || upper > max_state)
+ if (lower > upper || upper > cdev->max_state)
return -EINVAL;
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
@@ -809,7 +804,7 @@ static void __bind(struct thermal_zone_device *tz, int mask,
{
int i, ret;
- for (i = 0; i < tz->trips; i++) {
+ for (i = 0; i < tz->num_trips; i++) {
if (mask & (1 << i)) {
unsigned long upper, lower;
@@ -918,12 +913,22 @@ __thermal_cooling_device_register(struct device_node *np,
cdev->updated = false;
cdev->device.class = &thermal_class;
cdev->devdata = devdata;
+
+ ret = cdev->ops->get_max_state(cdev, &cdev->max_state);
+ if (ret) {
+ kfree(cdev->type);
+ goto out_ida_remove;
+ }
+
thermal_cooling_device_setup_sysfs(cdev);
+
ret = dev_set_name(&cdev->device, "cooling_device%d", cdev->id);
if (ret) {
+ kfree(cdev->type);
thermal_cooling_device_destroy_sysfs(cdev);
- goto out_kfree_type;
+ goto out_ida_remove;
}
+
ret = device_register(&cdev->device);
if (ret)
goto out_kfree_type;
@@ -949,6 +954,8 @@ __thermal_cooling_device_register(struct device_node *np,
thermal_cooling_device_destroy_sysfs(cdev);
kfree(cdev->type);
put_device(&cdev->device);
+
+ /* thermal_release() takes care of the rest */
cdev = NULL;
out_ida_remove:
ida_simple_remove(&thermal_cdev_ida, id);
@@ -1056,7 +1063,7 @@ static void __unbind(struct thermal_zone_device *tz, int mask,
{
int i;
- for (i = 0; i < tz->trips; i++)
+ for (i = 0; i < tz->num_trips; i++)
if (mask & (1 << i))
thermal_zone_unbind_cooling_device(tz, i, cdev);
}
@@ -1161,7 +1168,7 @@ static void bind_tz(struct thermal_zone_device *tz)
/**
* thermal_zone_device_register() - register a new thermal zone device
* @type: the thermal zone device type
- * @trips: the number of trip points the thermal zone support
+ * @num_trips: the number of trip points the thermal zone support
* @mask: a bit string indicating the writeablility of trip points
* @devdata: private device data
* @ops: standard thermal zone device callbacks
@@ -1183,7 +1190,7 @@ static void bind_tz(struct thermal_zone_device *tz)
* IS_ERR*() helpers.
*/
struct thermal_zone_device *
-thermal_zone_device_register(const char *type, int trips, int mask,
+thermal_zone_device_register(const char *type, int num_trips, int mask,
void *devdata, struct thermal_zone_device_ops *ops,
struct thermal_zone_params *tzp, int passive_delay,
int polling_delay)
@@ -1197,27 +1204,27 @@ thermal_zone_device_register(const char *type, int trips, int mask,
struct thermal_governor *governor;
if (!type || strlen(type) == 0) {
- pr_err("Error: No thermal zone type defined\n");
+ pr_err("No thermal zone type defined\n");
return ERR_PTR(-EINVAL);
}
if (type && strlen(type) >= THERMAL_NAME_LENGTH) {
- pr_err("Error: Thermal zone name (%s) too long, should be under %d chars\n",
+ pr_err("Thermal zone name (%s) too long, should be under %d chars\n",
type, THERMAL_NAME_LENGTH);
return ERR_PTR(-EINVAL);
}
- if (trips > THERMAL_MAX_TRIPS || trips < 0 || mask >> trips) {
- pr_err("Error: Incorrect number of thermal trips\n");
+ if (num_trips > THERMAL_MAX_TRIPS || num_trips < 0 || mask >> num_trips) {
+ pr_err("Incorrect number of thermal trips\n");
return ERR_PTR(-EINVAL);
}
if (!ops) {
- pr_err("Error: Thermal zone device ops not defined\n");
+ pr_err("Thermal zone device ops not defined\n");
return ERR_PTR(-EINVAL);
}
- if (trips > 0 && (!ops->get_trip_type || !ops->get_trip_temp))
+ if (num_trips > 0 && (!ops->get_trip_type || !ops->get_trip_temp))
return ERR_PTR(-EINVAL);
tz = kzalloc(sizeof(*tz), GFP_KERNEL);
@@ -1243,7 +1250,7 @@ thermal_zone_device_register(const char *type, int trips, int mask,
tz->tzp = tzp;
tz->device.class = &thermal_class;
tz->devdata = devdata;
- tz->trips = trips;
+ tz->num_trips = num_trips;
thermal_set_delay_jiffies(&tz->passive_delay_jiffies, passive_delay);
thermal_set_delay_jiffies(&tz->polling_delay_jiffies, polling_delay);
@@ -1266,7 +1273,7 @@ thermal_zone_device_register(const char *type, int trips, int mask,
if (result)
goto release_device;
- for (count = 0; count < trips; count++) {
+ for (count = 0; count < num_trips; count++) {
if (tz->ops->get_trip_type(tz, count, &trip_type) ||
tz->ops->get_trip_temp(tz, count, &trip_temp) ||
!trip_temp)
diff --git a/drivers/thermal/thermal_helpers.c b/drivers/thermal/thermal_helpers.c
index 3edd047..ee7027b 100644
--- a/drivers/thermal/thermal_helpers.c
+++ b/drivers/thermal/thermal_helpers.c
@@ -90,7 +90,7 @@ int thermal_zone_get_temp(struct thermal_zone_device *tz, int *temp)
ret = tz->ops->get_temp(tz, temp);
if (IS_ENABLED(CONFIG_THERMAL_EMULATION) && tz->emul_temperature) {
- for (count = 0; count < tz->trips; count++) {
+ for (count = 0; count < tz->num_trips; count++) {
ret = tz->ops->get_trip_type(tz, count, &type);
if (!ret && type == THERMAL_TRIP_CRITICAL) {
ret = tz->ops->get_trip_temp(tz, count,
@@ -138,7 +138,7 @@ void thermal_zone_set_trips(struct thermal_zone_device *tz)
if (!tz->ops->set_trips || !tz->ops->get_trip_hyst)
goto exit;
- for (i = 0; i < tz->trips; i++) {
+ for (i = 0; i < tz->num_trips; i++) {
int trip_low;
tz->ops->get_trip_temp(tz, i, &trip_temp);
diff --git a/drivers/thermal/thermal_netlink.c b/drivers/thermal/thermal_netlink.c
index 41c8d47..c70d407 100644
--- a/drivers/thermal/thermal_netlink.c
+++ b/drivers/thermal/thermal_netlink.c
@@ -415,7 +415,7 @@ static int thermal_genl_cmd_tz_get_trip(struct param *p)
mutex_lock(&tz->lock);
- for (i = 0; i < tz->trips; i++) {
+ for (i = 0; i < tz->num_trips; i++) {
enum thermal_trip_type type;
int temp, hyst = 0;
diff --git a/drivers/thermal/thermal_sysfs.c b/drivers/thermal/thermal_sysfs.c
index 1e5a781..de7cdec 100644
--- a/drivers/thermal/thermal_sysfs.c
+++ b/drivers/thermal/thermal_sysfs.c
@@ -416,15 +416,15 @@ static int create_trip_attrs(struct thermal_zone_device *tz, int mask)
int indx;
/* This function works only for zones with at least one trip */
- if (tz->trips <= 0)
+ if (tz->num_trips <= 0)
return -EINVAL;
- tz->trip_type_attrs = kcalloc(tz->trips, sizeof(*tz->trip_type_attrs),
+ tz->trip_type_attrs = kcalloc(tz->num_trips, sizeof(*tz->trip_type_attrs),
GFP_KERNEL);
if (!tz->trip_type_attrs)
return -ENOMEM;
- tz->trip_temp_attrs = kcalloc(tz->trips, sizeof(*tz->trip_temp_attrs),
+ tz->trip_temp_attrs = kcalloc(tz->num_trips, sizeof(*tz->trip_temp_attrs),
GFP_KERNEL);
if (!tz->trip_temp_attrs) {
kfree(tz->trip_type_attrs);
@@ -432,7 +432,7 @@ static int create_trip_attrs(struct thermal_zone_device *tz, int mask)
}
if (tz->ops->get_trip_hyst) {
- tz->trip_hyst_attrs = kcalloc(tz->trips,
+ tz->trip_hyst_attrs = kcalloc(tz->num_trips,
sizeof(*tz->trip_hyst_attrs),
GFP_KERNEL);
if (!tz->trip_hyst_attrs) {
@@ -442,7 +442,7 @@ static int create_trip_attrs(struct thermal_zone_device *tz, int mask)
}
}
- attrs = kcalloc(tz->trips * 3 + 1, sizeof(*attrs), GFP_KERNEL);
+ attrs = kcalloc(tz->num_trips * 3 + 1, sizeof(*attrs), GFP_KERNEL);
if (!attrs) {
kfree(tz->trip_type_attrs);
kfree(tz->trip_temp_attrs);
@@ -451,7 +451,7 @@ static int create_trip_attrs(struct thermal_zone_device *tz, int mask)
return -ENOMEM;
}
- for (indx = 0; indx < tz->trips; indx++) {
+ for (indx = 0; indx < tz->num_trips; indx++) {
/* create trip type attribute */
snprintf(tz->trip_type_attrs[indx].name, THERMAL_NAME_LENGTH,
"trip_point_%d_type", indx);
@@ -478,7 +478,7 @@ static int create_trip_attrs(struct thermal_zone_device *tz, int mask)
tz->trip_temp_attrs[indx].attr.store =
trip_point_temp_store;
}
- attrs[indx + tz->trips] = &tz->trip_temp_attrs[indx].attr.attr;
+ attrs[indx + tz->num_trips] = &tz->trip_temp_attrs[indx].attr.attr;
/* create Optional trip hyst attribute */
if (!tz->ops->get_trip_hyst)
@@ -496,10 +496,10 @@ static int create_trip_attrs(struct thermal_zone_device *tz, int mask)
tz->trip_hyst_attrs[indx].attr.store =
trip_point_hyst_store;
}
- attrs[indx + tz->trips * 2] =
+ attrs[indx + tz->num_trips * 2] =
&tz->trip_hyst_attrs[indx].attr.attr;
}
- attrs[tz->trips * 3] = NULL;
+ attrs[tz->num_trips * 3] = NULL;
tz->trips_attribute_group.attrs = attrs;
@@ -540,7 +540,7 @@ int thermal_zone_create_device_groups(struct thermal_zone_device *tz,
for (i = 0; i < size - 2; i++)
groups[i] = thermal_zone_attribute_groups[i];
- if (tz->trips) {
+ if (tz->num_trips) {
result = create_trip_attrs(tz, mask);
if (result) {
kfree(groups);
@@ -561,7 +561,7 @@ void thermal_zone_destroy_device_groups(struct thermal_zone_device *tz)
if (!tz)
return;
- if (tz->trips)
+ if (tz->num_trips)
destroy_trip_attrs(tz);
kfree(tz->device.groups);
@@ -580,13 +580,8 @@ static ssize_t max_state_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct thermal_cooling_device *cdev = to_cooling_device(dev);
- unsigned long state;
- int ret;
- ret = cdev->ops->get_max_state(cdev, &state);
- if (ret)
- return ret;
- return sprintf(buf, "%ld\n", state);
+ return sprintf(buf, "%ld\n", cdev->max_state);
}
static ssize_t cur_state_show(struct device *dev, struct device_attribute *attr,
@@ -616,6 +611,10 @@ cur_state_store(struct device *dev, struct device_attribute *attr,
if ((long)state < 0)
return -EINVAL;
+ /* Requested state should be less than max_state + 1 */
+ if (state > cdev->max_state)
+ return -EINVAL;
+
mutex_lock(&cdev->lock);
result = cdev->ops->set_cur_state(cdev, state);
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index aa6cf7f..f1e4f39 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -43,7 +43,7 @@
#define QUIRK_AUTO_CLEAR_INT BIT(0)
#define QUIRK_E2E BIT(1)
-static int ring_interrupt_index(struct tb_ring *ring)
+static int ring_interrupt_index(const struct tb_ring *ring)
{
int bit = ring->hop;
if (!ring->is_tx)
@@ -60,13 +60,14 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
{
int reg = REG_RING_INTERRUPT_BASE +
ring_interrupt_index(ring) / 32 * 4;
- int bit = ring_interrupt_index(ring) & 31;
- int mask = 1 << bit;
+ int interrupt_bit = ring_interrupt_index(ring) & 31;
+ int mask = 1 << interrupt_bit;
u32 old, new;
if (ring->irq > 0) {
u32 step, shift, ivr, misc;
void __iomem *ivr_base;
+ int auto_clear_bit;
int index;
if (ring->is_tx)
@@ -74,18 +75,25 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
else
index = ring->hop + ring->nhi->hop_count;
- if (ring->nhi->quirks & QUIRK_AUTO_CLEAR_INT) {
- /*
- * Ask the hardware to clear interrupt status
- * bits automatically since we already know
- * which interrupt was triggered.
- */
- misc = ioread32(ring->nhi->iobase + REG_DMA_MISC);
- if (!(misc & REG_DMA_MISC_INT_AUTO_CLEAR)) {
- misc |= REG_DMA_MISC_INT_AUTO_CLEAR;
- iowrite32(misc, ring->nhi->iobase + REG_DMA_MISC);
- }
- }
+ /*
+ * Intel routers support a bit that isn't part of
+ * the USB4 spec to ask the hardware to clear
+ * interrupt status bits automatically since
+ * we already know which interrupt was triggered.
+ *
+ * Other routers explicitly disable auto-clear
+ * to prevent conditions that may occur where two
+ * MSIX interrupts are simultaneously active and
+ * reading the register clears both of them.
+ */
+ misc = ioread32(ring->nhi->iobase + REG_DMA_MISC);
+ if (ring->nhi->quirks & QUIRK_AUTO_CLEAR_INT)
+ auto_clear_bit = REG_DMA_MISC_INT_AUTO_CLEAR;
+ else
+ auto_clear_bit = REG_DMA_MISC_DISABLE_AUTO_CLEAR;
+ if (!(misc & auto_clear_bit))
+ iowrite32(misc | auto_clear_bit,
+ ring->nhi->iobase + REG_DMA_MISC);
ivr_base = ring->nhi->iobase + REG_INT_VEC_ALLOC_BASE;
step = index / REG_INT_VEC_ALLOC_REGS * REG_INT_VEC_ALLOC_BITS;
@@ -105,7 +113,7 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
dev_dbg(&ring->nhi->pdev->dev,
"%s interrupt at register %#x bit %d (%#x -> %#x)\n",
- active ? "enabling" : "disabling", reg, bit, old, new);
+ active ? "enabling" : "disabling", reg, interrupt_bit, old, new);
if (new == old)
dev_WARN(&ring->nhi->pdev->dev,
@@ -390,14 +398,17 @@ EXPORT_SYMBOL_GPL(tb_ring_poll_complete);
static void ring_clear_msix(const struct tb_ring *ring)
{
+ int bit;
+
if (ring->nhi->quirks & QUIRK_AUTO_CLEAR_INT)
return;
+ bit = ring_interrupt_index(ring) & 31;
if (ring->is_tx)
- ioread32(ring->nhi->iobase + REG_RING_NOTIFY_BASE);
+ iowrite32(BIT(bit), ring->nhi->iobase + REG_RING_INT_CLEAR);
else
- ioread32(ring->nhi->iobase + REG_RING_NOTIFY_BASE +
- 4 * (ring->nhi->hop_count / 32));
+ iowrite32(BIT(bit), ring->nhi->iobase + REG_RING_INT_CLEAR +
+ 4 * (ring->nhi->hop_count / 32));
}
static irqreturn_t ring_msix(int irq, void *data)
diff --git a/drivers/thunderbolt/nhi_regs.h b/drivers/thunderbolt/nhi_regs.h
index 0d4970d..faef165 100644
--- a/drivers/thunderbolt/nhi_regs.h
+++ b/drivers/thunderbolt/nhi_regs.h
@@ -77,12 +77,13 @@ struct ring_desc {
/*
* three bitfields: tx, rx, rx overflow
- * Every bitfield contains one bit for every hop (REG_HOP_COUNT). Registers are
- * cleared on read. New interrupts are fired only after ALL registers have been
+ * Every bitfield contains one bit for every hop (REG_HOP_COUNT).
+ * New interrupts are fired only after ALL registers have been
* read (even those containing only disabled rings).
*/
#define REG_RING_NOTIFY_BASE 0x37800
#define RING_NOTIFY_REG_COUNT(nhi) ((31 + 3 * nhi->hop_count) / 32)
+#define REG_RING_INT_CLEAR 0x37808
/*
* two bitfields: rx, tx
@@ -105,6 +106,7 @@ struct ring_desc {
#define REG_DMA_MISC 0x39864
#define REG_DMA_MISC_INT_AUTO_CLEAR BIT(2)
+#define REG_DMA_MISC_DISABLE_AUTO_CLEAR BIT(17)
#define REG_INMAIL_DATA 0x39900
diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
index 7226940..566c031 100644
--- a/drivers/thunderbolt/retimer.c
+++ b/drivers/thunderbolt/retimer.c
@@ -208,6 +208,22 @@ static ssize_t nvm_authenticate_show(struct device *dev,
return ret;
}
+static void tb_retimer_set_inbound_sbtx(struct tb_port *port)
+{
+ int i;
+
+ for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
+ usb4_port_retimer_set_inbound_sbtx(port, i);
+}
+
+static void tb_retimer_unset_inbound_sbtx(struct tb_port *port)
+{
+ int i;
+
+ for (i = TB_MAX_RETIMER_INDEX; i >= 1; i--)
+ usb4_port_retimer_unset_inbound_sbtx(port, i);
+}
+
static ssize_t nvm_authenticate_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
@@ -234,6 +250,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
rt->auth_status = 0;
if (val) {
+ tb_retimer_set_inbound_sbtx(rt->port);
if (val == AUTHENTICATE_ONLY) {
ret = tb_retimer_nvm_authenticate(rt, true);
} else {
@@ -253,6 +270,7 @@ static ssize_t nvm_authenticate_store(struct device *dev,
}
exit_unlock:
+ tb_retimer_unset_inbound_sbtx(rt->port);
mutex_unlock(&rt->tb->lock);
exit_rpm:
pm_runtime_mark_last_busy(&rt->dev);
@@ -466,8 +484,7 @@ int tb_retimer_scan(struct tb_port *port, bool add)
* Enable sideband channel for each retimer. We can do this
* regardless whether there is device connected or not.
*/
- for (i = 1; i <= TB_MAX_RETIMER_INDEX; i++)
- usb4_port_retimer_set_inbound_sbtx(port, i);
+ tb_retimer_set_inbound_sbtx(port);
/*
* Before doing anything else, read the authentication status.
@@ -490,6 +507,8 @@ int tb_retimer_scan(struct tb_port *port, bool add)
break;
}
+ tb_retimer_unset_inbound_sbtx(port);
+
if (!last_idx)
return 0;
diff --git a/drivers/thunderbolt/sb_regs.h b/drivers/thunderbolt/sb_regs.h
index bda889f..a8a35b0 100644
--- a/drivers/thunderbolt/sb_regs.h
+++ b/drivers/thunderbolt/sb_regs.h
@@ -20,6 +20,7 @@ enum usb4_sb_opcode {
USB4_SB_OPCODE_ROUTER_OFFLINE = 0x4e45534c, /* "LSEN" */
USB4_SB_OPCODE_ENUMERATE_RETIMERS = 0x4d554e45, /* "ENUM" */
USB4_SB_OPCODE_SET_INBOUND_SBTX = 0x5055534c, /* "LSUP" */
+ USB4_SB_OPCODE_UNSET_INBOUND_SBTX = 0x50555355, /* "USUP" */
USB4_SB_OPCODE_QUERY_LAST_RETIMER = 0x5453414c, /* "LAST" */
USB4_SB_OPCODE_GET_NVM_SECTOR_SIZE = 0x53534e47, /* "GNSS" */
USB4_SB_OPCODE_NVM_SET_OFFSET = 0x53504f42, /* "BOPS" */
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index ef64747..8cc9e8c 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -2750,8 +2750,6 @@ int tb_switch_add(struct tb_switch *sw)
}
tb_sw_dbg(sw, "uid: %#llx\n", sw->uid);
- tb_check_quirks(sw);
-
ret = tb_switch_set_uuid(sw);
if (ret) {
dev_err(&sw->dev, "failed to set UUID\n");
@@ -2770,6 +2768,8 @@ int tb_switch_add(struct tb_switch *sw)
}
}
+ tb_check_quirks(sw);
+
tb_switch_default_link_ports(sw);
ret = tb_switch_update_link_attributes(sw);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 8922217..db0d3d3 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1080,6 +1080,7 @@ int usb4_port_router_online(struct tb_port *port);
int usb4_port_enumerate_retimers(struct tb_port *port);
int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index);
+int usb4_port_retimer_unset_inbound_sbtx(struct tb_port *port, u8 index);
int usb4_port_retimer_read(struct tb_port *port, u8 index, u8 reg, void *buf,
u8 size);
int usb4_port_retimer_write(struct tb_port *port, u8 index, u8 reg,
diff --git a/drivers/thunderbolt/tunnel.c b/drivers/thunderbolt/tunnel.c
index 500a0af..42cc4ef 100644
--- a/drivers/thunderbolt/tunnel.c
+++ b/drivers/thunderbolt/tunnel.c
@@ -1262,7 +1262,7 @@ static void tb_usb3_reclaim_available_bandwidth(struct tb_tunnel *tunnel,
return;
} else if (!ret) {
/* Use maximum link rate if the link valid is not set */
- ret = usb4_usb3_port_max_link_rate(tunnel->src_port);
+ ret = tb_usb3_max_link_rate(tunnel->dst_port, tunnel->src_port);
if (ret < 0) {
tb_tunnel_warn(tunnel, "failed to read maximum link rate\n");
return;
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index 9098656..36547af 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -1442,6 +1442,20 @@ int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index)
}
/**
+ * usb4_port_retimer_unset_inbound_sbtx() - Disable sideband channel transactions
+ * @port: USB4 port
+ * @index: Retimer index
+ *
+ * Disables sideband channel transactions on SBTX. The reverse of
+ * usb4_port_retimer_set_inbound_sbtx().
+ */
+int usb4_port_retimer_unset_inbound_sbtx(struct tb_port *port, u8 index)
+{
+ return usb4_port_retimer_op(port, index,
+ USB4_SB_OPCODE_UNSET_INBOUND_SBTX, 500);
+}
+
+/**
* usb4_port_retimer_read() - Read from retimer sideband registers
* @port: USB4 port
* @index: Retimer index
@@ -1930,18 +1944,30 @@ static int usb4_usb3_port_write_allocated_bandwidth(struct tb_port *port,
int downstream_bw)
{
u32 val, ubw, dbw, scale;
- int ret;
+ int ret, max_bw;
- /* Read the used scale, hardware default is 0 */
- ret = tb_port_read(port, &scale, TB_CFG_PORT,
- port->cap_adap + ADP_USB3_CS_3, 1);
+ /* Figure out suitable scale */
+ scale = 0;
+ max_bw = max(upstream_bw, downstream_bw);
+ while (scale < 64) {
+ if (mbps_to_usb3_bw(max_bw, scale) < 4096)
+ break;
+ scale++;
+ }
+
+ if (WARN_ON(scale >= 64))
+ return -EINVAL;
+
+ ret = tb_port_write(port, &scale, TB_CFG_PORT,
+ port->cap_adap + ADP_USB3_CS_3, 1);
if (ret)
return ret;
- scale &= ADP_USB3_CS_3_SCALE_MASK;
ubw = mbps_to_usb3_bw(upstream_bw, scale);
dbw = mbps_to_usb3_bw(downstream_bw, scale);
+ tb_port_dbg(port, "scaled bandwidth %u/%u, scale %u\n", ubw, dbw, scale);
+
ret = tb_port_read(port, &val, TB_CFG_PORT,
port->cap_adap + ADP_USB3_CS_2, 1);
if (ret)
diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
index 609a511..f2f066c 100644
--- a/drivers/tty/hvc/hvc_xen.c
+++ b/drivers/tty/hvc/hvc_xen.c
@@ -43,6 +43,7 @@ struct xencons_info {
int irq;
int vtermno;
grant_ref_t gntref;
+ spinlock_t ring_lock;
};
static LIST_HEAD(xenconsoles);
@@ -89,12 +90,15 @@ static int __write_console(struct xencons_info *xencons,
XENCONS_RING_IDX cons, prod;
struct xencons_interface *intf = xencons->intf;
int sent = 0;
+ unsigned long flags;
+ spin_lock_irqsave(&xencons->ring_lock, flags);
cons = intf->out_cons;
prod = intf->out_prod;
mb(); /* update queue values before going on */
if ((prod - cons) > sizeof(intf->out)) {
+ spin_unlock_irqrestore(&xencons->ring_lock, flags);
pr_err_once("xencons: Illegal ring page indices");
return -EINVAL;
}
@@ -104,6 +108,7 @@ static int __write_console(struct xencons_info *xencons,
wmb(); /* write ring before updating pointer */
intf->out_prod = prod;
+ spin_unlock_irqrestore(&xencons->ring_lock, flags);
if (sent)
notify_daemon(xencons);
@@ -146,16 +151,19 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
int recv = 0;
struct xencons_info *xencons = vtermno_to_xencons(vtermno);
unsigned int eoiflag = 0;
+ unsigned long flags;
if (xencons == NULL)
return -EINVAL;
intf = xencons->intf;
+ spin_lock_irqsave(&xencons->ring_lock, flags);
cons = intf->in_cons;
prod = intf->in_prod;
mb(); /* get pointers before reading ring */
if ((prod - cons) > sizeof(intf->in)) {
+ spin_unlock_irqrestore(&xencons->ring_lock, flags);
pr_err_once("xencons: Illegal ring page indices");
return -EINVAL;
}
@@ -179,10 +187,13 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
xencons->out_cons = intf->out_cons;
xencons->out_cons_same = 0;
}
+ if (!recv && xencons->out_cons_same++ > 1) {
+ eoiflag = XEN_EOI_FLAG_SPURIOUS;
+ }
+ spin_unlock_irqrestore(&xencons->ring_lock, flags);
+
if (recv) {
notify_daemon(xencons);
- } else if (xencons->out_cons_same++ > 1) {
- eoiflag = XEN_EOI_FLAG_SPURIOUS;
}
xen_irq_lateeoi(xencons->irq, eoiflag);
@@ -239,6 +250,7 @@ static int xen_hvm_console_init(void)
info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
if (!info)
return -ENOMEM;
+ spin_lock_init(&info->ring_lock);
} else if (info->intf != NULL) {
/* already configured */
return 0;
@@ -275,6 +287,7 @@ static int xen_hvm_console_init(void)
static int xencons_info_pv_init(struct xencons_info *info, int vtermno)
{
+ spin_lock_init(&info->ring_lock);
info->evtchn = xen_start_info->console.domU.evtchn;
/* GFN == MFN for PV guest */
info->intf = gfn_to_virt(xen_start_info->console.domU.mfn);
@@ -325,6 +338,7 @@ static int xen_initial_domain_console_init(void)
info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
if (!info)
return -ENOMEM;
+ spin_lock_init(&info->ring_lock);
}
info->irq = bind_virq_to_irq(VIRQ_CONSOLE, 0, false);
@@ -482,6 +496,7 @@ static int xencons_probe(struct xenbus_device *dev,
info = kzalloc(sizeof(struct xencons_info), GFP_KERNEL);
if (!info)
return -ENOMEM;
+ spin_lock_init(&info->ring_lock);
dev_set_drvdata(&dev->dev, info);
info->xbdev = dev;
info->vtermno = xenbus_devid_to_vtermno(devid);
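
Note on the hvc_xen hunks above: every access to the shared ring indices is now wrapped in a per-console spinlock taken with interrupts disabled, and the lock is initialized in each allocation path before first use. A condensed sketch of the same locking pattern, with illustrative names only (not code from this series):

    #include <linux/spinlock.h>

    struct demo_console {
            spinlock_t ring_lock;           /* protects cons and prod */
            unsigned int cons, prod;
    };

    static void demo_console_init(struct demo_console *c)
    {
            spin_lock_init(&c->ring_lock);  /* must happen before first use */
    }

    static unsigned int demo_console_pending(struct demo_console *c)
    {
            unsigned long flags;
            unsigned int n;

            spin_lock_irqsave(&c->ring_lock, flags);
            n = c->prod - c->cons;          /* consistent snapshot of both indices */
            spin_unlock_irqrestore(&c->ring_lock, flags);

            return n;
    }
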
diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c
index 1bdc8d6..ec3cd72 100644
--- a/drivers/tty/serial/8250/8250_dma.c
+++ b/drivers/tty/serial/8250/8250_dma.c
@@ -46,19 +46,39 @@ static void __dma_rx_complete(void *param)
struct uart_8250_dma *dma = p->dma;
struct tty_port *tty_port = &p->port.state->port;
struct dma_tx_state state;
+ enum dma_status dma_status;
int count;
- dma->rx_running = 0;
- dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
+ /*
+ * New DMA Rx can be started during the completion handler before it
+ * could acquire the port's lock and it might still be ongoing. Don't do
+ * anything in that case.
+ */
+ dma_status = dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state);
+ if (dma_status == DMA_IN_PROGRESS)
+ return;
count = dma->rx_size - state.residue;
tty_insert_flip_string(tty_port, dma->rx_buf, count);
p->port.icount.rx += count;
+ dma->rx_running = 0;
tty_flip_buffer_push(tty_port);
}
+static void dma_rx_complete(void *param)
+{
+ struct uart_8250_port *p = param;
+ struct uart_8250_dma *dma = p->dma;
+ unsigned long flags;
+
+ spin_lock_irqsave(&p->port.lock, flags);
+ if (dma->rx_running)
+ __dma_rx_complete(p);
+ spin_unlock_irqrestore(&p->port.lock, flags);
+}
+
int serial8250_tx_dma(struct uart_8250_port *p)
{
struct uart_8250_dma *dma = p->dma;
@@ -134,7 +154,7 @@ int serial8250_rx_dma(struct uart_8250_port *p)
return -EBUSY;
dma->rx_running = 1;
- desc->callback = __dma_rx_complete;
+ desc->callback = dma_rx_complete;
desc->callback_param = p;
dma->rx_cookie = dmaengine_submit(desc);
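
Note on the 8250_dma hunk above: the completion handler only trusts the residue once dmaengine_tx_status() reports that the cookie is no longer in flight. A minimal, self-contained sketch of that pattern (names here are illustrative, not part of the patch):

    #include <linux/dmaengine.h>

    /*
     * Return true and report the received byte count only once the DMA
     * engine says the transfer identified by @cookie has finished.
     */
    static bool demo_rx_dma_done(struct dma_chan *chan, dma_cookie_t cookie,
                                 size_t buf_size, size_t *received)
    {
            struct dma_tx_state state;
            enum dma_status status;

            status = dmaengine_tx_status(chan, cookie, &state);
            if (status == DMA_IN_PROGRESS)
                    return false;   /* a new Rx was started; handle it later */

            *received = buf_size - state.residue;
            return true;
    }
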
diff --git a/drivers/tty/serial/8250/8250_em.c b/drivers/tty/serial/8250/8250_em.c
index f8e9999..d94c381 100644
--- a/drivers/tty/serial/8250/8250_em.c
+++ b/drivers/tty/serial/8250/8250_em.c
@@ -106,8 +106,8 @@ static int serial8250_em_probe(struct platform_device *pdev)
memset(&up, 0, sizeof(up));
up.port.mapbase = regs->start;
up.port.irq = irq;
- up.port.type = PORT_UNKNOWN;
- up.port.flags = UPF_BOOT_AUTOCONF | UPF_FIXED_PORT | UPF_IOREMAP;
+ up.port.type = PORT_16750;
+ up.port.flags = UPF_FIXED_PORT | UPF_IOREMAP | UPF_FIXED_TYPE;
up.port.dev = &pdev->dev;
up.port.private_data = priv;
diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
index d502240..c767636 100644
--- a/drivers/tty/serial/8250/8250_exar.c
+++ b/drivers/tty/serial/8250/8250_exar.c
@@ -43,6 +43,12 @@
#define PCI_DEVICE_ID_EXAR_XR17V4358 0x4358
#define PCI_DEVICE_ID_EXAR_XR17V8358 0x8358
+#define PCI_DEVICE_ID_SEALEVEL_710xC 0x1001
+#define PCI_DEVICE_ID_SEALEVEL_720xC 0x1002
+#define PCI_DEVICE_ID_SEALEVEL_740xC 0x1004
+#define PCI_DEVICE_ID_SEALEVEL_780xC 0x1008
+#define PCI_DEVICE_ID_SEALEVEL_716xC 0x1010
+
#define UART_EXAR_INT0 0x80
#define UART_EXAR_8XMODE 0x88 /* 8X sampling rate select */
#define UART_EXAR_SLEEP 0x8b /* Sleep mode */
@@ -623,7 +629,14 @@ exar_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *ent)
maxnr = pci_resource_len(pcidev, bar) >> (board->reg_shift + 3);
- nr_ports = board->num_ports ? board->num_ports : pcidev->device & 0x0f;
+ if (pcidev->vendor == PCI_VENDOR_ID_ACCESSIO)
+ nr_ports = BIT(((pcidev->device & 0x38) >> 3) - 1);
+ else if (board->num_ports)
+ nr_ports = board->num_ports;
+ else if (pcidev->vendor == PCI_VENDOR_ID_SEALEVEL)
+ nr_ports = pcidev->device & 0xff;
+ else
+ nr_ports = pcidev->device & 0x0f;
priv = devm_kzalloc(&pcidev->dev, struct_size(priv, line, nr_ports), GFP_KERNEL);
if (!priv)
@@ -722,22 +735,6 @@ static int __maybe_unused exar_resume(struct device *dev)
static SIMPLE_DEV_PM_OPS(exar_pci_pm, exar_suspend, exar_resume);
-static const struct exar8250_board acces_com_2x = {
- .num_ports = 2,
- .setup = pci_xr17c154_setup,
-};
-
-static const struct exar8250_board acces_com_4x = {
- .num_ports = 4,
- .setup = pci_xr17c154_setup,
-};
-
-static const struct exar8250_board acces_com_8x = {
- .num_ports = 8,
- .setup = pci_xr17c154_setup,
-};
-
-
static const struct exar8250_board pbn_fastcom335_2 = {
.num_ports = 2,
.setup = pci_fastcom335_setup,
@@ -822,13 +819,13 @@ static const struct exar8250_board pbn_exar_XR17V8358 = {
}
static const struct pci_device_id exar_pci_tbl[] = {
- EXAR_DEVICE(ACCESSIO, COM_2S, acces_com_2x),
- EXAR_DEVICE(ACCESSIO, COM_4S, acces_com_4x),
- EXAR_DEVICE(ACCESSIO, COM_8S, acces_com_8x),
- EXAR_DEVICE(ACCESSIO, COM232_8, acces_com_8x),
- EXAR_DEVICE(ACCESSIO, COM_2SM, acces_com_2x),
- EXAR_DEVICE(ACCESSIO, COM_4SM, acces_com_4x),
- EXAR_DEVICE(ACCESSIO, COM_8SM, acces_com_8x),
+ EXAR_DEVICE(ACCESSIO, COM_2S, pbn_exar_XR17C15x),
+ EXAR_DEVICE(ACCESSIO, COM_4S, pbn_exar_XR17C15x),
+ EXAR_DEVICE(ACCESSIO, COM_8S, pbn_exar_XR17C15x),
+ EXAR_DEVICE(ACCESSIO, COM232_8, pbn_exar_XR17C15x),
+ EXAR_DEVICE(ACCESSIO, COM_2SM, pbn_exar_XR17C15x),
+ EXAR_DEVICE(ACCESSIO, COM_4SM, pbn_exar_XR17C15x),
+ EXAR_DEVICE(ACCESSIO, COM_8SM, pbn_exar_XR17C15x),
CONNECT_DEVICE(XR17C152, UART_2_232, pbn_connect),
CONNECT_DEVICE(XR17C154, UART_4_232, pbn_connect),
@@ -864,6 +861,12 @@ static const struct pci_device_id exar_pci_tbl[] = {
EXAR_DEVICE(COMMTECH, 4224PCI335, pbn_fastcom335_4),
EXAR_DEVICE(COMMTECH, 2324PCI335, pbn_fastcom335_4),
EXAR_DEVICE(COMMTECH, 2328PCI335, pbn_fastcom335_8),
+
+ EXAR_DEVICE(SEALEVEL, 710xC, pbn_exar_XR17V35x),
+ EXAR_DEVICE(SEALEVEL, 720xC, pbn_exar_XR17V35x),
+ EXAR_DEVICE(SEALEVEL, 740xC, pbn_exar_XR17V35x),
+ EXAR_DEVICE(SEALEVEL, 780xC, pbn_exar_XR17V35x),
+ EXAR_DEVICE(SEALEVEL, 716xC, pbn_exar_XR17V35x),
{ 0, }
};
MODULE_DEVICE_TABLE(pci, exar_pci_tbl);
diff --git a/drivers/tty/serial/8250/8250_fsl.c b/drivers/tty/serial/8250/8250_fsl.c
index af74f82..6a22f3a9 100644
--- a/drivers/tty/serial/8250/8250_fsl.c
+++ b/drivers/tty/serial/8250/8250_fsl.c
@@ -38,7 +38,7 @@ int fsl8250_handle_irq(struct uart_port *port)
iir = port->serial_in(port, UART_IIR);
if (iir & UART_IIR_NO_INT) {
- spin_unlock(&up->port.lock);
+ spin_unlock_irqrestore(&up->port.lock, flags);
return 0;
}
@@ -46,7 +46,7 @@ int fsl8250_handle_irq(struct uart_port *port)
if (unlikely(up->lsr_saved_flags & UART_LSR_BI)) {
up->lsr_saved_flags &= ~UART_LSR_BI;
port->serial_in(port, UART_RX);
- spin_unlock(&up->port.lock);
+ spin_unlock_irqrestore(&up->port.lock, flags);
return 1;
}
diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
index da63e76..6ccadfa 100644
--- a/drivers/tty/serial/8250/Kconfig
+++ b/drivers/tty/serial/8250/Kconfig
@@ -253,7 +253,9 @@
tristate "Aspeed Virtual UART"
depends on SERIAL_8250
depends on OF
- depends on REGMAP && MFD_SYSCON
+ depends on MFD_SYSCON
+ depends on ARCH_ASPEED || COMPILE_TEST
+ select REGMAP
help
If you want to use the virtual UART (VUART) device on Aspeed
BMC platforms, enable this option. This enables the 16550A-
diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
index 7c8515f..b91fe25 100644
--- a/drivers/tty/serial/amba-pl011.c
+++ b/drivers/tty/serial/amba-pl011.c
@@ -1472,6 +1472,10 @@ static bool pl011_tx_chars(struct uart_amba_port *uap, bool from_irq)
struct circ_buf *xmit = &uap->port.state->xmit;
int count = uap->fifosize >> 1;
+ if ((uap->port.rs485.flags & SER_RS485_ENABLED) &&
+ !uap->rs485_tx_started)
+ pl011_rs485_tx_start(uap);
+
if (uap->port.x_char) {
if (!pl011_tx_char(uap, uap->port.x_char, from_irq))
return true;
@@ -1483,10 +1487,6 @@ static bool pl011_tx_chars(struct uart_amba_port *uap, bool from_irq)
return false;
}
- if ((uap->port.rs485.flags & SER_RS485_ENABLED) &&
- !uap->rs485_tx_started)
- pl011_rs485_tx_start(uap);
-
/* If we are using DMA mode, try to send some characters. */
if (pl011_dma_tx_irq(uap))
return true;
diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
index c0a8655..714e6ff 100644
--- a/drivers/tty/serial/atmel_serial.c
+++ b/drivers/tty/serial/atmel_serial.c
@@ -2615,13 +2615,7 @@ static void __init atmel_console_get_options(struct uart_port *port, int *baud,
else if (mr == ATMEL_US_PAR_ODD)
*parity = 'o';
- /*
- * The serial core only rounds down when matching this to a
- * supported baud rate. Make sure we don't end up slightly
- * lower than one of those, as it would make us fall through
- * to a much lower baud rate than we really want.
- */
- *baud = port->uartclk / (16 * (quot - 1));
+ *baud = port->uartclk / (16 * quot);
}
static int __init atmel_console_setup(struct console *co, char *options)
diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
index fc311df..5cabc3c 100644
--- a/drivers/tty/serial/fsl_lpuart.c
+++ b/drivers/tty/serial/fsl_lpuart.c
@@ -573,7 +573,7 @@ static void lpuart_flush_buffer(struct uart_port *port)
sport->dma_tx_nents, DMA_TO_DEVICE);
sport->dma_tx_in_progress = false;
}
- dmaengine_terminate_all(chan);
+ dmaengine_terminate_async(chan);
}
if (lpuart_is_32(sport)) {
@@ -829,11 +829,17 @@ static unsigned int lpuart32_tx_empty(struct uart_port *port)
struct lpuart_port, port);
unsigned long stat = lpuart32_read(port, UARTSTAT);
unsigned long sfifo = lpuart32_read(port, UARTFIFO);
+ unsigned long ctrl = lpuart32_read(port, UARTCTRL);
if (sport->dma_tx_in_progress)
return 0;
- if (stat & UARTSTAT_TC && sfifo & UARTFIFO_TXEMPT)
+ /*
+ * LPUART Transmission Complete Flag may never be set while queuing a break
+ * character, so avoid checking for transmission complete when UARTCTRL_SBK
+ * is asserted.
+ */
+ if ((stat & UARTSTAT_TC && sfifo & UARTFIFO_TXEMPT) || ctrl & UARTCTRL_SBK)
return TIOCSER_TEMT;
return 0;
@@ -1307,7 +1313,8 @@ static void lpuart_dma_rx_free(struct uart_port *port)
struct lpuart_port, port);
struct dma_chan *chan = sport->dma_rx_chan;
- dmaengine_terminate_all(chan);
+ dmaengine_terminate_sync(chan);
+ del_timer_sync(&sport->lpuart_timer);
dma_unmap_sg(chan->device->dev, &sport->rx_sgl, 1, DMA_FROM_DEVICE);
kfree(sport->rx_ring.buf);
sport->rx_ring.tail = 0;
@@ -1406,9 +1413,9 @@ static int lpuart32_config_rs485(struct uart_port *port,
* Note: UART is assumed to be active high.
*/
if (rs485->flags & SER_RS485_RTS_ON_SEND)
- modem &= ~UARTMODEM_TXRTSPOL;
- else if (rs485->flags & SER_RS485_RTS_AFTER_SEND)
modem |= UARTMODEM_TXRTSPOL;
+ else if (rs485->flags & SER_RS485_RTS_AFTER_SEND)
+ modem &= ~UARTMODEM_TXRTSPOL;
}
/* Store the new configuration */
@@ -1484,12 +1491,32 @@ static void lpuart_break_ctl(struct uart_port *port, int break_state)
static void lpuart32_break_ctl(struct uart_port *port, int break_state)
{
- unsigned long temp;
+ unsigned long temp, modem;
+ struct tty_struct *tty;
+ unsigned int cflag = 0;
+
+ tty = tty_port_tty_get(&port->state->port);
+ if (tty) {
+ cflag = tty->termios.c_cflag;
+ tty_kref_put(tty);
+ }
temp = lpuart32_read(port, UARTCTRL) & ~UARTCTRL_SBK;
+ modem = lpuart32_read(port, UARTMODIR);
- if (break_state != 0)
+ if (break_state != 0) {
temp |= UARTCTRL_SBK;
+ /*
+ * LPUART CTS has higher priority than SBK, need to disable CTS before
+ * asserting SBK to avoid any interference if flow control is enabled.
+ */
+ if (cflag & CRTSCTS && modem & UARTMODIR_TXCTSE)
+ lpuart32_write(port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
+ } else {
+ /* Re-enable the CTS when break off. */
+ if (cflag & CRTSCTS && !(modem & UARTMODIR_TXCTSE))
+ lpuart32_write(port, modem | UARTMODIR_TXCTSE, UARTMODIR);
+ }
lpuart32_write(port, temp, UARTCTRL);
}
@@ -1700,12 +1727,6 @@ static void lpuart32_configure(struct lpuart_port *sport)
{
unsigned long temp;
- if (sport->lpuart_dma_rx_use) {
- /* RXWATER must be 0 */
- temp = lpuart32_read(&sport->port, UARTWATER);
- temp &= ~(UARTWATER_WATER_MASK << UARTWATER_RXWATER_OFF);
- lpuart32_write(&sport->port, temp, UARTWATER);
- }
temp = lpuart32_read(&sport->port, UARTCTRL);
if (!sport->lpuart_dma_rx_use)
temp |= UARTCTRL_RIE;
@@ -1759,7 +1780,6 @@ static int lpuart32_startup(struct uart_port *port)
static void lpuart_dma_shutdown(struct lpuart_port *sport)
{
if (sport->lpuart_dma_rx_use) {
- del_timer_sync(&sport->lpuart_timer);
lpuart_dma_rx_free(&sport->port);
sport->lpuart_dma_rx_use = false;
}
@@ -1768,7 +1788,7 @@ static void lpuart_dma_shutdown(struct lpuart_port *sport)
if (wait_event_interruptible(sport->dma_wait,
!sport->dma_tx_in_progress) != false) {
sport->dma_tx_in_progress = false;
- dmaengine_terminate_all(sport->dma_tx_chan);
+ dmaengine_terminate_sync(sport->dma_tx_chan);
}
sport->lpuart_dma_tx_use = false;
}
@@ -1807,6 +1827,15 @@ static void lpuart32_shutdown(struct uart_port *port)
spin_lock_irqsave(&port->lock, flags);
+ /* clear status */
+ temp = lpuart32_read(&sport->port, UARTSTAT);
+ lpuart32_write(&sport->port, temp, UARTSTAT);
+
+ /* disable Rx/Tx DMA */
+ temp = lpuart32_read(port, UARTBAUD);
+ temp &= ~(UARTBAUD_TDMAE | UARTBAUD_RDMAE);
+ lpuart32_write(port, temp, UARTBAUD);
+
/* disable Rx/Tx and interrupts */
temp = lpuart32_read(port, UARTCTRL);
temp &= ~(UARTCTRL_TE | UARTCTRL_RE |
@@ -1910,10 +1939,8 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
* Since timer function acqures sport->port.lock, need to stop before
* acquring same lock because otherwise del_timer_sync() can deadlock.
*/
- if (old && sport->lpuart_dma_rx_use) {
- del_timer_sync(&sport->lpuart_timer);
+ if (old && sport->lpuart_dma_rx_use)
lpuart_dma_rx_free(&sport->port);
- }
spin_lock_irqsave(&sport->port.lock, flags);
@@ -2148,10 +2175,8 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
* Since timer function acqures sport->port.lock, need to stop before
* acquring same lock because otherwise del_timer_sync() can deadlock.
*/
- if (old && sport->lpuart_dma_rx_use) {
- del_timer_sync(&sport->lpuart_timer);
+ if (old && sport->lpuart_dma_rx_use)
lpuart_dma_rx_free(&sport->port);
- }
spin_lock_irqsave(&sport->port.lock, flags);
@@ -2178,9 +2203,15 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
/* update the per-port timeout */
uart_update_timeout(port, termios->c_cflag, baud);
- /* wait transmit engin complete */
- lpuart32_write(&sport->port, 0, UARTMODIR);
- lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+ /*
+ * LPUART Transmission Complete Flag may never be set while queuing a break
+ * character, so skip waiting for transmission complete when UARTCTRL_SBK is
+ * asserted.
+ */
+ if (!(old_ctrl & UARTCTRL_SBK)) {
+ lpuart32_write(&sport->port, 0, UARTMODIR);
+ lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+ }
/* disable transmit and receive */
lpuart32_write(&sport->port, old_ctrl & ~(UARTCTRL_TE | UARTCTRL_RE),
@@ -2837,11 +2868,10 @@ static int __maybe_unused lpuart_suspend(struct device *dev)
* EDMA driver during suspend will forcefully release any
* non-idle DMA channels. If port wakeup is enabled or if port
* is console port or 'no_console_suspend' is set the Rx DMA
- * cannot resume as as expected, hence gracefully release the
+ * cannot resume as expected, hence gracefully release the
* Rx DMA path before suspend and start Rx DMA path on resume.
*/
if (irq_wake) {
- del_timer_sync(&sport->lpuart_timer);
lpuart_dma_rx_free(&sport->port);
}
diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
index 711edb8..77a4f4a 100644
--- a/drivers/tty/serial/imx.c
+++ b/drivers/tty/serial/imx.c
@@ -484,7 +484,7 @@ static void imx_uart_stop_tx(struct uart_port *port)
static void imx_uart_stop_rx(struct uart_port *port)
{
struct imx_port *sport = (struct imx_port *)port;
- u32 ucr1, ucr2, ucr4;
+ u32 ucr1, ucr2, ucr4, uts;
ucr1 = imx_uart_readl(sport, UCR1);
ucr2 = imx_uart_readl(sport, UCR2);
@@ -500,7 +500,18 @@ static void imx_uart_stop_rx(struct uart_port *port)
imx_uart_writel(sport, ucr1, UCR1);
imx_uart_writel(sport, ucr4, UCR4);
- ucr2 &= ~UCR2_RXEN;
+ /* See SER_RS485_ENABLED/UTS_LOOP comment in imx_uart_probe() */
+ if (port->rs485.flags & SER_RS485_ENABLED &&
+ port->rs485.flags & SER_RS485_RTS_ON_SEND &&
+ sport->have_rtscts && !sport->have_rtsgpio) {
+ uts = imx_uart_readl(sport, imx_uart_uts_reg(sport));
+ uts |= UTS_LOOP;
+ imx_uart_writel(sport, uts, imx_uart_uts_reg(sport));
+ ucr2 |= UCR2_RXEN;
+ } else {
+ ucr2 &= ~UCR2_RXEN;
+ }
+
imx_uart_writel(sport, ucr2, UCR2);
}
@@ -1383,7 +1394,7 @@ static int imx_uart_startup(struct uart_port *port)
int retval, i;
unsigned long flags;
int dma_is_inited = 0;
- u32 ucr1, ucr2, ucr3, ucr4;
+ u32 ucr1, ucr2, ucr3, ucr4, uts;
retval = clk_prepare_enable(sport->clk_per);
if (retval)
@@ -1488,6 +1499,11 @@ static int imx_uart_startup(struct uart_port *port)
imx_uart_writel(sport, ucr2, UCR2);
}
+ /* See SER_RS485_ENABLED/UTS_LOOP comment in imx_uart_probe() */
+ uts = imx_uart_readl(sport, imx_uart_uts_reg(sport));
+ uts &= ~UTS_LOOP;
+ imx_uart_writel(sport, uts, imx_uart_uts_reg(sport));
+
spin_unlock_irqrestore(&sport->port.lock, flags);
return 0;
@@ -1497,7 +1513,7 @@ static void imx_uart_shutdown(struct uart_port *port)
{
struct imx_port *sport = (struct imx_port *)port;
unsigned long flags;
- u32 ucr1, ucr2, ucr4;
+ u32 ucr1, ucr2, ucr4, uts;
if (sport->dma_is_enabled) {
dmaengine_terminate_sync(sport->dma_chan_tx);
@@ -1541,7 +1557,18 @@ static void imx_uart_shutdown(struct uart_port *port)
spin_lock_irqsave(&sport->port.lock, flags);
ucr1 = imx_uart_readl(sport, UCR1);
- ucr1 &= ~(UCR1_TRDYEN | UCR1_RRDYEN | UCR1_RTSDEN | UCR1_UARTEN | UCR1_RXDMAEN | UCR1_ATDMAEN);
+ ucr1 &= ~(UCR1_TRDYEN | UCR1_RRDYEN | UCR1_RTSDEN | UCR1_RXDMAEN | UCR1_ATDMAEN);
+ /* See SER_RS485_ENABLED/UTS_LOOP comment in imx_uart_probe() */
+ if (port->rs485.flags & SER_RS485_ENABLED &&
+ port->rs485.flags & SER_RS485_RTS_ON_SEND &&
+ sport->have_rtscts && !sport->have_rtsgpio) {
+ uts = imx_uart_readl(sport, imx_uart_uts_reg(sport));
+ uts |= UTS_LOOP;
+ imx_uart_writel(sport, uts, imx_uart_uts_reg(sport));
+ ucr1 |= UCR1_UARTEN;
+ } else {
+ ucr1 &= ~UCR1_UARTEN;
+ }
imx_uart_writel(sport, ucr1, UCR1);
ucr4 = imx_uart_readl(sport, UCR4);
@@ -2189,7 +2216,7 @@ static int imx_uart_probe(struct platform_device *pdev)
void __iomem *base;
u32 dma_buf_conf[2];
int ret = 0;
- u32 ucr1;
+ u32 ucr1, ucr2, uts;
struct resource *res;
int txirq, rxirq, rtsirq;
@@ -2321,6 +2348,36 @@ static int imx_uart_probe(struct platform_device *pdev)
ucr1 &= ~(UCR1_ADEN | UCR1_TRDYEN | UCR1_IDEN | UCR1_RRDYEN | UCR1_RTSDEN);
imx_uart_writel(sport, ucr1, UCR1);
+ /* Disable Ageing Timer interrupt */
+ ucr2 = imx_uart_readl(sport, UCR2);
+ ucr2 &= ~UCR2_ATEN;
+ imx_uart_writel(sport, ucr2, UCR2);
+
+ /*
+ * In case RS485 is enabled without GPIO RTS control, the UART IP
+ * is used to control CTS signal. Keep both the UART and Receiver
+ * enabled, otherwise the UART IP pulls CTS signal always HIGH no
+ * matter how the UCR2 CTSC and CTS bits are set. To prevent any
+ * data from being fed into the RX FIFO, enable loopback mode in
+ * UTS register, which disconnects the RX path from external RXD
+ * pin and connects it to the Transceiver, which is disabled, so
+ * no data can be fed to the RX FIFO that way.
+ */
+ if (sport->port.rs485.flags & SER_RS485_ENABLED &&
+ sport->have_rtscts && !sport->have_rtsgpio) {
+ uts = imx_uart_readl(sport, imx_uart_uts_reg(sport));
+ uts |= UTS_LOOP;
+ imx_uart_writel(sport, uts, imx_uart_uts_reg(sport));
+
+ ucr1 = imx_uart_readl(sport, UCR1);
+ ucr1 |= UCR1_UARTEN;
+ imx_uart_writel(sport, ucr1, UCR1);
+
+ ucr2 = imx_uart_readl(sport, UCR2);
+ ucr2 |= UCR2_RXEN;
+ imx_uart_writel(sport, ucr2, UCR2);
+ }
+
if (!imx_uart_is_imx1(sport) && sport->dte_mode) {
/*
* The DCEDTE bit changes the direction of DSR, DCD, DTR and RI
diff --git a/drivers/tty/serial/pch_uart.c b/drivers/tty/serial/pch_uart.c
index 52cab20..e783a42 100644
--- a/drivers/tty/serial/pch_uart.c
+++ b/drivers/tty/serial/pch_uart.c
@@ -765,7 +765,7 @@ static void pch_dma_tx_complete(void *arg)
}
xmit->tail &= UART_XMIT_SIZE - 1;
async_tx_ack(priv->desc_tx);
- dma_unmap_sg(port->dev, sg, priv->orig_nent, DMA_TO_DEVICE);
+ dma_unmap_sg(port->dev, priv->sg_tx_p, priv->orig_nent, DMA_TO_DEVICE);
priv->tx_dma_use = 0;
priv->nent = 0;
priv->orig_nent = 0;
@@ -1821,7 +1821,7 @@ static void pch_uart_exit_port(struct eg20t_port *priv)
char name[32];
snprintf(name, sizeof(name), "uart%d_regs", priv->port.line);
- debugfs_remove(debugfs_lookup(name, NULL));
+ debugfs_lookup_and_remove(name, NULL);
uart_remove_one_port(&pch_uart_driver, &priv->port);
free_page((unsigned long)priv->rxbuf.buf);
}
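
Note on the debugfs conversions in this series (here and in the chipidea, usb core and dwc3 hunks below): debugfs_lookup() takes a dentry reference that debugfs_remove() never drops, so the old lookup-then-remove idiom leaks. A small sketch of the two forms, assuming a file named "stats" under a driver-owned directory:

    #include <linux/debugfs.h>

    static struct dentry *demo_dir;     /* created earlier with debugfs_create_dir() */

    static void demo_remove_stats(void)
    {
            /*
             * Old pattern, leaks a reference:
             *
             *     debugfs_remove(debugfs_lookup("stats", demo_dir));
             *
             * Preferred helper: look up, remove and drop the reference
             * in one call.
             */
            debugfs_lookup_and_remove("stats", demo_dir);
    }
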
diff --git a/drivers/tty/serial/qcom_geni_serial.c b/drivers/tty/serial/qcom_geni_serial.c
index aedc388..ce1c817 100644
--- a/drivers/tty/serial/qcom_geni_serial.c
+++ b/drivers/tty/serial/qcom_geni_serial.c
@@ -866,9 +866,10 @@ static irqreturn_t qcom_geni_serial_isr(int isr, void *dev)
return IRQ_HANDLED;
}
-static void get_tx_fifo_size(struct qcom_geni_serial_port *port)
+static int setup_fifos(struct qcom_geni_serial_port *port)
{
struct uart_port *uport;
+ u32 old_rx_fifo_depth = port->rx_fifo_depth;
uport = &port->uport;
port->tx_fifo_depth = geni_se_get_tx_fifo_depth(&port->se);
@@ -876,6 +877,16 @@ static void get_tx_fifo_size(struct qcom_geni_serial_port *port)
port->rx_fifo_depth = geni_se_get_rx_fifo_depth(&port->se);
uport->fifosize =
(port->tx_fifo_depth * port->tx_fifo_width) / BITS_PER_BYTE;
+
+ if (port->rx_fifo && (old_rx_fifo_depth != port->rx_fifo_depth) && port->rx_fifo_depth) {
+ port->rx_fifo = devm_krealloc(uport->dev, port->rx_fifo,
+ port->rx_fifo_depth * sizeof(u32),
+ GFP_KERNEL);
+ if (!port->rx_fifo)
+ return -ENOMEM;
+ }
+
+ return 0;
}
@@ -890,6 +901,7 @@ static int qcom_geni_serial_port_setup(struct uart_port *uport)
u32 rxstale = DEFAULT_BITS_PER_CHAR * STALE_TIMEOUT;
u32 proto;
u32 pin_swap;
+ int ret;
proto = geni_se_read_proto(&port->se);
if (proto != GENI_SE_UART) {
@@ -899,7 +911,9 @@ static int qcom_geni_serial_port_setup(struct uart_port *uport)
qcom_geni_serial_stop_rx(uport);
- get_tx_fifo_size(port);
+ ret = setup_fifos(port);
+ if (ret)
+ return ret;
writel(rxstale, uport->membase + SE_UART_RX_STALE_CNT);
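
Note on the qcom_geni hunk above: setup_fifos() grows the RX buffer with devm_krealloc() when the reported FIFO depth changes between port setups. A sketch of that resizing step in isolation (struct and field names are illustrative):

    #include <linux/device.h>
    #include <linux/slab.h>

    static int demo_resize_rx_buf(struct device *dev, u32 **buf,
                                  u32 old_words, u32 new_words)
    {
            u32 *tmp;

            if (!*buf || old_words == new_words || !new_words)
                    return 0;       /* nothing to do */

            /* devm_krealloc() keeps the allocation device-managed. */
            tmp = devm_krealloc(dev, *buf, new_words * sizeof(u32), GFP_KERNEL);
            if (!tmp)
                    return -ENOMEM;

            *buf = tmp;
            return 0;
    }
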
diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
index 0ab7880..b57cf8d 100644
--- a/drivers/tty/serial/sc16is7xx.c
+++ b/drivers/tty/serial/sc16is7xx.c
@@ -1245,25 +1245,6 @@ static int sc16is7xx_probe(struct device *dev,
}
sched_set_fifo(s->kworker_task);
-#ifdef CONFIG_GPIOLIB
- if (devtype->nr_gpio) {
- /* Setup GPIO cotroller */
- s->gpio.owner = THIS_MODULE;
- s->gpio.parent = dev;
- s->gpio.label = dev_name(dev);
- s->gpio.direction_input = sc16is7xx_gpio_direction_input;
- s->gpio.get = sc16is7xx_gpio_get;
- s->gpio.direction_output = sc16is7xx_gpio_direction_output;
- s->gpio.set = sc16is7xx_gpio_set;
- s->gpio.base = -1;
- s->gpio.ngpio = devtype->nr_gpio;
- s->gpio.can_sleep = 1;
- ret = gpiochip_add_data(&s->gpio, s);
- if (ret)
- goto out_thread;
- }
-#endif
-
/* reset device, purging any pending irq / data */
regmap_write(s->regmap, SC16IS7XX_IOCONTROL_REG << SC16IS7XX_REG_SHIFT,
SC16IS7XX_IOCONTROL_SRESET_BIT);
@@ -1329,6 +1310,25 @@ static int sc16is7xx_probe(struct device *dev,
s->p[u].irda_mode = true;
}
+#ifdef CONFIG_GPIOLIB
+ if (devtype->nr_gpio) {
+		/* Setup GPIO controller */
+ s->gpio.owner = THIS_MODULE;
+ s->gpio.parent = dev;
+ s->gpio.label = dev_name(dev);
+ s->gpio.direction_input = sc16is7xx_gpio_direction_input;
+ s->gpio.get = sc16is7xx_gpio_get;
+ s->gpio.direction_output = sc16is7xx_gpio_direction_output;
+ s->gpio.set = sc16is7xx_gpio_set;
+ s->gpio.base = -1;
+ s->gpio.ngpio = devtype->nr_gpio;
+ s->gpio.can_sleep = 1;
+ ret = gpiochip_add_data(&s->gpio, s);
+ if (ret)
+ goto out_thread;
+ }
+#endif
+
/*
* Setup interrupt. We first try to acquire the IRQ line as level IRQ.
* If that succeeds, we can allow sharing the interrupt as well.
@@ -1348,18 +1348,19 @@ static int sc16is7xx_probe(struct device *dev,
if (!ret)
return 0;
-out_ports:
- for (i--; i >= 0; i--) {
- uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port);
- clear_bit(s->p[i].port.line, &sc16is7xx_lines);
- }
-
#ifdef CONFIG_GPIOLIB
if (devtype->nr_gpio)
gpiochip_remove(&s->gpio);
out_thread:
#endif
+
+out_ports:
+ for (i--; i >= 0; i--) {
+ uart_remove_one_port(&sc16is7xx_uart, &s->p[i].port);
+ clear_bit(s->p[i].port.line, &sc16is7xx_lines);
+ }
+
kthread_stop(s->kworker_task);
out_clk:
diff --git a/drivers/tty/serial/serial-tegra.c b/drivers/tty/serial/serial-tegra.c
index 79187ff..25f34f8 100644
--- a/drivers/tty/serial/serial-tegra.c
+++ b/drivers/tty/serial/serial-tegra.c
@@ -1047,6 +1047,7 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
if (tup->cdata->fifo_mode_enable_status) {
ret = tegra_uart_wait_fifo_mode_enabled(tup);
if (ret < 0) {
+ clk_disable_unprepare(tup->uart_clk);
dev_err(tup->uport.dev,
"Failed to enable FIFO mode: %d\n", ret);
return ret;
@@ -1068,6 +1069,7 @@ static int tegra_uart_hw_init(struct tegra_uart_port *tup)
*/
ret = tegra_set_baudrate(tup, TEGRA_UART_DEFAULT_BAUD);
if (ret < 0) {
+ clk_disable_unprepare(tup->uart_clk);
dev_err(tup->uport.dev, "Failed to set baud rate\n");
return ret;
}
@@ -1227,10 +1229,13 @@ static int tegra_uart_startup(struct uart_port *u)
dev_name(u->dev), tup);
if (ret < 0) {
dev_err(u->dev, "Failed to register ISR for IRQ %d\n", u->irq);
- goto fail_hw_init;
+ goto fail_request_irq;
}
return 0;
+fail_request_irq:
+ /* tup->uart_clk is already enabled in tegra_uart_hw_init */
+ clk_disable_unprepare(tup->uart_clk);
fail_hw_init:
if (!tup->use_rx_pio)
tegra_uart_dma_channel_free(tup, true);
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index c5c0f39..2531817 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -31,6 +31,7 @@
#include <linux/ioport.h>
#include <linux/ktime.h>
#include <linux/major.h>
+#include <linux/minmax.h>
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/of.h>
@@ -2895,6 +2896,13 @@ static int sci_init_single(struct platform_device *dev,
sci_port->irqs[i] = platform_get_irq(dev, i);
}
+ /*
+	 * The fourth interrupt on the SCI port is the transmit end interrupt, so
+ * shuffle the interrupts.
+ */
+ if (p->type == PORT_SCI)
+ swap(sci_port->irqs[SCIx_BRI_IRQ], sci_port->irqs[SCIx_TEI_IRQ]);
+
/* The SCI generates several interrupts. They can be muxed together or
* connected to different interrupt lines. In the muxed case only one
* interrupt resource is specified as there is only one interrupt ID.
@@ -2960,7 +2968,7 @@ static int sci_init_single(struct platform_device *dev,
port->flags = UPF_FIXED_PORT | UPF_BOOT_AUTOCONF | p->flags;
port->fifosize = sci_port->params->fifosize;
- if (port->type == PORT_SCI) {
+ if (port->type == PORT_SCI && !dev->dev.of_node) {
if (sci_port->reg_size >= 0x20)
port->regshift = 2;
else
diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 6616d4a..64dd643 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -1244,14 +1244,16 @@ static struct tty_struct *tty_driver_lookup_tty(struct tty_driver *driver,
{
struct tty_struct *tty;
- if (driver->ops->lookup)
+ if (driver->ops->lookup) {
if (!file)
tty = ERR_PTR(-EIO);
else
tty = driver->ops->lookup(driver, file, idx);
- else
+ } else {
+ if (idx >= driver->num)
+ return ERR_PTR(-EINVAL);
tty = driver->ttys[idx];
-
+ }
if (!IS_ERR(tty))
tty_kref_get(tty);
return tty;
diff --git a/drivers/tty/vt/vc_screen.c b/drivers/tty/vt/vc_screen.c
index 1850bac..1dc07f92 100644
--- a/drivers/tty/vt/vc_screen.c
+++ b/drivers/tty/vt/vc_screen.c
@@ -386,10 +386,6 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
uni_mode = use_unicode(inode);
attr = use_attributes(inode);
- ret = -ENXIO;
- vc = vcs_vc(inode, &viewed);
- if (!vc)
- goto unlock_out;
ret = -EINVAL;
if (pos < 0)
@@ -407,16 +403,20 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
unsigned int this_round, skip = 0;
int size;
+ vc = vcs_vc(inode, &viewed);
+ if (!vc) {
+ ret = -ENXIO;
+ break;
+ }
+
/* Check whether we are above size each round,
* as copy_to_user at the end of this loop
* could sleep.
*/
size = vcs_size(vc, attr, uni_mode);
if (size < 0) {
- if (read)
- break;
ret = size;
- goto unlock_out;
+ break;
}
if (pos >= size)
break;
diff --git a/drivers/usb/cdns3/cdns3-gadget.c b/drivers/usb/cdns3/cdns3-gadget.c
index 1802f68..924c279 100644
--- a/drivers/usb/cdns3/cdns3-gadget.c
+++ b/drivers/usb/cdns3/cdns3-gadget.c
@@ -2610,6 +2610,7 @@ int cdns3_gadget_ep_dequeue(struct usb_ep *ep,
u8 req_on_hw_ring = 0;
unsigned long flags;
int ret = 0;
+ int val;
if (!ep || !request || !ep->desc)
return -EINVAL;
@@ -2645,6 +2646,13 @@ int cdns3_gadget_ep_dequeue(struct usb_ep *ep,
/* Update ring only if removed request is on pending_req_list list */
if (req_on_hw_ring && link_trb) {
+ /* Stop DMA */
+ writel(EP_CMD_DFLUSH, &priv_dev->regs->ep_cmd);
+
+ /* wait for DFLUSH cleared */
+ readl_poll_timeout_atomic(&priv_dev->regs->ep_cmd, val,
+ !(val & EP_CMD_DFLUSH), 1, 1000);
+
link_trb->buffer = cpu_to_le32(TRB_BUFFER(priv_ep->trb_pool_dma +
((priv_req->end_trb + 1) * TRB_SIZE)));
link_trb->control = cpu_to_le32((le32_to_cpu(link_trb->control) & TRB_CYCLE) |
@@ -2656,6 +2664,10 @@ int cdns3_gadget_ep_dequeue(struct usb_ep *ep,
cdns3_gadget_giveback(priv_ep, priv_req, -ECONNRESET);
+ req = cdns3_next_request(&priv_ep->pending_req_list);
+ if (req)
+ cdns3_rearm_transfer(priv_ep, 1);
+
not_found:
spin_unlock_irqrestore(&priv_dev->lock, flags);
return ret;
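
Note on the cdns3 dequeue hunk above: after flushing DMA it busy-waits for the flush bit to clear with readl_poll_timeout_atomic(), which takes (addr, val, condition, delay_us, timeout_us). A minimal sketch of the same polling helper; the register and bit below are placeholders, not the real cdns3 definitions:

    #include <linux/bits.h>
    #include <linux/iopoll.h>

    #define DEMO_DFLUSH	BIT(7)	/* placeholder bit, not the real EP_CMD_DFLUSH value */

    static int demo_wait_flush_clear(void __iomem *reg)
    {
            u32 val;

            /* Poll every 1 us, time out after 1000 us; usable in atomic context. */
            return readl_poll_timeout_atomic(reg, val, !(val & DEMO_DFLUSH), 1, 1000);
    }
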
diff --git a/drivers/usb/cdns3/cdns3-pci-wrap.c b/drivers/usb/cdns3/cdns3-pci-wrap.c
index deeea61..1f6320d 100644
--- a/drivers/usb/cdns3/cdns3-pci-wrap.c
+++ b/drivers/usb/cdns3/cdns3-pci-wrap.c
@@ -60,6 +60,11 @@ static struct pci_dev *cdns3_get_second_fun(struct pci_dev *pdev)
return NULL;
}
+ if (func->devfn != PCI_DEV_FN_HOST_DEVICE &&
+ func->devfn != PCI_DEV_FN_OTG) {
+ return NULL;
+ }
+
return func;
}
diff --git a/drivers/usb/cdns3/cdnsp-ep0.c b/drivers/usb/cdns3/cdnsp-ep0.c
index 9b8325f..f317d3c 100644
--- a/drivers/usb/cdns3/cdnsp-ep0.c
+++ b/drivers/usb/cdns3/cdnsp-ep0.c
@@ -403,20 +403,6 @@ static int cdnsp_ep0_std_request(struct cdnsp_device *pdev,
case USB_REQ_SET_ISOCH_DELAY:
ret = cdnsp_ep0_set_isoch_delay(pdev, ctrl);
break;
- case USB_REQ_SET_INTERFACE:
- /*
- * Add request into pending list to block sending status stage
- * by libcomposite.
- */
- list_add_tail(&pdev->ep0_preq.list,
- &pdev->ep0_preq.pep->pending_list);
-
- ret = cdnsp_ep0_delegate_req(pdev, ctrl);
- if (ret == -EBUSY)
- ret = 0;
-
- list_del(&pdev->ep0_preq.list);
- break;
default:
ret = cdnsp_ep0_delegate_req(pdev, ctrl);
break;
@@ -428,7 +414,7 @@ static int cdnsp_ep0_std_request(struct cdnsp_device *pdev,
void cdnsp_setup_analyze(struct cdnsp_device *pdev)
{
struct usb_ctrlrequest *ctrl = &pdev->setup;
- int ret = 0;
+ int ret = -EINVAL;
u16 len;
trace_cdnsp_ctrl_req(ctrl);
@@ -438,7 +424,6 @@ void cdnsp_setup_analyze(struct cdnsp_device *pdev)
if (pdev->gadget.state == USB_STATE_NOTATTACHED) {
dev_err(pdev->dev, "ERR: Setup detected in unattached state\n");
- ret = -EINVAL;
goto out;
}
@@ -474,9 +459,6 @@ void cdnsp_setup_analyze(struct cdnsp_device *pdev)
else
ret = cdnsp_ep0_delegate_req(pdev, ctrl);
- if (!len)
- pdev->ep0_stage = CDNSP_STATUS_STAGE;
-
if (ret == USB_GADGET_DELAYED_STATUS) {
trace_cdnsp_ep0_status_stage("delayed");
return;
@@ -484,6 +466,6 @@ void cdnsp_setup_analyze(struct cdnsp_device *pdev)
out:
if (ret < 0)
cdnsp_ep0_stall(pdev);
- else if (pdev->ep0_stage == CDNSP_STATUS_STAGE)
+ else if (!len && pdev->ep0_stage != CDNSP_STATUS_STAGE)
cdnsp_status_stage(pdev);
}
diff --git a/drivers/usb/cdns3/cdnsp-pci.c b/drivers/usb/cdns3/cdnsp-pci.c
index fe8a114..29f433c 100644
--- a/drivers/usb/cdns3/cdnsp-pci.c
+++ b/drivers/usb/cdns3/cdnsp-pci.c
@@ -29,30 +29,23 @@
#define PLAT_DRIVER_NAME "cdns-usbssp"
#define CDNS_VENDOR_ID 0x17cd
-#define CDNS_DEVICE_ID 0x0100
+#define CDNS_DEVICE_ID 0x0200
+#define CDNS_DRD_ID 0x0100
#define CDNS_DRD_IF (PCI_CLASS_SERIAL_USB << 8 | 0x80)
static struct pci_dev *cdnsp_get_second_fun(struct pci_dev *pdev)
{
- struct pci_dev *func;
-
/*
* Gets the second function.
- * It's little tricky, but this platform has two function.
- * The fist keeps resources for Host/Device while the second
- * keeps resources for DRD/OTG.
+	 * The platform has two functions. The first keeps resources for
+	 * Host/Device while the second keeps resources for DRD/OTG.
*/
- func = pci_get_device(pdev->vendor, pdev->device, NULL);
- if (!func)
- return NULL;
+ if (pdev->device == CDNS_DEVICE_ID)
+ return pci_get_device(pdev->vendor, CDNS_DRD_ID, NULL);
+ else if (pdev->device == CDNS_DRD_ID)
+ return pci_get_device(pdev->vendor, CDNS_DEVICE_ID, NULL);
- if (func->devfn == pdev->devfn) {
- func = pci_get_device(pdev->vendor, pdev->device, func);
- if (!func)
- return NULL;
- }
-
- return func;
+ return NULL;
}
static int cdnsp_pci_probe(struct pci_dev *pdev,
@@ -232,6 +225,8 @@ static const struct pci_device_id cdnsp_pci_ids[] = {
PCI_CLASS_SERIAL_USB_DEVICE, PCI_ANY_ID },
{ PCI_VENDOR_ID_CDNS, CDNS_DEVICE_ID, PCI_ANY_ID, PCI_ANY_ID,
CDNS_DRD_IF, PCI_ANY_ID },
+ { PCI_VENDOR_ID_CDNS, CDNS_DRD_ID, PCI_ANY_ID, PCI_ANY_ID,
+ CDNS_DRD_IF, PCI_ANY_ID },
{ 0, }
};
diff --git a/drivers/usb/chipidea/ci.h b/drivers/usb/chipidea/ci.h
index 99440ba..50e3784 100644
--- a/drivers/usb/chipidea/ci.h
+++ b/drivers/usb/chipidea/ci.h
@@ -203,6 +203,7 @@ struct hw_bank {
* @in_lpm: if the core in low power mode
* @wakeup_int: if wakeup interrupt occur
* @rev: The revision number for controller
+ * @mutex: protect code from concurrent running when doing role switch
*/
struct ci_hdrc {
struct device *dev;
@@ -255,6 +256,7 @@ struct ci_hdrc {
bool in_lpm;
bool wakeup_int;
enum ci_revision rev;
+ struct mutex mutex;
};
static inline struct ci_role_driver *ci_role(struct ci_hdrc *ci)
diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
index a56f063..a986997 100644
--- a/drivers/usb/chipidea/core.c
+++ b/drivers/usb/chipidea/core.c
@@ -974,9 +974,16 @@ static ssize_t role_store(struct device *dev,
strlen(ci->roles[role]->name)))
break;
- if (role == CI_ROLE_END || role == ci->role)
+ if (role == CI_ROLE_END)
return -EINVAL;
+ mutex_lock(&ci->mutex);
+
+ if (role == ci->role) {
+ mutex_unlock(&ci->mutex);
+ return n;
+ }
+
pm_runtime_get_sync(dev);
disable_irq(ci->irq);
ci_role_stop(ci);
@@ -985,6 +992,7 @@ static ssize_t role_store(struct device *dev,
ci_handle_vbus_change(ci);
enable_irq(ci->irq);
pm_runtime_put_sync(dev);
+ mutex_unlock(&ci->mutex);
return (ret == 0) ? n : ret;
}
@@ -1020,6 +1028,7 @@ static int ci_hdrc_probe(struct platform_device *pdev)
return -ENOMEM;
spin_lock_init(&ci->lock);
+ mutex_init(&ci->mutex);
ci->dev = dev;
ci->platdata = dev_get_platdata(dev);
ci->imx28_write_fix = !!(ci->platdata->flags &
diff --git a/drivers/usb/chipidea/debug.c b/drivers/usb/chipidea/debug.c
index faf6b078..bbc610e 100644
--- a/drivers/usb/chipidea/debug.c
+++ b/drivers/usb/chipidea/debug.c
@@ -364,5 +364,5 @@ void dbg_create_files(struct ci_hdrc *ci)
*/
void dbg_remove_files(struct ci_hdrc *ci)
{
- debugfs_remove(debugfs_lookup(dev_name(ci->dev), usb_debug_root));
+ debugfs_lookup_and_remove(dev_name(ci->dev), usb_debug_root);
}
diff --git a/drivers/usb/chipidea/otg.c b/drivers/usb/chipidea/otg.c
index 8dd5928..2d9d694e 100644
--- a/drivers/usb/chipidea/otg.c
+++ b/drivers/usb/chipidea/otg.c
@@ -167,8 +167,10 @@ static int hw_wait_vbus_lower_bsv(struct ci_hdrc *ci)
static void ci_handle_id_switch(struct ci_hdrc *ci)
{
- enum ci_role role = ci_otg_role(ci);
+ enum ci_role role;
+ mutex_lock(&ci->mutex);
+ role = ci_otg_role(ci);
if (role != ci->role) {
dev_dbg(ci->dev, "switching from %s to %s\n",
ci_role(ci)->name, ci->roles[role]->name);
@@ -198,6 +200,7 @@ static void ci_handle_id_switch(struct ci_hdrc *ci)
if (role == CI_ROLE_GADGET)
ci_handle_vbus_change(ci);
}
+ mutex_unlock(&ci->mutex);
}
/**
* ci_otg_work - perform otg (vbus/id) event handle
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 98bdae4..2295d69 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -43,6 +43,9 @@
#define USB_PRODUCT_USB5534B 0x5534
#define USB_VENDOR_CYPRESS 0x04b4
#define USB_PRODUCT_CY7C65632 0x6570
+#define USB_VENDOR_TEXAS_INSTRUMENTS 0x0451
+#define USB_PRODUCT_TUSB8041_USB3 0x8140
+#define USB_PRODUCT_TUSB8041_USB2 0x8142
#define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND 0x01
#define HUB_QUIRK_DISABLE_AUTOSUSPEND 0x02
@@ -2379,9 +2382,8 @@ static int usb_enumerate_device_otg(struct usb_device *udev)
* usb_enumerate_device - Read device configs/intfs/otg (usbcore-internal)
* @udev: newly addressed device (in ADDRESS state)
*
- * This is only called by usb_new_device() and usb_authorize_device()
- * and FIXME -- all comments that apply to them apply here wrt to
- * environment.
+ * This is only called by usb_new_device() -- all comments that apply there
+ * apply here wrt the environment.
*
* If the device is WUSB and not authorized, we don't attempt to read
* the string descriptors, as they will be errored out by the device
@@ -5791,6 +5793,16 @@ static const struct usb_device_id hub_id_table[] = {
.idVendor = USB_VENDOR_GENESYS_LOGIC,
.bInterfaceClass = USB_CLASS_HUB,
.driver_info = HUB_QUIRK_CHECK_PORT_AUTOSUSPEND},
+ { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
+ | USB_DEVICE_ID_MATCH_PRODUCT,
+ .idVendor = USB_VENDOR_TEXAS_INSTRUMENTS,
+ .idProduct = USB_PRODUCT_TUSB8041_USB2,
+ .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
+ { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
+ | USB_DEVICE_ID_MATCH_PRODUCT,
+ .idVendor = USB_VENDOR_TEXAS_INSTRUMENTS,
+ .idProduct = USB_PRODUCT_TUSB8041_USB3,
+ .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
{ .match_flags = USB_DEVICE_ID_MATCH_DEV_CLASS,
.bDeviceClass = USB_CLASS_HUB},
{ .match_flags = USB_DEVICE_ID_MATCH_INT_CLASS,
diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
index 079e183..934b3d9 100644
--- a/drivers/usb/core/quirks.c
+++ b/drivers/usb/core/quirks.c
@@ -526,6 +526,9 @@ static const struct usb_device_id usb_quirk_list[] = {
/* DJI CineSSD */
{ USB_DEVICE(0x2ca3, 0x0031), .driver_info = USB_QUIRK_NO_LPM },
+ /* Alcor Link AK9563 SC Reader used in 2022 Lenovo ThinkPads */
+ { USB_DEVICE(0x2ce3, 0x9563), .driver_info = USB_QUIRK_NO_LPM },
+
/* DELL USB GEN2 */
{ USB_DEVICE(0x413c, 0xb062), .driver_info = USB_QUIRK_NO_LPM | USB_QUIRK_RESET_RESUME },
diff --git a/drivers/usb/core/sysfs.c b/drivers/usb/core/sysfs.c
index fa2e49d..60ee046 100644
--- a/drivers/usb/core/sysfs.c
+++ b/drivers/usb/core/sysfs.c
@@ -868,11 +868,7 @@ read_descriptors(struct file *filp, struct kobject *kobj,
size_t srclen, n;
int cfgno;
void *src;
- int retval;
- retval = usb_lock_device_interruptible(udev);
- if (retval < 0)
- return -EINTR;
/* The binary attribute begins with the device descriptor.
* Following that are the raw descriptor entries for all the
* configurations (config plus subsidiary descriptors).
@@ -897,7 +893,6 @@ read_descriptors(struct file *filp, struct kobject *kobj,
off -= srclen;
}
}
- usb_unlock_device(udev);
return count - nleft;
}
diff --git a/drivers/usb/core/usb-acpi.c b/drivers/usb/core/usb-acpi.c
index 50b2fc7..8751276 100644
--- a/drivers/usb/core/usb-acpi.c
+++ b/drivers/usb/core/usb-acpi.c
@@ -37,6 +37,71 @@ bool usb_acpi_power_manageable(struct usb_device *hdev, int index)
}
EXPORT_SYMBOL_GPL(usb_acpi_power_manageable);
+#define UUID_USB_CONTROLLER_DSM "ce2ee385-00e6-48cb-9f05-2edb927c4899"
+#define USB_DSM_DISABLE_U1_U2_FOR_PORT 5
+
+/**
+ * usb_acpi_port_lpm_incapable - check if lpm should be disabled for a port.
+ * @hdev: USB device belonging to the usb hub
+ * @index: zero based port index
+ *
+ * Some USB3 ports may not support USB3 link power management U1/U2 states
+ * due to different retimer setup. ACPI provides _DSM method which returns 0x01
+ * if U1 and U2 states should be disabled. Evaluate _DSM with:
+ * Arg0: UUID = ce2ee385-00e6-48cb-9f05-2edb927c4899
+ * Arg1: Revision ID = 0
+ * Arg2: Function Index = 5
+ * Arg3: (empty)
+ *
+ * Return 1 if USB3 port is LPM incapable, negative on error, otherwise 0
+ */
+
+int usb_acpi_port_lpm_incapable(struct usb_device *hdev, int index)
+{
+ union acpi_object *obj;
+ acpi_handle port_handle;
+ int port1 = index + 1;
+ guid_t guid;
+ int ret;
+
+ ret = guid_parse(UUID_USB_CONTROLLER_DSM, &guid);
+ if (ret)
+ return ret;
+
+ port_handle = usb_get_hub_port_acpi_handle(hdev, port1);
+ if (!port_handle) {
+ dev_dbg(&hdev->dev, "port-%d no acpi handle\n", port1);
+ return -ENODEV;
+ }
+
+ if (!acpi_check_dsm(port_handle, &guid, 0,
+ BIT(USB_DSM_DISABLE_U1_U2_FOR_PORT))) {
+ dev_dbg(&hdev->dev, "port-%d no _DSM function %d\n",
+ port1, USB_DSM_DISABLE_U1_U2_FOR_PORT);
+ return -ENODEV;
+ }
+
+ obj = acpi_evaluate_dsm(port_handle, &guid, 0,
+ USB_DSM_DISABLE_U1_U2_FOR_PORT, NULL);
+
+ if (!obj)
+ return -ENODEV;
+
+ if (obj->type != ACPI_TYPE_INTEGER) {
+ dev_dbg(&hdev->dev, "evaluate port-%d _DSM failed\n", port1);
+ ACPI_FREE(obj);
+ return -EINVAL;
+ }
+
+ if (obj->integer.value == 0x01)
+ ret = 1;
+
+ ACPI_FREE(obj);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(usb_acpi_port_lpm_incapable);
+
/**
* usb_acpi_set_power_state - control usb port's power via acpi power
* resource
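
Note on the new usb_acpi_port_lpm_incapable() helper above: a hub or host-controller driver could consume it roughly as follows. This is only a hypothetical caller sketched for illustration, not code from this series; it assumes the exported declaration is visible via linux/usb.h and that hdev, nports and incapable_mask are supplied by the caller.

    #include <linux/bitops.h>
    #include <linux/usb.h>

    static void demo_mark_lpm_incapable_ports(struct usb_device *hdev, int nports,
                                              unsigned long *incapable_mask)
    {
            int i;

            for (i = 0; i < nports; i++) {
                    /* > 0: platform says U1/U2 must stay off; < 0: no ACPI info */
                    if (usb_acpi_port_lpm_incapable(hdev, i) > 0)
                            set_bit(i, incapable_mask);
            }
    }
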
diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
index 62368c4..cc36f9f 100644
--- a/drivers/usb/core/usb.c
+++ b/drivers/usb/core/usb.c
@@ -1036,7 +1036,7 @@ static void usb_debugfs_init(void)
static void usb_debugfs_cleanup(void)
{
- debugfs_remove(debugfs_lookup("devices", usb_debug_root));
+ debugfs_lookup_and_remove("devices", usb_debug_root);
}
/*
diff --git a/drivers/usb/dwc2/platform.c b/drivers/usb/dwc2/platform.c
index 265d437..7a09476 100644
--- a/drivers/usb/dwc2/platform.c
+++ b/drivers/usb/dwc2/platform.c
@@ -121,13 +121,6 @@ static int dwc2_get_dr_mode(struct dwc2_hsotg *hsotg)
return 0;
}
-static void __dwc2_disable_regulators(void *data)
-{
- struct dwc2_hsotg *hsotg = data;
-
- regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies), hsotg->supplies);
-}
-
static int __dwc2_lowlevel_hw_enable(struct dwc2_hsotg *hsotg)
{
struct platform_device *pdev = to_platform_device(hsotg->dev);
@@ -138,11 +131,6 @@ static int __dwc2_lowlevel_hw_enable(struct dwc2_hsotg *hsotg)
if (ret)
return ret;
- ret = devm_add_action_or_reset(&pdev->dev,
- __dwc2_disable_regulators, hsotg);
- if (ret)
- return ret;
-
if (hsotg->clk) {
ret = clk_prepare_enable(hsotg->clk);
if (ret)
@@ -198,7 +186,7 @@ static int __dwc2_lowlevel_hw_disable(struct dwc2_hsotg *hsotg)
if (hsotg->clk)
clk_disable_unprepare(hsotg->clk);
- return 0;
+ return regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies), hsotg->supplies);
}
/**
@@ -658,7 +646,7 @@ static int dwc2_driver_probe(struct platform_device *dev)
if (hsotg->params.activate_stm_id_vb_detection)
regulator_disable(hsotg->usb33d);
error:
- if (hsotg->dr_mode != USB_DR_MODE_PERIPHERAL)
+ if (hsotg->ll_hw_enabled)
dwc2_lowlevel_hw_disable(hsotg);
return retval;
}
diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
index e82e4cb..7256537 100644
--- a/drivers/usb/dwc3/core.h
+++ b/drivers/usb/dwc3/core.h
@@ -1092,6 +1092,7 @@ struct dwc3_scratchpad_array {
* address.
* @num_ep_resized: carries the current number endpoints which have had its tx
* fifo resized.
+ * @debug_root: root debugfs directory for this device to put its files in.
*/
struct dwc3 {
struct work_struct drd_work;
@@ -1303,6 +1304,7 @@ struct dwc3 {
int max_cfg_eps;
int last_fifo_depth;
int num_ep_resized;
+ struct dentry *debug_root;
};
#define INCRX_BURST_MODE 0
diff --git a/drivers/usb/dwc3/debug.h b/drivers/usb/dwc3/debug.h
index d223c541..01d0366 100644
--- a/drivers/usb/dwc3/debug.h
+++ b/drivers/usb/dwc3/debug.h
@@ -414,11 +414,14 @@ static inline const char *dwc3_gadget_generic_cmd_status_string(int status)
#ifdef CONFIG_DEBUG_FS
extern void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep);
+extern void dwc3_debugfs_remove_endpoint_dir(struct dwc3_ep *dep);
extern void dwc3_debugfs_init(struct dwc3 *d);
extern void dwc3_debugfs_exit(struct dwc3 *d);
#else
static inline void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep)
{ }
+static inline void dwc3_debugfs_remove_endpoint_dir(struct dwc3_ep *dep)
+{ }
static inline void dwc3_debugfs_init(struct dwc3 *d)
{ }
static inline void dwc3_debugfs_exit(struct dwc3 *d)
diff --git a/drivers/usb/dwc3/debugfs.c b/drivers/usb/dwc3/debugfs.c
index f2b7675..850df0e 100644
--- a/drivers/usb/dwc3/debugfs.c
+++ b/drivers/usb/dwc3/debugfs.c
@@ -873,27 +873,23 @@ static const struct dwc3_ep_file_map dwc3_ep_file_map[] = {
{ "GDBGEPINFO", &dwc3_ep_info_register_fops, },
};
-static void dwc3_debugfs_create_endpoint_files(struct dwc3_ep *dep,
- struct dentry *parent)
+void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep)
{
+ struct dentry *dir;
int i;
+ dir = debugfs_create_dir(dep->name, dep->dwc->debug_root);
for (i = 0; i < ARRAY_SIZE(dwc3_ep_file_map); i++) {
const struct file_operations *fops = dwc3_ep_file_map[i].fops;
const char *name = dwc3_ep_file_map[i].name;
- debugfs_create_file(name, 0444, parent, dep, fops);
+ debugfs_create_file(name, 0444, dir, dep, fops);
}
}
-void dwc3_debugfs_create_endpoint_dir(struct dwc3_ep *dep)
+void dwc3_debugfs_remove_endpoint_dir(struct dwc3_ep *dep)
{
- struct dentry *dir;
- struct dentry *root;
-
- root = debugfs_lookup(dev_name(dep->dwc->dev), usb_debug_root);
- dir = debugfs_create_dir(dep->name, root);
- dwc3_debugfs_create_endpoint_files(dep, dir);
+ debugfs_lookup_and_remove(dep->name, dep->dwc->debug_root);
}
void dwc3_debugfs_init(struct dwc3 *dwc)
@@ -911,6 +907,7 @@ void dwc3_debugfs_init(struct dwc3 *dwc)
dwc->regset->base = dwc->regs - DWC3_GLOBALS_REGS_START;
root = debugfs_create_dir(dev_name(dwc->dev), usb_debug_root);
+ dwc->debug_root = root;
debugfs_create_regset32("regdump", 0444, root, dwc->regset);
debugfs_create_file("lsp_dump", 0644, root, dwc, &dwc3_lsp_fops);
@@ -929,6 +926,6 @@ void dwc3_debugfs_init(struct dwc3 *dwc)
void dwc3_debugfs_exit(struct dwc3 *dwc)
{
- debugfs_remove(debugfs_lookup(dev_name(dwc->dev), usb_debug_root));
+ debugfs_lookup_and_remove(dev_name(dwc->dev), usb_debug_root);
kfree(dwc->regset);
}
diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
index e55d0c7..4bcfcb9 100644
--- a/drivers/usb/dwc3/dwc3-pci.c
+++ b/drivers/usb/dwc3/dwc3-pci.c
@@ -45,7 +45,9 @@
#define PCI_DEVICE_ID_INTEL_ADLS 0x7ae1
#define PCI_DEVICE_ID_INTEL_RPL 0xa70e
#define PCI_DEVICE_ID_INTEL_RPLS 0x7a61
+#define PCI_DEVICE_ID_INTEL_MTLM 0x7eb1
#define PCI_DEVICE_ID_INTEL_MTLP 0x7ec1
+#define PCI_DEVICE_ID_INTEL_MTLS 0x7f6f
#define PCI_DEVICE_ID_INTEL_MTL 0x7e7e
#define PCI_DEVICE_ID_INTEL_TGL 0x9a15
#define PCI_DEVICE_ID_AMD_MR 0x163a
@@ -427,9 +429,15 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPLS),
(kernel_ulong_t) &dwc3_pci_intel_swnode, },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTLM),
+ (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTLP),
(kernel_ulong_t) &dwc3_pci_intel_swnode, },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTLS),
+ (kernel_ulong_t) &dwc3_pci_intel_swnode, },
+
{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL),
(kernel_ulong_t) &dwc3_pci_intel_swnode, },
diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c
index ec1de6f..28bc748 100644
--- a/drivers/usb/dwc3/dwc3-qcom.c
+++ b/drivers/usb/dwc3/dwc3-qcom.c
@@ -836,7 +836,7 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
qcom->mode = usb_get_dr_mode(&qcom->dwc3->dev);
/* enable vbus override for device mode */
- if (qcom->mode == USB_DR_MODE_PERIPHERAL)
+ if (qcom->mode != USB_DR_MODE_HOST)
dwc3_qcom_vbus_override_enable(qcom, true);
/* register extcon to override sw_vbus on Vbus change later */
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 4812ba4..eaf64f3 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -1640,6 +1640,44 @@ static int __dwc3_gadget_get_frame(struct dwc3 *dwc)
}
/**
+ * __dwc3_stop_active_transfer - stop the current active transfer
+ * @dep: isoc endpoint
+ * @force: set forcerm bit in the command
+ * @interrupt: command complete interrupt after End Transfer command
+ *
+ * When setting force, the ForceRM bit will be set. In that case
+ * the controller won't update the TRB progress on command
+ * completion. It also won't clear the HWO bit in the TRB.
+ * The command will also not complete immediately in that case.
+ */
+static int __dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force, bool interrupt)
+{
+ struct dwc3 *dwc = dep->dwc;
+ struct dwc3_gadget_ep_cmd_params params;
+ u32 cmd;
+ int ret;
+
+ cmd = DWC3_DEPCMD_ENDTRANSFER;
+ cmd |= force ? DWC3_DEPCMD_HIPRI_FORCERM : 0;
+ cmd |= interrupt ? DWC3_DEPCMD_CMDIOC : 0;
+ cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
+	memset(&params, 0, sizeof(params));
+	ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
+ WARN_ON_ONCE(ret);
+ dep->resource_index = 0;
+
+ if (!interrupt) {
+ if (!DWC3_IP_IS(DWC3) || DWC3_VER_IS_PRIOR(DWC3, 310A))
+ mdelay(1);
+ dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
+ } else if (!ret) {
+ dep->flags |= DWC3_EP_END_TRANSFER_PENDING;
+ }
+
+ return ret;
+}
+
+/**
* dwc3_gadget_start_isoc_quirk - workaround invalid frame number
* @dep: isoc endpoint
*
@@ -1808,21 +1846,8 @@ static int __dwc3_gadget_start_isoc(struct dwc3_ep *dep)
* status, issue END_TRANSFER command and retry on the next XferNotReady
* event.
*/
- if (ret == -EAGAIN) {
- struct dwc3_gadget_ep_cmd_params params;
- u32 cmd;
-
- cmd = DWC3_DEPCMD_ENDTRANSFER |
- DWC3_DEPCMD_CMDIOC |
- DWC3_DEPCMD_PARAM(dep->resource_index);
-
- dep->resource_index = 0;
-		memset(&params, 0, sizeof(params));
-
-		ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
- if (!ret)
- dep->flags |= DWC3_EP_END_TRANSFER_PENDING;
- }
+ if (ret == -EAGAIN)
+ ret = __dwc3_stop_active_transfer(dep, false, true);
return ret;
}
@@ -3081,9 +3106,7 @@ static void dwc3_gadget_free_endpoints(struct dwc3 *dwc)
list_del(&dep->endpoint.ep_list);
}
- debugfs_remove_recursive(debugfs_lookup(dep->name,
- debugfs_lookup(dev_name(dep->dwc->dev),
- usb_debug_root)));
+ dwc3_debugfs_remove_endpoint_dir(dep);
kfree(dep);
}
}
@@ -3607,10 +3630,6 @@ static void dwc3_reset_gadget(struct dwc3 *dwc)
static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
bool interrupt)
{
- struct dwc3_gadget_ep_cmd_params params;
- u32 cmd;
- int ret;
-
if (!(dep->flags & DWC3_EP_TRANSFER_STARTED) ||
(dep->flags & DWC3_EP_END_TRANSFER_PENDING))
return;
@@ -3639,22 +3658,14 @@ static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force,
* enabled, the EndTransfer command will have completed upon
* returning from this function.
*
- * This mode is NOT available on the DWC_usb31 IP.
+ * This mode is NOT available on the DWC_usb31 IP. In this
+ * case, if the IOC bit is not set, then delay by 1ms
+	 * after issuing the EndTransfer command. This allows the
+	 * controller to handle the command completely before the
+	 * DWC3 request-removal path attempts to unmap USB request buffers.
*/
- cmd = DWC3_DEPCMD_ENDTRANSFER;
- cmd |= force ? DWC3_DEPCMD_HIPRI_FORCERM : 0;
- cmd |= interrupt ? DWC3_DEPCMD_CMDIOC : 0;
- cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);
-	memset(&params, 0, sizeof(params));
-	ret = dwc3_send_gadget_ep_cmd(dep, cmd, &params);
- WARN_ON_ONCE(ret);
- dep->resource_index = 0;
-
- if (!interrupt)
- dep->flags &= ~DWC3_EP_TRANSFER_STARTED;
- else
- dep->flags |= DWC3_EP_END_TRANSFER_PENDING;
+ __dwc3_stop_active_transfer(dep, force, interrupt);
}
static void dwc3_clear_stall_all_ep(struct dwc3 *dwc)
diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
index 6c04341..b0c4071 100644
--- a/drivers/usb/early/xhci-dbc.c
+++ b/drivers/usb/early/xhci-dbc.c
@@ -871,7 +871,8 @@ static int xdbc_bulk_write(const char *bytes, int size)
static void early_xdbc_write(struct console *con, const char *str, u32 n)
{
- static char buf[XDBC_MAX_PACKET];
+ /* static variables are zeroed, so buf is always NULL terminated */
+ static char buf[XDBC_MAX_PACKET + 1];
int chunk, ret;
int use_cr = 0;
diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
index 5ade844..5cbf408 100644
--- a/drivers/usb/gadget/configfs.c
+++ b/drivers/usb/gadget/configfs.c
@@ -416,10 +416,9 @@ static int config_usb_cfg_link(
struct usb_composite_dev *cdev = cfg->c.cdev;
struct gadget_info *gi = container_of(cdev, struct gadget_info, cdev);
- struct config_group *group = to_config_group(usb_func_ci);
- struct usb_function_instance *fi = container_of(group,
- struct usb_function_instance, group);
- struct usb_function_instance *a_fi;
+ struct usb_function_instance *fi =
+ to_usb_function_instance(usb_func_ci);
+ struct usb_function_instance *a_fi = NULL, *iter;
struct usb_function *f;
int ret;
@@ -429,11 +428,19 @@ static int config_usb_cfg_link(
* from another gadget or a random directory.
* Also a function instance can only be linked once.
*/
- list_for_each_entry(a_fi, &gi->available_func, cfs_list) {
- if (a_fi == fi)
- break;
+
+ if (gi->composite.gadget_driver.udc_name) {
+ ret = -EINVAL;
+ goto out;
}
- if (a_fi != fi) {
+
+ list_for_each_entry(iter, &gi->available_func, cfs_list) {
+ if (iter != fi)
+ continue;
+ a_fi = iter;
+ break;
+ }
+ if (!a_fi) {
ret = -EINVAL;
goto out;
}
@@ -467,9 +474,8 @@ static void config_usb_cfg_unlink(
struct usb_composite_dev *cdev = cfg->c.cdev;
struct gadget_info *gi = container_of(cdev, struct gadget_info, cdev);
- struct config_group *group = to_config_group(usb_func_ci);
- struct usb_function_instance *fi = container_of(group,
- struct usb_function_instance, group);
+ struct usb_function_instance *fi =
+ to_usb_function_instance(usb_func_ci);
struct usb_function *f;
/*
@@ -890,18 +896,18 @@ static int os_desc_link(struct config_item *os_desc_ci,
struct gadget_info *gi = container_of(to_config_group(os_desc_ci),
struct gadget_info, os_desc_group);
struct usb_composite_dev *cdev = &gi->cdev;
- struct config_usb_cfg *c_target =
- container_of(to_config_group(usb_cfg_ci),
- struct config_usb_cfg, group);
- struct usb_configuration *c;
+ struct config_usb_cfg *c_target = to_config_usb_cfg(usb_cfg_ci);
+ struct usb_configuration *c = NULL, *iter;
int ret;
mutex_lock(&gi->lock);
- list_for_each_entry(c, &cdev->configs, list) {
- if (c == &c_target->c)
- break;
+ list_for_each_entry(iter, &cdev->configs, list) {
+ if (iter != &c_target->c)
+ continue;
+ c = iter;
+ break;
}
- if (c != &c_target->c) {
+ if (!c) {
ret = -EINVAL;
goto out;
}
diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
index adc44a2..f975111 100644
--- a/drivers/usb/gadget/function/f_fs.c
+++ b/drivers/usb/gadget/function/f_fs.c
@@ -279,6 +279,11 @@ static int __ffs_ep0_queue_wait(struct ffs_data *ffs, char *data, size_t len)
struct usb_request *req = ffs->ep0req;
int ret;
+ if (!req) {
+ spin_unlock_irq(&ffs->ev.waitq.lock);
+ return -EINVAL;
+ }
+
req->zero = len < le16_to_cpu(ffs->ev.setup.wLength);
spin_unlock_irq(&ffs->ev.waitq.lock);
@@ -1892,10 +1897,14 @@ static void functionfs_unbind(struct ffs_data *ffs)
ENTER();
if (!WARN_ON(!ffs->gadget)) {
+ /* dequeue before freeing ep0req */
+ usb_ep_dequeue(ffs->gadget->ep0, ffs->ep0req);
+ mutex_lock(&ffs->mutex);
usb_ep_free_request(ffs->gadget->ep0, ffs->ep0req);
ffs->ep0req = NULL;
ffs->gadget = NULL;
clear_bit(FFS_FL_BOUND, &ffs->flags);
+ mutex_unlock(&ffs->mutex);
ffs_data_put(ffs);
}
}
diff --git a/drivers/usb/gadget/function/f_ncm.c b/drivers/usb/gadget/function/f_ncm.c
index dc8f078..e0c1832 100644
--- a/drivers/usb/gadget/function/f_ncm.c
+++ b/drivers/usb/gadget/function/f_ncm.c
@@ -83,7 +83,9 @@ static inline struct f_ncm *func_to_ncm(struct usb_function *f)
/* peak (theoretical) bulk transfer rate in bits-per-second */
static inline unsigned ncm_bitrate(struct usb_gadget *g)
{
- if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS)
+ if (!g)
+ return 0;
+ else if (gadget_is_superspeed(g) && g->speed >= USB_SPEED_SUPER_PLUS)
return 4250000000U;
else if (gadget_is_superspeed(g) && g->speed == USB_SPEED_SUPER)
return 3750000000U;
diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
index 885a7f5..850394e 100644
--- a/drivers/usb/gadget/function/f_uac2.c
+++ b/drivers/usb/gadget/function/f_uac2.c
@@ -1069,6 +1069,7 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
}
std_as_out_if0_desc.bInterfaceNumber = ret;
std_as_out_if1_desc.bInterfaceNumber = ret;
+ std_as_out_if1_desc.bNumEndpoints = 1;
uac2->as_out_intf = ret;
uac2->as_out_alt = 0;
diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c
index d22ac23..200eb78 100644
--- a/drivers/usb/gadget/function/u_audio.c
+++ b/drivers/usb/gadget/function/u_audio.c
@@ -1174,7 +1174,7 @@ void g_audio_cleanup(struct g_audio *g_audio)
uac = g_audio->uac;
card = uac->card;
if (card)
- snd_card_free(card);
+ snd_card_free_when_closed(card);
kfree(uac->p_prm.reqs);
kfree(uac->c_prm.reqs);
diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
index 6f68cbe..116d2e1 100644
--- a/drivers/usb/gadget/function/u_serial.c
+++ b/drivers/usb/gadget/function/u_serial.c
@@ -81,6 +81,9 @@
#define WRITE_BUF_SIZE 8192 /* TX only */
#define GS_CONSOLE_BUF_SIZE 8192
+/* Prevents race conditions while accessing gser->ioport */
+static DEFINE_SPINLOCK(serial_port_lock);
+
/* console info */
struct gs_console {
struct console console;
@@ -1374,8 +1377,10 @@ void gserial_disconnect(struct gserial *gser)
if (!port)
return;
+ spin_lock_irqsave(&serial_port_lock, flags);
+
/* tell the TTY glue not to do I/O here any more */
- spin_lock_irqsave(&port->port_lock, flags);
+ spin_lock(&port->port_lock);
gs_console_disconnect(port);
@@ -1390,7 +1395,8 @@ void gserial_disconnect(struct gserial *gser)
tty_hangup(port->port.tty);
}
port->suspended = false;
- spin_unlock_irqrestore(&port->port_lock, flags);
+ spin_unlock(&port->port_lock);
+ spin_unlock_irqrestore(&serial_port_lock, flags);
/* disable endpoints, aborting down any active I/O */
usb_ep_disable(gser->out);
@@ -1424,10 +1430,19 @@ EXPORT_SYMBOL_GPL(gserial_suspend);
void gserial_resume(struct gserial *gser)
{
- struct gs_port *port = gser->ioport;
+ struct gs_port *port;
unsigned long flags;
- spin_lock_irqsave(&port->port_lock, flags);
+ spin_lock_irqsave(&serial_port_lock, flags);
+ port = gser->ioport;
+
+ if (!port) {
+ spin_unlock_irqrestore(&serial_port_lock, flags);
+ return;
+ }
+
+ spin_lock(&port->port_lock);
+ spin_unlock(&serial_port_lock);
port->suspended = false;
if (!port->start_delayed) {
spin_unlock_irqrestore(&port->port_lock, flags);
diff --git a/drivers/usb/gadget/function/uvc_configfs.c b/drivers/usb/gadget/function/uvc_configfs.c
index 77d6403..b553dca 100644
--- a/drivers/usb/gadget/function/uvc_configfs.c
+++ b/drivers/usb/gadget/function/uvc_configfs.c
@@ -505,11 +505,68 @@ UVC_ATTR_RO(uvcg_default_output_, cname, aname)
UVCG_DEFAULT_OUTPUT_ATTR(b_terminal_id, bTerminalID, 8);
UVCG_DEFAULT_OUTPUT_ATTR(w_terminal_type, wTerminalType, 16);
UVCG_DEFAULT_OUTPUT_ATTR(b_assoc_terminal, bAssocTerminal, 8);
-UVCG_DEFAULT_OUTPUT_ATTR(b_source_id, bSourceID, 8);
UVCG_DEFAULT_OUTPUT_ATTR(i_terminal, iTerminal, 8);
#undef UVCG_DEFAULT_OUTPUT_ATTR
+static ssize_t uvcg_default_output_b_source_id_show(struct config_item *item,
+ char *page)
+{
+ struct config_group *group = to_config_group(item);
+ struct f_uvc_opts *opts;
+ struct config_item *opts_item;
+ struct mutex *su_mutex = &group->cg_subsys->su_mutex;
+ struct uvc_output_terminal_descriptor *cd;
+ int result;
+
+ mutex_lock(su_mutex); /* for navigating configfs hierarchy */
+
+ opts_item = group->cg_item.ci_parent->ci_parent->
+ ci_parent->ci_parent;
+ opts = to_f_uvc_opts(opts_item);
+ cd = &opts->uvc_output_terminal;
+
+ mutex_lock(&opts->lock);
+ result = sprintf(page, "%u\n", le8_to_cpu(cd->bSourceID));
+ mutex_unlock(&opts->lock);
+
+ mutex_unlock(su_mutex);
+
+ return result;
+}
+
+static ssize_t uvcg_default_output_b_source_id_store(struct config_item *item,
+ const char *page, size_t len)
+{
+ struct config_group *group = to_config_group(item);
+ struct f_uvc_opts *opts;
+ struct config_item *opts_item;
+ struct mutex *su_mutex = &group->cg_subsys->su_mutex;
+ struct uvc_output_terminal_descriptor *cd;
+ int result;
+ u8 num;
+
+ result = kstrtou8(page, 0, &num);
+ if (result)
+ return result;
+
+ mutex_lock(su_mutex); /* for navigating configfs hierarchy */
+
+ opts_item = group->cg_item.ci_parent->ci_parent->
+ ci_parent->ci_parent;
+ opts = to_f_uvc_opts(opts_item);
+ cd = &opts->uvc_output_terminal;
+
+ mutex_lock(&opts->lock);
+ cd->bSourceID = num;
+ mutex_unlock(&opts->lock);
+
+ mutex_unlock(su_mutex);
+
+ return len;
+}
+UVC_ATTR(uvcg_default_output_, b_source_id, bSourceID);
+
static struct configfs_attribute *uvcg_default_output_attrs[] = {
&uvcg_default_output_attr_b_terminal_id,
&uvcg_default_output_attr_w_terminal_type,
diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
index 9e8b678..ed28aaa 100644
--- a/drivers/usb/gadget/legacy/inode.c
+++ b/drivers/usb/gadget/legacy/inode.c
@@ -229,6 +229,7 @@ static void put_ep (struct ep_data *data)
*/
static const char *CHIP;
+static DEFINE_MUTEX(sb_mutex); /* Serialize superblock operations */
/*----------------------------------------------------------------------*/
@@ -2013,13 +2014,20 @@ gadgetfs_fill_super (struct super_block *sb, struct fs_context *fc)
{
struct inode *inode;
struct dev_data *dev;
+ int rc;
- if (the_device)
- return -ESRCH;
+ mutex_lock(&sb_mutex);
+
+ if (the_device) {
+ rc = -ESRCH;
+ goto Done;
+ }
CHIP = usb_get_gadget_udc_name();
- if (!CHIP)
- return -ENODEV;
+ if (!CHIP) {
+ rc = -ENODEV;
+ goto Done;
+ }
/* superblock */
sb->s_blocksize = PAGE_SIZE;
@@ -2056,13 +2064,17 @@ gadgetfs_fill_super (struct super_block *sb, struct fs_context *fc)
* from binding to a controller.
*/
the_device = dev;
- return 0;
+ rc = 0;
+ goto Done;
-Enomem:
+ Enomem:
kfree(CHIP);
CHIP = NULL;
+ rc = -ENOMEM;
- return -ENOMEM;
+ Done:
+ mutex_unlock(&sb_mutex);
+ return rc;
}
/* "mount -t gadgetfs path /dev/gadget" ends up here */
@@ -2084,6 +2096,7 @@ static int gadgetfs_init_fs_context(struct fs_context *fc)
static void
gadgetfs_kill_sb (struct super_block *sb)
{
+ mutex_lock(&sb_mutex);
kill_litter_super (sb);
if (the_device) {
put_dev (the_device);
@@ -2091,6 +2104,7 @@ gadgetfs_kill_sb (struct super_block *sb)
}
kfree(CHIP);
CHIP = NULL;
+ mutex_unlock(&sb_mutex);
}
/*----------------------------------------------------------------------*/
diff --git a/drivers/usb/gadget/legacy/webcam.c b/drivers/usb/gadget/legacy/webcam.c
index 94e2286..e9b5846 100644
--- a/drivers/usb/gadget/legacy/webcam.c
+++ b/drivers/usb/gadget/legacy/webcam.c
@@ -293,6 +293,7 @@ static const struct uvc_descriptor_header * const uvc_fs_streaming_cls[] = {
(const struct uvc_descriptor_header *) &uvc_format_yuv,
(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
+ (const struct uvc_descriptor_header *) &uvc_color_matching,
(const struct uvc_descriptor_header *) &uvc_format_mjpg,
(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
@@ -305,6 +306,7 @@ static const struct uvc_descriptor_header * const uvc_hs_streaming_cls[] = {
(const struct uvc_descriptor_header *) &uvc_format_yuv,
(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
+ (const struct uvc_descriptor_header *) &uvc_color_matching,
(const struct uvc_descriptor_header *) &uvc_format_mjpg,
(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
@@ -317,6 +319,7 @@ static const struct uvc_descriptor_header * const uvc_ss_streaming_cls[] = {
(const struct uvc_descriptor_header *) &uvc_format_yuv,
(const struct uvc_descriptor_header *) &uvc_frame_yuv_360p,
(const struct uvc_descriptor_header *) &uvc_frame_yuv_720p,
+ (const struct uvc_descriptor_header *) &uvc_color_matching,
(const struct uvc_descriptor_header *) &uvc_format_mjpg,
(const struct uvc_descriptor_header *) &uvc_frame_mjpg_360p,
(const struct uvc_descriptor_header *) &uvc_frame_mjpg_720p,
diff --git a/drivers/usb/gadget/udc/bcm63xx_udc.c b/drivers/usb/gadget/udc/bcm63xx_udc.c
index a9f07c5..5c7dff6b 100644
--- a/drivers/usb/gadget/udc/bcm63xx_udc.c
+++ b/drivers/usb/gadget/udc/bcm63xx_udc.c
@@ -2259,7 +2259,7 @@ static void bcm63xx_udc_init_debugfs(struct bcm63xx_udc *udc)
*/
static void bcm63xx_udc_cleanup_debugfs(struct bcm63xx_udc *udc)
{
- debugfs_remove(debugfs_lookup(udc->gadget.name, usb_debug_root));
+ debugfs_lookup_and_remove(udc->gadget.name, usb_debug_root);
}
/***********************************************************************
diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c
index d0e051b..6f7ade1 100644
--- a/drivers/usb/gadget/udc/fotg210-udc.c
+++ b/drivers/usb/gadget/udc/fotg210-udc.c
@@ -706,6 +706,20 @@ static int fotg210_is_epnstall(struct fotg210_ep *ep)
return value & INOUTEPMPSR_STL_EP ? 1 : 0;
}
+/* For EP0 requests triggered by this driver (currently GET_STATUS response) */
+static void fotg210_ep0_complete(struct usb_ep *_ep, struct usb_request *req)
+{
+ struct fotg210_ep *ep;
+ struct fotg210_udc *fotg210;
+
+ ep = container_of(_ep, struct fotg210_ep, ep);
+ fotg210 = ep->fotg210;
+
+ if (req->status || req->actual != req->length) {
+ dev_warn(&fotg210->gadget.dev, "EP0 request failed: %d\n", req->status);
+ }
+}
+
static void fotg210_get_status(struct fotg210_udc *fotg210,
struct usb_ctrlrequest *ctrl)
{
@@ -1172,6 +1186,8 @@ static int fotg210_udc_probe(struct platform_device *pdev)
if (fotg210->ep0_req == NULL)
goto err_map;
+ fotg210->ep0_req->complete = fotg210_ep0_complete;
+
fotg210_init(fotg210);
fotg210_disable_unplug(fotg210);
diff --git a/drivers/usb/gadget/udc/fusb300_udc.c b/drivers/usb/gadget/udc/fusb300_udc.c
index 9af8b41..5e9e8e5 100644
--- a/drivers/usb/gadget/udc/fusb300_udc.c
+++ b/drivers/usb/gadget/udc/fusb300_udc.c
@@ -1347,6 +1347,7 @@ static int fusb300_remove(struct platform_device *pdev)
usb_del_gadget_udc(&fusb300->gadget);
iounmap(fusb300->reg);
free_irq(platform_get_irq(pdev, 0), fusb300);
+ free_irq(platform_get_irq(pdev, 1), fusb300);
fusb300_free_request(&fusb300->ep[0]->ep, fusb300->ep0_req);
for (i = 0; i < FUSB300_MAX_NUM_EP; i++)
@@ -1432,7 +1433,7 @@ static int fusb300_probe(struct platform_device *pdev)
IRQF_SHARED, udc_name, fusb300);
if (ret < 0) {
pr_err("request_irq1 error (%d)\n", ret);
- goto clean_up;
+ goto err_request_irq1;
}
INIT_LIST_HEAD(&fusb300->gadget.ep_list);
@@ -1471,7 +1472,7 @@ static int fusb300_probe(struct platform_device *pdev)
GFP_KERNEL);
if (fusb300->ep0_req == NULL) {
ret = -ENOMEM;
- goto clean_up3;
+ goto err_alloc_request;
}
init_controller(fusb300);
@@ -1486,7 +1487,10 @@ static int fusb300_probe(struct platform_device *pdev)
err_add_udc:
fusb300_free_request(&fusb300->ep[0]->ep, fusb300->ep0_req);
-clean_up3:
+err_alloc_request:
+ free_irq(ires1->start, fusb300);
+
+err_request_irq1:
free_irq(ires->start, fusb300);
clean_up:
diff --git a/drivers/usb/gadget/udc/gr_udc.c b/drivers/usb/gadget/udc/gr_udc.c
index 4b35739..d1febde 100644
--- a/drivers/usb/gadget/udc/gr_udc.c
+++ b/drivers/usb/gadget/udc/gr_udc.c
@@ -215,7 +215,7 @@ static void gr_dfs_create(struct gr_udc *dev)
static void gr_dfs_delete(struct gr_udc *dev)
{
- debugfs_remove(debugfs_lookup(dev_name(dev->dev), usb_debug_root));
+ debugfs_lookup_and_remove(dev_name(dev->dev), usb_debug_root);
}
#else /* !CONFIG_USB_GADGET_DEBUG_FS */
diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c
index 865de8d..ec0d3d7 100644
--- a/drivers/usb/gadget/udc/lpc32xx_udc.c
+++ b/drivers/usb/gadget/udc/lpc32xx_udc.c
@@ -532,7 +532,7 @@ static void create_debug_file(struct lpc32xx_udc *udc)
static void remove_debug_file(struct lpc32xx_udc *udc)
{
- debugfs_remove(debugfs_lookup(debug_filename, NULL));
+ debugfs_lookup_and_remove(debug_filename, NULL);
}
#else
diff --git a/drivers/usb/gadget/udc/pxa25x_udc.c b/drivers/usb/gadget/udc/pxa25x_udc.c
index a09ec1d..e4d2ab5 100644
--- a/drivers/usb/gadget/udc/pxa25x_udc.c
+++ b/drivers/usb/gadget/udc/pxa25x_udc.c
@@ -1341,7 +1341,7 @@ DEFINE_SHOW_ATTRIBUTE(udc_debug);
debugfs_create_file(dev->gadget.name, \
S_IRUGO, NULL, dev, &udc_debug_fops); \
} while (0)
-#define remove_debug_files(dev) debugfs_remove(debugfs_lookup(dev->gadget.name, NULL))
+#define remove_debug_files(dev) debugfs_lookup_and_remove(dev->gadget.name, NULL)
#else /* !CONFIG_USB_GADGET_DEBUG_FILES */
diff --git a/drivers/usb/gadget/udc/pxa27x_udc.c b/drivers/usb/gadget/udc/pxa27x_udc.c
index f4b7a2a..282b114 100644
--- a/drivers/usb/gadget/udc/pxa27x_udc.c
+++ b/drivers/usb/gadget/udc/pxa27x_udc.c
@@ -215,7 +215,7 @@ static void pxa_init_debugfs(struct pxa_udc *udc)
static void pxa_cleanup_debugfs(struct pxa_udc *udc)
{
- debugfs_remove(debugfs_lookup(udc->gadget.name, usb_debug_root));
+ debugfs_lookup_and_remove(udc->gadget.name, usb_debug_root);
}
#else
diff --git a/drivers/usb/host/ehci-fsl.c b/drivers/usb/host/ehci-fsl.c
index 385be30..e38dfbd 100644
--- a/drivers/usb/host/ehci-fsl.c
+++ b/drivers/usb/host/ehci-fsl.c
@@ -29,7 +29,7 @@
#include "ehci-fsl.h"
#define DRIVER_DESC "Freescale EHCI Host controller driver"
-#define DRV_NAME "ehci-fsl"
+#define DRV_NAME "fsl-ehci"
static struct hc_driver __read_mostly fsl_ehci_hc_driver;
diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c
index 4b02ace..d9a3fd8 100644
--- a/drivers/usb/host/fotg210-hcd.c
+++ b/drivers/usb/host/fotg210-hcd.c
@@ -862,7 +862,7 @@ static inline void remove_debug_files(struct fotg210_hcd *fotg210)
{
struct usb_bus *bus = &fotg210_to_hcd(fotg210)->self;
- debugfs_remove(debugfs_lookup(bus->bus_name, fotg210_debug_root));
+ debugfs_lookup_and_remove(bus->bus_name, fotg210_debug_root);
}
/* handshake - spin reading hc until handshake completes or fails
diff --git a/drivers/usb/host/isp116x-hcd.c b/drivers/usb/host/isp116x-hcd.c
index 8c7f099..9c3e12f2 100644
--- a/drivers/usb/host/isp116x-hcd.c
+++ b/drivers/usb/host/isp116x-hcd.c
@@ -1206,7 +1206,7 @@ static void create_debug_file(struct isp116x *isp116x)
static void remove_debug_file(struct isp116x *isp116x)
{
- debugfs_remove(debugfs_lookup(hcd_name, usb_debug_root));
+ debugfs_lookup_and_remove(hcd_name, usb_debug_root);
}
#else
diff --git a/drivers/usb/host/isp1362-hcd.c b/drivers/usb/host/isp1362-hcd.c
index d8610ce..bc68669 100644
--- a/drivers/usb/host/isp1362-hcd.c
+++ b/drivers/usb/host/isp1362-hcd.c
@@ -2170,7 +2170,7 @@ static void create_debug_file(struct isp1362_hcd *isp1362_hcd)
static void remove_debug_file(struct isp1362_hcd *isp1362_hcd)
{
- debugfs_remove(debugfs_lookup("isp1362", usb_debug_root));
+ debugfs_lookup_and_remove("isp1362", usb_debug_root);
}
/*-------------------------------------------------------------------------*/
diff --git a/drivers/usb/host/max3421-hcd.c b/drivers/usb/host/max3421-hcd.c
index 30de85a..994dc56 100644
--- a/drivers/usb/host/max3421-hcd.c
+++ b/drivers/usb/host/max3421-hcd.c
@@ -1436,7 +1436,7 @@ max3421_spi_thread(void *dev_id)
* use spi_wr_buf().
*/
for (i = 0; i < ARRAY_SIZE(max3421_hcd->iopins); ++i) {
- u8 val = spi_rd8(hcd, MAX3421_REG_IOPINS1);
+ u8 val = spi_rd8(hcd, MAX3421_REG_IOPINS1 + i);
val = ((val & 0xf0) |
(max3421_hcd->iopins[i] & 0x0f));
diff --git a/drivers/usb/host/sl811-hcd.c b/drivers/usb/host/sl811-hcd.c
index 8562373..825ff67 100644
--- a/drivers/usb/host/sl811-hcd.c
+++ b/drivers/usb/host/sl811-hcd.c
@@ -1501,7 +1501,7 @@ static void create_debug_file(struct sl811 *sl811)
static void remove_debug_file(struct sl811 *sl811)
{
- debugfs_remove(debugfs_lookup("sl811h", usb_debug_root));
+ debugfs_lookup_and_remove("sl811h", usb_debug_root);
}
/*-------------------------------------------------------------------------*/
diff --git a/drivers/usb/host/uhci-hcd.c b/drivers/usb/host/uhci-hcd.c
index d90b869..d138f62 100644
--- a/drivers/usb/host/uhci-hcd.c
+++ b/drivers/usb/host/uhci-hcd.c
@@ -536,8 +536,8 @@ static void release_uhci(struct uhci_hcd *uhci)
uhci->is_initialized = 0;
spin_unlock_irq(&uhci->lock);
- debugfs_remove(debugfs_lookup(uhci_to_hcd(uhci)->self.bus_name,
- uhci_debugfs_root));
+ debugfs_lookup_and_remove(uhci_to_hcd(uhci)->self.bus_name,
+ uhci_debugfs_root);
for (i = 0; i < UHCI_NUM_SKELQH; i++)
uhci_free_qh(uhci, uhci->skelqh[i]);
@@ -700,7 +700,7 @@ static int uhci_start(struct usb_hcd *hcd)
uhci->frame, uhci->frame_dma_handle);
err_alloc_frame:
- debugfs_remove(debugfs_lookup(hcd->self.bus_name, uhci_debugfs_root));
+ debugfs_lookup_and_remove(hcd->self.bus_name, uhci_debugfs_root);
return retval;
}
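
The debugfs cleanups in the hunks above (bcm63xx_udc, gr_udc, lpc32xx_udc, pxa25x/pxa27x, fotg210-hcd, isp116x, isp1362, sl811, uhci-hcd) all apply the same conversion: debugfs_remove(debugfs_lookup(...)) becomes debugfs_lookup_and_remove(). A minimal sketch of the motivation, assuming the standard debugfs API and not part of the patch itself: debugfs_lookup() returns the dentry with an extra reference taken, and the old two-call pattern never dropped it.

#include <linux/debugfs.h>

/* Hypothetical helpers for illustration only. */
static void cleanup_old(const char *name, struct dentry *parent)
{
	/* debugfs_lookup() takes a reference on the dentry it returns;
	 * debugfs_remove() does not drop that lookup reference, so this
	 * pattern leaks a dentry refcount each time it runs. */
	debugfs_remove(debugfs_lookup(name, parent));
}

static void cleanup_new(const char *name, struct dentry *parent)
{
	/* Looks up, removes and releases the reference in one call. */
	debugfs_lookup_and_remove(name, parent);
}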
diff --git a/drivers/usb/host/xhci-mvebu.c b/drivers/usb/host/xhci-mvebu.c
index 8ca1a23..eabccf2 100644
--- a/drivers/usb/host/xhci-mvebu.c
+++ b/drivers/usb/host/xhci-mvebu.c
@@ -33,7 +33,7 @@ static void xhci_mvebu_mbus_config(void __iomem *base,
/* Program each DRAM CS in a separate window */
for (win = 0; win < dram->num_cs; win++) {
- const struct mbus_dram_window *cs = dram->cs + win;
+ const struct mbus_dram_window *cs = &dram->cs[win];
writel(((cs->size - 1) & 0xffff0000) | (cs->mbus_attr << 8) |
(dram->mbus_dram_target_id << 4) | 1,
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index 105f2b8..7308b38 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -78,9 +78,12 @@ static const char hcd_name[] = "xhci_hcd";
static struct hc_driver __read_mostly xhci_pci_hc_driver;
static int xhci_pci_setup(struct usb_hcd *hcd);
+static int xhci_pci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ struct usb_tt *tt, gfp_t mem_flags);
static const struct xhci_driver_overrides xhci_pci_overrides __initconst = {
.reset = xhci_pci_setup,
+ .update_hub_device = xhci_pci_update_hub_device,
};
/* called after powerup, by probe or system-pm "wakeup" */
@@ -353,8 +356,38 @@ static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev)
NULL);
ACPI_FREE(obj);
}
+
+static void xhci_find_lpm_incapable_ports(struct usb_hcd *hcd, struct usb_device *hdev)
+{
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ struct xhci_hub *rhub = &xhci->usb3_rhub;
+ int ret;
+ int i;
+
+ /* This is not the usb3 roothub we are looking for */
+ if (hcd != rhub->hcd)
+ return;
+
+ if (hdev->maxchild > rhub->num_ports) {
+ dev_err(&hdev->dev, "USB3 roothub port number mismatch\n");
+ return;
+ }
+
+ for (i = 0; i < hdev->maxchild; i++) {
+ ret = usb_acpi_port_lpm_incapable(hdev, i);
+
+ dev_dbg(&hdev->dev, "port-%d disable U1/U2 _DSM: %d\n", i + 1, ret);
+
+ if (ret >= 0) {
+ rhub->ports[i]->lpm_incapable = ret;
+ continue;
+ }
+ }
+}
+
#else
static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev) { }
+static void xhci_find_lpm_incapable_ports(struct usb_hcd *hcd, struct usb_device *hdev) { }
#endif /* CONFIG_ACPI */
/* called during probe() after chip reset completes */
@@ -387,6 +420,16 @@ static int xhci_pci_setup(struct usb_hcd *hcd)
return xhci_pci_reinit(xhci, pdev);
}
+static int xhci_pci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ struct usb_tt *tt, gfp_t mem_flags)
+{
+ /* Check if acpi claims some USB3 roothub ports are lpm incapable */
+ if (!hdev->parent)
+ xhci_find_lpm_incapable_ports(hcd, hdev);
+
+ return xhci_update_hub_device(hcd, hdev, tt, mem_flags);
+}
+
/*
* We need to register our own PCI probe function (instead of the USB core's
* function) in order to create a second roothub under xHCI.
@@ -456,6 +499,8 @@ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
if (xhci->quirks & XHCI_DEFAULT_PM_RUNTIME_ALLOW)
pm_runtime_allow(&dev->dev);
+ dma_set_max_seg_size(&dev->dev, UINT_MAX);
+
return 0;
put_usb3_hcd:
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 90bf5d5..220d836 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -1178,7 +1178,10 @@ static void xhci_kill_endpoint_urbs(struct xhci_hcd *xhci,
struct xhci_virt_ep *ep;
struct xhci_ring *ring;
- ep = &xhci->devs[slot_id]->eps[ep_index];
+ ep = xhci_get_virt_ep(xhci, slot_id, ep_index);
+ if (!ep)
+ return;
+
if ((ep->ep_state & EP_HAS_STREAMS) ||
(ep->ep_state & EP_GETTING_NO_STREAMS)) {
int stream_id;
diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c
index bdb7765..32df571 100644
--- a/drivers/usb/host/xhci-tegra.c
+++ b/drivers/usb/host/xhci-tegra.c
@@ -1225,6 +1225,9 @@ static void tegra_xhci_id_work(struct work_struct *work)
mutex_unlock(&tegra->lock);
+ tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion(tegra->padctl,
+ tegra->otg_usb2_port);
+
if (tegra->host_mode) {
/* switch to host mode */
if (tegra->otg_usb3_port >= 0) {
@@ -1339,9 +1342,6 @@ static int tegra_xhci_id_notify(struct notifier_block *nb,
}
tegra->otg_usb2_port = tegra_xusb_get_usb2_port(tegra, usbphy);
- tegra->otg_usb3_port = tegra_xusb_padctl_get_usb3_companion(
- tegra->padctl,
- tegra->otg_usb2_port);
tegra->host_mode = (usbphy->last_event == USB_EVENT_ID) ? true : false;
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 3636fa4..1fd2f6a 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -9,6 +9,7 @@
*/
#include <linux/pci.h>
+#include <linux/iommu.h>
#include <linux/iopoll.h>
#include <linux/irq.h>
#include <linux/log2.h>
@@ -225,6 +226,7 @@ int xhci_reset(struct xhci_hcd *xhci, u64 timeout_us)
static void xhci_zero_64b_regs(struct xhci_hcd *xhci)
{
struct device *dev = xhci_to_hcd(xhci)->self.sysdev;
+ struct iommu_domain *domain;
int err, i;
u64 val;
u32 intrs;
@@ -243,7 +245,9 @@ static void xhci_zero_64b_regs(struct xhci_hcd *xhci)
* an iommu. Doing anything when there is no iommu is definitely
* unsafe...
*/
- if (!(xhci->quirks & XHCI_ZERO_64B_REGS) || !device_iommu_mapped(dev))
+ domain = iommu_get_domain_for_dev(dev);
+ if (!(xhci->quirks & XHCI_ZERO_64B_REGS) || !domain ||
+ domain->type == IOMMU_DOMAIN_IDENTITY)
return;
xhci_info(xhci, "Zeroing 64bit base registers, expecting fault\n");
@@ -3962,6 +3966,7 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
struct xhci_virt_device *virt_dev;
struct xhci_slot_ctx *slot_ctx;
+ unsigned long flags;
int i, ret;
/*
@@ -3990,7 +3995,11 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
}
virt_dev->udev = NULL;
xhci_disable_slot(xhci, udev->slot_id);
+
+ spin_lock_irqsave(&xhci->lock, flags);
xhci_free_virt_device(xhci, udev->slot_id);
+ spin_unlock_irqrestore(&xhci->lock, flags);
+
}
int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
@@ -5046,6 +5055,7 @@ static int xhci_enable_usb3_lpm_timeout(struct usb_hcd *hcd,
struct usb_device *udev, enum usb3_link_state state)
{
struct xhci_hcd *xhci;
+ struct xhci_port *port;
u16 hub_encoded_timeout;
int mel;
int ret;
@@ -5059,6 +5069,13 @@ static int xhci_enable_usb3_lpm_timeout(struct usb_hcd *hcd,
!xhci->devs[udev->slot_id])
return USB3_LPM_DISABLED;
+ /* If connected to root port then check port can handle lpm */
+ if (udev->parent && !udev->parent->parent) {
+ port = xhci->usb3_rhub.ports[udev->portnum - 1];
+ if (port->lpm_incapable)
+ return USB3_LPM_DISABLED;
+ }
+
hub_encoded_timeout = xhci_calculate_lpm_timeout(hcd, udev, state);
mel = calculate_max_exit_latency(udev, state, hub_encoded_timeout);
if (mel < 0) {
@@ -5118,7 +5135,7 @@ static int xhci_disable_usb3_lpm_timeout(struct usb_hcd *hcd,
/* Once a hub descriptor is fetched for a device, we need to update the xHC's
* internal data structures for the device.
*/
-static int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
struct usb_tt *tt, gfp_t mem_flags)
{
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
@@ -5218,6 +5235,7 @@ static int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
xhci_free_command(xhci, config_cmd);
return ret;
}
+EXPORT_SYMBOL_GPL(xhci_update_hub_device);
static int xhci_get_frame(struct usb_hcd *hcd)
{
@@ -5495,6 +5513,8 @@ void xhci_init_driver(struct hc_driver *drv,
drv->check_bandwidth = over->check_bandwidth;
if (over->reset_bandwidth)
drv->reset_bandwidth = over->reset_bandwidth;
+ if (over->update_hub_device)
+ drv->update_hub_device = over->update_hub_device;
}
}
EXPORT_SYMBOL_GPL(xhci_init_driver);
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 4193427..b8ad967 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1737,6 +1737,7 @@ struct xhci_port {
int hcd_portnum;
struct xhci_hub *rhub;
struct xhci_port_cap *port_cap;
+ unsigned int lpm_incapable:1;
};
struct xhci_hub {
@@ -1948,6 +1949,8 @@ struct xhci_driver_overrides {
struct usb_host_endpoint *ep);
int (*check_bandwidth)(struct usb_hcd *, struct usb_device *);
void (*reset_bandwidth)(struct usb_hcd *, struct usb_device *);
+ int (*update_hub_device)(struct usb_hcd *hcd, struct usb_device *hdev,
+ struct usb_tt *tt, gfp_t mem_flags);
};
#define XHCI_CFC_DELAY 10
@@ -2104,6 +2107,8 @@ int xhci_drop_endpoint(struct usb_hcd *hcd, struct usb_device *udev,
struct usb_host_endpoint *ep);
int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+int xhci_update_hub_device(struct usb_hcd *hcd, struct usb_device *hdev,
+ struct usb_tt *tt, gfp_t mem_flags);
int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id);
int xhci_ext_cap_init(struct xhci_hcd *xhci);
diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
index 988a8c02..b421f13 100644
--- a/drivers/usb/misc/iowarrior.c
+++ b/drivers/usb/misc/iowarrior.c
@@ -814,7 +814,7 @@ static int iowarrior_probe(struct usb_interface *interface,
break;
case USB_DEVICE_ID_CODEMERCS_IOW100:
- dev->report_size = 13;
+ dev->report_size = 12;
break;
}
}
diff --git a/drivers/usb/musb/mediatek.c b/drivers/usb/musb/mediatek.c
index 6b92d03..4f52b92 100644
--- a/drivers/usb/musb/mediatek.c
+++ b/drivers/usb/musb/mediatek.c
@@ -346,7 +346,8 @@ static int mtk_musb_init(struct musb *musb)
err_phy_power_on:
phy_exit(glue->phy);
err_phy_init:
- mtk_otg_switch_exit(glue);
+ if (musb->port_mode == MUSB_OTG)
+ mtk_otg_switch_exit(glue);
return ret;
}
diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
index 9bb2077..b3f128b 100644
--- a/drivers/usb/serial/cp210x.c
+++ b/drivers/usb/serial/cp210x.c
@@ -60,6 +60,7 @@ static const struct usb_device_id id_table[] = {
{ USB_DEVICE(0x0846, 0x1100) }, /* NetGear Managed Switch M4100 series, M5300 series, M7100 series */
{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
{ USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
+ { USB_DEVICE(0x0908, 0x0070) }, /* Siemens SCALANCE LPE-9000 USB Serial Console */
{ USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */
{ USB_DEVICE(0x0988, 0x0578) }, /* Teraoka AD2000 */
{ USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */
@@ -119,6 +120,7 @@ static const struct usb_device_id id_table[] = {
{ USB_DEVICE(0x10C4, 0x826B) }, /* Cygnal Integrated Products, Inc., Fasttrax GPS demonstration module */
{ USB_DEVICE(0x10C4, 0x8281) }, /* Nanotec Plug & Drive */
{ USB_DEVICE(0x10C4, 0x8293) }, /* Telegesis ETRX2USB */
+ { USB_DEVICE(0x10C4, 0x82AA) }, /* Silicon Labs IFS-USB-DATACABLE used with Quint UPS */
{ USB_DEVICE(0x10C4, 0x82EF) }, /* CESINEL FALCO 6105 AC Power Supply */
{ USB_DEVICE(0x10C4, 0x82F1) }, /* CESINEL MEDCAL EFD Earth Fault Detector */
{ USB_DEVICE(0x10C4, 0x82F2) }, /* CESINEL MEDCAL ST Network Analyzer */
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
index dee79c7..fc12fee 100644
--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -255,10 +255,16 @@ static void option_instat_callback(struct urb *urb);
#define QUECTEL_PRODUCT_EP06 0x0306
#define QUECTEL_PRODUCT_EM05G 0x030a
#define QUECTEL_PRODUCT_EM060K 0x030b
+#define QUECTEL_PRODUCT_EM05G_CS 0x030c
+#define QUECTEL_PRODUCT_EM05CN_SG 0x0310
#define QUECTEL_PRODUCT_EM05G_SG 0x0311
+#define QUECTEL_PRODUCT_EM05CN 0x0312
+#define QUECTEL_PRODUCT_EM05G_GR 0x0313
+#define QUECTEL_PRODUCT_EM05G_RS 0x0314
#define QUECTEL_PRODUCT_EM12 0x0512
#define QUECTEL_PRODUCT_RM500Q 0x0800
#define QUECTEL_PRODUCT_RM520N 0x0801
+#define QUECTEL_PRODUCT_EC200U 0x0901
#define QUECTEL_PRODUCT_EC200S_CN 0x6002
#define QUECTEL_PRODUCT_EC200T 0x6026
#define QUECTEL_PRODUCT_RM500K 0x7001
@@ -396,6 +402,8 @@ static void option_instat_callback(struct urb *urb);
#define LONGCHEER_VENDOR_ID 0x1c9e
/* 4G Systems products */
+/* This one was sold as the VW and Skoda "Carstick LTE" */
+#define FOUR_G_SYSTEMS_PRODUCT_CARSTICK_LTE 0x7605
/* This is the 4G XS Stick W14 a.k.a. Mobilcom Debitel Surf-Stick *
* It seems to contain a Qualcomm QSC6240/6290 chipset */
#define FOUR_G_SYSTEMS_PRODUCT_W14 0x9603
@@ -1159,8 +1167,18 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff),
.driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) },
+ { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05CN, 0xff),
+ .driver_info = RSVD(6) | ZLP },
+ { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05CN_SG, 0xff),
+ .driver_info = RSVD(6) | ZLP },
{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff),
.driver_info = RSVD(6) | ZLP },
+ { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_GR, 0xff),
+ .driver_info = RSVD(6) | ZLP },
+ { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_CS, 0xff),
+ .driver_info = RSVD(6) | ZLP },
+ { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_RS, 0xff),
+ .driver_info = RSVD(6) | ZLP },
{ USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G_SG, 0xff),
.driver_info = RSVD(6) | ZLP },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM060K, 0xff, 0x00, 0x40) },
@@ -1180,6 +1198,9 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0xff, 0x30) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0x40) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM520N, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, 0x0900, 0xff, 0, 0), /* RM500U-CN */
+ .driver_info = ZLP },
+ { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200U, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) },
{ USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) },
@@ -1281,6 +1302,14 @@ static const struct usb_device_id option_ids[] = {
.driver_info = NCTRL(0) | RSVD(1) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990 (PCIe) */
.driver_info = RSVD(0) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990 (MBIM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990 (RNDIS) */
+ .driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990 (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
.driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
@@ -1959,6 +1988,8 @@ static const struct usb_device_id option_ids[] = {
.driver_info = RSVD(2) },
{ USB_DEVICE(AIRPLUS_VENDOR_ID, AIRPLUS_PRODUCT_MCD650) },
{ USB_DEVICE(TLAYTECH_VENDOR_ID, TLAYTECH_PRODUCT_TEU800) },
+ { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_CARSTICK_LTE),
+ .driver_info = RSVD(0) },
{ USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W14),
.driver_info = NCTRL(0) | NCTRL(1) },
{ USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W100),
diff --git a/drivers/usb/storage/ene_ub6250.c b/drivers/usb/storage/ene_ub6250.c
index 6012603..97c66c0 100644
--- a/drivers/usb/storage/ene_ub6250.c
+++ b/drivers/usb/storage/ene_ub6250.c
@@ -939,7 +939,7 @@ static int ms_lib_process_bootblock(struct us_data *us, u16 PhyBlock, u8 *PageDa
struct ms_lib_type_extdat ExtraData;
struct ene_ub6250_info *info = (struct ene_ub6250_info *) us->extra;
- PageBuffer = kmalloc(MS_BYTES_PER_PAGE, GFP_KERNEL);
+ PageBuffer = kzalloc(MS_BYTES_PER_PAGE * 2, GFP_KERNEL);
if (PageBuffer == NULL)
return (u32)-1;
diff --git a/drivers/usb/storage/uas-detect.h b/drivers/usb/storage/uas-detect.h
index 3f720fa..d73282c 100644
--- a/drivers/usb/storage/uas-detect.h
+++ b/drivers/usb/storage/uas-detect.h
@@ -116,6 +116,19 @@ static int uas_use_uas_driver(struct usb_interface *intf,
if (le16_to_cpu(udev->descriptor.idVendor) == 0x0bc2)
flags |= US_FL_NO_ATA_1X;
+ /*
+ * The RTL9210-based HIKSEMI MD202 enclosure reportedly has issues
+ * with UAS. This isn't distinguishable with just idVendor and
+ * idProduct; use the manufacturer and product strings too.
+ *
+ * Reported-by: Hongling Zeng <zenghongling@kylinos.cn>
+ */
+ if (le16_to_cpu(udev->descriptor.idVendor) == 0x0bda &&
+ le16_to_cpu(udev->descriptor.idProduct) == 0x9210 &&
+ (udev->manufacturer && !strcmp(udev->manufacturer, "HIKSEMI")) &&
+ (udev->product && !strcmp(udev->product, "MD202")))
+ flags |= US_FL_IGNORE_UAS;
+
usb_stor_adjust_quirks(udev, &flags);
if (flags & US_FL_IGNORE_UAS) {
diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
index 251778d..1f8c9b1 100644
--- a/drivers/usb/storage/unusual_uas.h
+++ b/drivers/usb/storage/unusual_uas.h
@@ -83,13 +83,6 @@ UNUSUAL_DEV(0x0bc2, 0x331a, 0x0000, 0x9999,
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_NO_REPORT_LUNS),
-/* Reported-by: Hongling Zeng <zenghongling@kylinos.cn> */
-UNUSUAL_DEV(0x0bda, 0x9210, 0x0000, 0x9999,
- "Hiksemi",
- "External HDD",
- USB_SC_DEVICE, USB_PR_DEVICE, NULL,
- US_FL_IGNORE_UAS),
-
/* Reported-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> */
UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999,
"Initio Corporation",
@@ -118,6 +111,13 @@ UNUSUAL_DEV(0x152d, 0x0578, 0x0000, 0x9999,
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_BROKEN_FUA),
+/* Reported by: Yaroslav Furman <yaro330@gmail.com> */
+UNUSUAL_DEV(0x152d, 0x0583, 0x0000, 0x9999,
+ "JMicron",
+ "JMS583Gen 2",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_REPORT_OPCODES),
+
/* Reported-by: Thinh Nguyen <thinhn@synopsys.com> */
UNUSUAL_DEV(0x154b, 0xf00b, 0x0000, 0x9999,
"PNY",
diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
index 998c1e3..9378b44 100644
--- a/drivers/usb/typec/altmodes/displayport.c
+++ b/drivers/usb/typec/altmodes/displayport.c
@@ -101,8 +101,12 @@ static int dp_altmode_configure(struct dp_altmode *dp, u8 con)
if (dp->data.status & DP_STATUS_PREFER_MULTI_FUNC &&
pin_assign & DP_PIN_ASSIGN_MULTI_FUNC_MASK)
pin_assign &= DP_PIN_ASSIGN_MULTI_FUNC_MASK;
- else if (pin_assign & DP_PIN_ASSIGN_DP_ONLY_MASK)
+ else if (pin_assign & DP_PIN_ASSIGN_DP_ONLY_MASK) {
pin_assign &= DP_PIN_ASSIGN_DP_ONLY_MASK;
+ /* Default to pin assign C if available */
+ if (pin_assign & BIT(DP_PIN_ASSIGN_C))
+ pin_assign = BIT(DP_PIN_ASSIGN_C);
+ }
if (!pin_assign)
return -EINVAL;
@@ -418,6 +422,18 @@ static const char * const pin_assignments[] = {
[DP_PIN_ASSIGN_F] = "F",
};
+/*
+ * Helper function to extract a peripheral's currently supported
+ * Pin Assignments from its DisplayPort alternate mode state.
+ */
+static u8 get_current_pin_assignments(struct dp_altmode *dp)
+{
+ if (DP_CONF_CURRENTLY(dp->data.conf) == DP_CONF_DFP_D)
+ return DP_CAP_PIN_ASSIGN_DFP_D(dp->alt->vdo);
+ else
+ return DP_CAP_PIN_ASSIGN_UFP_D(dp->alt->vdo);
+}
+
static ssize_t
pin_assignment_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t size)
@@ -444,10 +460,7 @@ pin_assignment_store(struct device *dev, struct device_attribute *attr,
goto out_unlock;
}
- if (DP_CONF_CURRENTLY(dp->data.conf) == DP_CONF_DFP_D)
- assignments = DP_CAP_UFP_D_PIN_ASSIGN(dp->alt->vdo);
- else
- assignments = DP_CAP_DFP_D_PIN_ASSIGN(dp->alt->vdo);
+ assignments = get_current_pin_assignments(dp);
if (!(DP_CONF_GET_PIN_ASSIGN(conf) & assignments)) {
ret = -EINVAL;
@@ -484,10 +497,7 @@ static ssize_t pin_assignment_show(struct device *dev,
cur = get_count_order(DP_CONF_GET_PIN_ASSIGN(dp->data.conf));
- if (DP_CONF_CURRENTLY(dp->data.conf) == DP_CONF_DFP_D)
- assignments = DP_CAP_UFP_D_PIN_ASSIGN(dp->alt->vdo);
- else
- assignments = DP_CAP_DFP_D_PIN_ASSIGN(dp->alt->vdo);
+ assignments = get_current_pin_assignments(dp);
for (i = 0; assignments; assignments >>= 1, i++) {
if (assignments & 1) {
@@ -527,10 +537,10 @@ int dp_altmode_probe(struct typec_altmode *alt)
/* FIXME: Port can only be DFP_U. */
/* Make sure we have compatible pin configurations */
- if (!(DP_CAP_DFP_D_PIN_ASSIGN(port->vdo) &
- DP_CAP_UFP_D_PIN_ASSIGN(alt->vdo)) &&
- !(DP_CAP_UFP_D_PIN_ASSIGN(port->vdo) &
- DP_CAP_DFP_D_PIN_ASSIGN(alt->vdo)))
+ if (!(DP_CAP_PIN_ASSIGN_DFP_D(port->vdo) &
+ DP_CAP_PIN_ASSIGN_UFP_D(alt->vdo)) &&
+ !(DP_CAP_PIN_ASSIGN_UFP_D(port->vdo) &
+ DP_CAP_PIN_ASSIGN_DFP_D(alt->vdo)))
return -ENODEV;
ret = sysfs_create_group(&alt->dev.kobj, &dp_altmode_group);
diff --git a/drivers/usb/typec/mux/intel_pmc_mux.c b/drivers/usb/typec/mux/intel_pmc_mux.c
index a2f5cfd..a7313c2 100644
--- a/drivers/usb/typec/mux/intel_pmc_mux.c
+++ b/drivers/usb/typec/mux/intel_pmc_mux.c
@@ -563,15 +563,6 @@ static int pmc_usb_register_port(struct pmc_usb *pmc, int index,
return ret;
}
-static int is_memory(struct acpi_resource *res, void *data)
-{
- struct resource_win win = {};
- struct resource *r = &win.res;
-
- return !(acpi_dev_resource_memory(res, r) ||
- acpi_dev_resource_address_space(res, &win));
-}
-
/* IOM ACPI IDs and IOM_PORT_STATUS_OFFSET */
static const struct acpi_device_id iom_acpi_ids[] = {
/* TigerLake */
@@ -605,9 +596,11 @@ static int pmc_usb_probe_iom(struct pmc_usb *pmc)
return -ENODEV;
INIT_LIST_HEAD(&resource_list);
- ret = acpi_dev_get_resources(adev, &resource_list, is_memory, NULL);
- if (ret < 0)
+ ret = acpi_dev_get_memory_resources(adev, &resource_list);
+ if (ret < 0) {
+ acpi_dev_put(adev);
return ret;
+ }
rentry = list_first_entry_or_null(&resource_list, struct resource_entry, node);
if (rentry)
diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
index 984a13a..8132960 100644
--- a/drivers/usb/typec/tcpm/tcpm.c
+++ b/drivers/usb/typec/tcpm/tcpm.c
@@ -1428,10 +1428,18 @@ static int tcpm_ams_start(struct tcpm_port *port, enum tcpm_ams ams)
static void tcpm_queue_vdm(struct tcpm_port *port, const u32 header,
const u32 *data, int cnt)
{
+ u32 vdo_hdr = port->vdo_data[0];
+
WARN_ON(!mutex_is_locked(&port->lock));
- /* Make sure we are not still processing a previous VDM packet */
- WARN_ON(port->vdm_state > VDM_STATE_DONE);
+ /* If a discover_identity is being sent, handle the received message first */
+ if (PD_VDO_SVDM(vdo_hdr) && PD_VDO_CMD(vdo_hdr) == CMD_DISCOVER_IDENT) {
+ port->send_discover = true;
+ mod_send_discover_delayed_work(port, SEND_DISCOVER_RETRY_MS);
+ } else {
+ /* Make sure we are not still processing a previous VDM packet */
+ WARN_ON(port->vdm_state > VDM_STATE_DONE);
+ }
port->vdo_count = cnt + 1;
port->vdo_data[0] = header;
@@ -1934,11 +1942,13 @@ static void vdm_run_state_machine(struct tcpm_port *port)
switch (PD_VDO_CMD(vdo_hdr)) {
case CMD_DISCOVER_IDENT:
res = tcpm_ams_start(port, DISCOVER_IDENTITY);
- if (res == 0)
+ if (res == 0) {
port->send_discover = false;
- else if (res == -EAGAIN)
+ } else if (res == -EAGAIN) {
+ port->vdo_data[0] = 0;
mod_send_discover_delayed_work(port,
SEND_DISCOVER_RETRY_MS);
+ }
break;
case CMD_DISCOVER_SVID:
res = tcpm_ams_start(port, DISCOVER_SVIDS);
@@ -2021,6 +2031,7 @@ static void vdm_run_state_machine(struct tcpm_port *port)
unsigned long timeout;
port->vdm_retries = 0;
+ port->vdo_data[0] = 0;
port->vdm_state = VDM_STATE_BUSY;
timeout = vdm_ready_timeout(vdo_hdr);
mod_vdm_delayed_work(port, timeout);
@@ -4527,14 +4538,13 @@ static void run_state_machine(struct tcpm_port *port)
tcpm_set_state(port, ready_state(port), 0);
break;
case DR_SWAP_CHANGE_DR:
- if (port->data_role == TYPEC_HOST) {
- tcpm_unregister_altmodes(port);
+ tcpm_unregister_altmodes(port);
+ if (port->data_role == TYPEC_HOST)
tcpm_set_roles(port, true, port->pwr_role,
TYPEC_DEVICE);
- } else {
+ else
tcpm_set_roles(port, true, port->pwr_role,
TYPEC_HOST);
- }
tcpm_ams_finish(port);
tcpm_set_state(port, ready_state(port), 0);
break;
diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
index 7363958..dca6803 100644
--- a/drivers/usb/typec/ucsi/ucsi.c
+++ b/drivers/usb/typec/ucsi/ucsi.c
@@ -1202,7 +1202,7 @@ static int ucsi_register_port(struct ucsi *ucsi, int index)
static int ucsi_init(struct ucsi *ucsi)
{
struct ucsi_connector *con;
- u64 command;
+ u64 command, ntfy;
int ret;
int i;
@@ -1214,8 +1214,8 @@ static int ucsi_init(struct ucsi *ucsi)
}
/* Enable basic notifications */
- ucsi->ntfy = UCSI_ENABLE_NTFY_CMD_COMPLETE | UCSI_ENABLE_NTFY_ERROR;
- command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy;
+ ntfy = UCSI_ENABLE_NTFY_CMD_COMPLETE | UCSI_ENABLE_NTFY_ERROR;
+ command = UCSI_SET_NOTIFICATION_ENABLE | ntfy;
ret = ucsi_send_command(ucsi, command, NULL, 0);
if (ret < 0)
goto err_reset;
@@ -1247,12 +1247,13 @@ static int ucsi_init(struct ucsi *ucsi)
}
/* Enable all notifications */
- ucsi->ntfy = UCSI_ENABLE_NTFY_ALL;
- command = UCSI_SET_NOTIFICATION_ENABLE | ucsi->ntfy;
+ ntfy = UCSI_ENABLE_NTFY_ALL;
+ command = UCSI_SET_NOTIFICATION_ENABLE | ntfy;
ret = ucsi_send_command(ucsi, command, NULL, 0);
if (ret < 0)
goto err_unregister;
+ ucsi->ntfy = ntfy;
return 0;
err_unregister:
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 4d9e3fd..3ccefa5 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -65,6 +65,7 @@ static void vdpasim_vq_notify(struct vringh *vring)
static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx)
{
struct vdpasim_virtqueue *vq = &vdpasim->vqs[idx];
+ uint16_t last_avail_idx = vq->vring.last_avail_idx;
vringh_init_iotlb(&vq->vring, vdpasim->features, vq->num, false,
(struct vring_desc *)(uintptr_t)vq->desc_addr,
@@ -73,6 +74,18 @@ static void vdpasim_queue_ready(struct vdpasim *vdpasim, unsigned int idx)
(struct vring_used *)
(uintptr_t)vq->device_addr);
+ vq->vring.last_avail_idx = last_avail_idx;
+
+ /*
+ * Since vdpa_sim does not support receiving inflight descriptors as the
+ * destination of a migration, let's set both avail_idx and used_idx
+ * the same at vq start. This is how vhost-user works in a
+ * VHOST_SET_VRING_BASE call.
+ *
+ * Although the simple fix is to set last_used_idx at
+ * vdpasim_set_vq_state, it would be reset at vdpasim_queue_ready.
+ */
+ vq->vring.last_used_idx = last_avail_idx;
vq->vring.notify = vdpasim_vq_notify;
}
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index e7d2d5b7..3467c75 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -1251,6 +1251,9 @@ static bool vduse_validate_config(struct vduse_dev_config *config)
if (config->config_size > PAGE_SIZE)
return false;
+ if (config->vq_num > 0xffff)
+ return false;
+
if (!device_is_allowed(config->device_id))
return false;
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 852e6c5..5623fc2 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -100,6 +100,8 @@ struct vfio_dma {
struct task_struct *task;
struct rb_root pfn_list; /* Ex-user pinned pfn list */
unsigned long *bitmap;
+ struct mm_struct *mm;
+ size_t locked_vm;
};
struct vfio_batch {
@@ -416,6 +418,19 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
return ret;
}
+static int mm_lock_acct(struct task_struct *task, struct mm_struct *mm,
+ bool lock_cap, long npage)
+{
+ int ret = mmap_write_lock_killable(mm);
+
+ if (ret)
+ return ret;
+
+ ret = __account_locked_vm(mm, abs(npage), npage > 0, task, lock_cap);
+ mmap_write_unlock(mm);
+ return ret;
+}
+
static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
{
struct mm_struct *mm;
@@ -424,16 +439,13 @@ static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
if (!npage)
return 0;
- mm = async ? get_task_mm(dma->task) : dma->task->mm;
- if (!mm)
+ mm = dma->mm;
+ if (async && !mmget_not_zero(mm))
return -ESRCH; /* process exited */
- ret = mmap_write_lock_killable(mm);
- if (!ret) {
- ret = __account_locked_vm(mm, abs(npage), npage > 0, dma->task,
- dma->lock_cap);
- mmap_write_unlock(mm);
- }
+ ret = mm_lock_acct(dma->task, mm, dma->lock_cap, npage);
+ if (!ret)
+ dma->locked_vm += npage;
if (async)
mmput(mm);
@@ -798,8 +810,8 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
struct mm_struct *mm;
int ret;
- mm = get_task_mm(dma->task);
- if (!mm)
+ mm = dma->mm;
+ if (!mmget_not_zero(mm))
return -ENODEV;
ret = vaddr_get_pfns(mm, vaddr, 1, dma->prot, pfn_base, pages);
@@ -809,7 +821,7 @@ static int vfio_pin_page_external(struct vfio_dma *dma, unsigned long vaddr,
ret = 0;
if (do_accounting && !is_invalid_reserved_pfn(*pfn_base)) {
- ret = vfio_lock_acct(dma, 1, true);
+ ret = vfio_lock_acct(dma, 1, false);
if (ret) {
put_pfn(*pfn_base, dma->prot);
if (ret == -ENOMEM)
@@ -1179,6 +1191,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
vfio_unmap_unpin(iommu, dma, true);
vfio_unlink_dma(iommu, dma);
put_task_struct(dma->task);
+ mmdrop(dma->mm);
vfio_dma_bitmap_free(dma);
if (dma->vaddr_invalid) {
iommu->vaddr_invalid_count--;
@@ -1563,6 +1576,38 @@ static bool vfio_iommu_iova_dma_valid(struct vfio_iommu *iommu,
return list_empty(iova);
}
+static int vfio_change_dma_owner(struct vfio_dma *dma)
+{
+ struct task_struct *task = current->group_leader;
+ struct mm_struct *mm = current->mm;
+ long npage = dma->locked_vm;
+ bool lock_cap;
+ int ret;
+
+ if (mm == dma->mm)
+ return 0;
+
+ lock_cap = capable(CAP_IPC_LOCK);
+ ret = mm_lock_acct(task, mm, lock_cap, npage);
+ if (ret)
+ return ret;
+
+ if (mmget_not_zero(dma->mm)) {
+ mm_lock_acct(dma->task, dma->mm, dma->lock_cap, -npage);
+ mmput(dma->mm);
+ }
+
+ if (dma->task != task) {
+ put_task_struct(dma->task);
+ dma->task = get_task_struct(task);
+ }
+ mmdrop(dma->mm);
+ dma->mm = mm;
+ mmgrab(dma->mm);
+ dma->lock_cap = lock_cap;
+ return 0;
+}
+
static int vfio_dma_do_map(struct vfio_iommu *iommu,
struct vfio_iommu_type1_dma_map *map)
{
@@ -1612,6 +1657,9 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
dma->size != size) {
ret = -EINVAL;
} else {
+ ret = vfio_change_dma_owner(dma);
+ if (ret)
+ goto out_unlock;
dma->vaddr = vaddr;
dma->vaddr_invalid = false;
iommu->vaddr_invalid_count--;
@@ -1649,29 +1697,15 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
* against the locked memory limit and we need to be able to do both
* outside of this call path as pinning can be asynchronous via the
* external interfaces for mdev devices. RLIMIT_MEMLOCK requires a
- * task_struct and VM locked pages requires an mm_struct, however
- * holding an indefinite mm reference is not recommended, therefore we
- * only hold a reference to a task. We could hold a reference to
- * current, however QEMU uses this call path through vCPU threads,
- * which can be killed resulting in a NULL mm and failure in the unmap
- * path when called via a different thread. Avoid this problem by
- * using the group_leader as threads within the same group require
- * both CLONE_THREAD and CLONE_VM and will therefore use the same
- * mm_struct.
- *
- * Previously we also used the task for testing CAP_IPC_LOCK at the
- * time of pinning and accounting, however has_capability() makes use
- * of real_cred, a copy-on-write field, so we can't guarantee that it
- * matches group_leader, or in fact that it might not change by the
- * time it's evaluated. If a process were to call MAP_DMA with
- * CAP_IPC_LOCK but later drop it, it doesn't make sense that they
- * possibly see different results for an iommu_mapped vfio_dma vs
- * externally mapped. Therefore track CAP_IPC_LOCK in vfio_dma at the
- * time of calling MAP_DMA.
+ * task_struct. Save the group_leader so that all DMA tracking uses
+ * the same task, to make debugging easier. VM locked pages require
+ * an mm_struct, so grab the mm in case the task dies.
*/
get_task_struct(current->group_leader);
dma->task = current->group_leader;
dma->lock_cap = capable(CAP_IPC_LOCK);
+ dma->mm = current->mm;
+ mmgrab(dma->mm);
dma->pfn_list = RB_ROOT;
@@ -3168,9 +3202,8 @@ static int vfio_iommu_type1_dma_rw_chunk(struct vfio_iommu *iommu,
!(dma->prot & IOMMU_READ))
return -EPERM;
- mm = get_task_mm(dma->task);
-
- if (!mm)
+ mm = dma->mm;
+ if (!mmget_not_zero(mm))
return -EPERM;
if (kthread)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 297b5db..32148f0 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1517,6 +1517,9 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
nvq = &n->vqs[index];
mutex_lock(&vq->mutex);
+ if (fd == -1)
+ vhost_clear_msg(&n->dev);
+
/* Verify that ring has been setup correctly. */
if (!vhost_vq_access_ok(vq)) {
r = -EFAULT;
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 0a9746b..c0f926a 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -669,7 +669,7 @@ void vhost_dev_stop(struct vhost_dev *dev)
}
EXPORT_SYMBOL_GPL(vhost_dev_stop);
-static void vhost_clear_msg(struct vhost_dev *dev)
+void vhost_clear_msg(struct vhost_dev *dev)
{
struct vhost_msg_node *node, *n;
@@ -687,6 +687,7 @@ static void vhost_clear_msg(struct vhost_dev *dev)
spin_unlock(&dev->iotlb_lock);
}
+EXPORT_SYMBOL_GPL(vhost_clear_msg);
void vhost_dev_cleanup(struct vhost_dev *dev)
{
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 638bb64..f2675c0 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -182,6 +182,7 @@ long vhost_dev_ioctl(struct vhost_dev *, unsigned int ioctl, void __user *argp);
long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp);
bool vhost_vq_access_ok(struct vhost_virtqueue *vq);
bool vhost_log_access_ok(struct vhost_dev *);
+void vhost_clear_msg(struct vhost_dev *dev);
int vhost_get_vq_desc(struct vhost_virtqueue *,
struct iovec iov[], unsigned int iov_count,
diff --git a/drivers/video/fbdev/au1200fb.c b/drivers/video/fbdev/au1200fb.c
index c00e01a..a8a0a44 100644
--- a/drivers/video/fbdev/au1200fb.c
+++ b/drivers/video/fbdev/au1200fb.c
@@ -1040,6 +1040,9 @@ static int au1200fb_fb_check_var(struct fb_var_screeninfo *var,
u32 pixclock;
int screen_size, plane;
+ if (!var->pixclock)
+ return -EINVAL;
+
plane = fbdev->plane;
/* Make sure that the mode respect all LCD controller and
diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
index df40360..b671265 100644
--- a/drivers/video/fbdev/core/fbcon.c
+++ b/drivers/video/fbdev/core/fbcon.c
@@ -989,7 +989,7 @@ static const char *fbcon_startup(void)
set_blitting_type(vc, info);
/* Setup default font */
- if (!p->fontdata && !vc->vc_font.data) {
+ if (!p->fontdata) {
if (!fontname[0] || !(font = find_font(fontname)))
font = get_default_font(info->var.xres,
info->var.yres,
@@ -999,8 +999,6 @@ static const char *fbcon_startup(void)
vc->vc_font.height = font->height;
vc->vc_font.data = (void *)(p->fontdata = font->data);
vc->vc_font.charcount = font->charcount;
- } else {
- p->fontdata = vc->vc_font.data;
}
cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
@@ -1167,9 +1165,9 @@ static void fbcon_init(struct vc_data *vc, int init)
ops->p = &fb_display[fg_console];
}
-static void fbcon_free_font(struct fbcon_display *p, bool freefont)
+static void fbcon_free_font(struct fbcon_display *p)
{
- if (freefont && p->userfont && p->fontdata && (--REFCOUNT(p->fontdata) == 0))
+ if (p->userfont && p->fontdata && (--REFCOUNT(p->fontdata) == 0))
kfree(p->fontdata - FONT_EXTRA_WORDS * sizeof(int));
p->fontdata = NULL;
p->userfont = 0;
@@ -1183,8 +1181,8 @@ static void fbcon_deinit(struct vc_data *vc)
struct fb_info *info;
struct fbcon_ops *ops;
int idx;
- bool free_font = true;
+ fbcon_free_font(p);
idx = con2fb_map[vc->vc_num];
if (idx == -1)
@@ -1195,8 +1193,6 @@ static void fbcon_deinit(struct vc_data *vc)
if (!info)
goto finished;
- if (info->flags & FBINFO_MISC_FIRMWARE)
- free_font = false;
ops = info->fbcon_par;
if (!ops)
@@ -1208,9 +1204,8 @@ static void fbcon_deinit(struct vc_data *vc)
ops->flags &= ~FBCON_FLAGS_INIT;
finished:
- fbcon_free_font(p, free_font);
- if (free_font)
- vc->vc_font.data = NULL;
+ fbcon_free_font(p);
+ vc->vc_font.data = NULL;
if (vc->vc_hi_font_mask && vc->vc_screenbuf)
set_vc_hi_font(vc, false);
@@ -2507,9 +2502,12 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font,
h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres))
return -EINVAL;
+ if (font->width > 32 || font->height > 32)
+ return -EINVAL;
+
/* Make sure drawing engine can handle the font */
- if (!(info->pixmap.blit_x & (1 << (font->width - 1))) ||
- !(info->pixmap.blit_y & (1 << (font->height - 1))))
+ if (!(info->pixmap.blit_x & BIT(font->width - 1)) ||
+ !(info->pixmap.blit_y & BIT(font->height - 1)))
return -EINVAL;
/* Make sure driver can handle the font length */
diff --git a/drivers/video/fbdev/geode/lxfb_core.c b/drivers/video/fbdev/geode/lxfb_core.c
index 66c8126..6c6b6ef 100644
--- a/drivers/video/fbdev/geode/lxfb_core.c
+++ b/drivers/video/fbdev/geode/lxfb_core.c
@@ -234,6 +234,9 @@ static void get_modedb(struct fb_videomode **modedb, unsigned int *size)
static int lxfb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
{
+ if (!var->pixclock)
+ return -EINVAL;
+
if (var->xres > 1920 || var->yres > 1440)
return -EINVAL;
diff --git a/drivers/video/fbdev/intelfb/intelfbdrv.c b/drivers/video/fbdev/intelfb/intelfbdrv.c
index a957996..8a703ad 100644
--- a/drivers/video/fbdev/intelfb/intelfbdrv.c
+++ b/drivers/video/fbdev/intelfb/intelfbdrv.c
@@ -1214,6 +1214,9 @@ static int intelfb_check_var(struct fb_var_screeninfo *var,
dinfo = GET_DINFO(info);
+ if (!var->pixclock)
+ return -EINVAL;
+
/* update the pitch */
if (intelfbhw_validate_mode(dinfo, var) != 0)
return -EINVAL;
diff --git a/drivers/video/fbdev/nvidia/nvidia.c b/drivers/video/fbdev/nvidia/nvidia.c
index a372a18..f9c388a 100644
--- a/drivers/video/fbdev/nvidia/nvidia.c
+++ b/drivers/video/fbdev/nvidia/nvidia.c
@@ -763,6 +763,8 @@ static int nvidiafb_check_var(struct fb_var_screeninfo *var,
int pitch, err = 0;
NVTRACE_ENTER();
+ if (!var->pixclock)
+ return -EINVAL;
var->transp.offset = 0;
var->transp.length = 0;
diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dsi.c b/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
index d43b081d..db84a66 100644
--- a/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
+++ b/drivers/video/fbdev/omap2/omapfb/dss/dsi.c
@@ -1538,22 +1538,28 @@ static void dsi_dump_dsidev_irqs(struct platform_device *dsidev,
{
struct dsi_data *dsi = dsi_get_dsidrv_data(dsidev);
unsigned long flags;
- struct dsi_irq_stats stats;
+ struct dsi_irq_stats *stats;
+
+ stats = kzalloc(sizeof(*stats), GFP_KERNEL);
+ if (!stats) {
+ seq_printf(s, "out of memory\n");
+ return;
+ }
spin_lock_irqsave(&dsi->irq_stats_lock, flags);
- stats = dsi->irq_stats;
+ *stats = dsi->irq_stats;
memset(&dsi->irq_stats, 0, sizeof(dsi->irq_stats));
dsi->irq_stats.last_reset = jiffies;
spin_unlock_irqrestore(&dsi->irq_stats_lock, flags);
seq_printf(s, "period %u ms\n",
- jiffies_to_msecs(jiffies - stats.last_reset));
+ jiffies_to_msecs(jiffies - stats->last_reset));
- seq_printf(s, "irqs %d\n", stats.irq_count);
+ seq_printf(s, "irqs %d\n", stats->irq_count);
#define PIS(x) \
- seq_printf(s, "%-20s %10d\n", #x, stats.dsi_irqs[ffs(DSI_IRQ_##x)-1])
+ seq_printf(s, "%-20s %10d\n", #x, stats->dsi_irqs[ffs(DSI_IRQ_##x)-1])
seq_printf(s, "-- DSI%d interrupts --\n", dsi->module_id + 1);
PIS(VC0);
@@ -1577,10 +1583,10 @@ static void dsi_dump_dsidev_irqs(struct platform_device *dsidev,
#define PIS(x) \
seq_printf(s, "%-20s %10d %10d %10d %10d\n", #x, \
- stats.vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
- stats.vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
- stats.vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
- stats.vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
+ stats->vc_irqs[0][ffs(DSI_VC_IRQ_##x)-1], \
+ stats->vc_irqs[1][ffs(DSI_VC_IRQ_##x)-1], \
+ stats->vc_irqs[2][ffs(DSI_VC_IRQ_##x)-1], \
+ stats->vc_irqs[3][ffs(DSI_VC_IRQ_##x)-1]);
seq_printf(s, "-- VC interrupts --\n");
PIS(CS);
@@ -1596,7 +1602,7 @@ static void dsi_dump_dsidev_irqs(struct platform_device *dsidev,
#define PIS(x) \
seq_printf(s, "%-20s %10d\n", #x, \
- stats.cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
+ stats->cio_irqs[ffs(DSI_CIO_IRQ_##x)-1]);
seq_printf(s, "-- CIO interrupts --\n");
PIS(ERRSYNCESC1);
@@ -1620,6 +1626,8 @@ static void dsi_dump_dsidev_irqs(struct platform_device *dsidev,
PIS(ULPSACTIVENOT_ALL0);
PIS(ULPSACTIVENOT_ALL1);
#undef PIS
+
+ kfree(stats);
}
static void dsi1_dump_irqs(struct seq_file *s)
diff --git a/drivers/video/fbdev/smscufx.c b/drivers/video/fbdev/smscufx.c
index 5fa3f1e..b3295cd 100644
--- a/drivers/video/fbdev/smscufx.c
+++ b/drivers/video/fbdev/smscufx.c
@@ -1621,7 +1621,7 @@ static int ufx_usb_probe(struct usb_interface *interface,
struct usb_device *usbdev;
struct ufx_data *dev;
struct fb_info *info;
- int retval;
+ int retval = -ENOMEM;
u32 id_rev, fpga_rev;
/* usb initialization */
@@ -1653,15 +1653,17 @@ static int ufx_usb_probe(struct usb_interface *interface,
if (!ufx_alloc_urb_list(dev, WRITES_IN_FLIGHT, MAX_TRANSFER)) {
dev_err(dev->gdev, "ufx_alloc_urb_list failed\n");
- goto e_nomem;
+ goto put_ref;
}
/* We don't register a new USB class. Our client interface is fbdev */
/* allocates framebuffer driver structure, not framebuffer memory */
info = framebuffer_alloc(0, &usbdev->dev);
- if (!info)
- goto e_nomem;
+ if (!info) {
+ dev_err(dev->gdev, "framebuffer_alloc failed\n");
+ goto free_urb_list;
+ }
dev->info = info;
info->par = dev;
@@ -1704,22 +1706,34 @@ static int ufx_usb_probe(struct usb_interface *interface,
check_warn_goto_error(retval, "unable to find common mode for display and adapter");
retval = ufx_reg_set_bits(dev, 0x4000, 0x00000001);
- check_warn_goto_error(retval, "error %d enabling graphics engine", retval);
+ if (retval < 0) {
+ dev_err(dev->gdev, "error %d enabling graphics engine", retval);
+ goto setup_modes;
+ }
/* ready to begin using device */
atomic_set(&dev->usb_active, 1);
dev_dbg(dev->gdev, "checking var");
retval = ufx_ops_check_var(&info->var, info);
- check_warn_goto_error(retval, "error %d ufx_ops_check_var", retval);
+ if (retval < 0) {
+ dev_err(dev->gdev, "error %d ufx_ops_check_var", retval);
+ goto reset_active;
+ }
dev_dbg(dev->gdev, "setting par");
retval = ufx_ops_set_par(info);
- check_warn_goto_error(retval, "error %d ufx_ops_set_par", retval);
+ if (retval < 0) {
+ dev_err(dev->gdev, "error %d ufx_ops_set_par", retval);
+ goto reset_active;
+ }
dev_dbg(dev->gdev, "registering framebuffer");
retval = register_framebuffer(info);
- check_warn_goto_error(retval, "error %d register_framebuffer", retval);
+ if (retval < 0) {
+ dev_err(dev->gdev, "error %d register_framebuffer", retval);
+ goto reset_active;
+ }
dev_info(dev->gdev, "SMSC UDX USB device /dev/fb%d attached. %dx%d resolution."
" Using %dK framebuffer memory\n", info->node,
@@ -1727,21 +1741,23 @@ static int ufx_usb_probe(struct usb_interface *interface,
return 0;
-error:
- fb_dealloc_cmap(&info->cmap);
-destroy_modedb:
+reset_active:
+ atomic_set(&dev->usb_active, 0);
+setup_modes:
fb_destroy_modedb(info->monspecs.modedb);
vfree(info->screen_base);
fb_destroy_modelist(&info->modelist);
+error:
+ fb_dealloc_cmap(&info->cmap);
+destroy_modedb:
framebuffer_release(info);
+free_urb_list:
+ if (dev->urbs.count > 0)
+ ufx_free_urb_list(dev);
put_ref:
kref_put(&dev->kref, ufx_free); /* ref for framebuffer */
kref_put(&dev->kref, ufx_free); /* last ref from kref_init */
return retval;
-
-e_nomem:
- retval = -ENOMEM;
- goto put_ref;
}
static void ufx_usb_disconnect(struct usb_interface *interface)
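The smscufx probe rework above replaces the catch-all e_nomem label with a chain of labels (reset_active, setup_modes, free_urb_list, put_ref) that unwind only what has been acquired so far, in reverse order of acquisition. Below is a minimal, self-contained C sketch of that layered-goto style; the resource names and the simulated failure are purely illustrative, and a real probe keeps its resources on success rather than freeing them.

/* Sketch of layered error unwinding with goto labels, one per acquired
 * resource, each falling through to release everything acquired before it.
 */
#include <stdio.h>
#include <stdlib.h>

static int probe(void)
{
	int ret = -1;
	char *urbs, *info;

	urbs = malloc(64);
	if (!urbs)
		goto out;

	info = malloc(64);
	if (!info)
		goto free_urbs;

	if (0)	/* pretend a later setup step failed */
		goto free_info;

	printf("probe ok\n");
	free(info);	/* a real probe would keep these; freed here only */
	free(urbs);	/* to keep the example leak-free */
	return 0;

free_info:
	free(info);
free_urbs:
	free(urbs);
out:
	return ret;
}

int main(void)
{
	return probe() ? EXIT_FAILURE : EXIT_SUCCESS;
}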
diff --git a/drivers/video/fbdev/stifb.c b/drivers/video/fbdev/stifb.c
index 3feb6e4..ef8a4c5 100644
--- a/drivers/video/fbdev/stifb.c
+++ b/drivers/video/fbdev/stifb.c
@@ -922,6 +922,28 @@ SETUP_HCRX(struct stifb_info *fb)
/* ------------------- driver specific functions --------------------------- */
static int
+stifb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+{
+ struct stifb_info *fb = container_of(info, struct stifb_info, info);
+
+ if (var->xres != fb->info.var.xres ||
+ var->yres != fb->info.var.yres ||
+ var->bits_per_pixel != fb->info.var.bits_per_pixel)
+ return -EINVAL;
+
+ var->xres_virtual = var->xres;
+ var->yres_virtual = var->yres;
+ var->xoffset = 0;
+ var->yoffset = 0;
+ var->grayscale = fb->info.var.grayscale;
+ var->red.length = fb->info.var.red.length;
+ var->green.length = fb->info.var.green.length;
+ var->blue.length = fb->info.var.blue.length;
+
+ return 0;
+}
+
+static int
stifb_setcolreg(u_int regno, u_int red, u_int green,
u_int blue, u_int transp, struct fb_info *info)
{
@@ -1145,6 +1167,7 @@ stifb_init_display(struct stifb_info *fb)
static const struct fb_ops stifb_ops = {
.owner = THIS_MODULE,
+ .fb_check_var = stifb_check_var,
.fb_setcolreg = stifb_setcolreg,
.fb_blank = stifb_blank,
.fb_fillrect = stifb_fillrect,
@@ -1164,6 +1187,7 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
struct stifb_info *fb;
struct fb_info *info;
unsigned long sti_rom_address;
+ char modestr[32];
char *dev_name;
int bpp, xres, yres;
@@ -1342,6 +1366,9 @@ static int __init stifb_init_fb(struct sti_struct *sti, int bpp_pref)
info->flags = FBINFO_HWACCEL_COPYAREA | FBINFO_HWACCEL_FILLRECT;
info->pseudo_palette = &fb->pseudo_palette;
+ scnprintf(modestr, sizeof(modestr), "%dx%d-%d", xres, yres, bpp);
+ fb_find_mode(&info->var, info, modestr, NULL, 0, NULL, bpp);
+
/* This has to be done !!! */
if (fb_alloc_cmap(&info->cmap, NR_PALETTE, 0))
goto out_err1;
diff --git a/drivers/video/fbdev/tgafb.c b/drivers/video/fbdev/tgafb.c
index ae0cf55..b9b00a1 100644
--- a/drivers/video/fbdev/tgafb.c
+++ b/drivers/video/fbdev/tgafb.c
@@ -166,6 +166,9 @@ tgafb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
{
struct tga_par *par = (struct tga_par *)info->par;
+ if (!var->pixclock)
+ return -EINVAL;
+
if (par->tga_type == TGA_TYPE_8PLANE) {
if (var->bits_per_pixel != 8)
return -EINVAL;
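The lxfb, intelfb, nvidia and tgafb hunks above all add the same guard: check_var rejects a zero pixclock before any mode arithmetic runs, since a zero value supplied through FBIOPUT_VSCREENINFO would otherwise end up as a divisor. The sketch below is a compilable stand-in for that check, using a trimmed-down structure rather than the real fb_var_screeninfo.

/* Sketch of the divide-by-zero guard added to the check_var hooks above. */
#include <errno.h>
#include <stdio.h>

struct var_screeninfo {
	unsigned int xres, yres;
	unsigned int pixclock;	/* pixel clock period in picoseconds */
};

static int check_var(const struct var_screeninfo *var)
{
	if (!var->pixclock)
		return -EINVAL;
	if (var->xres > 1920 || var->yres > 1440)
		return -EINVAL;
	return 0;
}

int main(void)
{
	struct var_screeninfo bad = { .xres = 640, .yres = 480, .pixclock = 0 };
	struct var_screeninfo good = { .xres = 640, .yres = 480, .pixclock = 39722 };

	/* The refresh rate would otherwise be derived by dividing by pixclock. */
	printf("bad: %d, good: %d\n", check_var(&bad), check_var(&good));
	return 0;
}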
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index 30654d3..a274261 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -196,7 +196,7 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
int err;
if (index >= vp_modern_get_num_queues(mdev))
- return ERR_PTR(-ENOENT);
+ return ERR_PTR(-EINVAL);
/* Check if queue is either not available or already active. */
num = vp_modern_get_queue_size(mdev, index);
diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
index f2ae2e5..4a2ddf7 100644
--- a/drivers/w1/w1.c
+++ b/drivers/w1/w1.c
@@ -1166,6 +1166,8 @@ int w1_process(void *data)
/* remainder if it woke up early */
unsigned long jremain = 0;
+ atomic_inc(&dev->refcnt);
+
for (;;) {
if (!jremain && dev->search_count) {
@@ -1193,8 +1195,10 @@ int w1_process(void *data)
*/
mutex_unlock(&dev->list_mutex);
- if (kthread_should_stop())
+ if (kthread_should_stop()) {
+ __set_current_state(TASK_RUNNING);
break;
+ }
/* Only sleep when the search is active. */
if (dev->search_count) {
diff --git a/drivers/w1/w1_int.c b/drivers/w1/w1_int.c
index b3e1792..3a71c5e 100644
--- a/drivers/w1/w1_int.c
+++ b/drivers/w1/w1_int.c
@@ -51,10 +51,9 @@ static struct w1_master *w1_alloc_dev(u32 id, int slave_count, int slave_ttl,
dev->search_count = w1_search_count;
dev->enable_pullup = w1_enable_pullup;
- /* 1 for w1_process to decrement
- * 1 for __w1_remove_master_device to decrement
+ /* For __w1_remove_master_device to decrement
*/
- atomic_set(&dev->refcnt, 2);
+ atomic_set(&dev->refcnt, 1);
INIT_LIST_HEAD(&dev->slist);
INIT_LIST_HEAD(&dev->async_list);
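The two w1 hunks above move one of the master's references out of w1_alloc_dev() and into w1_process() itself, so the worker thread accounts for its own reference instead of relying on the allocator to pre-count it. The userspace sketch below shows the shape of that ownership rule with stdatomic and pthreads; the master/worker names and the bare refcount are illustrative, not the w1 data structures.

/* Sketch: each owner of the object takes and drops its own reference;
 * the worker thread's first action is taking the reference it will
 * release when it exits.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct master {
	atomic_int refcnt;
};

static void master_put(struct master *dev)
{
	if (atomic_fetch_sub(&dev->refcnt, 1) == 1) {
		printf("freeing master\n");
		free(dev);
	}
}

static void *worker(void *arg)
{
	struct master *dev = arg;

	atomic_fetch_add(&dev->refcnt, 1);	/* ref owned by this thread */
	/* ... search loop would run here ... */
	master_put(dev);			/* dropped when the thread exits */
	return NULL;
}

int main(void)
{
	struct master *dev = calloc(1, sizeof(*dev));
	pthread_t t;

	if (!dev)
		return 1;
	atomic_store(&dev->refcnt, 1);	/* ref for the remover, not the thread */

	pthread_create(&t, NULL, worker, dev);
	pthread_join(t, NULL);

	master_put(dev);		/* remover's reference */
	return 0;
}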
diff --git a/drivers/watchdog/at91sam9_wdt.c b/drivers/watchdog/at91sam9_wdt.c
index 292b5a1..fed7be2 100644
--- a/drivers/watchdog/at91sam9_wdt.c
+++ b/drivers/watchdog/at91sam9_wdt.c
@@ -206,10 +206,9 @@ static int at91_wdt_init(struct platform_device *pdev, struct at91wdt *wdt)
"min heartbeat and max heartbeat might be too close for the system to handle it correctly\n");
if ((tmp & AT91_WDT_WDFIEN) && wdt->irq) {
- err = request_irq(wdt->irq, wdt_interrupt,
- IRQF_SHARED | IRQF_IRQPOLL |
- IRQF_NO_SUSPEND,
- pdev->name, wdt);
+ err = devm_request_irq(dev, wdt->irq, wdt_interrupt,
+ IRQF_SHARED | IRQF_IRQPOLL | IRQF_NO_SUSPEND,
+ pdev->name, wdt);
if (err)
return err;
}
diff --git a/drivers/watchdog/diag288_wdt.c b/drivers/watchdog/diag288_wdt.c
index 4cb1087..6ca5d95 100644
--- a/drivers/watchdog/diag288_wdt.c
+++ b/drivers/watchdog/diag288_wdt.c
@@ -86,7 +86,7 @@ static int __diag288(unsigned int func, unsigned int timeout,
"1:\n"
EX_TABLE(0b, 1b)
: "+d" (err) : "d"(__func), "d"(__timeout),
- "d"(__action), "d"(__len) : "1", "cc");
+ "d"(__action), "d"(__len) : "1", "cc", "memory");
return err;
}
@@ -268,12 +268,21 @@ static int __init diag288_init(void)
char ebc_begin[] = {
194, 197, 199, 201, 213
};
+ char *ebc_cmd;
watchdog_set_nowayout(&wdt_dev, nowayout_info);
if (MACHINE_IS_VM) {
- if (__diag288_vm(WDT_FUNC_INIT, 15,
- ebc_begin, sizeof(ebc_begin)) != 0) {
+ ebc_cmd = kmalloc(sizeof(ebc_begin), GFP_KERNEL);
+ if (!ebc_cmd) {
+ pr_err("The watchdog cannot be initialized\n");
+ return -ENOMEM;
+ }
+ memcpy(ebc_cmd, ebc_begin, sizeof(ebc_begin));
+ ret = __diag288_vm(WDT_FUNC_INIT, 15,
+ ebc_cmd, sizeof(ebc_begin));
+ kfree(ebc_cmd);
+ if (ret != 0) {
pr_err("The watchdog cannot be initialized\n");
return -EINVAL;
}
diff --git a/drivers/watchdog/pcwd_usb.c b/drivers/watchdog/pcwd_usb.c
index 1bdaf17..8202f0a 100644
--- a/drivers/watchdog/pcwd_usb.c
+++ b/drivers/watchdog/pcwd_usb.c
@@ -325,7 +325,8 @@ static int usb_pcwd_set_heartbeat(struct usb_pcwd_private *usb_pcwd, int t)
static int usb_pcwd_get_temperature(struct usb_pcwd_private *usb_pcwd,
int *temperature)
{
- unsigned char msb, lsb;
+ unsigned char msb = 0x00;
+ unsigned char lsb = 0x00;
usb_pcwd_send_command(usb_pcwd, CMD_READ_TEMP, &msb, &lsb);
@@ -341,7 +342,8 @@ static int usb_pcwd_get_temperature(struct usb_pcwd_private *usb_pcwd,
static int usb_pcwd_get_timeleft(struct usb_pcwd_private *usb_pcwd,
int *time_left)
{
- unsigned char msb, lsb;
+ unsigned char msb = 0x00;
+ unsigned char lsb = 0x00;
/* Read the time that's left before rebooting */
/* Note: if the board is not yet armed then we will read 0xFFFF */
diff --git a/drivers/watchdog/sbsa_gwdt.c b/drivers/watchdog/sbsa_gwdt.c
index 9791c74..6386280 100644
--- a/drivers/watchdog/sbsa_gwdt.c
+++ b/drivers/watchdog/sbsa_gwdt.c
@@ -150,6 +150,7 @@ static int sbsa_gwdt_set_timeout(struct watchdog_device *wdd,
struct sbsa_gwdt *gwdt = watchdog_get_drvdata(wdd);
wdd->timeout = timeout;
+ timeout = clamp_t(unsigned int, timeout, 1, wdd->max_hw_heartbeat_ms / 1000);
if (action)
sbsa_gwdt_reg_write(gwdt->clk * timeout, gwdt);
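The sbsa_gwdt change above keeps the user-requested timeout in wdd->timeout but clamps the value actually programmed into the register to what the hardware heartbeat can express. A tiny standalone illustration of that split follows; the 255-second hardware limit is made up for the example.

/* Sketch: report the requested timeout back to userspace unchanged,
 * program the hardware with a value clamped to its supported range.
 */
#include <stdio.h>

#define MAX_HW_HEARTBEAT_S 255U

static unsigned int clamp_uint(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
	unsigned int requested = 600;	/* seconds, from userspace */
	unsigned int stored = requested;	/* what WDIOC_GETTIMEOUT would report */
	unsigned int programmed = clamp_uint(requested, 1, MAX_HW_HEARTBEAT_S);

	printf("stored %u s, programmed %u s\n", stored, programmed);
	return 0;
}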
diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
index 3a3d8b5..5eec84f 100644
--- a/drivers/watchdog/watchdog_dev.c
+++ b/drivers/watchdog/watchdog_dev.c
@@ -1044,8 +1044,8 @@ static int watchdog_cdev_register(struct watchdog_device *wdd)
if (wdd->id == 0) {
misc_deregister(&watchdog_miscdev);
old_wd_data = NULL;
- put_device(&wd_data->dev);
}
+ put_device(&wd_data->dev);
return err;
}
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 46d9295..5e8321f 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -528,9 +528,10 @@ static void bind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
BUG_ON(irq == -1);
if (IS_ENABLED(CONFIG_SMP) && force_affinity) {
- cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(cpu));
- cpumask_copy(irq_get_effective_affinity_mask(irq),
- cpumask_of(cpu));
+ struct irq_data *data = irq_get_irq_data(irq);
+
+ irq_data_update_affinity(data, cpumask_of(cpu));
+ irq_data_update_effective_affinity(data, cpumask_of(cpu));
}
xen_evtchn_port_bind_to_cpu(evtchn, cpu, info->cpu);
diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index b47fd84..e18df9a 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -129,13 +129,13 @@ static bool pvcalls_conn_back_read(void *opaque)
if (masked_prod < masked_cons) {
vec[0].iov_base = data->in + masked_prod;
vec[0].iov_len = wanted;
- iov_iter_kvec(&msg.msg_iter, WRITE, vec, 1, wanted);
+ iov_iter_kvec(&msg.msg_iter, READ, vec, 1, wanted);
} else {
vec[0].iov_base = data->in + masked_prod;
vec[0].iov_len = array_size - masked_prod;
vec[1].iov_base = data->in;
vec[1].iov_len = wanted - vec[0].iov_len;
- iov_iter_kvec(&msg.msg_iter, WRITE, vec, 2, wanted);
+ iov_iter_kvec(&msg.msg_iter, READ, vec, 2, wanted);
}
atomic_set(&map->read, 0);
@@ -188,13 +188,13 @@ static bool pvcalls_conn_back_write(struct sock_mapping *map)
if (pvcalls_mask(prod, array_size) > pvcalls_mask(cons, array_size)) {
vec[0].iov_base = data->out + pvcalls_mask(cons, array_size);
vec[0].iov_len = size;
- iov_iter_kvec(&msg.msg_iter, READ, vec, 1, size);
+ iov_iter_kvec(&msg.msg_iter, WRITE, vec, 1, size);
} else {
vec[0].iov_base = data->out + pvcalls_mask(cons, array_size);
vec[0].iov_len = array_size - pvcalls_mask(cons, array_size);
vec[1].iov_base = data->out;
vec[1].iov_len = size - vec[0].iov_len;
- iov_iter_kvec(&msg.msg_iter, READ, vec, 2, size);
+ iov_iter_kvec(&msg.msg_iter, WRITE, vec, 2, size);
}
atomic_set(&map->write, 0);
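The pvcalls hunks above correct the iov_iter direction: the receive path describes destination buffers (READ) and the send path describes source buffers (WRITE), and a wrapped ring needs two iovec segments either way. The userspace sketch below uses plain readv() on a pipe to show the two-segment destination case; the ring size and indices are invented for the example.

/* Sketch: scatter incoming data into a ring buffer that wraps, using two
 * iovec segments, one to the end of the ring and one from its start.
 */
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>

#define RING_SZ 8

int main(void)
{
	char ring[RING_SZ];
	int fds[2];
	struct iovec vec[2];
	size_t prod = 6;	/* pretend the producer index is near the end */
	size_t wanted = 5;

	if (pipe(fds))
		return 1;
	write(fds[1], "HELLO", 5);

	/* First segment: from prod to the end of the ring; second: from 0. */
	vec[0].iov_base = ring + prod;
	vec[0].iov_len = RING_SZ - prod;
	vec[1].iov_base = ring;
	vec[1].iov_len = wanted - vec[0].iov_len;

	ssize_t n = readv(fds[0], vec, 2);
	printf("read %zd bytes across two segments\n", n);
	return 0;
}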
diff --git a/fs/affs/file.c b/fs/affs/file.c
index 75ebd2b..25d480e 100644
--- a/fs/affs/file.c
+++ b/fs/affs/file.c
@@ -881,7 +881,7 @@ affs_truncate(struct inode *inode)
if (inode->i_size > AFFS_I(inode)->mmu_private) {
struct address_space *mapping = inode->i_mapping;
struct page *page;
- void *fsdata;
+ void *fsdata = NULL;
loff_t isize = inode->i_size;
int res;
diff --git a/fs/aio.c b/fs/aio.c
index 1a78979..e88fd9b 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -334,6 +334,9 @@ static int aio_ring_mremap(struct vm_area_struct *vma)
spin_lock(&mm->ioctx_lock);
rcu_read_lock();
table = rcu_dereference(mm->ioctx_table);
+ if (!table)
+ goto out_unlock;
+
for (i = 0; i < table->nr; i++) {
struct kioctx *ctx;
@@ -347,6 +350,7 @@ static int aio_ring_mremap(struct vm_area_struct *vma)
}
}
+out_unlock:
rcu_read_unlock();
spin_unlock(&mm->ioctx_lock);
return res;
diff --git a/fs/attr.c b/fs/attr.c
index f581c4d..0ca14cb 100644
--- a/fs/attr.c
+++ b/fs/attr.c
@@ -18,6 +18,70 @@
#include <linux/evm.h>
#include <linux/ima.h>
+#include "internal.h"
+
+/**
+ * setattr_should_drop_sgid - determine whether the setgid bit needs to be
+ * removed
+ * @mnt_userns: user namespace of the mount @inode was found from
+ * @inode: inode to check
+ *
+ * This function determines whether the setgid bit needs to be removed.
+ * We retain backwards compatibility and require setgid bit to be removed
+ * unconditionally if S_IXGRP is set. Otherwise we have the exact same
+ * requirements as setattr_prepare() and setattr_copy().
+ *
+ * Return: ATTR_KILL_SGID if setgid bit needs to be removed, 0 otherwise.
+ */
+int setattr_should_drop_sgid(struct user_namespace *mnt_userns,
+ const struct inode *inode)
+{
+ umode_t mode = inode->i_mode;
+
+ if (!(mode & S_ISGID))
+ return 0;
+ if (mode & S_IXGRP)
+ return ATTR_KILL_SGID;
+ if (!in_group_or_capable(mnt_userns, inode,
+ i_gid_into_mnt(mnt_userns, inode)))
+ return ATTR_KILL_SGID;
+ return 0;
+}
+
+/**
+ * setattr_should_drop_suidgid - determine whether the set{g,u}id bit needs to
+ * be dropped
+ * @mnt_userns: user namespace of the mount @inode was found from
+ * @inode: inode to check
+ *
+ * This function determines whether the set{g,u}id bits need to be removed.
+ * If the setuid bit needs to be removed ATTR_KILL_SUID is returned. If the
+ * setgid bit needs to be removed ATTR_KILL_SGID is returned. If both
+ * set{g,u}id bits need to be removed the corresponding mask of both flags is
+ * returned.
+ *
+ * Return: A mask of ATTR_KILL_S{G,U}ID indicating which - if any - setid bits
+ * to remove, 0 otherwise.
+ */
+int setattr_should_drop_suidgid(struct user_namespace *mnt_userns,
+ struct inode *inode)
+{
+ umode_t mode = inode->i_mode;
+ int kill = 0;
+
+ /* suid always must be killed */
+ if (unlikely(mode & S_ISUID))
+ kill = ATTR_KILL_SUID;
+
+ kill |= setattr_should_drop_sgid(mnt_userns, inode);
+
+ if (unlikely(kill && !capable(CAP_FSETID) && S_ISREG(mode)))
+ return kill;
+
+ return 0;
+}
+EXPORT_SYMBOL(setattr_should_drop_suidgid);
+
/**
* chown_ok - verify permissions to chown inode
* @mnt_userns: user namespace of the mount @inode was found from
@@ -141,8 +205,7 @@ int setattr_prepare(struct user_namespace *mnt_userns, struct dentry *dentry,
mapped_gid = i_gid_into_mnt(mnt_userns, inode);
/* Also check the setgid bit! */
- if (!in_group_p(mapped_gid) &&
- !capable_wrt_inode_uidgid(mnt_userns, inode, CAP_FSETID))
+ if (!in_group_or_capable(mnt_userns, inode, mapped_gid))
attr->ia_mode &= ~S_ISGID;
}
@@ -257,8 +320,7 @@ void setattr_copy(struct user_namespace *mnt_userns, struct inode *inode,
if (ia_valid & ATTR_MODE) {
umode_t mode = attr->ia_mode;
kgid_t kgid = i_gid_into_mnt(mnt_userns, inode);
- if (!in_group_p(kgid) &&
- !capable_wrt_inode_uidgid(mnt_userns, inode, CAP_FSETID))
+ if (!in_group_or_capable(mnt_userns, inode, kgid))
mode &= ~S_ISGID;
inode->i_mode = mode;
}
@@ -383,7 +445,7 @@ int notify_change(struct user_namespace *mnt_userns, struct dentry *dentry,
}
}
if (ia_valid & ATTR_KILL_SGID) {
- if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
+ if (mode & S_ISGID) {
if (!(ia_valid & ATTR_MODE)) {
ia_valid = attr->ia_valid |= ATTR_MODE;
attr->ia_mode = inode->i_mode;
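setattr_should_drop_sgid() and setattr_should_drop_suidgid() above centralise the decision of which setid bits to strip: setuid is always a candidate, setgid only when group-execute is set or the caller is not in the owning group (and not capable). The standalone sketch below reduces that decision to plain mode-bit logic; user namespaces and the CAP_FSETID check are deliberately elided, so it approximates the rule rather than reproducing the kernel helper.

/* Sketch of the setid-dropping decision, capability checks elided. */
#include <stdio.h>
#include <sys/stat.h>

#define KILL_SUID 1
#define KILL_SGID 2

static int should_drop_suidgid(mode_t mode, int in_group_or_capable)
{
	int kill = 0;

	if (mode & S_ISUID)
		kill |= KILL_SUID;
	if ((mode & S_ISGID) && ((mode & S_IXGRP) || !in_group_or_capable))
		kill |= KILL_SGID;

	return S_ISREG(mode) ? kill : 0;	/* only regular files drop bits */
}

int main(void)
{
	mode_t m = S_IFREG | S_ISUID | S_ISGID | 0750;

	printf("mask: %d\n", should_drop_suidgid(m, 1));		/* SUID|SGID */
	printf("mask: %d\n", should_drop_suidgid(S_IFREG | S_ISGID | 0640, 1)); /* 0 */
	return 0;
}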
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 5eea567..19f71c3 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1554,7 +1554,8 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
btrfs_info(fs_info,
"reclaiming chunk %llu with %llu%% used %llu%% unusable",
- bg->start, div_u64(bg->used * 100, bg->length),
+ bg->start,
+ div64_u64(bg->used * 100, bg->length),
div64_u64(zone_unusable * 100, bg->length));
trace_btrfs_reclaim_block_group(bg);
ret = btrfs_relocate_chunk(fs_info, bg->start);
diff --git a/fs/btrfs/discard.c b/fs/btrfs/discard.c
index e1b7bd9..bd9dde3 100644
--- a/fs/btrfs/discard.c
+++ b/fs/btrfs/discard.c
@@ -77,6 +77,7 @@ static struct list_head *get_discard_list(struct btrfs_discard_ctl *discard_ctl,
static void __add_to_discard_list(struct btrfs_discard_ctl *discard_ctl,
struct btrfs_block_group *block_group)
{
+ lockdep_assert_held(&discard_ctl->lock);
if (!btrfs_run_discard_work(discard_ctl))
return;
@@ -88,6 +89,8 @@ static void __add_to_discard_list(struct btrfs_discard_ctl *discard_ctl,
BTRFS_DISCARD_DELAY);
block_group->discard_state = BTRFS_DISCARD_RESET_CURSOR;
}
+ if (list_empty(&block_group->discard_list))
+ btrfs_get_block_group(block_group);
list_move_tail(&block_group->discard_list,
get_discard_list(discard_ctl, block_group));
@@ -107,8 +110,12 @@ static void add_to_discard_list(struct btrfs_discard_ctl *discard_ctl,
static void add_to_discard_unused_list(struct btrfs_discard_ctl *discard_ctl,
struct btrfs_block_group *block_group)
{
+ bool queued;
+
spin_lock(&discard_ctl->lock);
+ queued = !list_empty(&block_group->discard_list);
+
if (!btrfs_run_discard_work(discard_ctl)) {
spin_unlock(&discard_ctl->lock);
return;
@@ -120,6 +127,8 @@ static void add_to_discard_unused_list(struct btrfs_discard_ctl *discard_ctl,
block_group->discard_eligible_time = (ktime_get_ns() +
BTRFS_DISCARD_UNUSED_DELAY);
block_group->discard_state = BTRFS_DISCARD_RESET_CURSOR;
+ if (!queued)
+ btrfs_get_block_group(block_group);
list_add_tail(&block_group->discard_list,
&discard_ctl->discard_list[BTRFS_DISCARD_INDEX_UNUSED]);
@@ -130,6 +139,7 @@ static bool remove_from_discard_list(struct btrfs_discard_ctl *discard_ctl,
struct btrfs_block_group *block_group)
{
bool running = false;
+ bool queued = false;
spin_lock(&discard_ctl->lock);
@@ -139,7 +149,16 @@ static bool remove_from_discard_list(struct btrfs_discard_ctl *discard_ctl,
}
block_group->discard_eligible_time = 0;
+ queued = !list_empty(&block_group->discard_list);
list_del_init(&block_group->discard_list);
+ /*
+ * If the block group is currently running in the discard workfn, we
+ * don't want to deref it, since it's still being used by the workfn.
+ * The workfn will notice this case and deref the block group when it is
+ * finished.
+ */
+ if (queued && !running)
+ btrfs_put_block_group(block_group);
spin_unlock(&discard_ctl->lock);
@@ -212,10 +231,12 @@ static struct btrfs_block_group *peek_discard_list(
if (block_group && now >= block_group->discard_eligible_time) {
if (block_group->discard_index == BTRFS_DISCARD_INDEX_UNUSED &&
block_group->used != 0) {
- if (btrfs_is_block_group_data_only(block_group))
+ if (btrfs_is_block_group_data_only(block_group)) {
__add_to_discard_list(discard_ctl, block_group);
- else
+ } else {
list_del_init(&block_group->discard_list);
+ btrfs_put_block_group(block_group);
+ }
goto again;
}
if (block_group->discard_state == BTRFS_DISCARD_RESET_CURSOR) {
@@ -502,6 +523,15 @@ static void btrfs_discard_workfn(struct work_struct *work)
spin_lock(&discard_ctl->lock);
discard_ctl->prev_discard = trimmed;
discard_ctl->prev_discard_time = now;
+ /*
+ * If the block group was removed from the discard list while it was
+ * running in this workfn, then we didn't deref it, since this function
+ * still owned that reference. But we set the discard_ctl->block_group
+ * back to NULL, so we can use that condition to know that now we need
+ * to deref the block_group.
+ */
+ if (discard_ctl->block_group == NULL)
+ btrfs_put_block_group(block_group);
discard_ctl->block_group = NULL;
__btrfs_discard_schedule_work(discard_ctl, now, false);
spin_unlock(&discard_ctl->lock);
@@ -638,8 +668,12 @@ void btrfs_discard_punt_unused_bgs_list(struct btrfs_fs_info *fs_info)
list_for_each_entry_safe(block_group, next, &fs_info->unused_bgs,
bg_list) {
list_del_init(&block_group->bg_list);
- btrfs_put_block_group(block_group);
btrfs_discard_queue_work(&fs_info->discard_ctl, block_group);
+ /*
+ * This put is for the get done by btrfs_mark_bg_unused.
+ * Queueing discard incremented it for discard's reference.
+ */
+ btrfs_put_block_group(block_group);
}
spin_unlock(&fs_info->unused_bgs_lock);
}
@@ -669,6 +703,7 @@ static void btrfs_discard_purge_list(struct btrfs_discard_ctl *discard_ctl)
if (block_group->used == 0)
btrfs_mark_bg_unused(block_group);
spin_lock(&discard_ctl->lock);
+ btrfs_put_block_group(block_group);
}
}
spin_unlock(&discard_ctl->lock);
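The discard.c changes above give a block group an extra reference for as long as it sits on a discard list: the reference is taken on the empty-to-queued transition and dropped by whoever takes it off the list, unless the discard worker currently owns it, in which case the worker drops it once finished. The single-threaded sketch below illustrates that rule with a hypothetical refcount and a queued flag standing in for list_empty().

/* Sketch: a list membership holds one reference; the remover drops it
 * unless the worker is actively using the object, in which case the
 * worker drops it when done.
 */
#include <stdio.h>
#include <stdlib.h>

struct block_group {
	int refs;
	int queued;	/* stands in for !list_empty(&bg->discard_list) */
};

static void bg_get(struct block_group *bg) { bg->refs++; }

static void bg_put(struct block_group *bg)
{
	if (--bg->refs == 0) {
		printf("block group freed\n");
		free(bg);
	}
}

static void queue_for_discard(struct block_group *bg)
{
	if (!bg->queued) {	/* only the transition onto the list takes a ref */
		bg->queued = 1;
		bg_get(bg);
	}
}

static void remove_from_discard(struct block_group *bg, int running_in_worker)
{
	int was_queued = bg->queued;

	bg->queued = 0;
	/* If the worker owns it right now, it drops the list ref itself later. */
	if (was_queued && !running_in_worker)
		bg_put(bg);
}

int main(void)
{
	struct block_group *bg = calloc(1, sizeof(*bg));

	bg->refs = 1;			/* caller's reference */
	queue_for_discard(bg);		/* refs == 2 */
	remove_from_discard(bg, 0);	/* refs == 1 */
	bg_put(bg);			/* freed */
	return 0;
}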
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 2c80fc9..ce2da06 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -380,7 +380,14 @@ static int csum_one_extent_buffer(struct extent_buffer *eb)
btrfs_print_tree(eb, 0);
btrfs_err(fs_info, "block=%llu write time tree block corruption detected",
eb->start);
- WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
+ /*
+ * Be noisy if this is an extent buffer from a log tree. We don't abort
+	 * a transaction in case there's a bad log tree extent buffer, we just
+	 * fall back to a transaction commit. Still, we want to know when there is
+ * a bad log tree extent buffer, as that may signal a bug somewhere.
+ */
+ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG) ||
+ btrfs_header_owner(eb) == BTRFS_TREE_LOG_OBJECTID);
return ret;
}
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index e72973f..750c1ff9 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -1717,6 +1717,11 @@ static int run_one_delayed_ref(struct btrfs_trans_handle *trans,
BUG();
if (ret && insert_reserved)
btrfs_pin_extent(trans, node->bytenr, node->num_bytes, 1);
+ if (ret < 0)
+ btrfs_err(trans->fs_info,
+"failed to run delayed ref for logical %llu num_bytes %llu type %u action %u ref_mod %d: %d",
+ node->bytenr, node->num_bytes, node->type,
+ node->action, node->ref_mod, ret);
return ret;
}
@@ -1955,8 +1960,6 @@ static int btrfs_run_delayed_refs_for_head(struct btrfs_trans_handle *trans,
if (ret) {
unselect_delayed_ref_head(delayed_refs, locked_ref);
btrfs_put_delayed_ref(ref);
- btrfs_debug(fs_info, "run_one_delayed_ref returned %d",
- ret);
return ret;
}
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index ed9c715..f417329 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -4267,7 +4267,9 @@ static long btrfs_ioctl_qgroup_assign(struct file *file, void __user *arg)
}
/* update qgroup status and info */
+ mutex_lock(&fs_info->qgroup_ioctl_lock);
err = btrfs_run_qgroups(trans);
+ mutex_unlock(&fs_info->qgroup_ioctl_lock);
if (err < 0)
btrfs_handle_fs_error(fs_info, err,
"failed to update qgroup status and info");
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 485abe7..26110d9 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -2752,13 +2752,22 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
}
/*
- * called from commit_transaction. Writes all changed qgroups to disk.
+ * Writes all changed qgroups to disk.
+ * Called by the transaction commit path and the qgroup assign ioctl.
*/
int btrfs_run_qgroups(struct btrfs_trans_handle *trans)
{
struct btrfs_fs_info *fs_info = trans->fs_info;
int ret = 0;
+ /*
+ * In case we are called from the qgroup assign ioctl, assert that we
+ * are holding the qgroup_ioctl_lock, otherwise we can race with a quota
+ * disable operation (ioctl) and access a freed quota root.
+ */
+ if (trans->transaction->state != TRANS_STATE_COMMIT_DOING)
+ lockdep_assert_held(&fs_info->qgroup_ioctl_lock);
+
if (!fs_info->quota_root)
return ret;
@@ -3286,6 +3295,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
int err = -ENOMEM;
int ret = 0;
bool stopped = false;
+ bool did_leaf_rescans = false;
path = btrfs_alloc_path();
if (!path)
@@ -3306,6 +3316,7 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
}
err = qgroup_rescan_leaf(trans, path);
+ did_leaf_rescans = true;
if (err > 0)
btrfs_commit_transaction(trans);
@@ -3326,16 +3337,23 @@ static void btrfs_qgroup_rescan_worker(struct btrfs_work *work)
mutex_unlock(&fs_info->qgroup_rescan_lock);
/*
- * only update status, since the previous part has already updated the
- * qgroup info.
+ * Only update status, since the previous part has already updated the
+	 * qgroup info, and only if we did any actual work. This also prevents a
+	 * race with a concurrent quota disable, which has already set
+ * fs_info->quota_root to NULL and cleared BTRFS_FS_QUOTA_ENABLED at
+ * btrfs_quota_disable().
*/
- trans = btrfs_start_transaction(fs_info->quota_root, 1);
- if (IS_ERR(trans)) {
- err = PTR_ERR(trans);
+ if (did_leaf_rescans) {
+ trans = btrfs_start_transaction(fs_info->quota_root, 1);
+ if (IS_ERR(trans)) {
+ err = PTR_ERR(trans);
+ trans = NULL;
+ btrfs_err(fs_info,
+ "fail to start transaction for status update: %d",
+ err);
+ }
+ } else {
trans = NULL;
- btrfs_err(fs_info,
- "fail to start transaction for status update: %d",
- err);
}
mutex_lock(&fs_info->qgroup_rescan_lock);
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 9250a17..692ae2e 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -7549,10 +7549,10 @@ long btrfs_ioctl_send(struct file *mnt_file, struct btrfs_ioctl_send_args *arg)
/*
* Check that we don't overflow at later allocations, we request
* clone_sources_count + 1 items, and compare to unsigned long inside
- * access_ok.
+ * access_ok. Also set an upper limit for allocation size so this can't
+ * easily exhaust memory. Max number of clone sources is about 200K.
*/
- if (arg->clone_sources_count >
- ULONG_MAX / sizeof(struct clone_root) - 1) {
+ if (arg->clone_sources_count > SZ_8M / sizeof(struct clone_root)) {
ret = -EINVAL;
goto out;
}
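The send ioctl change above tightens the clone_sources_count check from a pure overflow test to an explicit allocation cap, so a user-supplied count can no longer size an arbitrarily large array. The sketch below applies the same bound before a calloc(); the 8 MiB cap mirrors the patch, while the element layout and helper name are illustrative.

/* Sketch: reject a user-controlled element count before it sizes an
 * allocation, bounding both overflow and total memory use.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct clone_root { uint64_t root; uint64_t ino; uint64_t offset; };

#define CAP_BYTES (8u << 20)	/* 8 MiB upper bound on the array */

static void *alloc_clone_roots(uint64_t count)
{
	if (count > CAP_BYTES / sizeof(struct clone_root))
		return NULL;	/* refuse instead of letting userspace pick the size */
	return calloc(count + 1, sizeof(struct clone_root));
}

int main(void)
{
	void *p = alloc_clone_roots(100);

	printf("small: %s\n", p ? "ok" : "rejected");
	free(p);
	printf("huge:  %s\n", alloc_clone_roots(UINT64_MAX / 2) ? "ok" : "rejected");
	return 0;
}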
diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index 7272896..7c0c6fc 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -3141,7 +3141,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
ret = 0;
if (ret) {
blk_finish_plug(&plug);
- btrfs_abort_transaction(trans, ret);
btrfs_set_log_full_commit(trans);
mutex_unlock(&root->log_mutex);
goto out;
@@ -3273,7 +3272,6 @@ int btrfs_sync_log(struct btrfs_trans_handle *trans,
goto out_wake_log_root;
} else if (ret) {
btrfs_set_log_full_commit(trans);
- btrfs_abort_transaction(trans, ret);
mutex_unlock(&log_root_tree->log_mutex);
goto out_wake_log_root;
}
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index f01549b..5e19186 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -409,6 +409,7 @@ void btrfs_free_device(struct btrfs_device *device)
static void free_fs_devices(struct btrfs_fs_devices *fs_devices)
{
struct btrfs_device *device;
+
WARN_ON(fs_devices->opened);
while (!list_empty(&fs_devices->devices)) {
device = list_entry(fs_devices->devices.next,
@@ -1221,9 +1222,22 @@ void btrfs_close_devices(struct btrfs_fs_devices *fs_devices)
mutex_lock(&uuid_mutex);
close_fs_devices(fs_devices);
- if (!fs_devices->opened)
+ if (!fs_devices->opened) {
list_splice_init(&fs_devices->seed_list, &list);
+ /*
+ * If the struct btrfs_fs_devices is not assembled with any
+ * other device, it can be re-initialized during the next mount
+	 * without needing the device-scan step. Therefore, it can be
+ * fully freed.
+ */
+ if (fs_devices->num_devices == 1) {
+ list_del(&fs_devices->fs_list);
+ free_fs_devices(fs_devices);
+ }
+ }
+
+
list_for_each_entry_safe(fs_devices, tmp, &list, seed_list) {
close_fs_devices(fs_devices);
list_del(&fs_devices->seed_list);
@@ -1392,8 +1406,17 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, fmode_t flags,
* So, we need to add a special mount option to scan for
* later supers, using BTRFS_SUPER_MIRROR_MAX instead
*/
- flags |= FMODE_EXCL;
+ /*
+	 * Avoid using flags |= FMODE_EXCL here, as systemd-udev may
+ * initiate the device scan which may race with the user's mount
+ * or mkfs command, resulting in failure.
+ * Since the device scan is solely for reading purposes, there is
+ * no need for FMODE_EXCL. Additionally, the devices are read again
+ * during the mount process. It is ok to get some inconsistent
+ * values temporarily, as the device paths of the fsid are the only
+ * required information for assembling the volume.
+ */
bdev = blkdev_get_by_path(path, flags, holder);
if (IS_ERR(bdev))
return ERR_CAST(bdev);
@@ -1646,7 +1669,7 @@ static int find_free_dev_extent_start(struct btrfs_device *device,
if (ret < 0)
goto out;
- while (1) {
+ while (search_start < search_end) {
l = path->nodes[0];
slot = path->slots[0];
if (slot >= btrfs_header_nritems(l)) {
@@ -1669,6 +1692,9 @@ static int find_free_dev_extent_start(struct btrfs_device *device,
if (key.type != BTRFS_DEV_EXTENT_KEY)
goto next;
+ if (key.offset > search_end)
+ break;
+
if (key.offset > search_start) {
hole_size = key.offset - search_start;
dev_extent_hole_check(device, &search_start, &hole_size,
@@ -1729,6 +1755,7 @@ static int find_free_dev_extent_start(struct btrfs_device *device,
else
ret = 0;
+ ASSERT(max_hole_start + max_hole_size <= search_end);
out:
btrfs_free_path(path);
*start = max_hole_start;
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 767a0c6..12e674f 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -63,7 +63,7 @@ struct list_head *zlib_alloc_workspace(unsigned int level)
workspacesize = max(zlib_deflate_workspacesize(MAX_WBITS, MAX_MEM_LEVEL),
zlib_inflate_workspacesize());
- workspace->strm.workspace = kvmalloc(workspacesize, GFP_KERNEL);
+ workspace->strm.workspace = kvzalloc(workspacesize, GFP_KERNEL);
workspace->level = level;
workspace->buf = NULL;
/*
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 53bffda..cb87714 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -2084,6 +2084,9 @@ static long ceph_fallocate(struct file *file, int mode,
loff_t endoff = 0;
loff_t size;
+ dout("%s %p %llx.%llx mode %x, offset %llu length %llu\n", __func__,
+ inode, ceph_vinop(inode), mode, offset, length);
+
if (mode != (FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
return -EOPNOTSUPP;
@@ -2124,6 +2127,10 @@ static long ceph_fallocate(struct file *file, int mode,
if (ret < 0)
goto unlock;
+ ret = file_modified(file);
+ if (ret)
+ goto put_caps;
+
filemap_invalidate_lock(inode->i_mapping);
ceph_zero_pagecache_range(inode, offset, length);
ret = ceph_zero_objects(inode, offset, length);
@@ -2139,6 +2146,7 @@ static long ceph_fallocate(struct file *file, int mode,
}
filemap_invalidate_unlock(inode->i_mapping);
+put_caps:
ceph_put_cap_refs(ci, got);
unlock:
inode_unlock(inode);
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index 78d052d..0dc8871 100644
--- a/fs/ceph/mds_client.c
+++ b/fs/ceph/mds_client.c
@@ -3543,6 +3543,12 @@ static void handle_session(struct ceph_mds_session *session,
break;
case CEPH_SESSION_FLUSHMSG:
+ /* flush cap releases */
+ spin_lock(&session->s_cap_lock);
+ if (session->s_num_cap_releases)
+ ceph_flush_cap_releases(mdsc, session);
+ spin_unlock(&session->s_cap_lock);
+
send_flushmsg_ack(mdsc, session, seq);
break;
diff --git a/fs/cifs/cifs_debug.c b/fs/cifs/cifs_debug.c
index 248a8f9..e750153 100644
--- a/fs/cifs/cifs_debug.c
+++ b/fs/cifs/cifs_debug.c
@@ -173,7 +173,7 @@ static int cifs_debug_files_proc_show(struct seq_file *m, void *v)
seq_puts(m, "# Version:1\n");
seq_puts(m, "# Format:\n");
- seq_puts(m, "# <tree id> <persistent fid> <flags> <count> <pid> <uid>");
+ seq_puts(m, "# <tree id> <ses id> <persistent fid> <flags> <count> <pid> <uid>");
#ifdef CONFIG_CIFS_DEBUG2
seq_printf(m, " <filename> <mid>\n");
#else
@@ -190,8 +190,9 @@ static int cifs_debug_files_proc_show(struct seq_file *m, void *v)
cfile = list_entry(tmp2, struct cifsFileInfo,
tlist);
seq_printf(m,
- "0x%x 0x%llx 0x%x %d %d %d %pd",
+ "0x%x 0x%llx 0x%llx 0x%x %d %d %d %pd",
tcon->tid,
+ ses->Suid,
cfile->fid.persistent_fid,
cfile->f_flags,
cfile->count,
diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index fc736ce..26f8743 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -682,6 +682,7 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
seq_printf(s, ",acdirmax=%lu", cifs_sb->ctx->acdirmax / HZ);
seq_printf(s, ",acregmax=%lu", cifs_sb->ctx->acregmax / HZ);
}
+ seq_printf(s, ",closetimeo=%lu", cifs_sb->ctx->closetimeo / HZ);
if (tcon->ses->chan_max > 1)
seq_printf(s, ",multichannel,max_channels=%zu",
diff --git a/fs/cifs/cifsfs.h b/fs/cifs/cifsfs.h
index fa37f26..f7c91a3e 100644
--- a/fs/cifs/cifsfs.h
+++ b/fs/cifs/cifsfs.h
@@ -118,7 +118,10 @@ extern const struct dentry_operations cifs_ci_dentry_ops;
#ifdef CONFIG_CIFS_DFS_UPCALL
extern struct vfsmount *cifs_dfs_d_automount(struct path *path);
#else
-#define cifs_dfs_d_automount NULL
+static inline struct vfsmount *cifs_dfs_d_automount(struct path *path)
+{
+ return ERR_PTR(-EREMOTE);
+}
#endif
/* Functions related to symlinks */
diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
index 243d176..6ca08e4 100644
--- a/fs/cifs/cifssmb.c
+++ b/fs/cifs/cifssmb.c
@@ -4751,8 +4751,13 @@ CIFSGetDFSRefer(const unsigned int xid, struct cifs_ses *ses,
return -ENODEV;
getDFSRetry:
- rc = smb_init(SMB_COM_TRANSACTION2, 15, ses->tcon_ipc, (void **) &pSMB,
- (void **) &pSMBr);
+ /*
+ * Use smb_init_no_reconnect() instead of smb_init() as
+	 * CIFSGetDFSRefer() may be called from cifs_reconnect_tcon(), thus
+	 * causing an infinite recursion.
+ */
+ rc = smb_init_no_reconnect(SMB_COM_TRANSACTION2, 15, ses->tcon_ipc,
+ (void **)&pSMB, (void **)&pSMBr);
if (rc)
return rc;
diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index 555bd38..f6c4126 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -2436,6 +2436,8 @@ compare_mount_options(struct super_block *sb, struct cifs_mnt_data *mnt_data)
return 0;
if (old->ctx->acdirmax != new->ctx->acdirmax)
return 0;
+ if (old->ctx->closetimeo != new->ctx->closetimeo)
+ return 0;
return 1;
}
diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index 1f3efa78..1864bda 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -792,26 +792,27 @@ static int get_dfs_referral(const unsigned int xid, struct cifs_ses *ses, const
*/
static int cache_refresh_path(const unsigned int xid, struct cifs_ses *ses, const char *path)
{
- int rc;
- struct cache_entry *ce;
struct dfs_info3_param *refs = NULL;
+ struct cache_entry *ce;
int numrefs = 0;
- bool newent = false;
+ int rc;
cifs_dbg(FYI, "%s: search path: %s\n", __func__, path);
- down_write(&htable_rw_lock);
+ down_read(&htable_rw_lock);
ce = lookup_cache_entry(path);
- if (!IS_ERR(ce)) {
- if (!cache_entry_expired(ce)) {
- dump_ce(ce);
- up_write(&htable_rw_lock);
- return 0;
- }
- } else {
- newent = true;
+ if (!IS_ERR(ce) && !cache_entry_expired(ce)) {
+ up_read(&htable_rw_lock);
+ return 0;
}
+ /*
+ * Unlock shared access as we don't want to hold any locks while getting
+ * a new referral. The @ses used for performing the I/O could be
+ * reconnecting and it acquires @htable_rw_lock to look up the dfs cache
+ * in order to failover -- if necessary.
+ */
+ up_read(&htable_rw_lock);
/*
* Either the entry was not found, or it is expired.
@@ -819,19 +820,22 @@ static int cache_refresh_path(const unsigned int xid, struct cifs_ses *ses, cons
*/
rc = get_dfs_referral(xid, ses, path, &refs, &numrefs);
if (rc)
- goto out_unlock;
+ goto out;
dump_refs(refs, numrefs);
- if (!newent) {
- rc = update_cache_entry_locked(ce, refs, numrefs);
- goto out_unlock;
+ down_write(&htable_rw_lock);
+ /* Re-check as another task might have it added or refreshed already */
+ ce = lookup_cache_entry(path);
+ if (!IS_ERR(ce)) {
+ if (cache_entry_expired(ce))
+ rc = update_cache_entry_locked(ce, refs, numrefs);
+ } else {
+ rc = add_cache_entry_locked(refs, numrefs);
}
- rc = add_cache_entry_locked(refs, numrefs);
-
-out_unlock:
up_write(&htable_rw_lock);
+out:
free_dfs_info_array(refs, numrefs);
return rc;
}
@@ -1046,10 +1050,10 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
const struct nls_table *cp, int remap, const char *path,
const struct dfs_cache_tgt_iterator *it)
{
- int rc;
- const char *npath;
- struct cache_entry *ce;
struct cache_dfs_tgt *t;
+ struct cache_entry *ce;
+ const char *npath;
+ int rc = 0;
npath = dfs_cache_canonical_path(path, cp, remap);
if (IS_ERR(npath))
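cache_refresh_path() above switches to a lookup under the shared lock, drops all locks for the slow referral fetch (which may itself need the lock on a reconnecting session), and only then retakes the write lock and re-checks the entry before updating or inserting it. The userspace sketch below mirrors that read-check / unlock / fetch / write-recheck sequence with a pthread rwlock; the one-entry cache and fetch_referral() are invented for the example.

/* Sketch of double-checked cache refresh: fast read-locked check, slow
 * lock-free fetch, write-locked re-check before publishing the result.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_rwlock_t table_lock = PTHREAD_RWLOCK_INITIALIZER;
static char cached_target[64];
static int cache_expired = 1;

static int fetch_referral(char *buf, size_t len)	/* slow, no locks held */
{
	snprintf(buf, len, "\\\\server\\share");
	return 0;
}

static int refresh_path(void)
{
	char refs[64];
	int rc;

	pthread_rwlock_rdlock(&table_lock);
	if (cached_target[0] && !cache_expired) {
		pthread_rwlock_unlock(&table_lock);
		return 0;			/* still fresh, nothing to do */
	}
	pthread_rwlock_unlock(&table_lock);	/* hold nothing across the I/O */

	rc = fetch_referral(refs, sizeof(refs));
	if (rc)
		return rc;

	pthread_rwlock_wrlock(&table_lock);
	if (!cached_target[0] || cache_expired) {	/* re-check: another task may have won */
		strncpy(cached_target, refs, sizeof(cached_target) - 1);
		cache_expired = 0;
	}
	pthread_rwlock_unlock(&table_lock);
	return 0;
}

int main(void)
{
	refresh_path();
	printf("cached: %s\n", cached_target);
	return 0;
}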
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index aa42234..b3cf9ab 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -897,12 +897,12 @@ int cifs_close(struct inode *inode, struct file *file)
* So, Increase the ref count to avoid use-after-free.
*/
if (!mod_delayed_work(deferredclose_wq,
- &cfile->deferred, cifs_sb->ctx->acregmax))
+ &cfile->deferred, cifs_sb->ctx->closetimeo))
cifsFileInfo_get(cfile);
} else {
/* Deferred close for files */
queue_delayed_work(deferredclose_wq,
- &cfile->deferred, cifs_sb->ctx->acregmax);
+ &cfile->deferred, cifs_sb->ctx->closetimeo);
cfile->deferred_close_scheduled = true;
spin_unlock(&cinode->deferred_lock);
return 0;
@@ -3613,7 +3613,7 @@ uncached_fill_pages(struct TCP_Server_Info *server,
rdata->got_bytes += result;
}
- return rdata->got_bytes > 0 && result != -ECONNABORTED ?
+ return result != -ECONNABORTED && rdata->got_bytes > 0 ?
rdata->got_bytes : result;
}
@@ -4388,7 +4388,7 @@ readpages_fill_pages(struct TCP_Server_Info *server,
rdata->got_bytes += result;
}
- return rdata->got_bytes > 0 && result != -ECONNABORTED ?
+ return result != -ECONNABORTED && rdata->got_bytes > 0 ?
rdata->got_bytes : result;
}
diff --git a/fs/cifs/fs_context.c b/fs/cifs/fs_context.c
index 3b8ed36..8455db6 100644
--- a/fs/cifs/fs_context.c
+++ b/fs/cifs/fs_context.c
@@ -143,6 +143,7 @@ const struct fs_parameter_spec smb3_fs_parameters[] = {
fsparam_u32("actimeo", Opt_actimeo),
fsparam_u32("acdirmax", Opt_acdirmax),
fsparam_u32("acregmax", Opt_acregmax),
+ fsparam_u32("closetimeo", Opt_closetimeo),
fsparam_u32("echo_interval", Opt_echo_interval),
fsparam_u32("max_credits", Opt_max_credits),
fsparam_u32("handletimeout", Opt_handletimeout),
@@ -436,13 +437,14 @@ int smb3_parse_opt(const char *options, const char *key, char **val)
* but there are some bugs that prevent rename from working if there are
* multiple delimiters.
*
- * Returns a sanitized duplicate of @path. The caller is responsible for
- * cleaning up the original.
+ * Returns a sanitized duplicate of @path. @gfp indicates the GFP_* flags
+ * for kstrdup.
+ * The caller is responsible for freeing the original.
*/
#define IS_DELIM(c) ((c) == '/' || (c) == '\\')
-static char *sanitize_path(char *path)
+char *cifs_sanitize_prepath(char *prepath, gfp_t gfp)
{
- char *cursor1 = path, *cursor2 = path;
+ char *cursor1 = prepath, *cursor2 = prepath;
/* skip all prepended delimiters */
while (IS_DELIM(*cursor1))
@@ -464,7 +466,7 @@ static char *sanitize_path(char *path)
cursor2--;
*(cursor2) = '\0';
- return kstrdup(path, GFP_KERNEL);
+ return kstrdup(prepath, gfp);
}
/*
@@ -526,7 +528,7 @@ smb3_parse_devname(const char *devname, struct smb3_fs_context *ctx)
if (!*pos)
return 0;
- ctx->prepath = sanitize_path(pos);
+ ctx->prepath = cifs_sanitize_prepath(pos, GFP_KERNEL);
if (!ctx->prepath)
return -ENOMEM;
@@ -1058,6 +1060,13 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
}
ctx->acdirmax = ctx->acregmax = HZ * result.uint_32;
break;
+ case Opt_closetimeo:
+ ctx->closetimeo = HZ * result.uint_32;
+ if (ctx->closetimeo > SMB3_MAX_DCLOSETIMEO) {
+ cifs_errorf(fc, "closetimeo too large\n");
+ goto cifs_parse_mount_err;
+ }
+ break;
case Opt_echo_interval:
ctx->echo_interval = result.uint_32;
break;
@@ -1496,6 +1505,7 @@ int smb3_init_fs_context(struct fs_context *fc)
ctx->acregmax = CIFS_DEF_ACTIMEO;
ctx->acdirmax = CIFS_DEF_ACTIMEO;
+ ctx->closetimeo = SMB3_DEF_DCLOSETIMEO;
/* Most clients set timeout to 0, allows server to use its default */
ctx->handle_timeout = 0; /* See MS-SMB2 spec section 2.2.14.2.12 */
diff --git a/fs/cifs/fs_context.h b/fs/cifs/fs_context.h
index 29601a4..3cf8d62 100644
--- a/fs/cifs/fs_context.h
+++ b/fs/cifs/fs_context.h
@@ -123,6 +123,7 @@ enum cifs_param {
Opt_actimeo,
Opt_acdirmax,
Opt_acregmax,
+ Opt_closetimeo,
Opt_echo_interval,
Opt_max_credits,
Opt_snapshot,
@@ -243,6 +244,8 @@ struct smb3_fs_context {
/* attribute cache timemout for files and directories in jiffies */
unsigned long acregmax;
unsigned long acdirmax;
+ /* timeout for deferred close of files in jiffies */
+ unsigned long closetimeo;
struct smb_version_operations *ops;
struct smb_version_values *vals;
char *prepath;
@@ -275,4 +278,12 @@ static inline struct smb3_fs_context *smb3_fc2context(const struct fs_context *f
extern int smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx);
extern void smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb);
+/*
+ * max deferred close timeout (jiffies) - 2^30
+ */
+#define SMB3_MAX_DCLOSETIMEO (1 << 30)
+#define SMB3_DEF_DCLOSETIMEO (1 * HZ) /* even 1 sec is enough to help, e.g. open/write/close/open/read */
+
+extern char *cifs_sanitize_prepath(char *prepath, gfp_t gfp);
+
#endif
diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
index 3a90ee3..300f5f3 100644
--- a/fs/cifs/misc.c
+++ b/fs/cifs/misc.c
@@ -1301,7 +1301,7 @@ int cifs_update_super_prepath(struct cifs_sb_info *cifs_sb, char *prefix)
kfree(cifs_sb->prepath);
if (prefix && *prefix) {
- cifs_sb->prepath = kstrdup(prefix, GFP_ATOMIC);
+ cifs_sb->prepath = cifs_sanitize_prepath(prefix, GFP_ATOMIC);
if (!cifs_sb->prepath)
return -ENOMEM;
diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c
index f3e49ef..20e9d1b 100644
--- a/fs/cifs/smb2inode.c
+++ b/fs/cifs/smb2inode.c
@@ -223,15 +223,32 @@ smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
size[0] = 8; /* sizeof __le64 */
data[0] = ptr;
- rc = SMB2_set_info_init(tcon, server,
- &rqst[num_rqst], COMPOUND_FID,
- COMPOUND_FID, current->tgid,
- FILE_END_OF_FILE_INFORMATION,
- SMB2_O_INFO_FILE, 0, data, size);
+ if (cfile) {
+ rc = SMB2_set_info_init(tcon, server,
+ &rqst[num_rqst],
+ cfile->fid.persistent_fid,
+ cfile->fid.volatile_fid,
+ current->tgid,
+ FILE_END_OF_FILE_INFORMATION,
+ SMB2_O_INFO_FILE, 0,
+ data, size);
+ } else {
+ rc = SMB2_set_info_init(tcon, server,
+ &rqst[num_rqst],
+ COMPOUND_FID,
+ COMPOUND_FID,
+ current->tgid,
+ FILE_END_OF_FILE_INFORMATION,
+ SMB2_O_INFO_FILE, 0,
+ data, size);
+ if (!rc) {
+ smb2_set_next_command(tcon, &rqst[num_rqst]);
+ smb2_set_related(&rqst[num_rqst]);
+ }
+ }
if (rc)
goto finished;
- smb2_set_next_command(tcon, &rqst[num_rqst]);
- smb2_set_related(&rqst[num_rqst++]);
+ num_rqst++;
trace_smb3_set_eof_enter(xid, ses->Suid, tcon->tid, full_path);
break;
case SMB2_OP_SET_INFO:
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index 817d781..fab5829 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -653,7 +653,7 @@ SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)
if (rc == -EOPNOTSUPP) {
cifs_dbg(FYI,
"server does not support query network interfaces\n");
- goto out;
+ ret_data_len = 0;
} else if (rc != 0) {
cifs_tcon_dbg(VFS, "error %d on ioctl to get interface list\n", rc);
goto out;
@@ -966,12 +966,13 @@ smb3_qfs_tcon(const unsigned int xid, struct cifs_tcon *tcon,
struct cifs_fid fid;
struct cached_fid *cfid = NULL;
- oparms.tcon = tcon;
- oparms.desired_access = FILE_READ_ATTRIBUTES;
- oparms.disposition = FILE_OPEN;
- oparms.create_options = cifs_create_options(cifs_sb, 0);
- oparms.fid = &fid;
- oparms.reconnect = false;
+ oparms = (struct cifs_open_parms) {
+ .tcon = tcon,
+ .desired_access = FILE_READ_ATTRIBUTES,
+ .disposition = FILE_OPEN,
+ .create_options = cifs_create_options(cifs_sb, 0),
+ .fid = &fid,
+ };
rc = open_cached_dir(xid, tcon, "", cifs_sb, &cfid);
if (rc == 0)
diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
index c0ea281..f51fea2 100644
--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -3999,12 +3999,15 @@ smb2_readv_callback(struct mid_q_entry *mid)
(struct smb2_sync_hdr *)rdata->iov[0].iov_base;
struct cifs_credits credits = { .value = 0, .instance = 0 };
struct smb_rqst rqst = { .rq_iov = &rdata->iov[1],
- .rq_nvec = 1,
- .rq_pages = rdata->pages,
- .rq_offset = rdata->page_offset,
- .rq_npages = rdata->nr_pages,
- .rq_pagesz = rdata->pagesz,
- .rq_tailsz = rdata->tailsz };
+ .rq_nvec = 1, };
+
+ if (rdata->got_bytes) {
+ rqst.rq_pages = rdata->pages;
+ rqst.rq_offset = rdata->page_offset;
+ rqst.rq_npages = rdata->nr_pages;
+ rqst.rq_pagesz = rdata->pagesz;
+ rqst.rq_tailsz = rdata->tailsz;
+ }
WARN_ONCE(rdata->server != mid->server,
"rdata server %p != mid server %p",
diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
index 31ef64e..a9a5d27 100644
--- a/fs/cifs/smbdirect.c
+++ b/fs/cifs/smbdirect.c
@@ -1405,6 +1405,7 @@ void smbd_destroy(struct TCP_Server_Info *server)
destroy_workqueue(info->workqueue);
log_rdma_event(INFO, "rdma session destroyed\n");
kfree(info);
+ server->smbd_conn = NULL;
}
/*
@@ -1701,6 +1702,7 @@ static struct smbd_connection *_smbd_get_connection(
allocate_mr_failed:
/* At this point, need to a full transport shutdown */
+ server->smbd_conn = info;
smbd_destroy(server);
return NULL;
@@ -2249,6 +2251,7 @@ static int allocate_mr_list(struct smbd_connection *info)
atomic_set(&info->mr_ready_count, 0);
atomic_set(&info->mr_used_count, 0);
init_waitqueue_head(&info->wait_for_mr_cleanup);
+ INIT_WORK(&info->mr_recovery_work, smbd_mr_recovery_work);
/* Allocate more MRs (2x) than hardware responder_resources */
for (i = 0; i < info->responder_resources * 2; i++) {
smbdirect_mr = kzalloc(sizeof(*smbdirect_mr), GFP_KERNEL);
@@ -2276,13 +2279,13 @@ static int allocate_mr_list(struct smbd_connection *info)
list_add_tail(&smbdirect_mr->list, &info->mr_list);
atomic_inc(&info->mr_ready_count);
}
- INIT_WORK(&info->mr_recovery_work, smbd_mr_recovery_work);
return 0;
out:
kfree(smbdirect_mr);
list_for_each_entry_safe(smbdirect_mr, tmp, &info->mr_list, list) {
+ list_del(&smbdirect_mr->list);
ib_dereg_mr(smbdirect_mr->mr);
kfree(smbdirect_mr->sgl);
kfree(smbdirect_mr);
diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
index 5140566..49b7edb 100644
--- a/fs/cifs/transport.c
+++ b/fs/cifs/transport.c
@@ -299,7 +299,7 @@ static int
__smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
struct smb_rqst *rqst)
{
- int rc = 0;
+ int rc;
struct kvec *iov;
int n_vec;
unsigned int send_length = 0;
@@ -310,6 +310,7 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
struct msghdr smb_msg = {};
__be32 rfc1002_marker;
+ cifs_in_send_inc(server);
if (cifs_rdma_enabled(server)) {
/* return -EAGAIN when connecting or reconnecting */
rc = -EAGAIN;
@@ -318,14 +319,17 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
goto smbd_done;
}
+ rc = -EAGAIN;
if (ssocket == NULL)
- return -EAGAIN;
+ goto out;
+ rc = -ERESTARTSYS;
if (fatal_signal_pending(current)) {
cifs_dbg(FYI, "signal pending before send request\n");
- return -ERESTARTSYS;
+ goto out;
}
+ rc = 0;
/* cork the socket */
tcp_sock_set_cork(ssocket->sk, true);
@@ -438,7 +442,8 @@ __smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
rc);
else if (rc > 0)
rc = 0;
-
+out:
+ cifs_in_send_dec(server);
return rc;
}
@@ -855,9 +860,7 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst,
* I/O response may come back and free the mid entry on another thread.
*/
cifs_save_when_sent(mid);
- cifs_in_send_inc(server);
rc = smb_send_rqst(server, 1, rqst, flags);
- cifs_in_send_dec(server);
if (rc < 0) {
revert_current_mid(server, mid->credits);
@@ -1149,9 +1152,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses,
else
midQ[i]->callback = cifs_compound_last_callback;
}
- cifs_in_send_inc(server);
rc = smb_send_rqst(server, num_rqst, rqst, flags);
- cifs_in_send_dec(server);
for (i = 0; i < num_rqst; i++)
cifs_save_when_sent(midQ[i]);
@@ -1388,9 +1389,7 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses,
midQ->mid_state = MID_REQUEST_SUBMITTED;
- cifs_in_send_inc(server);
rc = smb_send(server, in_buf, len);
- cifs_in_send_dec(server);
cifs_save_when_sent(midQ);
if (rc < 0)
@@ -1527,9 +1526,7 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon,
}
midQ->mid_state = MID_REQUEST_SUBMITTED;
- cifs_in_send_inc(server);
rc = smb_send(server, in_buf, len);
- cifs_in_send_dec(server);
cifs_save_when_sent(midQ);
if (rc < 0)
diff --git a/fs/coda/upcall.c b/fs/coda/upcall.c
index eb3b189..610484c 100644
--- a/fs/coda/upcall.c
+++ b/fs/coda/upcall.c
@@ -790,7 +790,7 @@ static int coda_upcall(struct venus_comm *vcp,
sig_req = kmalloc(sizeof(struct upc_req), GFP_KERNEL);
if (!sig_req) goto exit;
- sig_inputArgs = kvzalloc(sizeof(struct coda_in_hdr), GFP_KERNEL);
+ sig_inputArgs = kvzalloc(sizeof(*sig_inputArgs), GFP_KERNEL);
if (!sig_inputArgs) {
kfree(sig_req);
goto exit;
diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
index 10eddfa..fa086a8 100644
--- a/fs/dlm/lockspace.c
+++ b/fs/dlm/lockspace.c
@@ -17,7 +17,6 @@
#include "recoverd.h"
#include "dir.h"
#include "midcomms.h"
-#include "lowcomms.h"
#include "config.h"
#include "memory.h"
#include "lock.h"
@@ -384,23 +383,23 @@ static int threads_start(void)
{
int error;
- error = dlm_scand_start();
- if (error) {
- log_print("cannot start dlm_scand thread %d", error);
- goto fail;
- }
-
/* Thread for sending/receiving messages for all lockspace's */
error = dlm_midcomms_start();
if (error) {
- log_print("cannot start dlm lowcomms %d", error);
- goto scand_fail;
+ log_print("cannot start dlm midcomms %d", error);
+ goto fail;
+ }
+
+ error = dlm_scand_start();
+ if (error) {
+ log_print("cannot start dlm_scand thread %d", error);
+ goto midcomms_fail;
}
return 0;
- scand_fail:
- dlm_scand_stop();
+ midcomms_fail:
+ dlm_midcomms_stop();
fail:
return error;
}
@@ -705,7 +704,7 @@ int dlm_new_lockspace(const char *name, const char *cluster,
if (!ls_count) {
dlm_scand_stop();
dlm_midcomms_shutdown();
- dlm_lowcomms_stop();
+ dlm_midcomms_stop();
}
out:
mutex_unlock(&ls_lock);
@@ -889,7 +888,7 @@ int dlm_release_lockspace(void *lockspace, int force)
if (!error)
ls_count--;
if (!ls_count)
- dlm_lowcomms_stop();
+ dlm_midcomms_stop();
mutex_unlock(&ls_lock);
return error;
diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index d56a8f8..1eb95ba 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -1959,10 +1959,6 @@ static const struct dlm_proto_ops dlm_sctp_ops = {
int dlm_lowcomms_start(void)
{
int error = -EINVAL;
- int i;
-
- for (i = 0; i < CONN_HASH_SIZE; i++)
- INIT_HLIST_HEAD(&connection_hash[i]);
init_local();
if (!dlm_local_count) {
@@ -1971,8 +1967,6 @@ int dlm_lowcomms_start(void)
goto fail;
}
- INIT_WORK(&listen_con.rwork, process_listen_recv_socket);
-
error = work_start();
if (error)
goto fail_local;
@@ -2011,6 +2005,16 @@ int dlm_lowcomms_start(void)
return error;
}
+void dlm_lowcomms_init(void)
+{
+ int i;
+
+ for (i = 0; i < CONN_HASH_SIZE; i++)
+ INIT_HLIST_HEAD(&connection_hash[i]);
+
+ INIT_WORK(&listen_con.rwork, process_listen_recv_socket);
+}
+
void dlm_lowcomms_exit(void)
{
struct dlm_node_addr *na, *safe;
diff --git a/fs/dlm/lowcomms.h b/fs/dlm/lowcomms.h
index 4ccae07..2643363 100644
--- a/fs/dlm/lowcomms.h
+++ b/fs/dlm/lowcomms.h
@@ -35,6 +35,7 @@ extern int dlm_allow_conn;
int dlm_lowcomms_start(void);
void dlm_lowcomms_shutdown(void);
void dlm_lowcomms_stop(void);
+void dlm_lowcomms_init(void);
void dlm_lowcomms_exit(void);
int dlm_lowcomms_close(int nodeid);
struct dlm_msg *dlm_lowcomms_new_msg(int nodeid, int len, gfp_t allocation,
diff --git a/fs/dlm/main.c b/fs/dlm/main.c
index afc66a1..974f7eb 100644
--- a/fs/dlm/main.c
+++ b/fs/dlm/main.c
@@ -17,7 +17,7 @@
#include "user.h"
#include "memory.h"
#include "config.h"
-#include "lowcomms.h"
+#include "midcomms.h"
static int __init init_dlm(void)
{
@@ -27,6 +27,8 @@ static int __init init_dlm(void)
if (error)
goto out;
+ dlm_midcomms_init();
+
error = dlm_lockspace_init();
if (error)
goto out_mem;
@@ -63,6 +65,7 @@ static int __init init_dlm(void)
out_lockspace:
dlm_lockspace_exit();
out_mem:
+ dlm_midcomms_exit();
dlm_memory_exit();
out:
return error;
@@ -76,7 +79,7 @@ static void __exit exit_dlm(void)
dlm_config_exit();
dlm_memory_exit();
dlm_lockspace_exit();
- dlm_lowcomms_exit();
+ dlm_midcomms_exit();
dlm_unregister_debugfs();
}
diff --git a/fs/dlm/midcomms.c b/fs/dlm/midcomms.c
index 7ae39ec..84a7a39 100644
--- a/fs/dlm/midcomms.c
+++ b/fs/dlm/midcomms.c
@@ -366,7 +366,7 @@ static int dlm_send_ack(int nodeid, uint32_t seq)
struct dlm_msg *msg;
char *ppc;
- msg = dlm_lowcomms_new_msg(nodeid, mb_len, GFP_NOFS, &ppc,
+ msg = dlm_lowcomms_new_msg(nodeid, mb_len, GFP_ATOMIC, &ppc,
NULL, NULL);
if (!msg)
return -ENOMEM;
@@ -394,7 +394,7 @@ static int dlm_send_fin(struct midcomms_node *node,
struct dlm_mhandle *mh;
char *ppc;
- mh = dlm_midcomms_get_mhandle(node->nodeid, mb_len, GFP_NOFS, &ppc);
+ mh = dlm_midcomms_get_mhandle(node->nodeid, mb_len, GFP_ATOMIC, &ppc);
if (!mh)
return -ENOMEM;
@@ -478,15 +478,14 @@ static void dlm_midcomms_receive_buffer(union dlm_packet *p,
switch (p->header.h_cmd) {
case DLM_FIN:
- /* send ack before fin */
- dlm_send_ack(node->nodeid, node->seq_next);
-
spin_lock(&node->state_lock);
pr_debug("receive fin msg from node %d with state %s\n",
node->nodeid, dlm_state_str(node->state));
switch (node->state) {
case DLM_ESTABLISHED:
+ dlm_send_ack(node->nodeid, node->seq_next);
+
node->state = DLM_CLOSE_WAIT;
pr_debug("switch node %d to state %s\n",
node->nodeid, dlm_state_str(node->state));
@@ -498,16 +497,19 @@ static void dlm_midcomms_receive_buffer(union dlm_packet *p,
node->state = DLM_LAST_ACK;
pr_debug("switch node %d to state %s case 1\n",
node->nodeid, dlm_state_str(node->state));
- spin_unlock(&node->state_lock);
- goto send_fin;
+ set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
+ dlm_send_fin(node, dlm_pas_fin_ack_rcv);
}
break;
case DLM_FIN_WAIT1:
+ dlm_send_ack(node->nodeid, node->seq_next);
node->state = DLM_CLOSING;
+ set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
pr_debug("switch node %d to state %s\n",
node->nodeid, dlm_state_str(node->state));
break;
case DLM_FIN_WAIT2:
+ dlm_send_ack(node->nodeid, node->seq_next);
midcomms_node_reset(node);
pr_debug("switch node %d to state %s\n",
node->nodeid, dlm_state_str(node->state));
@@ -524,8 +526,6 @@ static void dlm_midcomms_receive_buffer(union dlm_packet *p,
return;
}
spin_unlock(&node->state_lock);
-
- set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
break;
default:
WARN_ON(test_bit(DLM_NODE_FLAG_STOP_RX, &node->flags));
@@ -543,12 +543,6 @@ static void dlm_midcomms_receive_buffer(union dlm_packet *p,
log_print_ratelimited("ignore dlm msg because seq mismatch, seq: %u, expected: %u, nodeid: %d",
seq, node->seq_next, node->nodeid);
}
-
- return;
-
-send_fin:
- set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
- dlm_send_fin(node, dlm_pas_fin_ack_rcv);
}
static struct midcomms_node *
@@ -1149,12 +1143,27 @@ void dlm_midcomms_commit_mhandle(struct dlm_mhandle *mh)
int dlm_midcomms_start(void)
{
+ return dlm_lowcomms_start();
+}
+
+void dlm_midcomms_stop(void)
+{
+ dlm_lowcomms_stop();
+}
+
+void dlm_midcomms_init(void)
+{
int i;
for (i = 0; i < CONN_HASH_SIZE; i++)
INIT_HLIST_HEAD(&node_hash[i]);
- return dlm_lowcomms_start();
+ dlm_lowcomms_init();
+}
+
+void dlm_midcomms_exit(void)
+{
+ dlm_lowcomms_exit();
}
static void dlm_act_fin_ack_rcv(struct midcomms_node *node)
@@ -1269,11 +1278,11 @@ void dlm_midcomms_remove_member(int nodeid)
case DLM_CLOSE_WAIT:
/* passive shutdown DLM_LAST_ACK case 2 */
node->state = DLM_LAST_ACK;
- spin_unlock(&node->state_lock);
-
pr_debug("switch node %d to state %s case 2\n",
node->nodeid, dlm_state_str(node->state));
- goto send_fin;
+ set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
+ dlm_send_fin(node, dlm_pas_fin_ack_rcv);
+ break;
case DLM_LAST_ACK:
/* probably receive fin caught it, do nothing */
break;
@@ -1289,12 +1298,6 @@ void dlm_midcomms_remove_member(int nodeid)
spin_unlock(&node->state_lock);
srcu_read_unlock(&nodes_srcu, idx);
- return;
-
-send_fin:
- set_bit(DLM_NODE_FLAG_STOP_RX, &node->flags);
- dlm_send_fin(node, dlm_pas_fin_ack_rcv);
- srcu_read_unlock(&nodes_srcu, idx);
}
static void midcomms_node_release(struct rcu_head *rcu)
@@ -1325,6 +1328,7 @@ static void midcomms_shutdown(struct midcomms_node *node)
node->state = DLM_FIN_WAIT1;
pr_debug("switch node %d to state %s case 2\n",
node->nodeid, dlm_state_str(node->state));
+ dlm_send_fin(node, dlm_act_fin_ack_rcv);
break;
case DLM_CLOSED:
/* we have what we want */
@@ -1338,12 +1342,8 @@ static void midcomms_shutdown(struct midcomms_node *node)
}
spin_unlock(&node->state_lock);
- if (node->state == DLM_FIN_WAIT1) {
- dlm_send_fin(node, dlm_act_fin_ack_rcv);
-
- if (DLM_DEBUG_FENCE_TERMINATION)
- msleep(5000);
- }
+ if (DLM_DEBUG_FENCE_TERMINATION)
+ msleep(5000);
/* wait for other side dlm + fin */
ret = wait_event_timeout(node->shutdown_wait,
diff --git a/fs/dlm/midcomms.h b/fs/dlm/midcomms.h
index 579abc6..1a36b78 100644
--- a/fs/dlm/midcomms.h
+++ b/fs/dlm/midcomms.h
@@ -20,6 +20,9 @@ struct dlm_mhandle *dlm_midcomms_get_mhandle(int nodeid, int len,
void dlm_midcomms_commit_mhandle(struct dlm_mhandle *mh);
int dlm_midcomms_close(int nodeid);
int dlm_midcomms_start(void);
+void dlm_midcomms_stop(void);
+void dlm_midcomms_init(void);
+void dlm_midcomms_exit(void);
void dlm_midcomms_shutdown(void);
void dlm_midcomms_add_member(int nodeid);
void dlm_midcomms_remove_member(int nodeid);
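Note on the fs/dlm/midcomms.{c,h} hunks above: dlm_send_ack() and dlm_send_fin() are now called from inside the node->state_lock-protected switch (the old "goto send_fin" paths after spin_unlock are folded into the state cases), so their message allocations move from GFP_NOFS to GFP_ATOMIC — sleeping allocations are not allowed while a spinlock is held. A kernel-module-style sketch of that constraint, with hypothetical names that are not part of the patch:

#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/errno.h>

struct node_ctx {
	spinlock_t state_lock;
	int state;
};

/* Building and queueing a message while the state lock is held means the
 * allocation must be non-sleeping (GFP_ATOMIC); GFP_NOFS may block. */
static int send_under_state_lock(struct node_ctx *node, size_t len)
{
	void *buf;
	int ret = 0;

	spin_lock(&node->state_lock);
	buf = kmalloc(len, GFP_ATOMIC);		/* must not sleep here */
	if (!buf) {
		ret = -ENOMEM;
		goto out;
	}
	/* ... fill buf, hand it to the transmit path, update node->state ... */
	kfree(buf);				/* illustrative only */
out:
	spin_unlock(&node->state_lock);
	return ret;
}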
diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
index 7a6df35..73b86b5 100644
--- a/fs/erofs/zmap.c
+++ b/fs/erofs/zmap.c
@@ -700,12 +700,16 @@ static int z_erofs_iomap_begin_report(struct inode *inode, loff_t offset,
iomap->type = IOMAP_HOLE;
iomap->addr = IOMAP_NULL_ADDR;
/*
- * No strict rule how to describe extents for post EOF, yet
- * we need do like below. Otherwise, iomap itself will get
+ * No strict rule on how to describe extents for post EOF, yet
+ * we need to do like below. Otherwise, iomap itself will get
* into an endless loop on post EOF.
+ *
+ * Calculate the effective offset by subtracting extent start
+ * (map.m_la) from the requested offset, and add it to length.
+ * (NB: offset >= map.m_la always)
*/
if (iomap->offset >= inode->i_size)
- iomap->length = length + map.m_la - offset;
+ iomap->length = length + offset - map.m_la;
}
iomap->flags = 0;
return 0;
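Note on the fs/erofs/zmap.c hunk: since offset >= map.m_la (as the added comment states), the corrected expression length + offset - map.m_la cannot underflow, whereas the old length + map.m_la - offset wrapped to a huge u64 whenever the requested offset lay past the extent start. A small standalone check with illustrative numbers:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t m_la = 0;          /* extent start (illustrative) */
	uint64_t offset = 8192;     /* requested offset, always >= m_la */
	uint64_t length = 4096;     /* length computed for the extent */

	uint64_t old_len = length + m_la - offset;   /* wraps to 0xfffffffffffff000 */
	uint64_t new_len = length + offset - m_la;   /* 12288, stays sane */

	printf("old=%llu new=%llu\n",
	       (unsigned long long)old_len, (unsigned long long)new_len);
	return 0;
}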
diff --git a/fs/eventfd.c b/fs/eventfd.c
index c0ffee9..249ca6c 100644
--- a/fs/eventfd.c
+++ b/fs/eventfd.c
@@ -43,21 +43,7 @@ struct eventfd_ctx {
int id;
};
-/**
- * eventfd_signal - Adds @n to the eventfd counter.
- * @ctx: [in] Pointer to the eventfd context.
- * @n: [in] Value of the counter to be added to the eventfd internal counter.
- * The value cannot be negative.
- *
- * This function is supposed to be called by the kernel in paths that do not
- * allow sleeping. In this function we allow the counter to reach the ULLONG_MAX
- * value, and we signal this as overflow condition by returning a EPOLLERR
- * to poll(2).
- *
- * Returns the amount by which the counter was incremented. This will be less
- * than @n if the counter has overflowed.
- */
-__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
+__u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, unsigned mask)
{
unsigned long flags;
@@ -78,12 +64,31 @@ __u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
n = ULLONG_MAX - ctx->count;
ctx->count += n;
if (waitqueue_active(&ctx->wqh))
- wake_up_locked_poll(&ctx->wqh, EPOLLIN);
+ wake_up_locked_poll(&ctx->wqh, EPOLLIN | mask);
current->in_eventfd = 0;
spin_unlock_irqrestore(&ctx->wqh.lock, flags);
return n;
}
+
+/**
+ * eventfd_signal - Adds @n to the eventfd counter.
+ * @ctx: [in] Pointer to the eventfd context.
+ * @n: [in] Value of the counter to be added to the eventfd internal counter.
+ * The value cannot be negative.
+ *
+ * This function is supposed to be called by the kernel in paths that do not
+ * allow sleeping. In this function we allow the counter to reach the ULLONG_MAX
+ * value, and we signal this as overflow condition by returning a EPOLLERR
+ * to poll(2).
+ *
+ * Returns the amount by which the counter was incremented. This will be less
+ * than @n if the counter has overflowed.
+ */
+__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
+{
+ return eventfd_signal_mask(ctx, n, 0);
+}
EXPORT_SYMBOL_GPL(eventfd_signal);
static void eventfd_free_ctx(struct eventfd_ctx *ctx)
diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index cf326c5..1ec1978 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -484,7 +484,8 @@ static inline void ep_set_busy_poll_napi_id(struct epitem *epi)
*/
#ifdef CONFIG_DEBUG_LOCK_ALLOC
-static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi)
+static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi,
+ unsigned pollflags)
{
struct eventpoll *ep_src;
unsigned long flags;
@@ -515,16 +516,17 @@ static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi)
}
spin_lock_irqsave_nested(&ep->poll_wait.lock, flags, nests);
ep->nests = nests + 1;
- wake_up_locked_poll(&ep->poll_wait, EPOLLIN);
+ wake_up_locked_poll(&ep->poll_wait, EPOLLIN | pollflags);
ep->nests = 0;
spin_unlock_irqrestore(&ep->poll_wait.lock, flags);
}
#else
-static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi)
+static void ep_poll_safewake(struct eventpoll *ep, struct epitem *epi,
+ unsigned pollflags)
{
- wake_up_poll(&ep->poll_wait, EPOLLIN);
+ wake_up_poll(&ep->poll_wait, EPOLLIN | pollflags);
}
#endif
@@ -735,7 +737,7 @@ static void ep_free(struct eventpoll *ep)
/* We need to release all tasks waiting for these file */
if (waitqueue_active(&ep->poll_wait))
- ep_poll_safewake(ep, NULL);
+ ep_poll_safewake(ep, NULL, 0);
/*
* We need to lock this because we could be hit by
@@ -1201,7 +1203,7 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v
/* We have to call this outside the lock */
if (pwake)
- ep_poll_safewake(ep, epi);
+ ep_poll_safewake(ep, epi, pollflags & EPOLL_URING_WAKE);
if (!(epi->event.events & EPOLLEXCLUSIVE))
ewake = 1;
@@ -1546,7 +1548,7 @@ static int ep_insert(struct eventpoll *ep, const struct epoll_event *event,
/* We have to call this outside the lock */
if (pwake)
- ep_poll_safewake(ep, NULL);
+ ep_poll_safewake(ep, NULL, 0);
return 0;
}
@@ -1622,7 +1624,7 @@ static int ep_modify(struct eventpoll *ep, struct epitem *epi,
/* We have to call this outside the lock */
if (pwake)
- ep_poll_safewake(ep, NULL);
+ ep_poll_safewake(ep, NULL, 0);
return 0;
}
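Note on the fs/eventfd.c and fs/eventpoll.c hunks: eventfd_signal() becomes a thin wrapper around the new eventfd_signal_mask(), which lets callers OR extra poll flags into the wake-up, and ep_poll_safewake()/ep_poll_callback() forward those flags (EPOLL_URING_WAKE) so nested wake-ups can be distinguished from ordinary ones. The wrapper-with-extra-mask pattern, reduced to a runnable sketch with stand-in names:

#include <stdio.h>

#define EV_IN		0x001u		/* stand-in for EPOLLIN */
#define EV_URING_WAKE	(1u << 27)	/* stand-in for EPOLL_URING_WAKE (illustrative bit) */

/* Extended signal path: callers may OR extra flags into the wake-up so the
 * waiter can tell where the wake-up came from. */
static void signal_mask(unsigned long long n, unsigned int mask)
{
	unsigned int events = EV_IN | mask;

	printf("counter += %llu, wake with events 0x%x\n", n, events);
}

/* The original entry point keeps its signature by passing an empty mask. */
static void signal(unsigned long long n)
{
	signal_mask(n, 0);
}

int main(void)
{
	signal(1);			/* legacy callers */
	signal_mask(1, EV_URING_WAKE);	/* callers that must mark nested wake-ups */
	return 0;
}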
diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
index cb1c0d8..3940a56 100644
--- a/fs/exfat/dir.c
+++ b/fs/exfat/dir.c
@@ -103,7 +103,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
clu.dir = ei->hint_bmap.clu;
}
- while (clu_offset > 0) {
+ while (clu_offset > 0 && clu.dir != EXFAT_EOF_CLUSTER) {
if (exfat_get_next_cluster(sb, &(clu.dir)))
return -EIO;
@@ -237,10 +237,7 @@ static int exfat_iterate(struct file *filp, struct dir_context *ctx)
fake_offset = 1;
}
- if (cpos & (DENTRY_SIZE - 1)) {
- err = -ENOENT;
- goto unlock;
- }
+ cpos = round_up(cpos, DENTRY_SIZE);
/* name buffer should be allocated before use */
err = exfat_alloc_namebuf(nb);
diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
index 9f82a8a..db53870 100644
--- a/fs/exfat/exfat_fs.h
+++ b/fs/exfat/exfat_fs.h
@@ -42,7 +42,7 @@ enum {
#define ES_2_ENTRIES 2
#define ES_ALL_ENTRIES 0
-#define DIR_DELETED 0xFFFF0321
+#define DIR_DELETED 0xFFFFFFF7
/* type values */
#define TYPE_UNUSED 0x0000
diff --git a/fs/exfat/file.c b/fs/exfat/file.c
index d890fd3..c40082a 100644
--- a/fs/exfat/file.c
+++ b/fs/exfat/file.c
@@ -251,8 +251,7 @@ void exfat_truncate(struct inode *inode, loff_t size)
else
mark_inode_dirty(inode);
- inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
- inode->i_blkbits;
+ inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
write_size:
aligned_size = i_size_read(inode);
if (aligned_size & (blocksize - 1)) {
diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
index 72a0ccf..23d8c36 100644
--- a/fs/exfat/inode.c
+++ b/fs/exfat/inode.c
@@ -243,8 +243,7 @@ static int exfat_map_cluster(struct inode *inode, unsigned int clu_offset,
return err;
} /* end of if != DIR_DELETED */
- inode->i_blocks +=
- num_to_be_allocated << sbi->sect_per_clus_bits;
+ inode->i_blocks += EXFAT_CLU_TO_B(num_to_be_allocated, sbi) >> 9;
/*
* Move *clu pointer along FAT chains (hole care) because the
@@ -602,8 +601,7 @@ static int exfat_fill_inode(struct inode *inode, struct exfat_dir_entry *info)
exfat_save_attr(inode, info->attr);
- inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
- inode->i_blkbits;
+ inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
inode->i_mtime = info->mtime;
inode->i_ctime = info->mtime;
ei->i_crtime = info->crtime;
diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
index 8a7f4c0..b22d6c9 100644
--- a/fs/exfat/namei.c
+++ b/fs/exfat/namei.c
@@ -398,7 +398,7 @@ static int exfat_find_empty_entry(struct inode *inode,
ei->i_size_ondisk += sbi->cluster_size;
ei->i_size_aligned += sbi->cluster_size;
ei->flags = p_dir->flags;
- inode->i_blocks += 1 << sbi->sect_per_clus_bits;
+ inode->i_blocks += sbi->cluster_size >> 9;
}
return dentry;
diff --git a/fs/exfat/super.c b/fs/exfat/super.c
index 4b5d02b..82297623 100644
--- a/fs/exfat/super.c
+++ b/fs/exfat/super.c
@@ -364,8 +364,7 @@ static int exfat_read_root(struct inode *inode)
inode->i_op = &exfat_dir_inode_operations;
inode->i_fop = &exfat_dir_operations;
- inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >>
- inode->i_blkbits;
+ inode->i_blocks = round_up(i_size_read(inode), sbi->cluster_size) >> 9;
ei->i_pos = ((loff_t)sbi->root_dir << 32) | 0xffffffff;
ei->i_size_aligned = i_size_read(inode);
ei->i_size_ondisk = i_size_read(inode);
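Note on the fs/exfat hunks: inode->i_blocks is counted in 512-byte units, independent of the filesystem block or cluster size, so the various ">> inode->i_blkbits" and "<< sect_per_clus_bits" conversions are replaced with a plain byte count shifted by 9. The arithmetic, as a standalone check with illustrative sizes:

#include <stdint.h>
#include <stdio.h>

/* Round a file size up to whole clusters, then express it in the 512-byte
 * units that i_blocks uses, regardless of the volume's block size. */
static uint64_t size_to_i_blocks(uint64_t size, uint64_t cluster_size)
{
	uint64_t rounded = (size + cluster_size - 1) / cluster_size * cluster_size;

	return rounded >> 9;	/* 512-byte sectors */
}

int main(void)
{
	/* 5000 bytes on a 4 KiB-cluster volume -> 8192 bytes -> 16 sectors */
	printf("%llu\n", (unsigned long long)size_to_i_blocks(5000, 4096));
	return 0;
}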
diff --git a/fs/ext4/block_validity.c b/fs/ext4/block_validity.c
index 4666b55..5504f72 100644
--- a/fs/ext4/block_validity.c
+++ b/fs/ext4/block_validity.c
@@ -292,15 +292,10 @@ void ext4_release_system_zone(struct super_block *sb)
call_rcu(&system_blks->rcu, ext4_destroy_system_zone);
}
-/*
- * Returns 1 if the passed-in block region (start_blk,
- * start_blk+count) is valid; 0 if some part of the block region
- * overlaps with some other filesystem metadata blocks.
- */
-int ext4_inode_block_valid(struct inode *inode, ext4_fsblk_t start_blk,
- unsigned int count)
+int ext4_sb_block_valid(struct super_block *sb, struct inode *inode,
+ ext4_fsblk_t start_blk, unsigned int count)
{
- struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
struct ext4_system_blocks *system_blks;
struct ext4_system_zone *entry;
struct rb_node *n;
@@ -329,7 +324,9 @@ int ext4_inode_block_valid(struct inode *inode, ext4_fsblk_t start_blk,
else if (start_blk >= (entry->start_blk + entry->count))
n = n->rb_right;
else {
- ret = (entry->ino == inode->i_ino);
+ ret = 0;
+ if (inode)
+ ret = (entry->ino == inode->i_ino);
break;
}
}
@@ -338,6 +335,17 @@ int ext4_inode_block_valid(struct inode *inode, ext4_fsblk_t start_blk,
return ret;
}
+/*
+ * Returns 1 if the passed-in block region (start_blk,
+ * start_blk+count) is valid; 0 if some part of the block region
+ * overlaps with some other filesystem metadata blocks.
+ */
+int ext4_inode_block_valid(struct inode *inode, ext4_fsblk_t start_blk,
+ unsigned int count)
+{
+ return ext4_sb_block_valid(inode->i_sb, inode, start_blk, count);
+}
+
int ext4_check_blockref(const char *function, unsigned int line,
struct inode *inode, __le32 *p, unsigned int max)
{
diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index bc209f3..80f0942 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3698,6 +3698,9 @@ extern int ext4_inode_block_valid(struct inode *inode,
unsigned int count);
extern int ext4_check_blockref(const char *, unsigned int,
struct inode *, __le32 *, unsigned int);
+extern int ext4_sb_block_valid(struct super_block *sb, struct inode *inode,
+ ext4_fsblk_t start_blk, unsigned int count);
+
/* extents.c */
struct ext4_ext_path;
diff --git a/fs/ext4/fast_commit.c b/fs/ext4/fast_commit.c
index a8d0a80..2660c34 100644
--- a/fs/ext4/fast_commit.c
+++ b/fs/ext4/fast_commit.c
@@ -1282,8 +1282,14 @@ struct dentry_info_args {
char *dname;
};
+/* Same as struct ext4_fc_tl, but uses native endianness fields */
+struct ext4_fc_tl_mem {
+ u16 fc_tag;
+ u16 fc_len;
+};
+
static inline void tl_to_darg(struct dentry_info_args *darg,
- struct ext4_fc_tl *tl, u8 *val)
+ struct ext4_fc_tl_mem *tl, u8 *val)
{
struct ext4_fc_dentry_info fcd;
@@ -1295,16 +1301,18 @@ static inline void tl_to_darg(struct dentry_info_args *darg,
darg->dname_len = tl->fc_len - sizeof(struct ext4_fc_dentry_info);
}
-static inline void ext4_fc_get_tl(struct ext4_fc_tl *tl, u8 *val)
+static inline void ext4_fc_get_tl(struct ext4_fc_tl_mem *tl, u8 *val)
{
- memcpy(tl, val, EXT4_FC_TAG_BASE_LEN);
- tl->fc_len = le16_to_cpu(tl->fc_len);
- tl->fc_tag = le16_to_cpu(tl->fc_tag);
+ struct ext4_fc_tl tl_disk;
+
+ memcpy(&tl_disk, val, EXT4_FC_TAG_BASE_LEN);
+ tl->fc_len = le16_to_cpu(tl_disk.fc_len);
+ tl->fc_tag = le16_to_cpu(tl_disk.fc_tag);
}
/* Unlink replay function */
-static int ext4_fc_replay_unlink(struct super_block *sb, struct ext4_fc_tl *tl,
- u8 *val)
+static int ext4_fc_replay_unlink(struct super_block *sb,
+ struct ext4_fc_tl_mem *tl, u8 *val)
{
struct inode *inode, *old_parent;
struct qstr entry;
@@ -1401,8 +1409,8 @@ static int ext4_fc_replay_link_internal(struct super_block *sb,
}
/* Link replay function */
-static int ext4_fc_replay_link(struct super_block *sb, struct ext4_fc_tl *tl,
- u8 *val)
+static int ext4_fc_replay_link(struct super_block *sb,
+ struct ext4_fc_tl_mem *tl, u8 *val)
{
struct inode *inode;
struct dentry_info_args darg;
@@ -1456,8 +1464,8 @@ static int ext4_fc_record_modified_inode(struct super_block *sb, int ino)
/*
* Inode replay function
*/
-static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
- u8 *val)
+static int ext4_fc_replay_inode(struct super_block *sb,
+ struct ext4_fc_tl_mem *tl, u8 *val)
{
struct ext4_fc_inode fc_inode;
struct ext4_inode *raw_inode;
@@ -1557,8 +1565,8 @@ static int ext4_fc_replay_inode(struct super_block *sb, struct ext4_fc_tl *tl,
* inode for which we are trying to create a dentry here, should already have
* been replayed before we start here.
*/
-static int ext4_fc_replay_create(struct super_block *sb, struct ext4_fc_tl *tl,
- u8 *val)
+static int ext4_fc_replay_create(struct super_block *sb,
+ struct ext4_fc_tl_mem *tl, u8 *val)
{
int ret = 0;
struct inode *inode = NULL;
@@ -1657,7 +1665,7 @@ int ext4_fc_record_regions(struct super_block *sb, int ino,
/* Replay add range tag */
static int ext4_fc_replay_add_range(struct super_block *sb,
- struct ext4_fc_tl *tl, u8 *val)
+ struct ext4_fc_tl_mem *tl, u8 *val)
{
struct ext4_fc_add_range fc_add_ex;
struct ext4_extent newex, *ex;
@@ -1778,8 +1786,8 @@ static int ext4_fc_replay_add_range(struct super_block *sb,
/* Replay DEL_RANGE tag */
static int
-ext4_fc_replay_del_range(struct super_block *sb, struct ext4_fc_tl *tl,
- u8 *val)
+ext4_fc_replay_del_range(struct super_block *sb,
+ struct ext4_fc_tl_mem *tl, u8 *val)
{
struct inode *inode;
struct ext4_fc_del_range lrange;
@@ -1972,7 +1980,7 @@ static int ext4_fc_replay_scan(journal_t *journal,
struct ext4_fc_replay_state *state;
int ret = JBD2_FC_REPLAY_CONTINUE;
struct ext4_fc_add_range ext;
- struct ext4_fc_tl tl;
+ struct ext4_fc_tl_mem tl;
struct ext4_fc_tail tail;
__u8 *start, *end, *cur, *val;
struct ext4_fc_head head;
@@ -2091,7 +2099,7 @@ static int ext4_fc_replay(journal_t *journal, struct buffer_head *bh,
{
struct super_block *sb = journal->j_private;
struct ext4_sb_info *sbi = EXT4_SB(sb);
- struct ext4_fc_tl tl;
+ struct ext4_fc_tl_mem tl;
__u8 *start, *end, *cur, *val;
int ret = JBD2_FC_REPLAY_CONTINUE;
struct ext4_fc_replay_state *state = &sbi->s_fc_replay_state;
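Note on the fs/ext4/fast_commit.c hunks: the on-disk tag/length header (struct ext4_fc_tl, little-endian) is no longer converted in place; ext4_fc_get_tl() copies the raw bytes into a stack copy and decodes them into the new native-endianness struct ext4_fc_tl_mem, so the replay helpers never mix disk and CPU byte order in one structure. A minimal userspace sketch of that decode step (stand-in names for the le16 helpers):

#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct tl_disk { uint16_t tag; uint16_t len; };	/* little-endian on disk */
struct tl_mem  { uint16_t tag; uint16_t len; };	/* native endianness in memory */

static uint16_t le16_to_host(const uint8_t *p)
{
	return (uint16_t)(p[0] | (p[1] << 8));
}

static void get_tl(struct tl_mem *tl, const uint8_t *val)
{
	uint8_t raw[sizeof(struct tl_disk)];

	memcpy(raw, val, sizeof(raw));	/* val may be unaligned */
	tl->tag = le16_to_host(&raw[0]);
	tl->len = le16_to_host(&raw[2]);
}

int main(void)
{
	uint8_t buf[4] = { 0x05, 0x00, 0x10, 0x00 };	/* tag 5, len 16, LE */
	struct tl_mem tl;

	get_tl(&tl, buf);
	printf("tag=%u len=%u\n", tl.tag, tl.len);
	return 0;
}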
diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
index 4493ef0..cdf9bfe 100644
--- a/fs/ext4/fsmap.c
+++ b/fs/ext4/fsmap.c
@@ -486,6 +486,8 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
keys[0].fmr_physical = bofs;
if (keys[1].fmr_physical >= eofs)
keys[1].fmr_physical = eofs - 1;
+ if (keys[1].fmr_physical < keys[0].fmr_physical)
+ return 0;
start_fsb = keys[0].fmr_physical;
end_fsb = keys[1].fmr_physical;
diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index 38ad09e..6473786 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -158,7 +158,6 @@ int ext4_find_inline_data_nolock(struct inode *inode)
(void *)ext4_raw_inode(&is.iloc));
EXT4_I(inode)->i_inline_size = EXT4_MIN_INLINE_DATA_SIZE +
le32_to_cpu(is.s.here->e_value_size);
- ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
}
out:
brelse(is.iloc.bh);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 0a63863..a39d5cc 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4549,8 +4549,13 @@ static inline int ext4_iget_extra_inode(struct inode *inode,
if (EXT4_INODE_HAS_XATTR_SPACE(inode) &&
*magic == cpu_to_le32(EXT4_XATTR_MAGIC)) {
+ int err;
+
ext4_set_inode_state(inode, EXT4_STATE_XATTR);
- return ext4_find_inline_data_nolock(inode);
+ err = ext4_find_inline_data_nolock(inode);
+ if (!err && ext4_has_inline_data(inode))
+ ext4_set_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
+ return err;
} else
EXT4_I(inode)->i_inline_off = 0;
return 0;
@@ -4631,13 +4636,6 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
goto bad_inode;
raw_inode = ext4_raw_inode(&iloc);
- if ((ino == EXT4_ROOT_INO) && (raw_inode->i_links_count == 0)) {
- ext4_error_inode(inode, function, line, 0,
- "iget: root inode unallocated");
- ret = -EFSCORRUPTED;
- goto bad_inode;
- }
-
if ((flags & EXT4_IGET_HANDLE) &&
(raw_inode->i_links_count == 0) && (raw_inode->i_mode == 0)) {
ret = -ESTALE;
@@ -4710,11 +4708,16 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
* NeilBrown 1999oct15
*/
if (inode->i_nlink == 0) {
- if ((inode->i_mode == 0 ||
+ if ((inode->i_mode == 0 || flags & EXT4_IGET_SPECIAL ||
!(EXT4_SB(inode->i_sb)->s_mount_state & EXT4_ORPHAN_FS)) &&
ino != EXT4_BOOT_LOADER_INO) {
- /* this inode is deleted */
- ret = -ESTALE;
+ /* this inode is deleted or unallocated */
+ if (flags & EXT4_IGET_SPECIAL) {
+ ext4_error_inode(inode, function, line, 0,
+ "iget: special inode unallocated");
+ ret = -EFSCORRUPTED;
+ } else
+ ret = -ESTALE;
goto bad_inode;
}
/* The only unlinked inodes we let through here have
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index 2e6d03e..656c6ba 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -184,6 +184,7 @@ static long swap_inode_boot_loader(struct super_block *sb,
ei_bl->i_flags = 0;
inode_set_iversion(inode_bl, 1);
i_size_write(inode_bl, 0);
+ EXT4_I(inode_bl)->i_disksize = inode_bl->i_size;
inode_bl->i_mode = S_IFREG;
if (ext4_has_feature_extents(sb)) {
ext4_set_inode_flag(inode_bl, EXT4_INODE_EXTENTS);
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 0c7498a..e6718bf 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -5888,7 +5888,8 @@ static void ext4_free_blocks_simple(struct inode *inode, ext4_fsblk_t block,
}
/**
- * ext4_free_blocks() -- Free given blocks and update quota
+ * ext4_mb_clear_bb() -- helper function for freeing blocks.
+ * Used by ext4_free_blocks()
* @handle: handle for this transaction
* @inode: inode
* @bh: optional buffer of the block to be freed
@@ -5896,9 +5897,9 @@ static void ext4_free_blocks_simple(struct inode *inode, ext4_fsblk_t block,
* @count: number of blocks to be freed
* @flags: flags used by ext4_free_blocks
*/
-void ext4_free_blocks(handle_t *handle, struct inode *inode,
- struct buffer_head *bh, ext4_fsblk_t block,
- unsigned long count, int flags)
+static void ext4_mb_clear_bb(handle_t *handle, struct inode *inode,
+ ext4_fsblk_t block, unsigned long count,
+ int flags)
{
struct buffer_head *bitmap_bh = NULL;
struct super_block *sb = inode->i_sb;
@@ -5915,79 +5916,14 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
sbi = EXT4_SB(sb);
- if (sbi->s_mount_state & EXT4_FC_REPLAY) {
- ext4_free_blocks_simple(inode, block, count);
- return;
- }
-
- might_sleep();
- if (bh) {
- if (block)
- BUG_ON(block != bh->b_blocknr);
- else
- block = bh->b_blocknr;
- }
-
if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
!ext4_inode_block_valid(inode, block, count)) {
- ext4_error(sb, "Freeing blocks not in datazone - "
- "block = %llu, count = %lu", block, count);
+ ext4_error(sb, "Freeing blocks in system zone - "
+ "Block = %llu, count = %lu", block, count);
+ /* err = 0. ext4_std_error should be a no op */
goto error_return;
}
-
- ext4_debug("freeing block %llu\n", block);
- trace_ext4_free_blocks(inode, block, count, flags);
-
- if (bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
- BUG_ON(count > 1);
-
- ext4_forget(handle, flags & EXT4_FREE_BLOCKS_METADATA,
- inode, bh, block);
- }
-
- /*
- * If the extent to be freed does not begin on a cluster
- * boundary, we need to deal with partial clusters at the
- * beginning and end of the extent. Normally we will free
- * blocks at the beginning or the end unless we are explicitly
- * requested to avoid doing so.
- */
- overflow = EXT4_PBLK_COFF(sbi, block);
- if (overflow) {
- if (flags & EXT4_FREE_BLOCKS_NOFREE_FIRST_CLUSTER) {
- overflow = sbi->s_cluster_ratio - overflow;
- block += overflow;
- if (count > overflow)
- count -= overflow;
- else
- return;
- } else {
- block -= overflow;
- count += overflow;
- }
- }
- overflow = EXT4_LBLK_COFF(sbi, count);
- if (overflow) {
- if (flags & EXT4_FREE_BLOCKS_NOFREE_LAST_CLUSTER) {
- if (count > overflow)
- count -= overflow;
- else
- return;
- } else
- count += sbi->s_cluster_ratio - overflow;
- }
-
- if (!bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
- int i;
- int is_metadata = flags & EXT4_FREE_BLOCKS_METADATA;
-
- for (i = 0; i < count; i++) {
- cond_resched();
- if (is_metadata)
- bh = sb_find_get_block(inode->i_sb, block + i);
- ext4_forget(handle, is_metadata, inode, bh, block + i);
- }
- }
+ flags |= EXT4_FREE_BLOCKS_VALIDATED;
do_more:
overflow = 0;
@@ -6005,6 +5941,8 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
overflow = EXT4_C2B(sbi, bit) + count -
EXT4_BLOCKS_PER_GROUP(sb);
count -= overflow;
+ /* The range changed so it's no longer validated */
+ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
}
count_clusters = EXT4_NUM_B2C(sbi, count);
bitmap_bh = ext4_read_block_bitmap(sb, block_group);
@@ -6019,13 +5957,8 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
goto error_return;
}
- if (in_range(ext4_block_bitmap(sb, gdp), block, count) ||
- in_range(ext4_inode_bitmap(sb, gdp), block, count) ||
- in_range(block, ext4_inode_table(sb, gdp),
- sbi->s_itb_per_group) ||
- in_range(block + count - 1, ext4_inode_table(sb, gdp),
- sbi->s_itb_per_group)) {
-
+ if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
+ !ext4_inode_block_valid(inode, block, count)) {
ext4_error(sb, "Freeing blocks in system zone - "
"Block = %llu, count = %lu", block, count);
/* err = 0. ext4_std_error should be a no op */
@@ -6096,7 +6029,7 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
NULL);
if (err && err != -EOPNOTSUPP)
ext4_msg(sb, KERN_WARNING, "discard request in"
- " group:%d block:%d count:%lu failed"
+ " group:%u block:%d count:%lu failed"
" with %d", block_group, bit, count,
err);
} else
@@ -6148,6 +6081,8 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
block += count;
count = overflow;
put_bh(bitmap_bh);
+ /* The range changed so it's no longer validated */
+ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
goto do_more;
}
error_return:
@@ -6157,6 +6092,108 @@ void ext4_free_blocks(handle_t *handle, struct inode *inode,
}
/**
+ * ext4_free_blocks() -- Free given blocks and update quota
+ * @handle: handle for this transaction
+ * @inode: inode
+ * @bh: optional buffer of the block to be freed
+ * @block: starting physical block to be freed
+ * @count: number of blocks to be freed
+ * @flags: flags used by ext4_free_blocks
+ */
+void ext4_free_blocks(handle_t *handle, struct inode *inode,
+ struct buffer_head *bh, ext4_fsblk_t block,
+ unsigned long count, int flags)
+{
+ struct super_block *sb = inode->i_sb;
+ unsigned int overflow;
+ struct ext4_sb_info *sbi;
+
+ sbi = EXT4_SB(sb);
+
+ if (sbi->s_mount_state & EXT4_FC_REPLAY) {
+ ext4_free_blocks_simple(inode, block, count);
+ return;
+ }
+
+ might_sleep();
+ if (bh) {
+ if (block)
+ BUG_ON(block != bh->b_blocknr);
+ else
+ block = bh->b_blocknr;
+ }
+
+ if (!(flags & EXT4_FREE_BLOCKS_VALIDATED) &&
+ !ext4_inode_block_valid(inode, block, count)) {
+ ext4_error(sb, "Freeing blocks not in datazone - "
+ "block = %llu, count = %lu", block, count);
+ return;
+ }
+ flags |= EXT4_FREE_BLOCKS_VALIDATED;
+
+ ext4_debug("freeing block %llu\n", block);
+ trace_ext4_free_blocks(inode, block, count, flags);
+
+ if (bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
+ BUG_ON(count > 1);
+
+ ext4_forget(handle, flags & EXT4_FREE_BLOCKS_METADATA,
+ inode, bh, block);
+ }
+
+ /*
+ * If the extent to be freed does not begin on a cluster
+ * boundary, we need to deal with partial clusters at the
+ * beginning and end of the extent. Normally we will free
+ * blocks at the beginning or the end unless we are explicitly
+ * requested to avoid doing so.
+ */
+ overflow = EXT4_PBLK_COFF(sbi, block);
+ if (overflow) {
+ if (flags & EXT4_FREE_BLOCKS_NOFREE_FIRST_CLUSTER) {
+ overflow = sbi->s_cluster_ratio - overflow;
+ block += overflow;
+ if (count > overflow)
+ count -= overflow;
+ else
+ return;
+ } else {
+ block -= overflow;
+ count += overflow;
+ }
+ /* The range changed so it's no longer validated */
+ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ }
+ overflow = EXT4_LBLK_COFF(sbi, count);
+ if (overflow) {
+ if (flags & EXT4_FREE_BLOCKS_NOFREE_LAST_CLUSTER) {
+ if (count > overflow)
+ count -= overflow;
+ else
+ return;
+ } else
+ count += sbi->s_cluster_ratio - overflow;
+ /* The range changed so it's no longer validated */
+ flags &= ~EXT4_FREE_BLOCKS_VALIDATED;
+ }
+
+ if (!bh && (flags & EXT4_FREE_BLOCKS_FORGET)) {
+ int i;
+ int is_metadata = flags & EXT4_FREE_BLOCKS_METADATA;
+
+ for (i = 0; i < count; i++) {
+ cond_resched();
+ if (is_metadata)
+ bh = sb_find_get_block(inode->i_sb, block + i);
+ ext4_forget(handle, is_metadata, inode, bh, block + i);
+ }
+ }
+
+ ext4_mb_clear_bb(handle, inode, block, count, flags);
+ return;
+}
+
+/**
* ext4_group_add_blocks() -- Add given blocks to an existing group
* @handle: handle to this transaction
* @sb: super block
@@ -6212,11 +6249,7 @@ int ext4_group_add_blocks(handle_t *handle, struct super_block *sb,
goto error_return;
}
- if (in_range(ext4_block_bitmap(sb, desc), block, count) ||
- in_range(ext4_inode_bitmap(sb, desc), block, count) ||
- in_range(block, ext4_inode_table(sb, desc), sbi->s_itb_per_group) ||
- in_range(block + count - 1, ext4_inode_table(sb, desc),
- sbi->s_itb_per_group)) {
+ if (!ext4_sb_block_valid(sb, NULL, block, count)) {
ext4_error(sb, "Adding blocks in system zones - "
"Block = %llu, count = %lu",
block, count);
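Note on the fs/ext4/mballoc.c hunks: ext4_free_blocks() keeps the validation, forget and cluster-alignment logic and hands the actual bitmap work to the new ext4_mb_clear_bb() helper; both treat EXT4_FREE_BLOCKS_VALIDATED as a "this exact range was checked" marker and drop it whenever (block, count) is adjusted, forcing a re-check against the system zones. The flag-invalidation pattern in isolation, as a sketch with made-up limits:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RANGE_VALIDATED 0x1

/* Stand-in for ext4_inode_block_valid(): accept anything below a fixed limit. */
static bool range_valid(uint64_t block, uint64_t count)
{
	return block + count <= (1u << 20);
}

static int free_range(uint64_t block, uint64_t count, unsigned int flags)
{
	if (!(flags & RANGE_VALIDATED) && !range_valid(block, count))
		return -1;
	flags |= RANGE_VALIDATED;

	if (block % 4) {			/* align down, illustrative */
		count += block % 4;
		block -= block % 4;
		flags &= ~RANGE_VALIDATED;	/* range changed: must re-check */
	}

	if (!(flags & RANGE_VALIDATED) && !range_valid(block, count))
		return -1;
	/* ... clear the bits in the block bitmap ... */
	printf("freeing %llu+%llu\n",
	       (unsigned long long)block, (unsigned long long)count);
	return 0;
}

int main(void)
{
	return free_range(10, 3, 0);
}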
diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
index 1e6cc6c..0c47e1e 100644
--- a/fs/ext4/namei.c
+++ b/fs/ext4/namei.c
@@ -1595,11 +1595,10 @@ static struct buffer_head *__ext4_find_entry(struct inode *dir,
int has_inline_data = 1;
ret = ext4_find_inline_entry(dir, fname, res_dir,
&has_inline_data);
- if (has_inline_data) {
- if (inlined)
- *inlined = 1;
+ if (inlined)
+ *inlined = has_inline_data;
+ if (has_inline_data)
goto cleanup_and_exit;
- }
}
if ((namelen <= 2) && (name[0] == '.') &&
@@ -3660,7 +3659,8 @@ static void ext4_resetent(handle_t *handle, struct ext4_renament *ent,
* so the old->de may no longer valid and need to find it again
* before reset old inode info.
*/
- old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de, NULL);
+ old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de,
+ &old.inlined);
if (IS_ERR(old.bh))
retval = PTR_ERR(old.bh);
if (!old.bh)
@@ -3827,9 +3827,20 @@ static int ext4_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
return retval;
}
- old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de, NULL);
- if (IS_ERR(old.bh))
- return PTR_ERR(old.bh);
+ /*
+ * We need to protect against old.inode directory getting converted
+ * from inline directory format into a normal one.
+ */
+ if (S_ISDIR(old.inode->i_mode))
+ inode_lock_nested(old.inode, I_MUTEX_NONDIR2);
+
+ old.bh = ext4_find_entry(old.dir, &old.dentry->d_name, &old.de,
+ &old.inlined);
+ if (IS_ERR(old.bh)) {
+ retval = PTR_ERR(old.bh);
+ goto unlock_moved_dir;
+ }
+
/*
* Check for inode number is _not_ due to possible IO errors.
* We might rmdir the source, keep it as pwd of some process
@@ -4017,6 +4028,11 @@ static int ext4_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
brelse(old.dir_bh);
brelse(old.bh);
brelse(new.bh);
+
+unlock_moved_dir:
+ if (S_ISDIR(old.inode->i_mode))
+ inode_unlock(old.inode);
+
return retval;
}
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 18977ff..03e2244 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -413,7 +413,8 @@ static void io_submit_init_bio(struct ext4_io_submit *io,
static void io_submit_add_bh(struct ext4_io_submit *io,
struct inode *inode,
- struct page *page,
+ struct page *pagecache_page,
+ struct page *bounce_page,
struct buffer_head *bh)
{
int ret;
@@ -427,10 +428,11 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
io_submit_init_bio(io, bh);
io->io_bio->bi_write_hint = inode->i_write_hint;
}
- ret = bio_add_page(io->io_bio, page, bh->b_size, bh_offset(bh));
+ ret = bio_add_page(io->io_bio, bounce_page ?: pagecache_page,
+ bh->b_size, bh_offset(bh));
if (ret != bh->b_size)
goto submit_and_retry;
- wbc_account_cgroup_owner(io->io_wbc, page, bh->b_size);
+ wbc_account_cgroup_owner(io->io_wbc, pagecache_page, bh->b_size);
io->io_next_block++;
}
@@ -548,8 +550,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
do {
if (!buffer_async_write(bh))
continue;
- io_submit_add_bh(io, inode,
- bounce_page ? bounce_page : page, bh);
+ io_submit_add_bh(io, inode, page, bounce_page, bh);
nr_submitted++;
clear_buffer_dirty(bh);
} while ((bh = bh->b_this_page) != head);
diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index 38c998d..400886f 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -5382,8 +5382,11 @@ static int ext4_load_journal(struct super_block *sb,
if (!really_read_only && journal_devnum &&
journal_devnum != le32_to_cpu(es->s_journal_dev)) {
es->s_journal_dev = cpu_to_le32(journal_devnum);
-
- /* Make sure we flush the recovery flag to disk. */
+ ext4_commit_super(sb);
+ }
+ if (!really_read_only && journal_inum &&
+ journal_inum != le32_to_cpu(es->s_journal_inum)) {
+ es->s_journal_inum = cpu_to_le32(journal_inum);
ext4_commit_super(sb);
}
diff --git a/fs/ext4/sysfs.c b/fs/ext4/sysfs.c
index 2314f74..aa07b78 100644
--- a/fs/ext4/sysfs.c
+++ b/fs/ext4/sysfs.c
@@ -489,6 +489,11 @@ static void ext4_sb_release(struct kobject *kobj)
complete(&sbi->s_kobj_unregister);
}
+static void ext4_feat_release(struct kobject *kobj)
+{
+ kfree(kobj);
+}
+
static const struct sysfs_ops ext4_attr_ops = {
.show = ext4_attr_show,
.store = ext4_attr_store,
@@ -503,7 +508,7 @@ static struct kobj_type ext4_sb_ktype = {
static struct kobj_type ext4_feat_ktype = {
.default_groups = ext4_feat_groups,
.sysfs_ops = &ext4_attr_ops,
- .release = (void (*)(struct kobject *))kfree,
+ .release = ext4_feat_release,
};
void ext4_notify_error_sysfs(struct ext4_sb_info *sbi)
diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
index b92da41..7ef49e4 100644
--- a/fs/ext4/xattr.c
+++ b/fs/ext4/xattr.c
@@ -386,6 +386,17 @@ static int ext4_xattr_inode_iget(struct inode *parent, unsigned long ea_ino,
struct inode *inode;
int err;
+ /*
+ * We have to check for this corruption early as otherwise
+ * iget_locked() could wait indefinitely for the state of our
+ * parent inode.
+ */
+ if (parent->i_ino == ea_ino) {
+ ext4_error(parent->i_sb,
+ "Parent and EA inode have the same ino %lu", ea_ino);
+ return -EFSCORRUPTED;
+ }
+
inode = ext4_iget(parent->i_sb, ea_ino, EXT4_IGET_NORMAL);
if (IS_ERR(inode)) {
err = PTR_ERR(inode);
@@ -1422,6 +1433,13 @@ static struct inode *ext4_xattr_inode_create(handle_t *handle,
uid_t owner[2] = { i_uid_read(inode), i_gid_read(inode) };
int err;
+ if (inode->i_sb->s_root == NULL) {
+ ext4_warning(inode->i_sb,
+ "refuse to create EA inode when umounting");
+ WARN_ON(1);
+ return ERR_PTR(-EINVAL);
+ }
+
/*
* Let the next inode be the goal, so we try and allocate the EA inode
* in the same group, or nearby one.
@@ -2549,9 +2567,8 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
is = kzalloc(sizeof(struct ext4_xattr_ibody_find), GFP_NOFS);
bs = kzalloc(sizeof(struct ext4_xattr_block_find), GFP_NOFS);
- buffer = kvmalloc(value_size, GFP_NOFS);
b_entry_name = kmalloc(entry->e_name_len + 1, GFP_NOFS);
- if (!is || !bs || !buffer || !b_entry_name) {
+ if (!is || !bs || !b_entry_name) {
error = -ENOMEM;
goto out;
}
@@ -2563,12 +2580,18 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
/* Save the entry name and the entry value */
if (entry->e_value_inum) {
+ buffer = kvmalloc(value_size, GFP_NOFS);
+ if (!buffer) {
+ error = -ENOMEM;
+ goto out;
+ }
+
error = ext4_xattr_inode_get(inode, entry, buffer, value_size);
if (error)
goto out;
} else {
size_t value_offs = le16_to_cpu(entry->e_value_offs);
- memcpy(buffer, (void *)IFIRST(header) + value_offs, value_size);
+ buffer = (void *)IFIRST(header) + value_offs;
}
memcpy(b_entry_name, entry->e_name, entry->e_name_len);
@@ -2583,25 +2606,26 @@ static int ext4_xattr_move_to_block(handle_t *handle, struct inode *inode,
if (error)
goto out;
- /* Remove the chosen entry from the inode */
- error = ext4_xattr_ibody_set(handle, inode, &i, is);
- if (error)
- goto out;
-
i.value = buffer;
i.value_len = value_size;
error = ext4_xattr_block_find(inode, &i, bs);
if (error)
goto out;
- /* Add entry which was removed from the inode into the block */
+ /* Move ea entry from the inode into the block */
error = ext4_xattr_block_set(handle, inode, &i, bs);
if (error)
goto out;
- error = 0;
+
+ /* Remove the chosen entry from the inode */
+ i.value = NULL;
+ i.value_len = 0;
+ error = ext4_xattr_ibody_set(handle, inode, &i, is);
+
out:
kfree(b_entry_name);
- kvfree(buffer);
+ if (entry->e_value_inum && buffer)
+ kvfree(buffer);
if (is)
brelse(is->iloc.bh);
if (bs)
@@ -2776,6 +2800,9 @@ int ext4_expand_extra_isize_ea(struct inode *inode, int new_extra_isize,
(void *)header, total_ino);
EXT4_I(inode)->i_extra_isize = new_extra_isize;
+ if (ext4_has_inline_data(inode))
+ error = ext4_find_inline_data_nolock(inode);
+
cleanup:
if (error && (mnt_count != le16_to_cpu(sbi->s_es->s_mnt_count))) {
ext4_warning(inode->i_sb, "Unable to expand inode %lu. Delete some EAs or run e2fsck.",
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 02840da..c68f1f8 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -672,7 +672,7 @@ static int recover_orphan_inode(struct f2fs_sb_info *sbi, nid_t ino)
/* truncate all the data during iput */
iput(inode);
- err = f2fs_get_node_info(sbi, ino, &ni);
+ err = f2fs_get_node_info(sbi, ino, &ni, false);
if (err)
goto err_out;
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 6adf047..4fa62f98 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1275,7 +1275,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
psize = (loff_t)(cc->rpages[last_index]->index + 1) << PAGE_SHIFT;
- err = f2fs_get_node_info(fio.sbi, dn.nid, &ni);
+ err = f2fs_get_node_info(fio.sbi, dn.nid, &ni, false);
if (err)
goto out_put_dnode;
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index cfa6e13..524d4b4 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -677,7 +677,7 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
}
if (fio->io_wbc && !is_read_io(fio->op))
- wbc_account_cgroup_owner(fio->io_wbc, page, PAGE_SIZE);
+ wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
__attach_io_flag(fio);
bio_set_op_attrs(bio, fio->op, fio->op_flags);
@@ -887,7 +887,7 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
}
if (fio->io_wbc)
- wbc_account_cgroup_owner(fio->io_wbc, page, PAGE_SIZE);
+ wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
inc_page_count(fio->sbi, WB_DATA_TYPE(page));
@@ -961,7 +961,7 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
}
if (fio->io_wbc)
- wbc_account_cgroup_owner(fio->io_wbc, bio_page, PAGE_SIZE);
+ wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
io->last_block_in_bio = fio->new_blkaddr;
@@ -1354,7 +1354,7 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
if (unlikely(is_inode_flag_set(dn->inode, FI_NO_ALLOC)))
return -EPERM;
- err = f2fs_get_node_info(sbi, dn->nid, &ni);
+ err = f2fs_get_node_info(sbi, dn->nid, &ni, false);
if (err)
return err;
@@ -1796,7 +1796,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
if (!page)
return -ENOMEM;
- err = f2fs_get_node_info(sbi, inode->i_ino, &ni);
+ err = f2fs_get_node_info(sbi, inode->i_ino, &ni, false);
if (err) {
f2fs_put_page(page, 1);
return err;
@@ -1828,7 +1828,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
if (!page)
return -ENOMEM;
- err = f2fs_get_node_info(sbi, xnid, &ni);
+ err = f2fs_get_node_info(sbi, xnid, &ni, false);
if (err) {
f2fs_put_page(page, 1);
return err;
@@ -2688,7 +2688,7 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
fio->need_lock = LOCK_REQ;
}
- err = f2fs_get_node_info(fio->sbi, dn.nid, &ni);
+ err = f2fs_get_node_info(fio->sbi, dn.nid, &ni, false);
if (err)
goto out_writepage;
diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
index 761fd42..6a9ab5c 100644
--- a/fs/f2fs/extent_cache.c
+++ b/fs/f2fs/extent_cache.c
@@ -415,7 +415,8 @@ static bool f2fs_lookup_extent_tree(struct inode *inode, pgoff_t pgofs,
struct extent_node *en;
bool ret = false;
- f2fs_bug_on(sbi, !et);
+ if (!et)
+ return false;
trace_f2fs_lookup_extent_tree_start(inode, pgofs);
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index a144471..80e4f9a 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -3416,7 +3416,7 @@ int f2fs_need_dentry_mark(struct f2fs_sb_info *sbi, nid_t nid);
bool f2fs_is_checkpointed_node(struct f2fs_sb_info *sbi, nid_t nid);
bool f2fs_need_inode_block_update(struct f2fs_sb_info *sbi, nid_t ino);
int f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
- struct node_info *ni);
+ struct node_info *ni, bool checkpoint_context);
pgoff_t f2fs_get_next_page_offset(struct dnode_of_data *dn, pgoff_t pgofs);
int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode);
int f2fs_truncate_inode_blocks(struct inode *inode, pgoff_t from);
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index 758048a..3be34ea 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -1232,7 +1232,7 @@ static int __clone_blkaddrs(struct inode *src_inode, struct inode *dst_inode,
if (ret)
return ret;
- ret = f2fs_get_node_info(sbi, dn.nid, &ni);
+ ret = f2fs_get_node_info(sbi, dn.nid, &ni, false);
if (ret) {
f2fs_put_dnode(&dn);
return ret;
@@ -3928,7 +3928,7 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
goto out;
}
- if (inode->i_size != 0) {
+ if (F2FS_HAS_BLOCKS(inode)) {
ret = -EFBIG;
goto out;
}
diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
index 280a938..615b109 100644
--- a/fs/f2fs/gc.c
+++ b/fs/f2fs/gc.c
@@ -944,7 +944,7 @@ static int gc_node_segment(struct f2fs_sb_info *sbi,
continue;
}
- if (f2fs_get_node_info(sbi, nid, &ni)) {
+ if (f2fs_get_node_info(sbi, nid, &ni, false)) {
f2fs_put_page(node_page, 1);
continue;
}
@@ -1002,7 +1002,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
{
struct page *node_page;
nid_t nid;
- unsigned int ofs_in_node, max_addrs;
+ unsigned int ofs_in_node, max_addrs, base;
block_t source_blkaddr;
nid = le32_to_cpu(sum->nid);
@@ -1012,7 +1012,7 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
if (IS_ERR(node_page))
return false;
- if (f2fs_get_node_info(sbi, nid, dni)) {
+ if (f2fs_get_node_info(sbi, nid, dni, false)) {
f2fs_put_page(node_page, 1);
return false;
}
@@ -1028,11 +1028,17 @@ static bool is_alive(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
return false;
}
- max_addrs = IS_INODE(node_page) ? DEF_ADDRS_PER_INODE :
- DEF_ADDRS_PER_BLOCK;
- if (ofs_in_node >= max_addrs) {
- f2fs_err(sbi, "Inconsistent ofs_in_node:%u in summary, ino:%u, nid:%u, max:%u",
- ofs_in_node, dni->ino, dni->nid, max_addrs);
+ if (IS_INODE(node_page)) {
+ base = offset_in_addr(F2FS_INODE(node_page));
+ max_addrs = DEF_ADDRS_PER_INODE;
+ } else {
+ base = 0;
+ max_addrs = DEF_ADDRS_PER_BLOCK;
+ }
+
+ if (base + ofs_in_node >= max_addrs) {
+ f2fs_err(sbi, "Inconsistent blkaddr offset: base:%u, ofs_in_node:%u, max:%u, ino:%u, nid:%u",
+ base, ofs_in_node, max_addrs, dni->ino, dni->nid);
f2fs_put_page(node_page, 1);
return false;
}
@@ -1217,7 +1223,7 @@ static int move_data_block(struct inode *inode, block_t bidx,
f2fs_wait_on_block_writeback(inode, dn.data_blkaddr);
- err = f2fs_get_node_info(fio.sbi, dn.nid, &ni);
+ err = f2fs_get_node_info(fio.sbi, dn.nid, &ni, false);
if (err)
goto put_out;
diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
index d0e3fc9..e4fc169 100644
--- a/fs/f2fs/inline.c
+++ b/fs/f2fs/inline.c
@@ -64,7 +64,6 @@ bool f2fs_may_inline_dentry(struct inode *inode)
void f2fs_do_read_inline_data(struct page *page, struct page *ipage)
{
struct inode *inode = page->mapping->host;
- void *src_addr, *dst_addr;
if (PageUptodate(page))
return;
@@ -74,11 +73,8 @@ void f2fs_do_read_inline_data(struct page *page, struct page *ipage)
zero_user_segment(page, MAX_INLINE_DATA(inode), PAGE_SIZE);
/* Copy the whole inline data block */
- src_addr = inline_data_addr(inode, ipage);
- dst_addr = kmap_atomic(page);
- memcpy(dst_addr, src_addr, MAX_INLINE_DATA(inode));
- flush_dcache_page(page);
- kunmap_atomic(dst_addr);
+ memcpy_to_page(page, 0, inline_data_addr(inode, ipage),
+ MAX_INLINE_DATA(inode));
if (!PageUptodate(page))
SetPageUptodate(page);
}
@@ -150,7 +146,7 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page)
if (err)
return err;
- err = f2fs_get_node_info(fio.sbi, dn->nid, &ni);
+ err = f2fs_get_node_info(fio.sbi, dn->nid, &ni, false);
if (err) {
f2fs_truncate_data_blocks_range(dn, 1);
f2fs_put_dnode(dn);
@@ -246,7 +242,6 @@ int f2fs_convert_inline_inode(struct inode *inode)
int f2fs_write_inline_data(struct inode *inode, struct page *page)
{
- void *src_addr, *dst_addr;
struct dnode_of_data dn;
int err;
@@ -263,10 +258,8 @@ int f2fs_write_inline_data(struct inode *inode, struct page *page)
f2fs_bug_on(F2FS_I_SB(inode), page->index);
f2fs_wait_on_page_writeback(dn.inode_page, NODE, true, true);
- src_addr = kmap_atomic(page);
- dst_addr = inline_data_addr(inode, dn.inode_page);
- memcpy(dst_addr, src_addr, MAX_INLINE_DATA(inode));
- kunmap_atomic(src_addr);
+ memcpy_from_page(inline_data_addr(inode, dn.inode_page),
+ page, 0, MAX_INLINE_DATA(inode));
set_page_dirty(dn.inode_page);
f2fs_clear_page_cache_dirty_tag(page);
@@ -427,18 +420,17 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage,
dentry_blk = page_address(page);
+ /*
+ * Start by zeroing the full block, to ensure that all unused space is
+ * zeroed and no uninitialized memory is leaked to disk.
+ */
+ memset(dentry_blk, 0, F2FS_BLKSIZE);
+
make_dentry_ptr_inline(dir, &src, inline_dentry);
make_dentry_ptr_block(dir, &dst, dentry_blk);
/* copy data from inline dentry block to new dentry block */
memcpy(dst.bitmap, src.bitmap, src.nr_bitmap);
- memset(dst.bitmap + src.nr_bitmap, 0, dst.nr_bitmap - src.nr_bitmap);
- /*
- * we do not need to zero out remainder part of dentry and filename
- * field, since we have used bitmap for marking the usage status of
- * them, besides, we can also ignore copying/zeroing reserved space
- * of dentry block, because them haven't been used so far.
- */
memcpy(dst.dentry, src.dentry, SIZE_OF_DIR_ENTRY * src.max);
memcpy(dst.filename, src.filename, src.max * F2FS_SLOT_LEN);
@@ -805,7 +797,7 @@ int f2fs_inline_data_fiemap(struct inode *inode,
ilen = start + len;
ilen -= start;
- err = f2fs_get_node_info(F2FS_I_SB(inode), inode->i_ino, &ni);
+ err = f2fs_get_node_info(F2FS_I_SB(inode), inode->i_ino, &ni, false);
if (err)
goto out;
diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
index bd8960f..94e2113 100644
--- a/fs/f2fs/inode.c
+++ b/fs/f2fs/inode.c
@@ -681,17 +681,19 @@ void f2fs_update_inode_page(struct inode *inode)
{
struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
struct page *node_page;
+ int count = 0;
retry:
node_page = f2fs_get_node_page(sbi, inode->i_ino);
if (IS_ERR(node_page)) {
int err = PTR_ERR(node_page);
- if (err == -ENOMEM) {
- cond_resched();
+ /* The node block was truncated. */
+ if (err == -ENOENT)
+ return;
+
+ if (err == -ENOMEM || ++count <= DEFAULT_RETRY_IO_COUNT)
goto retry;
- } else if (err != -ENOENT) {
- f2fs_stop_checkpoint(sbi, false);
- }
+ f2fs_stop_checkpoint(sbi, false);
return;
}
f2fs_update_inode(inode, node_page);
@@ -888,7 +890,7 @@ void f2fs_handle_failed_inode(struct inode *inode)
* so we can prevent losing this orphan when encoutering checkpoint
* and following suddenly power-off.
*/
- err = f2fs_get_node_info(sbi, inode->i_ino, &ni);
+ err = f2fs_get_node_info(sbi, inode->i_ino, &ni, false);
if (err) {
set_sbi_flag(sbi, SBI_NEED_FSCK);
set_inode_flag(inode, FI_FREE_NID);
diff --git a/fs/f2fs/iostat.c b/fs/f2fs/iostat.c
index cdcf54a..9e0160a 100644
--- a/fs/f2fs/iostat.c
+++ b/fs/f2fs/iostat.c
@@ -194,8 +194,12 @@ static inline void __update_iostat_latency(struct bio_iostat_ctx *iostat_ctx,
return;
ts_diff = jiffies - iostat_ctx->submit_ts;
- if (iotype >= META_FLUSH)
+ if (iotype == META_FLUSH) {
iotype = META;
+ } else if (iotype >= NR_PAGE_TYPE) {
+ f2fs_warn(sbi, "%s: %d over NR_PAGE_TYPE", __func__, iotype);
+ return;
+ }
if (rw == 0) {
idx = READ_IO;
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index f810c6b..89a7f60 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -430,6 +430,10 @@ static void cache_nat_entry(struct f2fs_sb_info *sbi, nid_t nid,
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct nat_entry *new, *e;
+ /* Let's mitigate lock contention of nat_tree_lock during checkpoint */
+ if (rwsem_is_locked(&sbi->cp_global_sem))
+ return;
+
new = __alloc_nat_entry(sbi, nid, false);
if (!new)
return;
@@ -539,7 +543,7 @@ int f2fs_try_to_free_nats(struct f2fs_sb_info *sbi, int nr_shrink)
}
int f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
- struct node_info *ni)
+ struct node_info *ni, bool checkpoint_context)
{
struct f2fs_nm_info *nm_i = NM_I(sbi);
struct curseg_info *curseg = CURSEG_I(sbi, CURSEG_HOT_DATA);
@@ -572,9 +576,10 @@ int f2fs_get_node_info(struct f2fs_sb_info *sbi, nid_t nid,
* nat_tree_lock. Therefore, we should retry, if we failed to grab here
* while not bothering checkpoint.
*/
- if (!rwsem_is_locked(&sbi->cp_global_sem)) {
+ if (!rwsem_is_locked(&sbi->cp_global_sem) || checkpoint_context) {
down_read(&curseg->journal_rwsem);
- } else if (!down_read_trylock(&curseg->journal_rwsem)) {
+ } else if (rwsem_is_contended(&nm_i->nat_tree_lock) ||
+ !down_read_trylock(&curseg->journal_rwsem)) {
up_read(&nm_i->nat_tree_lock);
goto retry;
}
@@ -887,7 +892,7 @@ static int truncate_node(struct dnode_of_data *dn)
int err;
pgoff_t index;
- err = f2fs_get_node_info(sbi, dn->nid, &ni);
+ err = f2fs_get_node_info(sbi, dn->nid, &ni, false);
if (err)
return err;
@@ -1286,7 +1291,7 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
goto fail;
#ifdef CONFIG_F2FS_CHECK_FS
- err = f2fs_get_node_info(sbi, dn->nid, &new_ni);
+ err = f2fs_get_node_info(sbi, dn->nid, &new_ni, false);
if (err) {
dec_valid_node_count(sbi, dn->inode, !ofs);
goto fail;
@@ -1352,7 +1357,7 @@ static int read_node_page(struct page *page, int op_flags)
return LOCKED_PAGE;
}
- err = f2fs_get_node_info(sbi, page->index, &ni);
+ err = f2fs_get_node_info(sbi, page->index, &ni, false);
if (err)
return err;
@@ -1603,7 +1608,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
nid = nid_of_node(page);
f2fs_bug_on(sbi, page->index != nid);
- if (f2fs_get_node_info(sbi, nid, &ni))
+ if (f2fs_get_node_info(sbi, nid, &ni, !do_balance))
goto redirty_out;
if (wbc->for_reclaim) {
@@ -2708,7 +2713,7 @@ int f2fs_recover_xattr_data(struct inode *inode, struct page *page)
goto recover_xnid;
/* 1: invalidate the previous xattr nid */
- err = f2fs_get_node_info(sbi, prev_xnid, &ni);
+ err = f2fs_get_node_info(sbi, prev_xnid, &ni, false);
if (err)
return err;
@@ -2748,7 +2753,7 @@ int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page)
struct page *ipage;
int err;
- err = f2fs_get_node_info(sbi, ino, &old_ni);
+ err = f2fs_get_node_info(sbi, ino, &old_ni, false);
if (err)
return err;
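Note on the fs/f2fs/node.c hunks and the callers updated throughout fs/f2fs: f2fs_get_node_info() gains a checkpoint_context flag. Callers running as part of checkpoint take curseg->journal_rwsem unconditionally; everyone else backs off and retries when nat_tree_lock is contended or the journal rwsem cannot be trylocked, so checkpoint is never stalled behind them. The two-mode locking idea, as a runnable analogy using pthread rwlocks in place of the kernel rwsems:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t journal_lock = PTHREAD_RWLOCK_INITIALIZER;

/* checkpoint_context: block until the lock is available.
 * otherwise: take it only opportunistically and let the caller retry, so a
 * concurrent checkpoint never waits on us. */
static bool read_node_info(bool checkpoint_context)
{
	if (checkpoint_context) {
		pthread_rwlock_rdlock(&journal_lock);
	} else if (pthread_rwlock_tryrdlock(&journal_lock) != 0) {
		return false;	/* caller drops its other locks and retries */
	}
	/* ... look the entry up in the journal ... */
	pthread_rwlock_unlock(&journal_lock);
	return true;
}

int main(void)
{
	while (!read_node_info(false))
		;	/* retry loop, mirroring the "goto retry" in the patch */
	printf("got node info\n");
	return 0;
}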
diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
index ed21e34..ba7eeb3 100644
--- a/fs/f2fs/recovery.c
+++ b/fs/f2fs/recovery.c
@@ -604,7 +604,7 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
f2fs_wait_on_page_writeback(dn.node_page, NODE, true, true);
- err = f2fs_get_node_info(sbi, dn.nid, &ni);
+ err = f2fs_get_node_info(sbi, dn.nid, &ni, false);
if (err)
goto err;
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 194c081..58dd4de 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -253,7 +253,7 @@ static int __revoke_inmem_pages(struct inode *inode,
goto next;
}
- err = f2fs_get_node_info(sbi, dn.nid, &ni);
+ err = f2fs_get_node_info(sbi, dn.nid, &ni, false);
if (err) {
f2fs_put_dnode(&dn);
return err;
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index f4e8de1..ae72211 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -2442,7 +2442,6 @@ static ssize_t f2fs_quota_read(struct super_block *sb, int type, char *data,
size_t toread;
loff_t i_size = i_size_read(inode);
struct page *page;
- char *kaddr;
if (off > i_size)
return 0;
@@ -2476,9 +2475,7 @@ static ssize_t f2fs_quota_read(struct super_block *sb, int type, char *data,
return -EIO;
}
- kaddr = kmap_atomic(page);
- memcpy(data, kaddr + offset, tocopy);
- kunmap_atomic(kaddr);
+ memcpy_from_page(data, page, offset, tocopy);
f2fs_put_page(page, 1);
offset = 0;
@@ -2500,7 +2497,6 @@ static ssize_t f2fs_quota_write(struct super_block *sb, int type,
size_t towrite = len;
struct page *page;
void *fsdata = NULL;
- char *kaddr;
int err = 0;
int tocopy;
@@ -2520,10 +2516,7 @@ static ssize_t f2fs_quota_write(struct super_block *sb, int type,
break;
}
- kaddr = kmap_atomic(page);
- memcpy(kaddr + offset, data, tocopy);
- kunmap_atomic(kaddr);
- flush_dcache_page(page);
+ memcpy_to_page(page, offset, data, tocopy);
a_ops->write_end(NULL, mapping, off, tocopy, tocopy,
page, fsdata);
diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
index a28968b..d5a50e7 100644
--- a/fs/f2fs/verity.c
+++ b/fs/f2fs/verity.c
@@ -47,16 +47,13 @@ static int pagecache_read(struct inode *inode, void *buf, size_t count,
size_t n = min_t(size_t, count,
PAGE_SIZE - offset_in_page(pos));
struct page *page;
- void *addr;
page = read_mapping_page(inode->i_mapping, pos >> PAGE_SHIFT,
NULL);
if (IS_ERR(page))
return PTR_ERR(page);
- addr = kmap_atomic(page);
- memcpy(buf, addr + offset_in_page(pos), n);
- kunmap_atomic(addr);
+ memcpy_from_page(buf, page, offset_in_page(pos), n);
put_page(page);
@@ -81,8 +78,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count,
size_t n = min_t(size_t, count,
PAGE_SIZE - offset_in_page(pos));
struct page *page;
- void *fsdata;
- void *addr;
+ void *fsdata = NULL;
int res;
res = pagecache_write_begin(NULL, inode->i_mapping, pos, n, 0,
@@ -90,9 +86,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count,
if (res)
return res;
- addr = kmap_atomic(page);
- memcpy(addr + offset_in_page(pos), buf, n);
- kunmap_atomic(addr);
+ memcpy_to_page(page, offset_in_page(pos), buf, n);
res = pagecache_write_end(NULL, inode->i_mapping, pos, n, n,
page, fsdata);
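Note on the fs/f2fs/inline.c, super.c and verity.c hunks: the open-coded kmap_atomic()/memcpy()/kunmap_atomic() (plus flush_dcache_page()) sequences are replaced by the memcpy_to_page()/memcpy_from_page() helpers, which bundle mapping, copy, cache flush and unmap into one call. Roughly what those helpers do, as a simplified sketch (the real versions live in include/linux/highmem.h; names here are suffixed to avoid clashing with them):

#include <linux/highmem.h>
#include <linux/string.h>

static inline void memcpy_to_page_sketch(struct page *page, size_t offset,
					 const char *from, size_t len)
{
	char *to = kmap_local_page(page);

	memcpy(to + offset, from, len);
	flush_dcache_page(page);	/* keep D-cache coherent for user mappings */
	kunmap_local(to);
}

static inline void memcpy_from_page_sketch(char *to, struct page *page,
					    size_t offset, size_t len)
{
	char *from = kmap_local_page(page);

	memcpy(to, from + offset, len);
	kunmap_local(from);
}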
diff --git a/fs/file.c b/fs/file.c
index 214364e..ee1c350 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -646,6 +646,7 @@ static struct file *pick_file(struct files_struct *files, unsigned fd)
file = ERR_PTR(-EINVAL);
goto out_unlock;
}
+ fd = array_index_nospec(fd, fdt->max_fds);
file = fdt->fd[fd];
if (!file) {
file = ERR_PTR(-EBADF);
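Note on the fs/file.c hunk: pick_file() already bounds-checks fd against fdt->max_fds, but a mispredicted branch can still be speculated past, so array_index_nospec() clamps the index before it feeds the fdt->fd[fd] load — the usual Spectre-v1 hardening for fd-table lookups. A userspace rendering of the generic mask trick behind array_index_nospec() (include/linux/nospec.h), assuming the customary arithmetic right shift of negative values:

#include <stddef.h>
#include <stdio.h>

/* All-ones mask when index < size, all-zeroes otherwise, with no branch
 * that speculation could skip. */
static size_t index_mask(size_t index, size_t size)
{
	return (size_t)(~(long)(index | (size - index - 1)) >> (sizeof(long) * 8 - 1));
}

static size_t index_nospec(size_t index, size_t size)
{
	return index & index_mask(index, size);
}

int main(void)
{
	size_t size = 8;

	/* prints "3 0": in-range indexes pass through, out-of-range collapse to 0 */
	printf("%zu %zu\n", index_nospec(3, size), index_nospec(12, size));
	return 0;
}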
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index cc95a1c..2b19d28 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1295,7 +1295,7 @@ static ssize_t fuse_cache_write_iter(struct kiocb *iocb, struct iov_iter *from)
return err;
if (fc->handle_killpriv_v2 &&
- should_remove_suid(file_dentry(file))) {
+ setattr_should_drop_suidgid(&init_user_ns, file_inode(file))) {
goto writethrough;
}
diff --git a/fs/fuse/ioctl.c b/fs/fuse/ioctl.c
index e91d407..dbff26f 100644
--- a/fs/fuse/ioctl.c
+++ b/fs/fuse/ioctl.c
@@ -419,6 +419,12 @@ static struct fuse_file *fuse_priv_ioctl_prepare(struct inode *inode)
struct fuse_mount *fm = get_fuse_mount(inode);
bool isdir = S_ISDIR(inode->i_mode);
+ if (!fuse_allow_current_process(fm->fc))
+ return ERR_PTR(-EACCES);
+
+ if (fuse_is_bad(inode))
+ return ERR_PTR(-EIO);
+
if (!S_ISREG(inode->i_mode) && !isdir)
return ERR_PTR(-ENOTTY);
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 005e920..ee212c9 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -152,7 +152,6 @@ static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *w
{
struct inode *inode = page->mapping->host;
struct gfs2_inode *ip = GFS2_I(inode);
- struct gfs2_sbd *sdp = GFS2_SB(inode);
if (PageChecked(page)) {
ClearPageChecked(page);
@@ -160,7 +159,7 @@ static int __gfs2_jdata_writepage(struct page *page, struct writeback_control *w
create_empty_buffers(page, inode->i_sb->s_blocksize,
BIT(BH_Dirty)|BIT(BH_Uptodate));
}
- gfs2_page_add_databufs(ip, page, 0, sdp->sd_vfs->s_blocksize);
+ gfs2_page_add_databufs(ip, page, 0, PAGE_SIZE);
}
return gfs2_write_jdata_page(page, wbc);
}
@@ -452,8 +451,6 @@ static int stuffed_readpage(struct gfs2_inode *ip, struct page *page)
return error;
kaddr = kmap_atomic(page);
- if (dsize > gfs2_max_stuffed_size(ip))
- dsize = gfs2_max_stuffed_size(ip);
memcpy(kaddr, dibh->b_data + sizeof(struct gfs2_dinode), dsize);
memset(kaddr + dsize, 0, PAGE_SIZE - dsize);
kunmap_atomic(kaddr);
diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
index f785af2..0ec1eaf 100644
--- a/fs/gfs2/bmap.c
+++ b/fs/gfs2/bmap.c
@@ -61,9 +61,6 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh,
void *kaddr = kmap(page);
u64 dsize = i_size_read(inode);
- if (dsize > gfs2_max_stuffed_size(ip))
- dsize = gfs2_max_stuffed_size(ip);
-
memcpy(kaddr, dibh->b_data + sizeof(struct gfs2_dinode), dsize);
memset(kaddr + dsize, 0, PAGE_SIZE - dsize);
kunmap(page);
diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
index 79c621c..450032b 100644
--- a/fs/gfs2/glops.c
+++ b/fs/gfs2/glops.c
@@ -398,38 +398,39 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
struct timespec64 atime;
u16 height, depth;
umode_t mode = be32_to_cpu(str->di_mode);
- bool is_new = ip->i_inode.i_state & I_NEW;
+ struct inode *inode = &ip->i_inode;
+ bool is_new = inode->i_state & I_NEW;
if (unlikely(ip->i_no_addr != be64_to_cpu(str->di_num.no_addr)))
goto corrupt;
- if (unlikely(!is_new && inode_wrong_type(&ip->i_inode, mode)))
+ if (unlikely(!is_new && inode_wrong_type(inode, mode)))
goto corrupt;
ip->i_no_formal_ino = be64_to_cpu(str->di_num.no_formal_ino);
- ip->i_inode.i_mode = mode;
+ inode->i_mode = mode;
if (is_new) {
- ip->i_inode.i_rdev = 0;
+ inode->i_rdev = 0;
switch (mode & S_IFMT) {
case S_IFBLK:
case S_IFCHR:
- ip->i_inode.i_rdev = MKDEV(be32_to_cpu(str->di_major),
- be32_to_cpu(str->di_minor));
+ inode->i_rdev = MKDEV(be32_to_cpu(str->di_major),
+ be32_to_cpu(str->di_minor));
break;
}
}
- i_uid_write(&ip->i_inode, be32_to_cpu(str->di_uid));
- i_gid_write(&ip->i_inode, be32_to_cpu(str->di_gid));
- set_nlink(&ip->i_inode, be32_to_cpu(str->di_nlink));
- i_size_write(&ip->i_inode, be64_to_cpu(str->di_size));
- gfs2_set_inode_blocks(&ip->i_inode, be64_to_cpu(str->di_blocks));
+ i_uid_write(inode, be32_to_cpu(str->di_uid));
+ i_gid_write(inode, be32_to_cpu(str->di_gid));
+ set_nlink(inode, be32_to_cpu(str->di_nlink));
+ i_size_write(inode, be64_to_cpu(str->di_size));
+ gfs2_set_inode_blocks(inode, be64_to_cpu(str->di_blocks));
atime.tv_sec = be64_to_cpu(str->di_atime);
atime.tv_nsec = be32_to_cpu(str->di_atime_nsec);
- if (timespec64_compare(&ip->i_inode.i_atime, &atime) < 0)
- ip->i_inode.i_atime = atime;
- ip->i_inode.i_mtime.tv_sec = be64_to_cpu(str->di_mtime);
- ip->i_inode.i_mtime.tv_nsec = be32_to_cpu(str->di_mtime_nsec);
- ip->i_inode.i_ctime.tv_sec = be64_to_cpu(str->di_ctime);
- ip->i_inode.i_ctime.tv_nsec = be32_to_cpu(str->di_ctime_nsec);
+ if (timespec64_compare(&inode->i_atime, &atime) < 0)
+ inode->i_atime = atime;
+ inode->i_mtime.tv_sec = be64_to_cpu(str->di_mtime);
+ inode->i_mtime.tv_nsec = be32_to_cpu(str->di_mtime_nsec);
+ inode->i_ctime.tv_sec = be64_to_cpu(str->di_ctime);
+ inode->i_ctime.tv_nsec = be32_to_cpu(str->di_ctime_nsec);
ip->i_goal = be64_to_cpu(str->di_goal_meta);
ip->i_generation = be64_to_cpu(str->di_generation);
@@ -437,7 +438,7 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
ip->i_diskflags = be32_to_cpu(str->di_flags);
ip->i_eattr = be64_to_cpu(str->di_eattr);
/* i_diskflags and i_eattr must be set before gfs2_set_inode_flags() */
- gfs2_set_inode_flags(&ip->i_inode);
+ gfs2_set_inode_flags(inode);
height = be16_to_cpu(str->di_height);
if (unlikely(height > GFS2_MAX_META_HEIGHT))
goto corrupt;
@@ -449,8 +450,11 @@ static int gfs2_dinode_in(struct gfs2_inode *ip, const void *buf)
ip->i_depth = (u8)depth;
ip->i_entries = be32_to_cpu(str->di_entries);
- if (S_ISREG(ip->i_inode.i_mode))
- gfs2_set_aops(&ip->i_inode);
+ if (gfs2_is_stuffed(ip) && inode->i_size > gfs2_max_stuffed_size(ip))
+ goto corrupt;
+
+ if (S_ISREG(inode->i_mode))
+ gfs2_set_aops(inode);
return 0;
corrupt:
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index 0f2e053..775ac3f 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -138,8 +138,10 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
return -EIO;
error = gfs2_find_jhead(sdp->sd_jdesc, &head, false);
- if (error || gfs2_withdrawn(sdp))
+ if (error) {
+ gfs2_consist(sdp);
return error;
+ }
if (!(head.lh_flags & GFS2_LOG_HEAD_UNMOUNT)) {
gfs2_consist(sdp);
@@ -151,7 +153,9 @@ int gfs2_make_fs_rw(struct gfs2_sbd *sdp)
gfs2_log_pointers_init(sdp, head.lh_blkno);
error = gfs2_quota_init(sdp);
- if (!error && !gfs2_withdrawn(sdp))
+ if (!error && gfs2_withdrawn(sdp))
+ error = -EIO;
+ if (!error)
set_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags);
return error;
}
@@ -378,6 +382,7 @@ static int gfs2_lock_fs_check_clean(struct gfs2_sbd *sdp)
void gfs2_dinode_out(const struct gfs2_inode *ip, void *buf)
{
+ const struct inode *inode = &ip->i_inode;
struct gfs2_dinode *str = buf;
str->di_header.mh_magic = cpu_to_be32(GFS2_MAGIC);
@@ -385,15 +390,15 @@ void gfs2_dinode_out(const struct gfs2_inode *ip, void *buf)
str->di_header.mh_format = cpu_to_be32(GFS2_FORMAT_DI);
str->di_num.no_addr = cpu_to_be64(ip->i_no_addr);
str->di_num.no_formal_ino = cpu_to_be64(ip->i_no_formal_ino);
- str->di_mode = cpu_to_be32(ip->i_inode.i_mode);
- str->di_uid = cpu_to_be32(i_uid_read(&ip->i_inode));
- str->di_gid = cpu_to_be32(i_gid_read(&ip->i_inode));
- str->di_nlink = cpu_to_be32(ip->i_inode.i_nlink);
- str->di_size = cpu_to_be64(i_size_read(&ip->i_inode));
- str->di_blocks = cpu_to_be64(gfs2_get_inode_blocks(&ip->i_inode));
- str->di_atime = cpu_to_be64(ip->i_inode.i_atime.tv_sec);
- str->di_mtime = cpu_to_be64(ip->i_inode.i_mtime.tv_sec);
- str->di_ctime = cpu_to_be64(ip->i_inode.i_ctime.tv_sec);
+ str->di_mode = cpu_to_be32(inode->i_mode);
+ str->di_uid = cpu_to_be32(i_uid_read(inode));
+ str->di_gid = cpu_to_be32(i_gid_read(inode));
+ str->di_nlink = cpu_to_be32(inode->i_nlink);
+ str->di_size = cpu_to_be64(i_size_read(inode));
+ str->di_blocks = cpu_to_be64(gfs2_get_inode_blocks(inode));
+ str->di_atime = cpu_to_be64(inode->i_atime.tv_sec);
+ str->di_mtime = cpu_to_be64(inode->i_mtime.tv_sec);
+ str->di_ctime = cpu_to_be64(inode->i_ctime.tv_sec);
str->di_goal_meta = cpu_to_be64(ip->i_goal);
str->di_goal_data = cpu_to_be64(ip->i_goal);
@@ -401,16 +406,16 @@ void gfs2_dinode_out(const struct gfs2_inode *ip, void *buf)
str->di_flags = cpu_to_be32(ip->i_diskflags);
str->di_height = cpu_to_be16(ip->i_height);
- str->di_payload_format = cpu_to_be32(S_ISDIR(ip->i_inode.i_mode) &&
+ str->di_payload_format = cpu_to_be32(S_ISDIR(inode->i_mode) &&
!(ip->i_diskflags & GFS2_DIF_EXHASH) ?
GFS2_FORMAT_DE : 0);
str->di_depth = cpu_to_be16(ip->i_depth);
str->di_entries = cpu_to_be32(ip->i_entries);
str->di_eattr = cpu_to_be64(ip->i_eattr);
- str->di_atime_nsec = cpu_to_be32(ip->i_inode.i_atime.tv_nsec);
- str->di_mtime_nsec = cpu_to_be32(ip->i_inode.i_mtime.tv_nsec);
- str->di_ctime_nsec = cpu_to_be32(ip->i_inode.i_ctime.tv_nsec);
+ str->di_atime_nsec = cpu_to_be32(inode->i_atime.tv_nsec);
+ str->di_mtime_nsec = cpu_to_be32(inode->i_mtime.tv_nsec);
+ str->di_ctime_nsec = cpu_to_be32(inode->i_ctime.tv_nsec);
}
/**
diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
index c0a73a6..397e02a 100644
--- a/fs/hfs/bnode.c
+++ b/fs/hfs/bnode.c
@@ -281,6 +281,7 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
tree->node_hash[hash] = node;
tree->node_hash_cnt++;
} else {
+ hfs_bnode_get(node2);
spin_unlock(&tree->hash_lock);
kfree(node);
wait_event(node2->lock_wq, !test_bit(HFS_BNODE_NEW, &node2->flags));
diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
index b9e3db3..392edb6 100644
--- a/fs/hfsplus/super.c
+++ b/fs/hfsplus/super.c
@@ -295,11 +295,11 @@ static void hfsplus_put_super(struct super_block *sb)
hfsplus_sync_fs(sb, 1);
}
+ iput(sbi->alloc_file);
+ iput(sbi->hidden_dir);
hfs_btree_close(sbi->attr_tree);
hfs_btree_close(sbi->cat_tree);
hfs_btree_close(sbi->ext_tree);
- iput(sbi->alloc_file);
- iput(sbi->hidden_dir);
kfree(sbi->s_vhdr_buf);
kfree(sbi->s_backup_vhdr_buf);
unload_nls(sbi->nls);
diff --git a/fs/inode.c b/fs/inode.c
index 8279c70..079b64f 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -1865,40 +1865,12 @@ void touch_atime(const struct path *path)
EXPORT_SYMBOL(touch_atime);
/*
- * The logic we want is
- *
- * if suid or (sgid and xgrp)
- * remove privs
- */
-int should_remove_suid(struct dentry *dentry)
-{
- umode_t mode = d_inode(dentry)->i_mode;
- int kill = 0;
-
- /* suid always must be killed */
- if (unlikely(mode & S_ISUID))
- kill = ATTR_KILL_SUID;
-
- /*
- * sgid without any exec bits is just a mandatory locking mark; leave
- * it alone. If some exec bits are set, it's a real sgid; kill it.
- */
- if (unlikely((mode & S_ISGID) && (mode & S_IXGRP)))
- kill |= ATTR_KILL_SGID;
-
- if (unlikely(kill && !capable(CAP_FSETID) && S_ISREG(mode)))
- return kill;
-
- return 0;
-}
-EXPORT_SYMBOL(should_remove_suid);
-
-/*
* Return mask of changes for notify_change() that need to be done as a
* response to write or truncate. Return 0 if nothing has to be changed.
* Negative value on error (change should be denied).
*/
-int dentry_needs_remove_privs(struct dentry *dentry)
+int dentry_needs_remove_privs(struct user_namespace *mnt_userns,
+ struct dentry *dentry)
{
struct inode *inode = d_inode(dentry);
int mask = 0;
@@ -1907,7 +1879,7 @@ int dentry_needs_remove_privs(struct dentry *dentry)
if (IS_NOSEC(inode))
return 0;
- mask = should_remove_suid(dentry);
+ mask = setattr_should_drop_suidgid(mnt_userns, inode);
ret = security_inode_need_killpriv(dentry);
if (ret < 0)
return ret;
@@ -1949,7 +1921,7 @@ int file_remove_privs(struct file *file)
if (IS_NOSEC(inode) || !S_ISREG(inode->i_mode))
return 0;
- kill = dentry_needs_remove_privs(dentry);
+ kill = dentry_needs_remove_privs(file_mnt_user_ns(file), dentry);
if (kill < 0)
return kill;
if (kill)
@@ -2165,10 +2137,6 @@ void inode_init_owner(struct user_namespace *mnt_userns, struct inode *inode,
/* Directories are special, and always inherit S_ISGID */
if (S_ISDIR(mode))
mode |= S_ISGID;
- else if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP) &&
- !in_group_p(i_gid_into_mnt(mnt_userns, dir)) &&
- !capable_wrt_inode_uidgid(mnt_userns, dir, CAP_FSETID))
- mode &= ~S_ISGID;
} else
inode_fsgid_set(inode, mnt_userns);
inode->i_mode = mode;
@@ -2324,3 +2292,53 @@ struct timespec64 current_time(struct inode *inode)
return timestamp_truncate(now, inode);
}
EXPORT_SYMBOL(current_time);
+
+/**
+ * in_group_or_capable - check whether caller is CAP_FSETID privileged
+ * @mnt_userns: user namespace of the mount @inode was found from
+ * @inode: inode to check
+ * @gid: the new/current gid of @inode
+ *
+ * Check whether @gid is in the caller's group list or if the caller is
+ * privileged with CAP_FSETID over @inode. This can be used to determine
+ * whether the setgid bit can be kept or must be dropped.
+ *
+ * Return: true if the caller is sufficiently privileged, false if not.
+ */
+bool in_group_or_capable(struct user_namespace *mnt_userns,
+ const struct inode *inode, kgid_t gid)
+{
+ if (in_group_p(gid))
+ return true;
+ if (capable_wrt_inode_uidgid(mnt_userns, inode, CAP_FSETID))
+ return true;
+ return false;
+}
+
+/**
+ * mode_strip_sgid - handle the sgid bit for non-directories
+ * @mnt_userns: User namespace of the mount the inode was created from
+ * @dir: parent directory inode
+ * @mode: mode of the file to be created in @dir
+ *
+ * If the @mode of the new file has both the S_ISGID and S_IXGRP bits
+ * raised and @dir has the S_ISGID bit raised, ensure that the caller is
+ * either in the group of the parent directory or they have CAP_FSETID
+ * in their user namespace and are privileged over the parent directory.
+ * In all other cases, strip the S_ISGID bit from @mode.
+ *
+ * Return: the new mode to use for the file
+ */
+umode_t mode_strip_sgid(struct user_namespace *mnt_userns,
+ const struct inode *dir, umode_t mode)
+{
+ if ((mode & (S_ISGID | S_IXGRP)) != (S_ISGID | S_IXGRP))
+ return mode;
+ if (S_ISDIR(mode) || !dir || !(dir->i_mode & S_ISGID))
+ return mode;
+ if (in_group_or_capable(mnt_userns, dir,
+ i_gid_into_mnt(mnt_userns, dir)))
+ return mode;
+ return mode & ~S_ISGID;
+}
+EXPORT_SYMBOL(mode_strip_sgid);
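The effect of mode_strip_sgid() is easiest to see on concrete mode bits. Below is a minimal user-space sketch that mirrors the same decision rule; caller_privileged is a hypothetical stand-in for in_group_or_capable(), which in the kernel also involves the mount's user namespace and CAP_FSETID.

/* Stand-alone sketch of the S_ISGID stripping rule (illustration only). */
#include <stdio.h>
#include <sys/stat.h>

typedef unsigned short umode_t;

static umode_t sketch_strip_sgid(umode_t dir_mode, umode_t mode,
                                 int caller_privileged)
{
	if ((mode & (S_ISGID | S_IXGRP)) != (S_ISGID | S_IXGRP))
		return mode;
	if (S_ISDIR(mode) || !(dir_mode & S_ISGID))
		return mode;
	if (caller_privileged)
		return mode;
	return mode & ~S_ISGID;
}

int main(void)
{
	umode_t dir = S_IFDIR | S_ISGID | 0775;	/* setgid parent directory */
	umode_t req = S_IFREG | 02770;		/* setgid + group-exec file */

	/* unprivileged caller: sgid is stripped -> 0770 */
	printf("%04o\n", (unsigned)(sketch_strip_sgid(dir, req, 0) & 07777));
	/* caller in the directory's group (or CAP_FSETID): kept -> 2770 */
	printf("%04o\n", (unsigned)(sketch_strip_sgid(dir, req, 1) & 07777));
	return 0;
}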
diff --git a/fs/internal.h b/fs/internal.h
index 9075490..46df4ce 100644
--- a/fs/internal.h
+++ b/fs/internal.h
@@ -149,7 +149,9 @@ extern int vfs_open(const struct path *, struct file *);
*/
extern long prune_icache_sb(struct super_block *sb, struct shrink_control *sc);
extern void inode_add_lru(struct inode *inode);
-extern int dentry_needs_remove_privs(struct dentry *dentry);
+int dentry_needs_remove_privs(struct user_namespace *, struct dentry *dentry);
+bool in_group_or_capable(struct user_namespace *mnt_userns,
+ const struct inode *inode, kgid_t gid);
/*
* fs-writeback.c
@@ -229,3 +231,9 @@ struct xattr_ctx {
int setxattr_copy(const char __user *name, struct xattr_ctx *ctx);
int do_setxattr(struct user_namespace *mnt_userns, struct dentry *dentry,
struct xattr_ctx *ctx);
+
+/*
+ * fs/attr.c
+ */
+int setattr_should_drop_sgid(struct user_namespace *mnt_userns,
+ const struct inode *inode);
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index a57c0c8..5523206 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -1001,36 +1001,28 @@ do_get_write_access(handle_t *handle, struct journal_head *jh,
* ie. locked but not dirty) or tune2fs (which may actually have
* the buffer dirtied, ugh.) */
- if (buffer_dirty(bh)) {
+ if (buffer_dirty(bh) && jh->b_transaction) {
+ warn_dirty_buffer(bh);
/*
- * First question: is this buffer already part of the current
- * transaction or the existing committing transaction?
- */
- if (jh->b_transaction) {
- J_ASSERT_JH(jh,
- jh->b_transaction == transaction ||
- jh->b_transaction ==
- journal->j_committing_transaction);
- if (jh->b_next_transaction)
- J_ASSERT_JH(jh, jh->b_next_transaction ==
- transaction);
- warn_dirty_buffer(bh);
- }
- /*
- * In any case we need to clean the dirty flag and we must
- * do it under the buffer lock to be sure we don't race
- * with running write-out.
+ * We need to clean the dirty flag and we must do it under the
+ * buffer lock to be sure we don't race with running write-out.
*/
JBUFFER_TRACE(jh, "Journalling dirty buffer");
clear_buffer_dirty(bh);
+ /*
+ * The buffer is going to be added to the BJ_Reserved list now and
+ * nothing guarantees jbd2_journal_dirty_metadata() will ever
+ * be called for it. So we need to set the jbddirty bit here to
+ * make sure the buffer is dirtied and written out when the
+ * journaling machinery is done with it.
+ */
set_buffer_jbddirty(bh);
}
- unlock_buffer(bh);
-
error = -EROFS;
if (is_handle_aborted(handle)) {
spin_unlock(&jh->b_state_lock);
+ unlock_buffer(bh);
goto out;
}
error = 0;
@@ -1040,8 +1032,10 @@ do_get_write_access(handle_t *handle, struct journal_head *jh,
* b_next_transaction points to it
*/
if (jh->b_transaction == transaction ||
- jh->b_next_transaction == transaction)
+ jh->b_next_transaction == transaction) {
+ unlock_buffer(bh);
goto done;
+ }
/*
* this is the first time this transaction is touching this buffer,
@@ -1065,10 +1059,24 @@ do_get_write_access(handle_t *handle, struct journal_head *jh,
*/
smp_wmb();
spin_lock(&journal->j_list_lock);
+ if (test_clear_buffer_dirty(bh)) {
+ /*
+ * Clear the buffer dirty bit and assign jh->b_transaction
+ * with journal->j_list_lock held, to prevent the bh from
+ * being removed from the checkpoint list while the buffer
+ * is in an intermediate state (not dirty and
+ * jh->b_transaction is NULL).
+ */
+ JBUFFER_TRACE(jh, "Journalling dirty buffer");
+ set_buffer_jbddirty(bh);
+ }
__jbd2_journal_file_buffer(jh, transaction, BJ_Reserved);
spin_unlock(&journal->j_list_lock);
+ unlock_buffer(bh);
goto done;
}
+ unlock_buffer(bh);
+
/*
* If there is already a copy-out version of this buffer, then we don't
* need to make another one
diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
index bd7d58d..97a3c09 100644
--- a/fs/jffs2/file.c
+++ b/fs/jffs2/file.c
@@ -138,19 +138,18 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
struct jffs2_inode_info *f = JFFS2_INODE_INFO(inode);
struct jffs2_sb_info *c = JFFS2_SB_INFO(inode->i_sb);
pgoff_t index = pos >> PAGE_SHIFT;
- uint32_t pageofs = index << PAGE_SHIFT;
int ret = 0;
jffs2_dbg(1, "%s()\n", __func__);
- if (pageofs > inode->i_size) {
- /* Make new hole frag from old EOF to new page */
+ if (pos > inode->i_size) {
+ /* Make new hole frag from old EOF to new position */
struct jffs2_raw_inode ri;
struct jffs2_full_dnode *fn;
uint32_t alloc_len;
- jffs2_dbg(1, "Writing new hole frag 0x%x-0x%x between current EOF and new page\n",
- (unsigned int)inode->i_size, pageofs);
+ jffs2_dbg(1, "Writing new hole frag 0x%x-0x%x between current EOF and new position\n",
+ (unsigned int)inode->i_size, (uint32_t)pos);
ret = jffs2_reserve_space(c, sizeof(ri), &alloc_len,
ALLOC_NORMAL, JFFS2_SUMMARY_INODE_SIZE);
@@ -170,10 +169,10 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
ri.mode = cpu_to_jemode(inode->i_mode);
ri.uid = cpu_to_je16(i_uid_read(inode));
ri.gid = cpu_to_je16(i_gid_read(inode));
- ri.isize = cpu_to_je32(max((uint32_t)inode->i_size, pageofs));
+ ri.isize = cpu_to_je32((uint32_t)pos);
ri.atime = ri.ctime = ri.mtime = cpu_to_je32(JFFS2_NOW());
ri.offset = cpu_to_je32(inode->i_size);
- ri.dsize = cpu_to_je32(pageofs - inode->i_size);
+ ri.dsize = cpu_to_je32((uint32_t)pos - inode->i_size);
ri.csize = cpu_to_je32(0);
ri.compr = JFFS2_COMPR_ZERO;
ri.node_crc = cpu_to_je32(crc32(0, &ri, sizeof(ri)-8));
@@ -203,7 +202,7 @@ static int jffs2_write_begin(struct file *filp, struct address_space *mapping,
goto out_err;
}
jffs2_complete_reservation(c);
- inode->i_size = pageofs;
+ inode->i_size = pos;
mutex_unlock(&f->sem);
}
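With pos replacing pageofs above, the hole frag written by jffs2_write_begin() covers exactly [old EOF, pos) and i_size advances only to pos, not to the start of the containing page. A quick arithmetic sketch of the on-flash fields, using made-up sizes and assuming a 4 KiB page for the old behaviour:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t old_isize = 1000;	/* current i_size (made-up) */
	uint64_t pos = 5000;		/* write_begin position (made-up) */

	/* new behaviour: hole frag runs from the old EOF up to pos */
	printf("offset=%u dsize=%u isize=%u\n",
	       old_isize, (uint32_t)(pos - old_isize), (uint32_t)pos);

	/* old behaviour rounded down to the start of the page (4 KiB pages) */
	uint64_t pageofs = (pos >> 12) << 12;
	printf("offset=%u dsize=%u isize=%u\n",
	       old_isize, (uint32_t)(pageofs - old_isize), (uint32_t)pageofs);
	return 0;
}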
diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
index f401bc0..0034b0f 100644
--- a/fs/jfs/jfs_dmap.c
+++ b/fs/jfs/jfs_dmap.c
@@ -193,7 +193,8 @@ int dbMount(struct inode *ipbmap)
bmp->db_agwidth = le32_to_cpu(dbmp_le->dn_agwidth);
bmp->db_agstart = le32_to_cpu(dbmp_le->dn_agstart);
bmp->db_agl2size = le32_to_cpu(dbmp_le->dn_agl2size);
- if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG) {
+ if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG ||
+ bmp->db_agl2size < 0) {
err = -EINVAL;
goto err_release_metapage;
}
diff --git a/fs/ksmbd/connection.c b/fs/ksmbd/connection.c
index 6a1d1fb..21cda82 100644
--- a/fs/ksmbd/connection.c
+++ b/fs/ksmbd/connection.c
@@ -274,7 +274,7 @@ int ksmbd_conn_handler_loop(void *p)
{
struct ksmbd_conn *conn = (struct ksmbd_conn *)p;
struct ksmbd_transport *t = conn->transport;
- unsigned int pdu_size;
+ unsigned int pdu_size, max_allowed_pdu_size;
char hdr_buf[4] = {0,};
int size;
@@ -292,28 +292,35 @@ int ksmbd_conn_handler_loop(void *p)
kvfree(conn->request_buf);
conn->request_buf = NULL;
- size = t->ops->read(t, hdr_buf, sizeof(hdr_buf));
+ size = t->ops->read(t, hdr_buf, sizeof(hdr_buf), -1);
if (size != sizeof(hdr_buf))
break;
pdu_size = get_rfc1002_len(hdr_buf);
ksmbd_debug(CONN, "RFC1002 header %u bytes\n", pdu_size);
- /*
- * Check if pdu size is valid (min : smb header size,
- * max : 0x00FFFFFF).
- */
- if (pdu_size < __SMB2_HEADER_STRUCTURE_SIZE ||
- pdu_size > MAX_STREAM_PROT_LEN) {
- continue;
+ if (conn->status == KSMBD_SESS_GOOD)
+ max_allowed_pdu_size =
+ SMB3_MAX_MSGSIZE + conn->vals->max_write_size;
+ else
+ max_allowed_pdu_size = SMB3_MAX_MSGSIZE;
+
+ if (pdu_size > max_allowed_pdu_size) {
+ pr_err_ratelimited("PDU length(%u) exceeds maximum allowed pdu size(%u) on connection(%d)\n",
+ pdu_size, max_allowed_pdu_size,
+ conn->status);
+ break;
}
+ /*
+ * Check the maximum pdu size (0x00FFFFFF).
+ */
+ if (pdu_size > MAX_STREAM_PROT_LEN)
+ break;
+
/* 4 for rfc1002 length field */
size = pdu_size + 4;
- conn->request_buf = kvmalloc(size,
- GFP_KERNEL |
- __GFP_NOWARN |
- __GFP_NORETRY);
+ conn->request_buf = kvmalloc(size, GFP_KERNEL);
if (!conn->request_buf)
break;
@@ -325,7 +332,7 @@ int ksmbd_conn_handler_loop(void *p)
* We already read 4 bytes to find out PDU size, now
* read in PDU
*/
- size = t->ops->read(t, conn->request_buf + 4, pdu_size);
+ size = t->ops->read(t, conn->request_buf + 4, pdu_size, 2);
if (size < 0) {
pr_err("sock_read failed: %d\n", size);
break;
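The new cap works in two stages: while the connection is not yet in the KSMBD_SESS_GOOD state a PDU may not exceed SMB3_MAX_MSGSIZE (16 KiB); once it is, the limit grows by the connection's negotiated max_write_size, and 0x00FFFFFF remains the absolute RFC1002 ceiling. A minimal sketch of that policy, assuming an 8 MiB max_write_size:

#include <stdbool.h>
#include <stdio.h>

#define SMB3_MAX_MSGSIZE	(4 * 4096)
#define MAX_STREAM_PROT_LEN	0x00FFFFFF
#define MAX_WRITE_SIZE		(8 * 1024 * 1024)	/* assumed negotiated value */

static bool pdu_size_ok(unsigned int pdu_size, bool sess_good)
{
	unsigned int max_allowed = sess_good ?
		SMB3_MAX_MSGSIZE + MAX_WRITE_SIZE : SMB3_MAX_MSGSIZE;

	if (pdu_size > max_allowed)
		return false;	/* too large for the current connection state */
	if (pdu_size > MAX_STREAM_PROT_LEN)
		return false;	/* larger than the RFC1002 length field allows */
	return true;
}

int main(void)
{
	printf("%d\n", pdu_size_ok(64 * 1024, false));	/* 0: rejected pre-session */
	printf("%d\n", pdu_size_ok(64 * 1024, true));	/* 1: fine with a session */
	return 0;
}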
diff --git a/fs/ksmbd/connection.h b/fs/ksmbd/connection.h
index 8694aef..4b15c5e 100644
--- a/fs/ksmbd/connection.h
+++ b/fs/ksmbd/connection.h
@@ -122,7 +122,8 @@ struct ksmbd_conn_ops {
struct ksmbd_transport_ops {
int (*prepare)(struct ksmbd_transport *t);
void (*disconnect)(struct ksmbd_transport *t);
- int (*read)(struct ksmbd_transport *t, char *buf, unsigned int size);
+ int (*read)(struct ksmbd_transport *t, char *buf,
+ unsigned int size, int max_retries);
int (*writev)(struct ksmbd_transport *t, struct kvec *iovs, int niov,
int size, bool need_invalidate_rkey,
unsigned int remote_key);
diff --git a/fs/ksmbd/ksmbd_netlink.h b/fs/ksmbd/ksmbd_netlink.h
index 71bfb7d..fae859d 100644
--- a/fs/ksmbd/ksmbd_netlink.h
+++ b/fs/ksmbd/ksmbd_netlink.h
@@ -104,7 +104,9 @@ struct ksmbd_startup_request {
*/
__u32 sub_auth[3]; /* Subauth value for Security ID */
__u32 smb2_max_credits; /* MAX credits */
- __u32 reserved[128]; /* Reserved room */
+ __u32 smbd_max_io_size; /* smbd read write size */
+ __u32 max_connections; /* Maximum number of simultaneous connections */
+ __u32 reserved[126]; /* Reserved room */
__u32 ifc_list_sz; /* interfaces list size */
__s8 ____payload[];
};
diff --git a/fs/ksmbd/ndr.c b/fs/ksmbd/ndr.c
index 5052be9..28f44f0 100644
--- a/fs/ksmbd/ndr.c
+++ b/fs/ksmbd/ndr.c
@@ -242,7 +242,7 @@ int ndr_decode_dos_attr(struct ndr *n, struct xattr_dos_attrib *da)
return ret;
if (da->version != 3 && da->version != 4) {
- pr_err("v%d version is not supported\n", da->version);
+ ksmbd_debug(VFS, "v%d version is not supported\n", da->version);
return -EINVAL;
}
@@ -251,7 +251,7 @@ int ndr_decode_dos_attr(struct ndr *n, struct xattr_dos_attrib *da)
return ret;
if (da->version != version2) {
- pr_err("ndr version mismatched(version: %d, version2: %d)\n",
+ ksmbd_debug(VFS, "ndr version mismatched(version: %d, version2: %d)\n",
da->version, version2);
return -EINVAL;
}
@@ -453,7 +453,7 @@ int ndr_decode_v4_ntacl(struct ndr *n, struct xattr_ntacl *acl)
if (ret)
return ret;
if (acl->version != 4) {
- pr_err("v%d version is not supported\n", acl->version);
+ ksmbd_debug(VFS, "v%d version is not supported\n", acl->version);
return -EINVAL;
}
@@ -461,7 +461,7 @@ int ndr_decode_v4_ntacl(struct ndr *n, struct xattr_ntacl *acl)
if (ret)
return ret;
if (acl->version != version2) {
- pr_err("ndr version mismatched(version: %d, version2: %d)\n",
+ ksmbd_debug(VFS, "ndr version mismatched(version: %d, version2: %d)\n",
acl->version, version2);
return -EINVAL;
}
diff --git a/fs/ksmbd/server.h b/fs/ksmbd/server.h
index ac9d932..db72781 100644
--- a/fs/ksmbd/server.h
+++ b/fs/ksmbd/server.h
@@ -41,6 +41,7 @@ struct ksmbd_server_config {
unsigned int share_fake_fscaps;
struct smb_sid domain_sid;
unsigned int auth_mechs;
+ unsigned int max_connections;
char *conf[SERVER_CONF_WORK_GROUP + 1];
};
diff --git a/fs/ksmbd/smb2misc.c b/fs/ksmbd/smb2misc.c
index b47be71..c579d0e 100644
--- a/fs/ksmbd/smb2misc.c
+++ b/fs/ksmbd/smb2misc.c
@@ -150,15 +150,11 @@ static int smb2_get_data_area_len(unsigned int *off, unsigned int *len,
break;
case SMB2_LOCK:
{
- int lock_count;
+ unsigned short lock_count;
- /*
- * smb2_lock request size is 48 included single
- * smb2_lock_element structure size.
- */
- lock_count = le16_to_cpu(((struct smb2_lock_req *)hdr)->LockCount) - 1;
+ lock_count = le16_to_cpu(((struct smb2_lock_req *)hdr)->LockCount);
if (lock_count > 0) {
- *off = __SMB2_HEADER_STRUCTURE_SIZE + 48;
+ *off = offsetof(struct smb2_lock_req, locks);
*len = sizeof(struct smb2_lock_element) * lock_count;
}
break;
@@ -418,20 +414,19 @@ int ksmbd_smb2_check_message(struct ksmbd_work *work)
goto validate_credit;
/*
- * windows client also pad up to 8 bytes when compounding.
- * If pad is longer than eight bytes, log the server behavior
- * (once), since may indicate a problem but allow it and
- * continue since the frame is parseable.
+ * SMB2 NEGOTIATE request will be validated when message
+ * handling proceeds.
*/
- if (clc_len < len) {
- ksmbd_debug(SMB,
- "cli req padded more than expected. Length %d not %d for cmd:%d mid:%llu\n",
- len, clc_len, command,
- le64_to_cpu(hdr->MessageId));
+ if (command == SMB2_NEGOTIATE_HE)
goto validate_credit;
- }
- ksmbd_debug(SMB,
+ /*
+ * Allow a message that is padded to an 8-byte boundary.
+ */
+ if (clc_len < len && (len - clc_len) < 8)
+ goto validate_credit;
+
+ pr_err_ratelimited(
"cli req too short, len %d not %d. cmd:%d mid:%llu\n",
len, clc_len, command,
le64_to_cpu(hdr->MessageId));
diff --git a/fs/ksmbd/smb2pdu.c b/fs/ksmbd/smb2pdu.c
index 65c85ca..1cd6170 100644
--- a/fs/ksmbd/smb2pdu.c
+++ b/fs/ksmbd/smb2pdu.c
@@ -2979,8 +2979,11 @@ int smb2_open(struct ksmbd_work *work)
sizeof(struct smb_acl) +
sizeof(struct smb_ace) * ace_num * 2,
GFP_KERNEL);
- if (!pntsd)
+ if (!pntsd) {
+ posix_acl_release(fattr.cf_acls);
+ posix_acl_release(fattr.cf_dacls);
goto err_out;
+ }
rc = build_sec_desc(user_ns,
pntsd, NULL, 0,
@@ -4920,6 +4923,10 @@ static int smb2_get_info_filesystem(struct ksmbd_work *work,
info->Attributes |= cpu_to_le32(server_conf.share_fake_fscaps);
+ if (test_share_config_flag(work->tcon->share_conf,
+ KSMBD_SHARE_FLAG_STREAMS))
+ info->Attributes |= cpu_to_le32(FILE_NAMED_STREAMS);
+
info->MaxPathNameComponentLength = cpu_to_le32(stfs.f_namelen);
len = smbConvertToUTF16((__le16 *)info->FileSystemName,
"NTFS", PATH_MAX, conn->local_nls, 0);
@@ -7415,13 +7422,16 @@ static int fsctl_query_allocated_ranges(struct ksmbd_work *work, u64 id,
if (in_count == 0)
return -EINVAL;
+ start = le64_to_cpu(qar_req->file_offset);
+ length = le64_to_cpu(qar_req->length);
+
+ if (start < 0 || length < 0)
+ return -EINVAL;
+
fp = ksmbd_lookup_fd_fast(work, id);
if (!fp)
return -ENOENT;
- start = le64_to_cpu(qar_req->file_offset);
- length = le64_to_cpu(qar_req->length);
-
ret = ksmbd_vfs_fqar_lseek(fp, start, length,
qar_rsp, in_count, out_count);
if (ret && ret != -E2BIG)
@@ -7725,7 +7735,7 @@ int smb2_ioctl(struct ksmbd_work *work)
off = le64_to_cpu(zero_data->FileOffset);
bfz = le64_to_cpu(zero_data->BeyondFinalZero);
- if (off > bfz) {
+ if (off < 0 || bfz < 0 || off > bfz) {
ret = -EINVAL;
goto out;
}
@@ -8613,6 +8623,7 @@ int smb3_decrypt_req(struct ksmbd_work *work)
bool smb3_11_final_sess_setup_resp(struct ksmbd_work *work)
{
struct ksmbd_conn *conn = work->conn;
+ struct ksmbd_session *sess = work->sess;
struct smb2_hdr *rsp = work->response_buf;
if (conn->dialect < SMB30_PROT_ID)
@@ -8622,6 +8633,7 @@ bool smb3_11_final_sess_setup_resp(struct ksmbd_work *work)
rsp = ksmbd_resp_buf_next(work);
if (le16_to_cpu(rsp->Command) == SMB2_SESSION_SETUP_HE &&
+ sess->user && !user_guest(sess->user) &&
rsp->Status == STATUS_SUCCESS)
return true;
return false;
diff --git a/fs/ksmbd/smb2pdu.h b/fs/ksmbd/smb2pdu.h
index 4f85749..ddc3cea 100644
--- a/fs/ksmbd/smb2pdu.h
+++ b/fs/ksmbd/smb2pdu.h
@@ -113,8 +113,9 @@
#define SMB21_DEFAULT_IOSIZE (1024 * 1024)
#define SMB3_DEFAULT_IOSIZE (4 * 1024 * 1024)
#define SMB3_DEFAULT_TRANS_SIZE (1024 * 1024)
-#define SMB3_MIN_IOSIZE (64 * 1024)
-#define SMB3_MAX_IOSIZE (8 * 1024 * 1024)
+#define SMB3_MIN_IOSIZE (64 * 1024)
+#define SMB3_MAX_IOSIZE (8 * 1024 * 1024)
+#define SMB3_MAX_MSGSIZE (4 * 4096)
/*
* SMB2 Header Definition
diff --git a/fs/ksmbd/smb_common.c b/fs/ksmbd/smb_common.c
index 22f4609..f034b75c 100644
--- a/fs/ksmbd/smb_common.c
+++ b/fs/ksmbd/smb_common.c
@@ -434,7 +434,7 @@ int ksmbd_extract_shortname(struct ksmbd_conn *conn, const char *longname,
static int __smb2_negotiate(struct ksmbd_conn *conn)
{
- return (conn->dialect >= SMB21_PROT_ID &&
+ return (conn->dialect >= SMB20_PROT_ID &&
conn->dialect <= SMB311_PROT_ID);
}
@@ -442,9 +442,26 @@ static int smb_handle_negotiate(struct ksmbd_work *work)
{
struct smb_negotiate_rsp *neg_rsp = work->response_buf;
- ksmbd_debug(SMB, "Unsupported SMB protocol\n");
- neg_rsp->hdr.Status.CifsError = STATUS_INVALID_LOGON_TYPE;
- return -EINVAL;
+ ksmbd_debug(SMB, "Unsupported SMB1 protocol\n");
+
+ /*
+ * Remove the 4-byte direct TCP header, add a 2-byte bcc and
+ * a 2-byte DialectIndex.
+ */
+ *(__be32 *)work->response_buf =
+ cpu_to_be32(sizeof(struct smb_hdr) - 4 + 2 + 2);
+ neg_rsp->hdr.Status.CifsError = STATUS_SUCCESS;
+
+ neg_rsp->hdr.Command = SMB_COM_NEGOTIATE;
+ *(__le32 *)neg_rsp->hdr.Protocol = SMB1_PROTO_NUMBER;
+ neg_rsp->hdr.Flags = SMBFLG_RESPONSE;
+ neg_rsp->hdr.Flags2 = SMBFLG2_UNICODE | SMBFLG2_ERR_STATUS |
+ SMBFLG2_EXT_SEC | SMBFLG2_IS_LONG_NAME;
+
+ neg_rsp->hdr.WordCount = 1;
+ neg_rsp->DialectIndex = cpu_to_le16(work->conn->dialect);
+ neg_rsp->ByteCount = 0;
+ return 0;
}
int ksmbd_smb_negotiate_common(struct ksmbd_work *work, unsigned int command)
@@ -464,7 +481,7 @@ int ksmbd_smb_negotiate_common(struct ksmbd_work *work, unsigned int command)
}
}
- if (command == SMB2_NEGOTIATE_HE && __smb2_negotiate(conn)) {
+ if (command == SMB2_NEGOTIATE_HE) {
ret = smb2_handle_negotiate(work);
init_smb2_neg_rsp(work);
return ret;
diff --git a/fs/ksmbd/smb_common.h b/fs/ksmbd/smb_common.h
index 1eba8da..b9fe3fa 100644
--- a/fs/ksmbd/smb_common.h
+++ b/fs/ksmbd/smb_common.h
@@ -205,8 +205,15 @@
#define SMB1_PROTO_NUMBER cpu_to_le32(0x424d53ff)
#define SMB_COM_NEGOTIATE 0x72
-
#define SMB1_CLIENT_GUID_SIZE (16)
+
+#define SMBFLG_RESPONSE 0x80 /* this PDU is a response from server */
+
+#define SMBFLG2_IS_LONG_NAME cpu_to_le16(0x40)
+#define SMBFLG2_EXT_SEC cpu_to_le16(0x800)
+#define SMBFLG2_ERR_STATUS cpu_to_le16(0x4000)
+#define SMBFLG2_UNICODE cpu_to_le16(0x8000)
+
struct smb_hdr {
__be32 smb_buf_length;
__u8 Protocol[4];
@@ -246,28 +253,7 @@ struct smb_negotiate_req {
struct smb_negotiate_rsp {
struct smb_hdr hdr; /* wct = 17 */
__le16 DialectIndex; /* 0xFFFF = no dialect acceptable */
- __u8 SecurityMode;
- __le16 MaxMpxCount;
- __le16 MaxNumberVcs;
- __le32 MaxBufferSize;
- __le32 MaxRawSize;
- __le32 SessionKey;
- __le32 Capabilities; /* see below */
- __le32 SystemTimeLow;
- __le32 SystemTimeHigh;
- __le16 ServerTimeZone;
- __u8 EncryptionKeyLength;
__le16 ByteCount;
- union {
- unsigned char EncryptionKey[8]; /* cap extended security off */
- /* followed by Domain name - if extended security is off */
- /* followed by 16 bytes of server GUID */
- /* then security blob if cap_extended_security negotiated */
- struct {
- unsigned char GUID[SMB1_CLIENT_GUID_SIZE];
- unsigned char SecurityBlob[1];
- } __packed extended_response;
- } __packed u;
} __packed;
struct filesystem_attribute_info {
diff --git a/fs/ksmbd/transport_ipc.c b/fs/ksmbd/transport_ipc.c
index 3ad6881..a8313ee 100644
--- a/fs/ksmbd/transport_ipc.c
+++ b/fs/ksmbd/transport_ipc.c
@@ -26,6 +26,7 @@
#include "mgmt/ksmbd_ida.h"
#include "connection.h"
#include "transport_tcp.h"
+#include "transport_rdma.h"
#define IPC_WAIT_TIMEOUT (2 * HZ)
@@ -303,6 +304,11 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req)
init_smb2_max_trans_size(req->smb2_max_trans);
if (req->smb2_max_credits)
init_smb2_max_credits(req->smb2_max_credits);
+ if (req->smbd_max_io_size)
+ init_smbd_max_io_size(req->smbd_max_io_size);
+
+ if (req->max_connections)
+ server_conf.max_connections = req->max_connections;
ret = ksmbd_set_netbios_name(req->netbios_name);
ret |= ksmbd_set_server_string(req->server_string);
diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
index a2fd5a4..9ca29cd 100644
--- a/fs/ksmbd/transport_rdma.c
+++ b/fs/ksmbd/transport_rdma.c
@@ -75,7 +75,7 @@ static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
/* The maximum single-message size which can be received */
static int smb_direct_max_receive_size = 8192;
-static int smb_direct_max_read_write_size = 1024 * 1024;
+static int smb_direct_max_read_write_size = SMBD_DEFAULT_IOSIZE;
static int smb_direct_max_outstanding_rw_ops = 8;
@@ -201,6 +201,12 @@ struct smb_direct_rdma_rw_msg {
struct scatterlist sg_list[0];
};
+void init_smbd_max_io_size(unsigned int sz)
+{
+ sz = clamp_val(sz, SMBD_MIN_IOSIZE, SMBD_MAX_IOSIZE);
+ smb_direct_max_read_write_size = sz;
+}
+
static inline int get_buf_page_count(void *buf, int size)
{
return DIV_ROUND_UP((uintptr_t)buf + size, PAGE_SIZE) -
@@ -657,7 +663,7 @@ static int smb_direct_post_recv(struct smb_direct_transport *t,
}
static int smb_direct_read(struct ksmbd_transport *t, char *buf,
- unsigned int size)
+ unsigned int size, int unused)
{
struct smb_direct_recvmsg *recvmsg;
struct smb_direct_data_transfer *data_transfer;
diff --git a/fs/ksmbd/transport_rdma.h b/fs/ksmbd/transport_rdma.h
index 0fa8adc..04a7a37 100644
--- a/fs/ksmbd/transport_rdma.h
+++ b/fs/ksmbd/transport_rdma.h
@@ -9,6 +9,10 @@
#define SMB_DIRECT_PORT 5445
+#define SMBD_DEFAULT_IOSIZE (8 * 1024 * 1024)
+#define SMBD_MIN_IOSIZE (512 * 1024)
+#define SMBD_MAX_IOSIZE (16 * 1024 * 1024)
+
/* SMB DIRECT negotiation request packet [MS-SMBD] 2.2.1 */
struct smb_direct_negotiate_req {
__le16 min_version;
@@ -54,10 +58,12 @@ struct smb_direct_data_transfer {
int ksmbd_rdma_init(void);
int ksmbd_rdma_destroy(void);
bool ksmbd_rdma_capable_netdev(struct net_device *netdev);
+void init_smbd_max_io_size(unsigned int sz);
#else
static inline int ksmbd_rdma_init(void) { return 0; }
static inline int ksmbd_rdma_destroy(void) { return 0; }
static inline bool ksmbd_rdma_capable_netdev(struct net_device *netdev) { return false; }
+static inline void init_smbd_max_io_size(unsigned int sz) { }
#endif
#endif /* __KSMBD_TRANSPORT_RDMA_H__ */
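init_smbd_max_io_size() simply clamps the value handed down from user space into [SMBD_MIN_IOSIZE, SMBD_MAX_IOSIZE] before it becomes smb_direct_max_read_write_size. A stand-alone sketch equivalent to the clamp_val() call:

#include <stdio.h>

#define SMBD_MIN_IOSIZE	(512 * 1024)
#define SMBD_MAX_IOSIZE	(16 * 1024 * 1024)

static unsigned int clamp_io_size(unsigned int sz)
{
	if (sz < SMBD_MIN_IOSIZE)
		return SMBD_MIN_IOSIZE;
	if (sz > SMBD_MAX_IOSIZE)
		return SMBD_MAX_IOSIZE;
	return sz;
}

int main(void)
{
	printf("%u\n", clamp_io_size(4096));			/* raised to 524288 */
	printf("%u\n", clamp_io_size(8 * 1024 * 1024));		/* kept at 8388608 */
	printf("%u\n", clamp_io_size(64 * 1024 * 1024));	/* capped at 16777216 */
	return 0;
}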
diff --git a/fs/ksmbd/transport_tcp.c b/fs/ksmbd/transport_tcp.c
index 293af92..d1d7954 100644
--- a/fs/ksmbd/transport_tcp.c
+++ b/fs/ksmbd/transport_tcp.c
@@ -15,6 +15,8 @@
#define IFACE_STATE_DOWN BIT(0)
#define IFACE_STATE_CONFIGURED BIT(1)
+static atomic_t active_num_conn;
+
struct interface {
struct task_struct *ksmbd_kthread;
struct socket *ksmbd_socket;
@@ -185,8 +187,10 @@ static int ksmbd_tcp_new_connection(struct socket *client_sk)
struct tcp_transport *t;
t = alloc_transport(client_sk);
- if (!t)
+ if (!t) {
+ sock_release(client_sk);
return -ENOMEM;
+ }
csin = KSMBD_TCP_PEER_SOCKADDR(KSMBD_TRANS(t)->conn);
if (kernel_getpeername(client_sk, csin) < 0) {
@@ -239,6 +243,15 @@ static int ksmbd_kthread_fn(void *p)
continue;
}
+ if (server_conf.max_connections &&
+ atomic_inc_return(&active_num_conn) >= server_conf.max_connections) {
+ pr_info_ratelimited("Limit the maximum number of connections(%u)\n",
+ atomic_read(&active_num_conn));
+ atomic_dec(&active_num_conn);
+ sock_release(client_sk);
+ continue;
+ }
+
ksmbd_debug(CONN, "connect success: accepted new connection\n");
client_sk->sk->sk_rcvtimeo = KSMBD_TCP_RECV_TIMEOUT;
client_sk->sk->sk_sndtimeo = KSMBD_TCP_SEND_TIMEOUT;
@@ -278,16 +291,18 @@ static int ksmbd_tcp_run_kthread(struct interface *iface)
/**
* ksmbd_tcp_readv() - read data from socket in given iovec
- * @t: TCP transport instance
- * @iov_orig: base IO vector
- * @nr_segs: number of segments in base iov
- * @to_read: number of bytes to read from socket
+ * @t: TCP transport instance
+ * @iov_orig: base IO vector
+ * @nr_segs: number of segments in base iov
+ * @to_read: number of bytes to read from socket
+ * @max_retries: maximum retry count
*
* Return: on success return number of bytes read from socket,
* otherwise return error number
*/
static int ksmbd_tcp_readv(struct tcp_transport *t, struct kvec *iov_orig,
- unsigned int nr_segs, unsigned int to_read)
+ unsigned int nr_segs, unsigned int to_read,
+ int max_retries)
{
int length = 0;
int total_read;
@@ -295,7 +310,6 @@ static int ksmbd_tcp_readv(struct tcp_transport *t, struct kvec *iov_orig,
struct msghdr ksmbd_msg;
struct kvec *iov;
struct ksmbd_conn *conn = KSMBD_TRANS(t)->conn;
- int max_retry = 2;
iov = get_conn_iovec(t, nr_segs);
if (!iov)
@@ -322,14 +336,23 @@ static int ksmbd_tcp_readv(struct tcp_transport *t, struct kvec *iov_orig,
} else if (conn->status == KSMBD_SESS_NEED_RECONNECT) {
total_read = -EAGAIN;
break;
- } else if ((length == -ERESTARTSYS || length == -EAGAIN) &&
- max_retry) {
+ } else if (length == -ERESTARTSYS || length == -EAGAIN) {
+ /*
+ * If max_retries is negative, allow unlimited
+ * retries to keep the connection alive for inactive sessions.
+ */
+ if (max_retries == 0) {
+ total_read = length;
+ break;
+ } else if (max_retries > 0) {
+ max_retries--;
+ }
+
usleep_range(1000, 2000);
length = 0;
- max_retry--;
continue;
} else if (length <= 0) {
- total_read = -EAGAIN;
+ total_read = length;
break;
}
}
@@ -345,14 +368,15 @@ static int ksmbd_tcp_readv(struct tcp_transport *t, struct kvec *iov_orig,
* Return: on success return number of bytes read from socket,
* otherwise return error number
*/
-static int ksmbd_tcp_read(struct ksmbd_transport *t, char *buf, unsigned int to_read)
+static int ksmbd_tcp_read(struct ksmbd_transport *t, char *buf,
+ unsigned int to_read, int max_retries)
{
struct kvec iov;
iov.iov_base = buf;
iov.iov_len = to_read;
- return ksmbd_tcp_readv(TCP_TRANS(t), &iov, 1, to_read);
+ return ksmbd_tcp_readv(TCP_TRANS(t), &iov, 1, to_read, max_retries);
}
static int ksmbd_tcp_writev(struct ksmbd_transport *t, struct kvec *iov,
@@ -368,6 +392,8 @@ static int ksmbd_tcp_writev(struct ksmbd_transport *t, struct kvec *iov,
static void ksmbd_tcp_disconnect(struct ksmbd_transport *t)
{
free_transport(TCP_TRANS(t));
+ if (server_conf.max_connections)
+ atomic_dec(&active_num_conn);
}
static void tcp_destroy_socket(struct socket *ksmbd_socket)
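The new max_retries argument encodes three behaviours for transient read errors (-ERESTARTSYS/-EAGAIN): a negative value retries forever (used for the RFC1002 header read on idle connections), zero fails immediately, and a positive value allows that many retries (the PDU body read passes 2, matching the old hard-coded limit). A small stand-alone sketch of the convention, with a fake transient-error source standing in for the socket read:

#include <stdio.h>

/* pretend the first 'transient' reads fail with -EAGAIN (-11) */
static int fake_read(int *transient)
{
	if (*transient > 0) {
		(*transient)--;
		return -11;		/* -EAGAIN */
	}
	return 4096;			/* bytes read */
}

static int read_with_retries(int transient, int max_retries)
{
	for (;;) {
		int len = fake_read(&transient);

		if (len >= 0)
			return len;
		if (max_retries == 0)
			return len;	/* budget exhausted: return the error */
		if (max_retries > 0)
			max_retries--;
		/* max_retries < 0: retry without bound */
	}
}

int main(void)
{
	printf("%d\n", read_with_retries(5, -1));	/* 4096: unlimited retries */
	printf("%d\n", read_with_retries(5, 2));	/* -11: two retries, then give up */
	printf("%d\n", read_with_retries(1, 2));	/* 4096: one retry was enough */
	return 0;
}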
diff --git a/fs/lockd/clnt4xdr.c b/fs/lockd/clnt4xdr.c
index 7df6324..8161667 100644
--- a/fs/lockd/clnt4xdr.c
+++ b/fs/lockd/clnt4xdr.c
@@ -261,7 +261,6 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
u32 exclusive;
int error;
__be32 *p;
- s32 end;
memset(lock, 0, sizeof(*lock));
locks_init_lock(fl);
@@ -285,13 +284,7 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
fl->fl_type = exclusive != 0 ? F_WRLCK : F_RDLCK;
p = xdr_decode_hyper(p, &l_offset);
xdr_decode_hyper(p, &l_len);
- end = l_offset + l_len - 1;
-
- fl->fl_start = (loff_t)l_offset;
- if (l_len == 0 || end < 0)
- fl->fl_end = OFFSET_MAX;
- else
- fl->fl_end = (loff_t)end;
+ nlm4svc_set_file_lock_range(fl, l_offset, l_len);
error = 0;
out:
return error;
diff --git a/fs/lockd/xdr4.c b/fs/lockd/xdr4.c
index 72f7d19..b303ecd 100644
--- a/fs/lockd/xdr4.c
+++ b/fs/lockd/xdr4.c
@@ -33,6 +33,17 @@ loff_t_to_s64(loff_t offset)
return res;
}
+void nlm4svc_set_file_lock_range(struct file_lock *fl, u64 off, u64 len)
+{
+ s64 end = off + len - 1;
+
+ fl->fl_start = off;
+ if (len == 0 || end < 0)
+ fl->fl_end = OFFSET_MAX;
+ else
+ fl->fl_end = end;
+}
+
/*
* NLM file handles are defined by specification to be a variable-length
* XDR opaque no longer than 1024 bytes. However, this implementation
@@ -80,7 +91,7 @@ svcxdr_decode_lock(struct xdr_stream *xdr, struct nlm_lock *lock)
locks_init_lock(fl);
fl->fl_flags = FL_POSIX;
fl->fl_type = F_RDLCK;
-
+ nlm4svc_set_file_lock_range(fl, lock->lock_start, lock->lock_len);
return true;
}
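nlm4svc_set_file_lock_range() maps an NLM4 (offset, length) pair onto the VFS [fl_start, fl_end] range: end = off + len - 1, with a zero length or a negative (overflowed) end both meaning "to end of file". A stand-alone sketch of the mapping, using LLONG_MAX in place of OFFSET_MAX:

#include <limits.h>
#include <stdio.h>

#define SKETCH_OFFSET_MAX LLONG_MAX	/* stands in for OFFSET_MAX */

static void set_range(unsigned long long off, unsigned long long len,
		      long long *start, long long *end)
{
	long long e = (long long)(off + len - 1);

	*start = (long long)off;
	if (len == 0 || e < 0)
		*end = SKETCH_OFFSET_MAX;	/* lock runs to end of file */
	else
		*end = e;
}

int main(void)
{
	long long start, end;

	set_range(100, 50, &start, &end);
	printf("[%lld, %lld]\n", start, end);	/* [100, 149] */

	set_range(100, 0, &start, &end);	/* zero length: whole file */
	printf("[%lld, %lld]\n", start, end);

	set_range(0x7fffffffffffffffULL, 16, &start, &end);	/* end wraps negative */
	printf("[%lld, %lld]\n", start, end);
	return 0;
}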
diff --git a/fs/locks.c b/fs/locks.c
index 82a4487..881fd16 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -1901,9 +1901,10 @@ int generic_setlease(struct file *filp, long arg, struct file_lock **flp,
void **priv)
{
struct inode *inode = locks_inode(filp);
+ kuid_t uid = i_uid_into_mnt(file_mnt_user_ns(filp), inode);
int error;
- if ((!uid_eq(current_fsuid(), inode->i_uid)) && !capable(CAP_LEASE))
+ if ((!uid_eq(current_fsuid(), uid)) && !capable(CAP_LEASE))
return -EACCES;
if (!S_ISREG(inode->i_mode))
return -EINVAL;
diff --git a/fs/namei.c b/fs/namei.c
index 81b31d9..02e9960 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -3001,6 +3001,65 @@ void unlock_rename(struct dentry *p1, struct dentry *p2)
EXPORT_SYMBOL(unlock_rename);
/**
+ * mode_strip_umask - handle vfs umask stripping
+ * @dir: parent directory of the new inode
+ * @mode: mode of the new inode to be created in @dir
+ *
+ * Umask stripping depends on whether or not the filesystem supports POSIX
+ * ACLs. If the filesystem doesn't support it, umask stripping is done directly
+ * in here. If the filesystem does support POSIX ACLs, umask stripping is
+ * deferred until the filesystem calls posix_acl_create().
+ *
+ * Returns: mode
+ */
+static inline umode_t mode_strip_umask(const struct inode *dir, umode_t mode)
+{
+ if (!IS_POSIXACL(dir))
+ mode &= ~current_umask();
+ return mode;
+}
+
+/**
+ * vfs_prepare_mode - prepare the mode to be used for a new inode
+ * @mnt_userns: user namespace of the mount the inode was found from
+ * @dir: parent directory of the new inode
+ * @mode: mode of the new inode
+ * @mask_perms: allowed permission by the vfs
+ * @type: type of file to be created
+ *
+ * This helper consolidates and enforces vfs restrictions on the @mode of a new
+ * object to be created.
+ *
+ * Umask stripping depends on whether the filesystem supports POSIX ACLs (see
+ * the kernel documentation for mode_strip_umask()). Moving umask stripping
+ * after setgid stripping allows the same ordering for both non-POSIX ACL and
+ * POSIX ACL supporting filesystems.
+ *
+ * Note that it's currently valid for @type to be 0 if a directory is created.
+ * Filesystems raise that flag individually and we need to check whether each
+ * filesystem can deal with receiving S_IFDIR from the vfs before we enforce a
+ * non-zero type.
+ *
+ * Returns: mode to be passed to the filesystem
+ */
+static inline umode_t vfs_prepare_mode(struct user_namespace *mnt_userns,
+ const struct inode *dir, umode_t mode,
+ umode_t mask_perms, umode_t type)
+{
+ mode = mode_strip_sgid(mnt_userns, dir, mode);
+ mode = mode_strip_umask(dir, mode);
+
+ /*
+ * Apply the vfs mandated allowed permission mask and set the type of
+ * file to be created before we call into the filesystem.
+ */
+ mode &= (mask_perms & ~S_IFMT);
+ mode |= (type & S_IFMT);
+
+ return mode;
+}
+
+/**
* vfs_create - create new file
* @mnt_userns: user namespace of the mount the inode was found from
* @dir: inode of @dentry
@@ -3025,8 +3084,8 @@ int vfs_create(struct user_namespace *mnt_userns, struct inode *dir,
if (!dir->i_op->create)
return -EACCES; /* shouldn't it be ENOSYS? */
- mode &= S_IALLUGO;
- mode |= S_IFREG;
+
+ mode = vfs_prepare_mode(mnt_userns, dir, mode, S_IALLUGO, S_IFREG);
error = security_inode_create(dir, dentry, mode);
if (error)
return error;
@@ -3291,8 +3350,7 @@ static struct dentry *lookup_open(struct nameidata *nd, struct file *file,
if (open_flag & O_CREAT) {
if (open_flag & O_EXCL)
open_flag &= ~O_TRUNC;
- if (!IS_POSIXACL(dir->d_inode))
- mode &= ~current_umask();
+ mode = vfs_prepare_mode(mnt_userns, dir->d_inode, mode, mode, mode);
if (likely(got_write))
create_error = may_o_create(mnt_userns, &nd->path,
dentry, mode);
@@ -3525,8 +3583,7 @@ struct dentry *vfs_tmpfile(struct user_namespace *mnt_userns,
child = d_alloc(dentry, &slash_name);
if (unlikely(!child))
goto out_err;
- if (!IS_POSIXACL(dir))
- mode &= ~current_umask();
+ mode = vfs_prepare_mode(mnt_userns, dir, mode, mode, mode);
error = dir->i_op->tmpfile(mnt_userns, dir, child, mode);
if (error)
goto out_err;
@@ -3804,6 +3861,7 @@ int vfs_mknod(struct user_namespace *mnt_userns, struct inode *dir,
if (!dir->i_op->mknod)
return -EPERM;
+ mode = vfs_prepare_mode(mnt_userns, dir, mode, mode, mode);
error = devcgroup_inode_mknod(mode, dev);
if (error)
return error;
@@ -3854,9 +3912,8 @@ static int do_mknodat(int dfd, struct filename *name, umode_t mode,
if (IS_ERR(dentry))
goto out1;
- if (!IS_POSIXACL(path.dentry->d_inode))
- mode &= ~current_umask();
- error = security_path_mknod(&path, dentry, mode, dev);
+ error = security_path_mknod(&path, dentry,
+ mode_strip_umask(path.dentry->d_inode, mode), dev);
if (error)
goto out2;
@@ -3926,7 +3983,7 @@ int vfs_mkdir(struct user_namespace *mnt_userns, struct inode *dir,
if (!dir->i_op->mkdir)
return -EPERM;
- mode &= (S_IRWXUGO|S_ISVTX);
+ mode = vfs_prepare_mode(mnt_userns, dir, mode, S_IRWXUGO | S_ISVTX, 0);
error = security_inode_mkdir(dir, dentry, mode);
if (error)
return error;
@@ -3954,9 +4011,8 @@ int do_mkdirat(int dfd, struct filename *name, umode_t mode)
if (IS_ERR(dentry))
goto out_putname;
- if (!IS_POSIXACL(path.dentry->d_inode))
- mode &= ~current_umask();
- error = security_path_mkdir(&path, dentry, mode);
+ error = security_path_mkdir(&path, dentry,
+ mode_strip_umask(path.dentry->d_inode, mode));
if (!error) {
struct user_namespace *mnt_userns;
mnt_userns = mnt_user_ns(path.mnt);
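vfs_prepare_mode() applies its steps in a fixed order: mode_strip_sgid() first, then mode_strip_umask() (which only acts on non-POSIX-ACL filesystems), then the vfs permission mask and the forced file type. A worked sketch on plain mode bits; the privilege and ACL decisions are reduced to booleans here, which is an assumption for illustration only:

#include <stdio.h>
#include <sys/stat.h>

#define S_IRWXUGO (S_IRWXU | S_IRWXG | S_IRWXO)
#define S_IALLUGO (S_ISUID | S_ISGID | S_ISVTX | S_IRWXUGO)

typedef unsigned short umode_t;

static umode_t sketch_prepare_mode(umode_t mode, umode_t mask_perms,
				   umode_t type, umode_t umask_bits,
				   int strip_sgid, int posix_acl)
{
	if (strip_sgid)
		mode &= ~S_ISGID;		/* mode_strip_sgid() outcome */
	if (!posix_acl)
		mode &= ~umask_bits;		/* mode_strip_umask() */
	mode &= (mask_perms & ~S_IFMT);		/* vfs permission mask */
	mode |= (type & S_IFMT);		/* force the file type */
	return mode;
}

int main(void)
{
	/* mkdir-style call: mask S_IRWXUGO|S_ISVTX, umask 022, no ACLs */
	unsigned m = sketch_prepare_mode(0777 | S_ISVTX, S_IRWXUGO | S_ISVTX, 0,
					 022, 0, 0);
	printf("%04o\n", m);				/* 1755 */

	/* vfs_create-style call where the caller may not keep S_ISGID */
	m = sketch_prepare_mode(02770, S_IALLUGO, S_IFREG, 022, 1, 0) & 07777;
	printf("%04o\n", m);				/* 0750 */
	return 0;
}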
diff --git a/fs/namespace.c b/fs/namespace.c
index d946298..1a9df6a 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -3968,6 +3968,23 @@ static int can_idmap_mount(const struct mount_kattr *kattr, struct mount *mnt)
return 0;
}
+/**
+ * mnt_allow_writers() - check whether the attribute change allows writers
+ * @kattr: the new mount attributes
+ * @mnt: the mount to which @kattr will be applied
+ *
+ * Check whether the new mount attributes in @kattr allow concurrent writers.
+ *
+ * Return: true if concurrent writers can be allowed, false if writers need to be held
+ */
+static inline bool mnt_allow_writers(const struct mount_kattr *kattr,
+ const struct mount *mnt)
+{
+ return (!(kattr->attr_set & MNT_READONLY) ||
+ (mnt->mnt.mnt_flags & MNT_READONLY)) &&
+ !kattr->mnt_userns;
+}
+
static struct mount *mount_setattr_prepare(struct mount_kattr *kattr,
struct mount *mnt, int *err)
{
@@ -3998,8 +4015,7 @@ static struct mount *mount_setattr_prepare(struct mount_kattr *kattr,
last = m;
- if ((kattr->attr_set & MNT_READONLY) &&
- !(m->mnt.mnt_flags & MNT_READONLY)) {
+ if (!mnt_allow_writers(kattr, m)) {
*err = mnt_hold_writers(m);
if (*err)
goto out;
@@ -4050,13 +4066,8 @@ static void mount_setattr_commit(struct mount_kattr *kattr,
WRITE_ONCE(m->mnt.mnt_flags, flags);
}
- /*
- * We either set MNT_READONLY above so make it visible
- * before ~MNT_WRITE_HOLD or we failed to recursively
- * apply mount options.
- */
- if ((kattr->attr_set & MNT_READONLY) &&
- (m->mnt.mnt_flags & MNT_WRITE_HOLD))
+ /* If we had to hold writers unblock them. */
+ if (m->mnt.mnt_flags & MNT_WRITE_HOLD)
mnt_unhold_writers(m);
if (!err && kattr->propagation)
@@ -4110,9 +4121,9 @@ static int do_mount_setattr(struct path *path, struct mount_kattr *kattr)
unlock_mount_hash();
if (kattr->propagation) {
- namespace_unlock();
if (err)
cleanup_group_ids(mnt, NULL);
+ namespace_unlock();
}
return err;
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index ad5114e..dd53d0f9 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -484,8 +484,9 @@ static int nfs_swap_activate(struct swap_info_struct *sis, struct file *file,
{
unsigned long blocks;
long long isize;
- struct rpc_clnt *clnt = NFS_CLIENT(file->f_mapping->host);
- struct inode *inode = file->f_mapping->host;
+ struct inode *inode = file_inode(file);
+ struct rpc_clnt *clnt = NFS_CLIENT(inode);
+ struct nfs_client *cl = NFS_SERVER(inode)->nfs_client;
spin_lock(&inode->i_lock);
blocks = inode->i_blocks;
@@ -498,14 +499,22 @@ static int nfs_swap_activate(struct swap_info_struct *sis, struct file *file,
*span = sis->pages;
+
+ if (cl->rpc_ops->enable_swap)
+ cl->rpc_ops->enable_swap(inode);
+
return rpc_clnt_swap_activate(clnt);
}
static void nfs_swap_deactivate(struct file *file)
{
- struct rpc_clnt *clnt = NFS_CLIENT(file->f_mapping->host);
+ struct inode *inode = file_inode(file);
+ struct rpc_clnt *clnt = NFS_CLIENT(inode);
+ struct nfs_client *cl = NFS_SERVER(inode)->nfs_client;
rpc_clnt_swap_deactivate(clnt);
+ if (cl->rpc_ops->disable_swap)
+ cl->rpc_ops->disable_swap(file_inode(file));
}
const struct address_space_operations nfs_file_aops = {
diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
index d210385..45eec08 100644
--- a/fs/nfs/filelayout/filelayout.c
+++ b/fs/nfs/filelayout/filelayout.c
@@ -783,6 +783,12 @@ filelayout_alloc_lseg(struct pnfs_layout_hdr *layoutid,
return &fl->generic_hdr;
}
+static bool
+filelayout_lseg_is_striped(const struct nfs4_filelayout_segment *flseg)
+{
+ return flseg->num_fh > 1;
+}
+
/*
* filelayout_pg_test(). Called by nfs_can_coalesce_requests()
*
@@ -803,6 +809,8 @@ filelayout_pg_test(struct nfs_pageio_descriptor *pgio, struct nfs_page *prev,
size = pnfs_generic_pg_test(pgio, prev, req);
if (!size)
return 0;
+ else if (!filelayout_lseg_is_striped(FILELAYOUT_LSEG(pgio->pg_lseg)))
+ return size;
/* see if req and prev are in the same stripe */
if (prev) {
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index f8672a3..0a1e1c6 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -42,6 +42,7 @@ enum nfs4_client_state {
NFS4CLNT_LEASE_MOVED,
NFS4CLNT_DELEGATION_EXPIRED,
NFS4CLNT_RUN_MANAGER,
+ NFS4CLNT_MANAGER_AVAILABLE,
NFS4CLNT_RECALL_RUNNING,
NFS4CLNT_RECALL_ANY_LAYOUT_READ,
NFS4CLNT_RECALL_ANY_LAYOUT_RW,
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index b6b1fad..0546fa1 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -1993,8 +1993,7 @@ _nfs4_opendata_reclaim_to_nfs4_state(struct nfs4_opendata *data)
if (!data->rpc_done) {
if (data->rpc_status)
return ERR_PTR(data->rpc_status);
- /* cached opens have already been processed */
- goto update;
+ return nfs4_try_open_cached(data);
}
ret = nfs_refresh_inode(inode, &data->f_attr);
@@ -2003,7 +2002,7 @@ _nfs4_opendata_reclaim_to_nfs4_state(struct nfs4_opendata *data)
if (data->o_res.delegation_type != 0)
nfs4_opendata_check_deleg(data, state);
-update:
+
if (!update_open_stateid(state, &data->o_res.stateid,
NULL, data->o_arg.fmode))
return ERR_PTR(-EAGAIN);
@@ -10530,6 +10529,26 @@ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
return error + error2 + error3;
}
+static void nfs4_enable_swap(struct inode *inode)
+{
+ /* The state manager thread must always be running.
+ * It will notice the client is a swapper, and stay put.
+ */
+ struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
+
+ nfs4_schedule_state_manager(clp);
+}
+
+static void nfs4_disable_swap(struct inode *inode)
+{
+ /* The state manager thread will now exit once it is
+ * woken.
+ */
+ struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
+
+ nfs4_schedule_state_manager(clp);
+}
+
static const struct inode_operations nfs4_dir_inode_operations = {
.create = nfs_create,
.lookup = nfs_lookup,
@@ -10607,6 +10626,8 @@ const struct nfs_rpc_ops nfs_v4_clientops = {
.create_server = nfs4_create_server,
.clone_server = nfs_clone_server,
.discover_trunking = nfs4_discover_trunking,
+ .enable_swap = nfs4_enable_swap,
+ .disable_swap = nfs4_disable_swap,
};
static const struct xattr_handler nfs4_xattr_nfs4_acl_handler = {
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 0cd803b..7223816 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1209,10 +1209,17 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
{
struct task_struct *task;
char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1];
+ struct rpc_clnt *cl = clp->cl_rpcclient;
+
+ while (cl != cl->cl_parent)
+ cl = cl->cl_parent;
set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state);
- if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
+ if (test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state) != 0) {
+ wake_up_var(&clp->cl_state);
return;
+ }
+ set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state);
__module_get(THIS_MODULE);
refcount_inc(&clp->cl_count);
@@ -1230,6 +1237,7 @@ void nfs4_schedule_state_manager(struct nfs_client *clp)
if (!nfs_client_init_is_complete(clp))
nfs_mark_client_ready(clp, PTR_ERR(task));
nfs4_clear_state_manager_bit(clp);
+ clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
nfs_put_client(clp);
module_put(THIS_MODULE);
}
@@ -2689,12 +2697,8 @@ static void nfs4_state_manager(struct nfs_client *clp)
clear_bit(NFS4CLNT_RECALL_RUNNING, &clp->cl_state);
}
- /* Did we race with an attempt to give us more work? */
- if (!test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state))
- return;
- if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0)
- return;
- memflags = memalloc_nofs_save();
+ return;
+
} while (refcount_read(&clp->cl_count) > 1 && !signalled());
goto out_drain;
@@ -2715,9 +2719,31 @@ static void nfs4_state_manager(struct nfs_client *clp)
static int nfs4_run_state_manager(void *ptr)
{
struct nfs_client *clp = ptr;
+ struct rpc_clnt *cl = clp->cl_rpcclient;
+
+ while (cl != cl->cl_parent)
+ cl = cl->cl_parent;
allow_signal(SIGKILL);
+again:
+ set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state);
nfs4_state_manager(clp);
+ if (atomic_read(&cl->cl_swapper)) {
+ wait_var_event_interruptible(&clp->cl_state,
+ test_bit(NFS4CLNT_RUN_MANAGER,
+ &clp->cl_state));
+ if (atomic_read(&cl->cl_swapper) &&
+ test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state))
+ goto again;
+ /* Either no longer a swapper, or were signalled */
+ }
+ clear_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state);
+
+ if (refcount_read(&clp->cl_count) > 1 && !signalled() &&
+ test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state) &&
+ !test_and_set_bit(NFS4CLNT_MANAGER_AVAILABLE, &clp->cl_state))
+ goto again;
+
nfs_put_client(clp);
module_put_and_exit(0);
return 0;
diff --git a/fs/nfs/nfs4trace.h b/fs/nfs/nfs4trace.h
index 7a2567a..bcd18e9 100644
--- a/fs/nfs/nfs4trace.h
+++ b/fs/nfs/nfs4trace.h
@@ -584,32 +584,34 @@ TRACE_DEFINE_ENUM(NFS4CLNT_MOVED);
TRACE_DEFINE_ENUM(NFS4CLNT_LEASE_MOVED);
TRACE_DEFINE_ENUM(NFS4CLNT_DELEGATION_EXPIRED);
TRACE_DEFINE_ENUM(NFS4CLNT_RUN_MANAGER);
+TRACE_DEFINE_ENUM(NFS4CLNT_MANAGER_AVAILABLE);
TRACE_DEFINE_ENUM(NFS4CLNT_RECALL_RUNNING);
TRACE_DEFINE_ENUM(NFS4CLNT_RECALL_ANY_LAYOUT_READ);
TRACE_DEFINE_ENUM(NFS4CLNT_RECALL_ANY_LAYOUT_RW);
+TRACE_DEFINE_ENUM(NFS4CLNT_DELEGRETURN_DELAYED);
#define show_nfs4_clp_state(state) \
__print_flags(state, "|", \
- { NFS4CLNT_MANAGER_RUNNING, "MANAGER_RUNNING" }, \
- { NFS4CLNT_CHECK_LEASE, "CHECK_LEASE" }, \
- { NFS4CLNT_LEASE_EXPIRED, "LEASE_EXPIRED" }, \
- { NFS4CLNT_RECLAIM_REBOOT, "RECLAIM_REBOOT" }, \
- { NFS4CLNT_RECLAIM_NOGRACE, "RECLAIM_NOGRACE" }, \
- { NFS4CLNT_DELEGRETURN, "DELEGRETURN" }, \
- { NFS4CLNT_SESSION_RESET, "SESSION_RESET" }, \
- { NFS4CLNT_LEASE_CONFIRM, "LEASE_CONFIRM" }, \
- { NFS4CLNT_SERVER_SCOPE_MISMATCH, \
- "SERVER_SCOPE_MISMATCH" }, \
- { NFS4CLNT_PURGE_STATE, "PURGE_STATE" }, \
- { NFS4CLNT_BIND_CONN_TO_SESSION, \
- "BIND_CONN_TO_SESSION" }, \
- { NFS4CLNT_MOVED, "MOVED" }, \
- { NFS4CLNT_LEASE_MOVED, "LEASE_MOVED" }, \
- { NFS4CLNT_DELEGATION_EXPIRED, "DELEGATION_EXPIRED" }, \
- { NFS4CLNT_RUN_MANAGER, "RUN_MANAGER" }, \
- { NFS4CLNT_RECALL_RUNNING, "RECALL_RUNNING" }, \
- { NFS4CLNT_RECALL_ANY_LAYOUT_READ, "RECALL_ANY_LAYOUT_READ" }, \
- { NFS4CLNT_RECALL_ANY_LAYOUT_RW, "RECALL_ANY_LAYOUT_RW" })
+ { BIT(NFS4CLNT_MANAGER_RUNNING), "MANAGER_RUNNING" }, \
+ { BIT(NFS4CLNT_CHECK_LEASE), "CHECK_LEASE" }, \
+ { BIT(NFS4CLNT_LEASE_EXPIRED), "LEASE_EXPIRED" }, \
+ { BIT(NFS4CLNT_RECLAIM_REBOOT), "RECLAIM_REBOOT" }, \
+ { BIT(NFS4CLNT_RECLAIM_NOGRACE), "RECLAIM_NOGRACE" }, \
+ { BIT(NFS4CLNT_DELEGRETURN), "DELEGRETURN" }, \
+ { BIT(NFS4CLNT_SESSION_RESET), "SESSION_RESET" }, \
+ { BIT(NFS4CLNT_LEASE_CONFIRM), "LEASE_CONFIRM" }, \
+ { BIT(NFS4CLNT_SERVER_SCOPE_MISMATCH), "SERVER_SCOPE_MISMATCH" }, \
+ { BIT(NFS4CLNT_PURGE_STATE), "PURGE_STATE" }, \
+ { BIT(NFS4CLNT_BIND_CONN_TO_SESSION), "BIND_CONN_TO_SESSION" }, \
+ { BIT(NFS4CLNT_MOVED), "MOVED" }, \
+ { BIT(NFS4CLNT_LEASE_MOVED), "LEASE_MOVED" }, \
+ { BIT(NFS4CLNT_DELEGATION_EXPIRED), "DELEGATION_EXPIRED" }, \
+ { BIT(NFS4CLNT_RUN_MANAGER), "RUN_MANAGER" }, \
+ { BIT(NFS4CLNT_MANAGER_AVAILABLE), "MANAGER_AVAILABLE" }, \
+ { BIT(NFS4CLNT_RECALL_RUNNING), "RECALL_RUNNING" }, \
+ { BIT(NFS4CLNT_RECALL_ANY_LAYOUT_READ), "RECALL_ANY_LAYOUT_READ" }, \
+ { BIT(NFS4CLNT_RECALL_ANY_LAYOUT_RW), "RECALL_ANY_LAYOUT_RW" }, \
+ { BIT(NFS4CLNT_DELEGRETURN_DELAYED), "DELEGRETURN_DELAYED" })
TRACE_EVENT(nfs4_state_mgr,
TP_PROTO(
diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 2e0040d..97f517e 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -875,8 +875,8 @@ static const struct cred *get_backchannel_cred(struct nfs4_client *clp, struct r
if (!kcred)
return NULL;
- kcred->uid = ses->se_cb_sec.uid;
- kcred->gid = ses->se_cb_sec.gid;
+ kcred->fsuid = ses->se_cb_sec.uid;
+ kcred->fsgid = ses->se_cb_sec.gid;
return kcred;
}
}
diff --git a/fs/nfsd/nfs4layouts.c b/fs/nfsd/nfs4layouts.c
index a97873f..2673019 100644
--- a/fs/nfsd/nfs4layouts.c
+++ b/fs/nfsd/nfs4layouts.c
@@ -322,11 +322,11 @@ nfsd4_recall_file_layout(struct nfs4_layout_stateid *ls)
if (ls->ls_recalled)
goto out_unlock;
- ls->ls_recalled = true;
- atomic_inc(&ls->ls_stid.sc_file->fi_lo_recalls);
if (list_empty(&ls->ls_layouts))
goto out_unlock;
+ ls->ls_recalled = true;
+ atomic_inc(&ls->ls_stid.sc_file->fi_lo_recalls);
trace_nfsd_layout_recall(&ls->ls_stid.sc_stateid);
refcount_inc(&ls->ls_stid.sc_count);
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index 0a900b9..3eb500a 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1088,8 +1088,10 @@ nfsd4_verify_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
return status;
out_put_dst:
nfsd_file_put(*dst);
+ *dst = NULL;
out_put_src:
nfsd_file_put(*src);
+ *src = NULL;
goto out;
}
@@ -1349,13 +1351,6 @@ nfsd4_interssc_connect(struct nl4_server *nss, struct svc_rqst *rqstp,
return status;
}
-static void
-nfsd4_interssc_disconnect(struct vfsmount *ss_mnt)
-{
- nfs_do_sb_deactive(ss_mnt->mnt_sb);
- mntput(ss_mnt);
-}
-
/*
* Verify COPY destination stateid.
*
@@ -1458,11 +1453,6 @@ nfsd4_cleanup_inter_ssc(struct vfsmount *ss_mnt, struct nfsd_file *src,
{
}
-static void
-nfsd4_interssc_disconnect(struct vfsmount *ss_mnt)
-{
-}
-
static struct file *nfs42_ssc_open(struct vfsmount *ss_mnt,
struct nfs_fh *src_fh,
nfs4_stateid *stateid)
@@ -1523,7 +1513,8 @@ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy)
u64 bytes_total = copy->cp_count;
u64 src_pos = copy->cp_src_pos;
u64 dst_pos = copy->cp_dst_pos;
- __be32 status;
+ int status;
+ loff_t end;
/* See RFC 7862 p.67: */
if (bytes_total == 0)
@@ -1543,8 +1534,8 @@ static ssize_t _nfsd_copy_file_range(struct nfsd4_copy *copy)
/* for a non-zero asynchronous copy do a commit of data */
if (!copy->cp_synchronous && copy->cp_res.wr_bytes_written > 0) {
since = READ_ONCE(dst->f_wb_err);
- status = vfs_fsync_range(dst, copy->cp_dst_pos,
- copy->cp_res.wr_bytes_written, 0);
+ end = copy->cp_dst_pos + copy->cp_res.wr_bytes_written - 1;
+ status = vfs_fsync_range(dst, copy->cp_dst_pos, end, 0);
if (!status)
status = filemap_check_wb_err(dst->f_mapping, since);
if (!status)
@@ -1620,14 +1611,14 @@ static int nfsd4_do_async_copy(void *data)
copy->nf_src = kzalloc(sizeof(struct nfsd_file), GFP_KERNEL);
if (!copy->nf_src) {
copy->nfserr = nfserr_serverfault;
- nfsd4_interssc_disconnect(copy->ss_mnt);
+ /* ss_mnt will be unmounted by the laundromat */
goto do_callback;
}
copy->nf_src->nf_file = nfs42_ssc_open(copy->ss_mnt, &copy->c_fh,
&copy->stateid);
if (IS_ERR(copy->nf_src->nf_file)) {
copy->nfserr = nfserr_offload_denied;
- nfsd4_interssc_disconnect(copy->ss_mnt);
+ /* ss_mnt will be unmounted by the laundromat */
goto do_callback;
}
}
@@ -1712,8 +1703,10 @@ nfsd4_copy(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
if (async_copy)
cleanup_async_copy(async_copy);
status = nfserrno(-ENOMEM);
- if (!copy->cp_intra)
- nfsd4_interssc_disconnect(copy->ss_mnt);
+ /*
+ * the source's vfsmount for the inter-server copy will be
+ * unmounted by the laundromat
+ */
goto out;
}
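vfs_fsync_range() takes an inclusive end offset, not a byte count, so the fix derives end = cp_dst_pos + wr_bytes_written - 1 to sync the whole written range. A trivial sketch of the difference, with made-up offsets:

#include <stdio.h>

int main(void)
{
	long long dst_pos = 4096;	/* made-up copy destination offset */
	long long written = 8192;	/* made-up number of bytes written */

	/* old call passed the byte count as the end offset */
	printf("old end: %lld\n", written);
	/* new call syncs the full range [dst_pos, dst_pos + written - 1] */
	printf("new end: %lld\n", dst_pos + written - 1);
	return 0;
}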
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index dfd3877..0394dd6 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -2370,10 +2370,12 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp)
for (i = 0; i < argp->opcnt; i++) {
op = &argp->ops[i];
op->replay = NULL;
+ op->opdesc = NULL;
if (xdr_stream_decode_u32(argp->xdr, &op->opnum) < 0)
return 0;
if (nfsd4_opnum_in_range(argp, op)) {
+ op->opdesc = OPDESC(op);
op->status = nfsd4_dec_ops[op->opnum](argp, &op->u);
if (op->status != nfs_ok)
trace_nfsd_compound_decode_err(argp->rqstp,
@@ -2384,7 +2386,7 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp)
op->opnum = OP_ILLEGAL;
op->status = nfserr_op_illegal;
}
- op->opdesc = OPDESC(op);
+
/*
* We'll try to cache the result in the DRC if any one
* op in the compound wants to be cached:
diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
index 2301b57..def9121 100644
--- a/fs/nilfs2/btree.c
+++ b/fs/nilfs2/btree.c
@@ -480,9 +480,18 @@ static int __nilfs_btree_get_block(const struct nilfs_bmap *btree, __u64 ptr,
ret = nilfs_btnode_submit_block(btnc, ptr, 0, REQ_OP_READ, 0, &bh,
&submit_ptr);
if (ret) {
- if (ret != -EEXIST)
- return ret;
- goto out_check;
+ if (likely(ret == -EEXIST))
+ goto out_check;
+ if (ret == -ENOENT) {
+ /*
+ * Block address translation failed due to invalid
+ * value of 'ptr'. In this case, return internal code
+ * -EINVAL (broken bmap) to notify bmap layer of fatal
+ * metadata corruption.
+ */
+ ret = -EINVAL;
+ }
+ return ret;
}
if (ra) {
diff --git a/fs/nilfs2/ioctl.c b/fs/nilfs2/ioctl.c
index 640ac8f..a392067 100644
--- a/fs/nilfs2/ioctl.c
+++ b/fs/nilfs2/ioctl.c
@@ -71,7 +71,7 @@ static int nilfs_ioctl_wrap_copy(struct the_nilfs *nilfs,
if (argv->v_index > ~(__u64)0 - argv->v_nmembs)
return -EINVAL;
- buf = (void *)__get_free_pages(GFP_NOFS, 0);
+ buf = (void *)get_zeroed_page(GFP_NOFS);
if (unlikely(!buf))
return -ENOMEM;
maxmembs = PAGE_SIZE / argv->v_size;
@@ -1114,7 +1114,14 @@ static int nilfs_ioctl_set_alloc_range(struct inode *inode, void __user *argp)
minseg = range[0] + segbytes - 1;
do_div(minseg, segbytes);
+
+ if (range[1] < 4096)
+ goto out;
+
maxseg = NILFS_SB2_OFFSET_BYTES(range[1]);
+ if (maxseg < segbytes)
+ goto out;
+
do_div(maxseg, segbytes);
maxseg--;
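
The buffer allocation change above relies on get_zeroed_page() being the zero-filling, single-page form of the free-pages allocator, so stale kernel data can no longer leak through a partially filled copy buffer. For illustration, the two calls below should be equivalent:

/* Equivalent ways to get one zeroed page (order 0), per linux/gfp.h: */
void *buf  = (void *)get_zeroed_page(GFP_NOFS);
void *buf2 = (void *)__get_free_pages(GFP_NOFS | __GFP_ZERO, 0);
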
diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
index 6d21f9b..a276a2d 100644
--- a/fs/nilfs2/segment.c
+++ b/fs/nilfs2/segment.c
@@ -2609,11 +2609,10 @@ static int nilfs_segctor_thread(void *arg)
goto loop;
end_thread:
- spin_unlock(&sci->sc_state_lock);
-
/* end sync. */
sci->sc_task = NULL;
wake_up(&sci->sc_wait_task); /* for nilfs_segctor_kill_thread() */
+ spin_unlock(&sci->sc_state_lock);
return 0;
}
diff --git a/fs/nilfs2/super.c b/fs/nilfs2/super.c
index 663e68c..9d5e4ce7 100644
--- a/fs/nilfs2/super.c
+++ b/fs/nilfs2/super.c
@@ -409,6 +409,15 @@ int nilfs_resize_fs(struct super_block *sb, __u64 newsize)
goto out;
/*
+ * Prevent underflow in second superblock position calculation.
+ * The exact minimum size check is done in nilfs_sufile_resize().
+ */
+ if (newsize < 4096) {
+ ret = -ENOSPC;
+ goto out;
+ }
+
+ /*
* Write lock is required to protect some functions depending
* on the number of segments, the number of reserved segments,
* and so forth.
@@ -473,6 +482,7 @@ static void nilfs_put_super(struct super_block *sb)
up_write(&nilfs->ns_sem);
}
+ nilfs_sysfs_delete_device_group(nilfs);
iput(nilfs->ns_sufile);
iput(nilfs->ns_cpfile);
iput(nilfs->ns_dat);
@@ -1096,6 +1106,7 @@ nilfs_fill_super(struct super_block *sb, void *data, int silent)
nilfs_put_root(fsroot);
failed_unload:
+ nilfs_sysfs_delete_device_group(nilfs);
iput(nilfs->ns_sufile);
iput(nilfs->ns_cpfile);
iput(nilfs->ns_dat);
diff --git a/fs/nilfs2/the_nilfs.c b/fs/nilfs2/the_nilfs.c
index 6aa6cef..0fa1303 100644
--- a/fs/nilfs2/the_nilfs.c
+++ b/fs/nilfs2/the_nilfs.c
@@ -87,7 +87,6 @@ void destroy_nilfs(struct the_nilfs *nilfs)
{
might_sleep();
if (nilfs_init(nilfs)) {
- nilfs_sysfs_delete_device_group(nilfs);
brelse(nilfs->ns_sbh[0]);
brelse(nilfs->ns_sbh[1]);
}
@@ -305,6 +304,10 @@ int load_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
goto failed;
}
+ err = nilfs_sysfs_create_device_group(sb);
+ if (unlikely(err))
+ goto sysfs_error;
+
if (valid_fs)
goto skip_recovery;
@@ -366,6 +369,9 @@ int load_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
goto failed;
failed_unload:
+ nilfs_sysfs_delete_device_group(nilfs);
+
+ sysfs_error:
iput(nilfs->ns_cpfile);
iput(nilfs->ns_sufile);
iput(nilfs->ns_dat);
@@ -544,9 +550,15 @@ static int nilfs_load_super_block(struct the_nilfs *nilfs,
{
struct nilfs_super_block **sbp = nilfs->ns_sbp;
struct buffer_head **sbh = nilfs->ns_sbh;
- u64 sb2off = NILFS_SB2_OFFSET_BYTES(nilfs->ns_bdev->bd_inode->i_size);
+ u64 sb2off, devsize = nilfs->ns_bdev->bd_inode->i_size;
int valid[2], swp = 0;
+ if (devsize < NILFS_SEG_MIN_BLOCKS * NILFS_MIN_BLOCK_SIZE + 4096) {
+ nilfs_err(sb, "device size too small");
+ return -EINVAL;
+ }
+ sb2off = NILFS_SB2_OFFSET_BYTES(devsize);
+
sbp[0] = nilfs_read_super_block(sb, NILFS_SB_OFFSET_BYTES, blocksize,
&sbh[0]);
sbp[1] = nilfs_read_super_block(sb, sb2off, blocksize, &sbh[1]);
@@ -691,10 +703,6 @@ int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb, char *data)
if (err)
goto failed_sbh;
- err = nilfs_sysfs_create_device_group(sb);
- if (err)
- goto failed_sbh;
-
set_nilfs_init(nilfs);
err = 0;
out:
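
The new minimum-size checks in nilfs_resize_fs(), nilfs_ioctl_set_alloc_range() and nilfs_load_super_block() all guard the same computation: the offset of the second superblock is derived by rounding the device size down to a 4 KiB boundary and stepping back one unit, so anything smaller than 4096 bytes makes that subtraction wrap. A rough sketch of the failure mode; the macro body shown here is an assumption for illustration, not quoted from this tree:

/*
 * Assumed shape of the macro: the backup superblock lives in the last
 * 4 KiB-aligned block of the device.
 */
#define NILFS_SB2_OFFSET_BYTES(devsize)	((((devsize) >> 12) - 1) << 12)

/*
 * devsize < 4096  ->  (devsize >> 12) == 0  ->  (0 - 1) << 12
 * wraps around to a huge offset, so callers must reject tiny devices
 * (and tiny resize targets) before evaluating the macro.
 */
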
diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
index 42af83b..321d55b 100644
--- a/fs/ntfs3/attrib.c
+++ b/fs/ntfs3/attrib.c
@@ -1967,7 +1967,7 @@ int attr_punch_hole(struct ntfs_inode *ni, u64 vbo, u64 bytes, u32 *frame_size)
return -ENOENT;
if (!attr_b->non_res) {
- u32 data_size = le32_to_cpu(attr->res.data_size);
+ u32 data_size = le32_to_cpu(attr_b->res.data_size);
u32 from, to;
if (vbo > data_size)
diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
index ed640e4..136236a 100644
--- a/fs/ntfs3/inode.c
+++ b/fs/ntfs3/inode.c
@@ -132,6 +132,13 @@ static struct inode *ntfs_read_mft(struct inode *inode,
if (le16_to_cpu(attr->name_off) + attr->name_len > asize)
goto out;
+ if (attr->non_res) {
+ t64 = le64_to_cpu(attr->nres.alloc_size);
+ if (le64_to_cpu(attr->nres.data_size) > t64 ||
+ le64_to_cpu(attr->nres.valid_size) > t64)
+ goto out;
+ }
+
switch (attr->type) {
case ATTR_STD:
if (attr->non_res ||
diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 68d11c2..c051074 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -1978,11 +1978,25 @@ int ocfs2_write_end_nolock(struct address_space *mapping,
}
if (unlikely(copied < len) && wc->w_target_page) {
+ loff_t new_isize;
+
if (!PageUptodate(wc->w_target_page))
copied = 0;
- ocfs2_zero_new_buffers(wc->w_target_page, start+copied,
- start+len);
+ new_isize = max_t(loff_t, i_size_read(inode), pos + copied);
+ if (new_isize > page_offset(wc->w_target_page))
+ ocfs2_zero_new_buffers(wc->w_target_page, start+copied,
+ start+len);
+ else {
+ /*
+ * When page is fully beyond new isize (data copy
+ * failed), do not bother zeroing the page. Invalidate
+ * it instead so that writeback does not get confused and
+ * put page & buffer dirty bits into an inconsistent
+ * state.
+ */
+ block_invalidatepage(wc->w_target_page, 0, PAGE_SIZE);
+ }
}
if (wc->w_target_page)
flush_dcache_page(wc->w_target_page);
diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
index 801e60b..c28bc98 100644
--- a/fs/ocfs2/dlmglue.c
+++ b/fs/ocfs2/dlmglue.c
@@ -3403,10 +3403,12 @@ void ocfs2_dlm_shutdown(struct ocfs2_super *osb,
ocfs2_lock_res_free(&osb->osb_nfs_sync_lockres);
ocfs2_lock_res_free(&osb->osb_orphan_scan.os_lockres);
- ocfs2_cluster_disconnect(osb->cconn, hangup_pending);
- osb->cconn = NULL;
+ if (osb->cconn) {
+ ocfs2_cluster_disconnect(osb->cconn, hangup_pending);
+ osb->cconn = NULL;
- ocfs2_dlm_shutdown_debug(osb);
+ ocfs2_dlm_shutdown_debug(osb);
+ }
}
static int ocfs2_drop_lock(struct ocfs2_super *osb,
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index fc5f780f..92182d4 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -1994,7 +1994,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
}
}
- if (file && should_remove_suid(file->f_path.dentry)) {
+ if (file && setattr_should_drop_suidgid(&init_user_ns, file_inode(file))) {
ret = __ocfs2_write_remove_suid(inode, di_bh);
if (ret) {
mlog_errno(ret);
@@ -2282,7 +2282,7 @@ static int ocfs2_prepare_inode_for_write(struct file *file,
* inode. There's also the dinode i_size state which
* can be lost via setattr during extending writes (we
* set inode->i_size at the end of a write. */
- if (should_remove_suid(dentry)) {
+ if (setattr_should_drop_suidgid(&init_user_ns, inode)) {
if (meta_level == 0) {
ocfs2_inode_unlock_for_extent_tree(inode,
&di_bh,
diff --git a/fs/ocfs2/journal.c b/fs/ocfs2/journal.c
index 4f15750..86864a9 100644
--- a/fs/ocfs2/journal.c
+++ b/fs/ocfs2/journal.c
@@ -157,7 +157,7 @@ static void ocfs2_queue_replay_slots(struct ocfs2_super *osb,
replay_map->rm_state = REPLAY_DONE;
}
-static void ocfs2_free_replay_slots(struct ocfs2_super *osb)
+void ocfs2_free_replay_slots(struct ocfs2_super *osb)
{
struct ocfs2_replay_map *replay_map = osb->replay_map;
diff --git a/fs/ocfs2/journal.h b/fs/ocfs2/journal.h
index d158acb..a54f20b 100644
--- a/fs/ocfs2/journal.h
+++ b/fs/ocfs2/journal.h
@@ -150,6 +150,7 @@ int ocfs2_recovery_init(struct ocfs2_super *osb);
void ocfs2_recovery_exit(struct ocfs2_super *osb);
int ocfs2_compute_replay_slots(struct ocfs2_super *osb);
+void ocfs2_free_replay_slots(struct ocfs2_super *osb);
/*
* Journal Control:
* Initialize, Load, Shutdown, Wipe a journal.
diff --git a/fs/ocfs2/move_extents.c b/fs/ocfs2/move_extents.c
index 192cad0..b1e32ec 100644
--- a/fs/ocfs2/move_extents.c
+++ b/fs/ocfs2/move_extents.c
@@ -105,14 +105,6 @@ static int __ocfs2_move_extent(handle_t *handle,
*/
replace_rec.e_flags = ext_flags & ~OCFS2_EXT_REFCOUNTED;
- ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode),
- context->et.et_root_bh,
- OCFS2_JOURNAL_ACCESS_WRITE);
- if (ret) {
- mlog_errno(ret);
- goto out;
- }
-
ret = ocfs2_split_extent(handle, &context->et, path, index,
&replace_rec, context->meta_ac,
&context->dealloc);
@@ -121,8 +113,6 @@ static int __ocfs2_move_extent(handle_t *handle,
goto out;
}
- ocfs2_journal_dirty(handle, context->et.et_root_bh);
-
context->new_phys_cpos = new_p_cpos;
/*
@@ -444,7 +434,7 @@ static int ocfs2_find_victim_alloc_group(struct inode *inode,
bg = (struct ocfs2_group_desc *)gd_bh->b_data;
if (vict_blkno < (le64_to_cpu(bg->bg_blkno) +
- le16_to_cpu(bg->bg_bits))) {
+ (le16_to_cpu(bg->bg_bits) << bits_per_unit))) {
*ret_bh = gd_bh;
*vict_bit = (vict_blkno - blkno) >>
@@ -559,6 +549,7 @@ static void ocfs2_probe_alloc_group(struct inode *inode, struct buffer_head *bh,
last_free_bits++;
if (last_free_bits == move_len) {
+ i -= move_len;
*goal_bit = i;
*phys_cpos = base_cpos + i;
break;
@@ -1030,18 +1021,19 @@ int ocfs2_ioctl_move_extents(struct file *filp, void __user *argp)
context->range = &range;
+ /*
+ * ok, the default threshold for the defragmentation
+ * is 1M, since our maximum clustersize was 1M also.
+ * any thought?
+ */
+ if (!range.me_threshold)
+ range.me_threshold = 1024 * 1024;
+
+ if (range.me_threshold > i_size_read(inode))
+ range.me_threshold = i_size_read(inode);
+
if (range.me_flags & OCFS2_MOVE_EXT_FL_AUTO_DEFRAG) {
context->auto_defrag = 1;
- /*
- * ok, the default theshold for the defragmentation
- * is 1M, since our maximum clustersize was 1M also.
- * any thought?
- */
- if (!range.me_threshold)
- range.me_threshold = 1024 * 1024;
-
- if (range.me_threshold > i_size_read(inode))
- range.me_threshold = i_size_read(inode);
if (range.me_flags & OCFS2_MOVE_EXT_FL_PART_DEFRAG)
context->partial = 1;
diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
index 1180703..5b8237c 100644
--- a/fs/ocfs2/namei.c
+++ b/fs/ocfs2/namei.c
@@ -197,6 +197,7 @@ static struct inode *ocfs2_get_init_inode(struct inode *dir, umode_t mode)
* callers. */
if (S_ISDIR(mode))
set_nlink(inode, 2);
+ mode = mode_strip_sgid(&init_user_ns, dir, mode);
inode_init_owner(&init_user_ns, inode, dir, mode);
status = dquot_initialize(inode);
if (status)
diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
index a03f0ca..64e8a24 100644
--- a/fs/ocfs2/super.c
+++ b/fs/ocfs2/super.c
@@ -980,28 +980,27 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
if (!ocfs2_parse_options(sb, data, &parsed_options, 0)) {
status = -EINVAL;
- goto read_super_error;
+ goto out;
}
/* probe for superblock */
status = ocfs2_sb_probe(sb, &bh, &sector_size, &stats);
if (status < 0) {
mlog(ML_ERROR, "superblock probe failed!\n");
- goto read_super_error;
+ goto out;
}
status = ocfs2_initialize_super(sb, bh, sector_size, &stats);
- osb = OCFS2_SB(sb);
- if (status < 0) {
- mlog_errno(status);
- goto read_super_error;
- }
brelse(bh);
bh = NULL;
+ if (status < 0)
+ goto out;
+
+ osb = OCFS2_SB(sb);
if (!ocfs2_check_set_options(sb, &parsed_options)) {
status = -EINVAL;
- goto read_super_error;
+ goto out_super;
}
osb->s_mount_opt = parsed_options.mount_opt;
osb->s_atime_quantum = parsed_options.atime_quantum;
@@ -1018,7 +1017,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
status = ocfs2_verify_userspace_stack(osb, &parsed_options);
if (status)
- goto read_super_error;
+ goto out_super;
sb->s_magic = OCFS2_SUPER_MAGIC;
@@ -1032,7 +1031,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
status = -EACCES;
mlog(ML_ERROR, "Readonly device detected but readonly "
"mount was not specified.\n");
- goto read_super_error;
+ goto out_super;
}
/* You should not be able to start a local heartbeat
@@ -1041,7 +1040,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
status = -EROFS;
mlog(ML_ERROR, "Local heartbeat specified on readonly "
"device.\n");
- goto read_super_error;
+ goto out_super;
}
status = ocfs2_check_journals_nolocks(osb);
@@ -1050,9 +1049,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
mlog(ML_ERROR, "Recovery required on readonly "
"file system, but write access is "
"unavailable.\n");
- else
- mlog_errno(status);
- goto read_super_error;
+ goto out_super;
}
ocfs2_set_ro_flag(osb, 1);
@@ -1068,10 +1065,8 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
}
status = ocfs2_verify_heartbeat(osb);
- if (status < 0) {
- mlog_errno(status);
- goto read_super_error;
- }
+ if (status < 0)
+ goto out_super;
osb->osb_debug_root = debugfs_create_dir(osb->uuid_str,
ocfs2_debugfs_root);
@@ -1085,15 +1080,14 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
status = ocfs2_mount_volume(sb);
if (status < 0)
- goto read_super_error;
+ goto out_debugfs;
if (osb->root_inode)
inode = igrab(osb->root_inode);
if (!inode) {
status = -EIO;
- mlog_errno(status);
- goto read_super_error;
+ goto out_dismount;
}
osb->osb_dev_kset = kset_create_and_add(sb->s_id, NULL,
@@ -1101,7 +1095,7 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
if (!osb->osb_dev_kset) {
status = -ENOMEM;
mlog(ML_ERROR, "Unable to create device kset %s.\n", sb->s_id);
- goto read_super_error;
+ goto out_dismount;
}
/* Create filecheck sysfs related directories/files at
@@ -1110,14 +1104,13 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
status = -ENOMEM;
mlog(ML_ERROR, "Unable to create filecheck sysfs directory at "
"/sys/fs/ocfs2/%s/filecheck.\n", sb->s_id);
- goto read_super_error;
+ goto out_dismount;
}
root = d_make_root(inode);
if (!root) {
status = -ENOMEM;
- mlog_errno(status);
- goto read_super_error;
+ goto out_dismount;
}
sb->s_root = root;
@@ -1164,17 +1157,22 @@ static int ocfs2_fill_super(struct super_block *sb, void *data, int silent)
return status;
-read_super_error:
- brelse(bh);
+out_dismount:
+ atomic_set(&osb->vol_state, VOLUME_DISABLED);
+ wake_up(&osb->osb_mount_event);
+ ocfs2_free_replay_slots(osb);
+ ocfs2_dismount_volume(sb, 1);
+ goto out;
- if (status)
- mlog_errno(status);
-
- if (osb) {
- atomic_set(&osb->vol_state, VOLUME_DISABLED);
- wake_up(&osb->osb_mount_event);
- ocfs2_dismount_volume(sb, 1);
- }
+out_debugfs:
+ debugfs_remove_recursive(osb->osb_debug_root);
+out_super:
+ ocfs2_release_system_inodes(osb);
+ kfree(osb->recovery_map);
+ ocfs2_delete_osb(osb);
+ kfree(osb);
+out:
+ mlog_errno(status);
return status;
}
@@ -1783,11 +1781,10 @@ static int ocfs2_get_sector(struct super_block *sb,
static int ocfs2_mount_volume(struct super_block *sb)
{
int status = 0;
- int unlock_super = 0;
struct ocfs2_super *osb = OCFS2_SB(sb);
if (ocfs2_is_hard_readonly(osb))
- goto leave;
+ goto out;
mutex_init(&osb->obs_trim_fs_mutex);
@@ -1797,44 +1794,58 @@ static int ocfs2_mount_volume(struct super_block *sb)
if (status == -EBADR && ocfs2_userspace_stack(osb))
mlog(ML_ERROR, "couldn't mount because cluster name on"
" disk does not match the running cluster name.\n");
- goto leave;
+ goto out;
}
status = ocfs2_super_lock(osb, 1);
if (status < 0) {
mlog_errno(status);
- goto leave;
+ goto out_dlm;
}
- unlock_super = 1;
/* This will load up the node map and add ourselves to it. */
status = ocfs2_find_slot(osb);
if (status < 0) {
mlog_errno(status);
- goto leave;
+ goto out_super_lock;
}
/* load all node-local system inodes */
status = ocfs2_init_local_system_inodes(osb);
if (status < 0) {
mlog_errno(status);
- goto leave;
+ goto out_super_lock;
}
status = ocfs2_check_volume(osb);
if (status < 0) {
mlog_errno(status);
- goto leave;
+ goto out_system_inodes;
}
status = ocfs2_truncate_log_init(osb);
- if (status < 0)
+ if (status < 0) {
mlog_errno(status);
+ goto out_check_volume;
+ }
-leave:
- if (unlock_super)
- ocfs2_super_unlock(osb, 1);
+ ocfs2_super_unlock(osb, 1);
+ return 0;
+out_check_volume:
+ ocfs2_free_replay_slots(osb);
+out_system_inodes:
+ if (osb->local_alloc_state == OCFS2_LA_ENABLED)
+ ocfs2_shutdown_local_alloc(osb);
+ ocfs2_release_system_inodes(osb);
+ /* before journal shutdown, we should release slot_info */
+ ocfs2_free_slot_info(osb);
+ ocfs2_journal_shutdown(osb);
+out_super_lock:
+ ocfs2_super_unlock(osb, 1);
+out_dlm:
+ ocfs2_dlm_shutdown(osb, 0);
+out:
return status;
}
@@ -1907,8 +1918,7 @@ static void ocfs2_dismount_volume(struct super_block *sb, int mnt_err)
!ocfs2_is_hard_readonly(osb))
hangup_needed = 1;
- if (osb->cconn)
- ocfs2_dlm_shutdown(osb, hangup_needed);
+ ocfs2_dlm_shutdown(osb, hangup_needed);
ocfs2_blockcheck_stats_debugfs_remove(&osb->osb_ecc_stats);
debugfs_remove_recursive(osb->osb_debug_root);
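
The ocfs2_fill_super()/ocfs2_mount_volume() restructuring above replaces the single catch-all read_super_error/leave label with a ladder of labels so that each failure point unwinds exactly the state set up before it, instead of guessing from osb/cconn pointers. A generic, hypothetical sketch of the idiom (none of these names are ocfs2 functions):

/*
 * Unwind ladder: each label undoes one setup step; later failures
 * fall through the earlier labels in reverse order of setup.
 */
static int setup_everything(struct ctx *c)
{
	int err;

	err = setup_a(c);
	if (err)
		goto out;
	err = setup_b(c);
	if (err)
		goto out_a;
	err = setup_c(c);
	if (err)
		goto out_b;
	return 0;

out_b:
	teardown_b(c);
out_a:
	teardown_a(c);
out:
	return err;
}
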
diff --git a/fs/open.c b/fs/open.c
index 5e322f1..e93c3306 100644
--- a/fs/open.c
+++ b/fs/open.c
@@ -54,7 +54,7 @@ int do_truncate(struct user_namespace *mnt_userns, struct dentry *dentry,
}
/* Remove suid, sgid, and file capabilities on truncate too */
- ret = dentry_needs_remove_privs(dentry);
+ ret = dentry_needs_remove_privs(mnt_userns, dentry);
if (ret < 0)
return ret;
if (ret)
@@ -671,10 +671,10 @@ int chown_common(const struct path *path, uid_t user, gid_t group)
newattrs.ia_valid |= ATTR_GID;
newattrs.ia_gid = gid;
}
- if (!S_ISDIR(inode->i_mode))
- newattrs.ia_valid |=
- ATTR_KILL_SUID | ATTR_KILL_SGID | ATTR_KILL_PRIV;
inode_lock(inode);
+ if (!S_ISDIR(inode->i_mode))
+ newattrs.ia_valid |= ATTR_KILL_SUID | ATTR_KILL_PRIV |
+ setattr_should_drop_sgid(mnt_userns, inode);
error = security_path_chown(path, uid, gid);
if (!error)
error = notify_change(mnt_userns, path->dentry, &newattrs,
diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
index dbb944b..0cc14ce 100644
--- a/fs/overlayfs/export.c
+++ b/fs/overlayfs/export.c
@@ -791,7 +791,7 @@ static struct ovl_fh *ovl_fid_to_fh(struct fid *fid, int buflen, int fh_type)
return ERR_PTR(-ENOMEM);
/* Copy unaligned inner fh into aligned buffer */
- memcpy(&fh->fb, fid, buflen - OVL_FH_WIRE_OFFSET);
+ memcpy(fh->buf, fid, buflen - OVL_FH_WIRE_OFFSET);
return fh;
}
diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
index 2df3e74..ae4876d 100644
--- a/fs/overlayfs/overlayfs.h
+++ b/fs/overlayfs/overlayfs.h
@@ -107,7 +107,7 @@ struct ovl_fh {
u8 padding[3]; /* make sure fb.fid is 32bit aligned */
union {
struct ovl_fb fb;
- u8 buf[0];
+ DECLARE_FLEX_ARRAY(u8, buf);
};
} __packed;
diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
index 013fc59..0b7a00e 100644
--- a/fs/proc/proc_sysctl.c
+++ b/fs/proc/proc_sysctl.c
@@ -16,6 +16,7 @@
#include <linux/module.h>
#include <linux/bpf-cgroup.h>
#include <linux/mount.h>
+#include <linux/kmemleak.h>
#include "internal.h"
static const struct dentry_operations proc_sys_dentry_operations;
@@ -1384,6 +1385,38 @@ struct ctl_table_header *register_sysctl(const char *path, struct ctl_table *tab
}
EXPORT_SYMBOL(register_sysctl);
+/**
+ * __register_sysctl_init() - register sysctl table to path
+ * @path: path name for sysctl base
+ * @table: This is the sysctl table that needs to be registered to the path
+ * @table_name: The name of sysctl table, only used for log printing when
+ * registration fails
+ *
+ * The sysctl interface is used by userspace to query or modify at runtime
+ * a predefined value set on a variable. These variables however have default
+ * values pre-set. Code which depends on these variables will always work even
+ * if register_sysctl() fails. If register_sysctl() fails you'd just lose the
+ * ability to query or modify the sysctls dynamically at run time. Chances of
+ * register_sysctl() failing on init are extremely low, and so for both reasons
+ * this function does not return any error as it is used by initialization code.
+ *
+ * Context: Can only be called after your respective sysctl base path has been
+ * registered. So for instance, most base directories are registered early on
+ * init before init levels are processed through proc_sys_init() and
+ * sysctl_init().
+ */
+void __init __register_sysctl_init(const char *path, struct ctl_table *table,
+ const char *table_name)
+{
+ struct ctl_table_header *hdr = register_sysctl(path, table);
+
+ if (unlikely(!hdr)) {
+ pr_err("failed when register_sysctl %s to %s\n", table_name, path);
+ return;
+ }
+ kmemleak_not_leak(hdr);
+}
+
static char *append_path(const char *path, char *pos, const char *name)
{
int namelen;
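
__register_sysctl_init() is aimed at boot-time registrations where a failure is not worth propagating. A hedged usage sketch; the table, variable, path and initcall level below are invented for illustration:

#include <linux/init.h>
#include <linux/sysctl.h>

static int example_setting;

static struct ctl_table example_table[] = {
	{
		.procname	= "example_setting",
		.data		= &example_setting,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,
	},
	{ }	/* sentinel */
};

static int __init example_sysctl_init(void)
{
	/* No error to handle: losing the /proc/sys knob is the only consequence. */
	__register_sysctl_init("kernel", example_table, "example_table");
	return 0;
}
late_initcall(example_sysctl_init);
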
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index c3b7674..705a41f 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -714,9 +714,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
page = pfn_swap_entry_to_page(swpent);
}
if (page) {
- int mapcount = page_mapcount(page);
-
- if (mapcount >= 2)
+ if (page_mapcount(page) >= 2 || hugetlb_pmd_shared(pte))
mss->shared_hugetlb += huge_page_size(hstate_vma(vma));
else
mss->private_hugetlb += huge_page_size(hstate_vma(vma));
diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c
index 58481f8..f7b05c6 100644
--- a/fs/reiserfs/super.c
+++ b/fs/reiserfs/super.c
@@ -1437,7 +1437,6 @@ static int reiserfs_remount(struct super_block *s, int *mount_flags, char *arg)
unsigned long safe_mask = 0;
unsigned int commit_max_age = (unsigned int)-1;
struct reiserfs_journal *journal = SB_JOURNAL(s);
- char *new_opts;
int err;
char *qf_names[REISERFS_MAXQUOTAS];
unsigned int qfmt = 0;
@@ -1445,10 +1444,6 @@ static int reiserfs_remount(struct super_block *s, int *mount_flags, char *arg)
int i;
#endif
- new_opts = kstrdup(arg, GFP_KERNEL);
- if (arg && !new_opts)
- return -ENOMEM;
-
sync_filesystem(s);
reiserfs_write_lock(s);
@@ -1599,7 +1594,6 @@ static int reiserfs_remount(struct super_block *s, int *mount_flags, char *arg)
out_err_unlock:
reiserfs_write_unlock(s);
out_err:
- kfree(new_opts);
return err;
}
diff --git a/fs/squashfs/squashfs_fs.h b/fs/squashfs/squashfs_fs.h
index b3fdc82..95f8e89 100644
--- a/fs/squashfs/squashfs_fs.h
+++ b/fs/squashfs/squashfs_fs.h
@@ -183,7 +183,7 @@ static inline int squashfs_block_size(__le32 raw)
#define SQUASHFS_ID_BLOCK_BYTES(A) (SQUASHFS_ID_BLOCKS(A) *\
sizeof(u64))
/* xattr id lookup table defines */
-#define SQUASHFS_XATTR_BYTES(A) ((A) * sizeof(struct squashfs_xattr_id))
+#define SQUASHFS_XATTR_BYTES(A) (((u64) (A)) * sizeof(struct squashfs_xattr_id))
#define SQUASHFS_XATTR_BLOCK(A) (SQUASHFS_XATTR_BYTES(A) / \
SQUASHFS_METADATA_SIZE)
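
Widening one operand of SQUASHFS_XATTR_BYTES() matters because the id count comes from disk and, on 32-bit builds, an unsigned int multiplied by a 32-bit size_t wraps long before the result is range-checked. A small userspace illustration of the same hazard, using 16 as a stand-in for sizeof(struct squashfs_xattr_id):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int xattr_ids = 0x10000000;	/* plausible corrupted count */
	uint32_t narrow = xattr_ids * 16u;	/* 32-bit multiply: wraps to 0 */
	uint64_t wide = (uint64_t)xattr_ids * 16u;	/* 64-bit multiply: 4 GiB */

	printf("narrow=%u wide=%llu\n", narrow, (unsigned long long)wide);
	return 0;
}
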
diff --git a/fs/squashfs/squashfs_fs_sb.h b/fs/squashfs/squashfs_fs_sb.h
index 1e90c25..0c1ae97 100644
--- a/fs/squashfs/squashfs_fs_sb.h
+++ b/fs/squashfs/squashfs_fs_sb.h
@@ -63,7 +63,7 @@ struct squashfs_sb_info {
long long bytes_used;
unsigned int inodes;
unsigned int fragments;
- int xattr_ids;
+ unsigned int xattr_ids;
unsigned int ids;
bool panic_on_errors;
};
diff --git a/fs/squashfs/xattr.h b/fs/squashfs/xattr.h
index d8a270d..f1a463d 100644
--- a/fs/squashfs/xattr.h
+++ b/fs/squashfs/xattr.h
@@ -10,12 +10,12 @@
#ifdef CONFIG_SQUASHFS_XATTR
extern __le64 *squashfs_read_xattr_id_table(struct super_block *, u64,
- u64 *, int *);
+ u64 *, unsigned int *);
extern int squashfs_xattr_lookup(struct super_block *, unsigned int, int *,
unsigned int *, unsigned long long *);
#else
static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb,
- u64 start, u64 *xattr_table_start, int *xattr_ids)
+ u64 start, u64 *xattr_table_start, unsigned int *xattr_ids)
{
struct squashfs_xattr_id_table *id_table;
diff --git a/fs/squashfs/xattr_id.c b/fs/squashfs/xattr_id.c
index 087cab8..c8469c6 100644
--- a/fs/squashfs/xattr_id.c
+++ b/fs/squashfs/xattr_id.c
@@ -56,7 +56,7 @@ int squashfs_xattr_lookup(struct super_block *sb, unsigned int index,
* Read uncompressed xattr id lookup table indexes from disk into memory
*/
__le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
- u64 *xattr_table_start, int *xattr_ids)
+ u64 *xattr_table_start, unsigned int *xattr_ids)
{
struct squashfs_sb_info *msblk = sb->s_fs_info;
unsigned int len, indexes;
diff --git a/fs/super.c b/fs/super.c
index 7fa3ee7..2976305 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -450,13 +450,22 @@ void generic_shutdown_super(struct super_block *sb)
cgroup_writeback_umount();
- /* evict all inodes with zero refcount */
+ /* Evict all inodes with zero refcount. */
evict_inodes(sb);
- /* only nonzero refcount inodes can have marks */
+
+ /*
+ * Clean up and evict any inodes that still have references due
+ * to fsnotify or the security policy.
+ */
fsnotify_sb_delete(sb);
- fscrypt_destroy_keyring(sb);
security_sb_delete(sb);
+ /*
+ * Now that all potentially-encrypted inodes have been evicted,
+ * the fscrypt keyring can be destroyed.
+ */
+ fscrypt_destroy_keyring(sb);
+
if (sb->s_dio_done_wq) {
destroy_workqueue(sb->s_dio_done_wq);
sb->s_dio_done_wq = NULL;
diff --git a/fs/ubifs/budget.c b/fs/ubifs/budget.c
index c0b84e9..9cb05ef 100644
--- a/fs/ubifs/budget.c
+++ b/fs/ubifs/budget.c
@@ -212,11 +212,10 @@ long long ubifs_calc_available(const struct ubifs_info *c, int min_idx_lebs)
subtract_lebs += 1;
/*
- * The GC journal head LEB is not really accessible. And since
- * different write types go to different heads, we may count only on
- * one head's space.
+ * Since different write types go to different heads, we should
+ * reserve one leb for each head.
*/
- subtract_lebs += c->jhead_cnt - 1;
+ subtract_lebs += c->jhead_cnt;
/* We also reserve one LEB for deletions, which bypass budgeting */
subtract_lebs += 1;
@@ -403,7 +402,7 @@ static int calc_dd_growth(const struct ubifs_info *c,
dd_growth = req->dirtied_page ? c->bi.page_budget : 0;
if (req->dirtied_ino)
- dd_growth += c->bi.inode_budget << (req->dirtied_ino - 1);
+ dd_growth += c->bi.inode_budget * req->dirtied_ino;
if (req->mod_dent)
dd_growth += c->bi.dent_budget;
dd_growth += req->dirtied_ino_d;
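
The dd_growth change swaps a left shift for a multiplication: `inode_budget << (dirtied_ino - 1)` doubles the reservation for every additional dirtied inode, whereas the intent is one inode budget per dirtied inode. A worked example with illustrative numbers:

/*
 * With inode_budget = 160 and dirtied_ino = 4:
 *
 *   old: 160 << (4 - 1) = 160 * 8 = 1280   (grows exponentially)
 *   new: 160 * 4        = 640              (one budget unit per inode)
 *
 * The two expressions only agree for dirtied_ino <= 2.
 */
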
diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
index 79e371b..e7c36e3 100644
--- a/fs/ubifs/dir.c
+++ b/fs/ubifs/dir.c
@@ -1147,7 +1147,6 @@ static int ubifs_symlink(struct user_namespace *mnt_userns, struct inode *dir,
int err, sz_change, len = strlen(symname);
struct fscrypt_str disk_link;
struct ubifs_budget_req req = { .new_ino = 1, .new_dent = 1,
- .new_ino_d = ALIGN(len, 8),
.dirtied_ino = 1 };
struct fscrypt_name nm;
@@ -1163,6 +1162,7 @@ static int ubifs_symlink(struct user_namespace *mnt_userns, struct inode *dir,
* Budget request settings: new inode, new direntry and changing parent
* directory inode.
*/
+ req.new_ino_d = ALIGN(disk_link.len - 1, 8);
err = ubifs_budget_space(c, &req);
if (err)
return err;
@@ -1320,6 +1320,8 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
if (unlink) {
ubifs_assert(c, inode_is_locked(new_inode));
+ /* Budget for old inode's data when its nlink > 1. */
+ req.dirtied_ino_d = ALIGN(ubifs_inode(new_inode)->data_len, 8);
err = ubifs_purge_xattrs(new_inode);
if (err)
return err;
@@ -1572,6 +1574,10 @@ static int ubifs_xrename(struct inode *old_dir, struct dentry *old_dentry,
return err;
}
+ err = ubifs_budget_space(c, &req);
+ if (err)
+ goto out;
+
lock_4_inodes(old_dir, new_dir, NULL, NULL);
time = current_time(old_dir);
@@ -1597,6 +1603,7 @@ static int ubifs_xrename(struct inode *old_dir, struct dentry *old_dentry,
unlock_4_inodes(old_dir, new_dir, NULL, NULL);
ubifs_release_budget(c, &req);
+out:
fscrypt_free_filename(&fst_nm);
fscrypt_free_filename(&snd_nm);
return err;
diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index 6b45a03..7cc2abc 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -1031,7 +1031,7 @@ static int ubifs_writepage(struct page *page, struct writeback_control *wbc)
if (page->index >= synced_i_size >> PAGE_SHIFT) {
err = inode->i_sb->s_op->write_inode(inode, NULL);
if (err)
- goto out_unlock;
+ goto out_redirty;
/*
* The inode has been written, but the write-buffer has
* not been synchronized, so in case of an unclean
@@ -1059,11 +1059,17 @@ static int ubifs_writepage(struct page *page, struct writeback_control *wbc)
if (i_size > synced_i_size) {
err = inode->i_sb->s_op->write_inode(inode, NULL);
if (err)
- goto out_unlock;
+ goto out_redirty;
}
return do_writepage(page, len);
-
+out_redirty:
+ /*
+ * redirty_page_for_writepage() won't call ubifs_dirty_inode() because
+ * it passes I_DIRTY_PAGES flag while calling __mark_inode_dirty(), so
+ * there is no need to do space budget for dirty inode.
+ */
+ redirty_page_for_writepage(wbc, page);
out_unlock:
unlock_page(page);
return err;
diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
index eb05038..32c1f42 100644
--- a/fs/ubifs/super.c
+++ b/fs/ubifs/super.c
@@ -833,7 +833,7 @@ static int alloc_wbufs(struct ubifs_info *c)
INIT_LIST_HEAD(&c->jheads[i].buds_list);
err = ubifs_wbuf_init(c, &c->jheads[i].wbuf);
if (err)
- return err;
+ goto out_wbuf;
c->jheads[i].wbuf.sync_callback = &bud_wbuf_callback;
c->jheads[i].wbuf.jhead = i;
@@ -841,7 +841,7 @@ static int alloc_wbufs(struct ubifs_info *c)
c->jheads[i].log_hash = ubifs_hash_get_desc(c);
if (IS_ERR(c->jheads[i].log_hash)) {
err = PTR_ERR(c->jheads[i].log_hash);
- goto out;
+ goto out_log_hash;
}
}
@@ -854,9 +854,18 @@ static int alloc_wbufs(struct ubifs_info *c)
return 0;
-out:
- while (i--)
+out_log_hash:
+ kfree(c->jheads[i].wbuf.buf);
+ kfree(c->jheads[i].wbuf.inodes);
+
+out_wbuf:
+ while (i--) {
+ kfree(c->jheads[i].wbuf.buf);
+ kfree(c->jheads[i].wbuf.inodes);
kfree(c->jheads[i].log_hash);
+ }
+ kfree(c->jheads);
+ c->jheads = NULL;
return err;
}
diff --git a/fs/ubifs/tnc.c b/fs/ubifs/tnc.c
index 488f3da..2469f72 100644
--- a/fs/ubifs/tnc.c
+++ b/fs/ubifs/tnc.c
@@ -267,11 +267,18 @@ static struct ubifs_znode *dirty_cow_znode(struct ubifs_info *c,
if (zbr->len) {
err = insert_old_idx(c, zbr->lnum, zbr->offs);
if (unlikely(err))
- return ERR_PTR(err);
+ /*
+ * Obsolete znodes will be freed by tnc_destroy_cnext()
+ * or free_obsolete_znodes(), copied up znodes should
+ * be added back to tnc and freed by
+ * ubifs_destroy_tnc_subtree().
+ */
+ goto out;
err = add_idx_dirt(c, zbr->lnum, zbr->len);
} else
err = 0;
+out:
zbr->znode = zn;
zbr->lnum = 0;
zbr->offs = 0;
@@ -3053,6 +3060,21 @@ static void tnc_destroy_cnext(struct ubifs_info *c)
cnext = cnext->cnext;
if (ubifs_zn_obsolete(znode))
kfree(znode);
+ else if (!ubifs_zn_cow(znode)) {
+ /*
+ * Don't forget to update the clean znode count after a
+ * failed commit, because ubifs checks this count while
+ * closing the tnc. A non-obsolete znode could be re-dirtied
+ * during the commit, so the dirty flag cannot be trusted.
+ * The 'COW_ZNODE' flag is set for each dirty znode before
+ * committing and cleared once the znode becomes clean, so
+ * clean znodes can be counted according to this flag.
+ */
+ atomic_long_inc(&c->clean_zn_cnt);
+ atomic_long_inc(&ubifs_clean_zn_cnt);
+ }
} while (cnext && cnext != c->cnext);
}
diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h
index c38066c..efbb455 100644
--- a/fs/ubifs/ubifs.h
+++ b/fs/ubifs/ubifs.h
@@ -1594,8 +1594,13 @@ static inline int ubifs_check_hmac(const struct ubifs_info *c,
return crypto_memneq(expected, got, c->hmac_desc_len);
}
+#ifdef CONFIG_UBIFS_FS_AUTHENTICATION
void ubifs_bad_hash(const struct ubifs_info *c, const void *node,
const u8 *hash, int lnum, int offs);
+#else
+static inline void ubifs_bad_hash(const struct ubifs_info *c, const void *node,
+ const u8 *hash, int lnum, int offs) {};
+#endif
int __ubifs_node_check_hash(const struct ubifs_info *c, const void *buf,
const u8 *expected);
diff --git a/fs/udf/file.c b/fs/udf/file.c
index 1baff8d..83410fe 100644
--- a/fs/udf/file.c
+++ b/fs/udf/file.c
@@ -148,26 +148,24 @@ static ssize_t udf_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
goto out;
down_write(&iinfo->i_data_sem);
- if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) {
- loff_t end = iocb->ki_pos + iov_iter_count(from);
-
- if (inode->i_sb->s_blocksize <
- (udf_file_entry_alloc_offset(inode) + end)) {
- err = udf_expand_file_adinicb(inode);
- if (err) {
- inode_unlock(inode);
- udf_debug("udf_expand_adinicb: err=%d\n", err);
- return err;
- }
- } else {
- iinfo->i_lenAlloc = max(end, inode->i_size);
- up_write(&iinfo->i_data_sem);
+ if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB &&
+ inode->i_sb->s_blocksize < (udf_file_entry_alloc_offset(inode) +
+ iocb->ki_pos + iov_iter_count(from))) {
+ err = udf_expand_file_adinicb(inode);
+ if (err) {
+ inode_unlock(inode);
+ udf_debug("udf_expand_adinicb: err=%d\n", err);
+ return err;
}
} else
up_write(&iinfo->i_data_sem);
retval = __generic_file_write_iter(iocb, from);
out:
+ down_write(&iinfo->i_data_sem);
+ if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB && retval > 0)
+ iinfo->i_lenAlloc = inode->i_size;
+ up_write(&iinfo->i_data_sem);
inode_unlock(inode);
if (retval > 0) {
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index d2488b7..594d224 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -442,7 +442,7 @@ static int udf_get_block(struct inode *inode, sector_t block,
* Block beyond EOF and prealloc extents? Just discard preallocation
* as it is not useful and complicates things.
*/
- if (((loff_t)block) << inode->i_blkbits > iinfo->i_lenExtents)
+ if (((loff_t)block) << inode->i_blkbits >= iinfo->i_lenExtents)
udf_discard_prealloc(inode);
udf_clear_extent_cache(inode);
phys = inode_getblk(inode, block, &err, &new);
@@ -525,8 +525,10 @@ static int udf_do_extend_file(struct inode *inode,
}
if (fake) {
- udf_add_aext(inode, last_pos, &last_ext->extLocation,
- last_ext->extLength, 1);
+ err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
+ last_ext->extLength, 1);
+ if (err < 0)
+ goto out_err;
count++;
} else {
struct kernel_lb_addr tmploc;
@@ -560,7 +562,7 @@ static int udf_do_extend_file(struct inode *inode,
err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
last_ext->extLength, 1);
if (err)
- return err;
+ goto out_err;
count++;
}
if (new_block_bytes) {
@@ -569,7 +571,7 @@ static int udf_do_extend_file(struct inode *inode,
err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
last_ext->extLength, 1);
if (err)
- return err;
+ goto out_err;
count++;
}
@@ -583,6 +585,11 @@ static int udf_do_extend_file(struct inode *inode,
return -EIO;
return count;
+out_err:
+ /* Remove extents we've created so far */
+ udf_clear_extent_cache(inode);
+ udf_truncate_extents(inode);
+ return err;
}
/* Extend the final block of the file to final_block_len bytes */
@@ -797,19 +804,17 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
c = 0;
offset = 0;
count += ret;
- /* We are not covered by a preallocated extent? */
- if ((laarr[0].extLength & UDF_EXTENT_FLAG_MASK) !=
- EXT_NOT_RECORDED_ALLOCATED) {
- /* Is there any real extent? - otherwise we overwrite
- * the fake one... */
- if (count)
- c = !c;
- laarr[c].extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
- inode->i_sb->s_blocksize;
- memset(&laarr[c].extLocation, 0x00,
- sizeof(struct kernel_lb_addr));
- count++;
- }
+ /*
+ * Is there any real extent? - otherwise we overwrite the fake
+ * one...
+ */
+ if (count)
+ c = !c;
+ laarr[c].extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
+ inode->i_sb->s_blocksize;
+ memset(&laarr[c].extLocation, 0x00,
+ sizeof(struct kernel_lb_addr));
+ count++;
endnum = c + 1;
lastblock = 1;
} else {
@@ -1086,23 +1091,8 @@ static void udf_merge_extents(struct inode *inode, struct kernel_long_ad *laarr,
blocksize - 1) >> blocksize_bits)))) {
if (((li->extLength & UDF_EXTENT_LENGTH_MASK) +
- (lip1->extLength & UDF_EXTENT_LENGTH_MASK) +
- blocksize - 1) & ~UDF_EXTENT_LENGTH_MASK) {
- lip1->extLength = (lip1->extLength -
- (li->extLength &
- UDF_EXTENT_LENGTH_MASK) +
- UDF_EXTENT_LENGTH_MASK) &
- ~(blocksize - 1);
- li->extLength = (li->extLength &
- UDF_EXTENT_FLAG_MASK) +
- (UDF_EXTENT_LENGTH_MASK + 1) -
- blocksize;
- lip1->extLocation.logicalBlockNum =
- li->extLocation.logicalBlockNum +
- ((li->extLength &
- UDF_EXTENT_LENGTH_MASK) >>
- blocksize_bits);
- } else {
+ (lip1->extLength & UDF_EXTENT_LENGTH_MASK) +
+ blocksize - 1) <= UDF_EXTENT_LENGTH_MASK) {
li->extLength = lip1->extLength +
(((li->extLength &
UDF_EXTENT_LENGTH_MASK) +
@@ -1393,6 +1383,7 @@ static int udf_read_inode(struct inode *inode, bool hidden_inode)
ret = -EIO;
goto out;
}
+ iinfo->i_hidden = hidden_inode;
iinfo->i_unique = 0;
iinfo->i_lenEAttr = 0;
iinfo->i_lenExtents = 0;
@@ -1728,8 +1719,12 @@ static int udf_update_inode(struct inode *inode, int do_sync)
if (S_ISDIR(inode->i_mode) && inode->i_nlink > 0)
fe->fileLinkCount = cpu_to_le16(inode->i_nlink - 1);
- else
- fe->fileLinkCount = cpu_to_le16(inode->i_nlink);
+ else {
+ if (iinfo->i_hidden)
+ fe->fileLinkCount = cpu_to_le16(0);
+ else
+ fe->fileLinkCount = cpu_to_le16(inode->i_nlink);
+ }
fe->informationLength = cpu_to_le64(inode->i_size);
@@ -1900,8 +1895,13 @@ struct inode *__udf_iget(struct super_block *sb, struct kernel_lb_addr *ino,
if (!inode)
return ERR_PTR(-ENOMEM);
- if (!(inode->i_state & I_NEW))
+ if (!(inode->i_state & I_NEW)) {
+ if (UDF_I(inode)->i_hidden != hidden_inode) {
+ iput(inode);
+ return ERR_PTR(-EFSCORRUPTED);
+ }
return inode;
+ }
memcpy(&UDF_I(inode)->i_location, ino, sizeof(struct kernel_lb_addr));
err = udf_read_inode(inode, hidden_inode);
diff --git a/fs/udf/super.c b/fs/udf/super.c
index aa2f609..6b85c667 100644
--- a/fs/udf/super.c
+++ b/fs/udf/super.c
@@ -147,6 +147,7 @@ static struct inode *udf_alloc_inode(struct super_block *sb)
ei->i_next_alloc_goal = 0;
ei->i_strat4096 = 0;
ei->i_streamdir = 0;
+ ei->i_hidden = 0;
init_rwsem(&ei->i_data_sem);
ei->cached_extent.lstart = -1;
spin_lock_init(&ei->i_extent_cache_lock);
diff --git a/fs/udf/udf_i.h b/fs/udf/udf_i.h
index 06ff700..312b7c9 100644
--- a/fs/udf/udf_i.h
+++ b/fs/udf/udf_i.h
@@ -44,7 +44,8 @@ struct udf_inode_info {
unsigned i_use : 1; /* unallocSpaceEntry */
unsigned i_strat4096 : 1;
unsigned i_streamdir : 1;
- unsigned reserved : 25;
+ unsigned i_hidden : 1; /* hidden system inode */
+ unsigned reserved : 24;
__u8 *i_data;
struct kernel_lb_addr i_locStreamdir;
__u64 i_lenStreams;
diff --git a/fs/udf/udf_sb.h b/fs/udf/udf_sb.h
index 4fa6205..2205859 100644
--- a/fs/udf/udf_sb.h
+++ b/fs/udf/udf_sb.h
@@ -51,6 +51,8 @@
#define MF_DUPLICATE_MD 0x01
#define MF_MIRROR_FE_LOADED 0x02
+#define EFSCORRUPTED EUCLEAN
+
struct udf_meta_data {
__u32 s_meta_file_loc;
__u32 s_mirror_file_loc;
diff --git a/fs/verity/enable.c b/fs/verity/enable.c
index 60a4372a..dfe8acc 100644
--- a/fs/verity/enable.c
+++ b/fs/verity/enable.c
@@ -391,25 +391,27 @@ int fsverity_ioctl_enable(struct file *filp, const void __user *uarg)
goto out_drop_write;
err = enable_verity(filp, &arg);
- if (err)
- goto out_allow_write_access;
/*
- * Some pages of the file may have been evicted from pagecache after
- * being used in the Merkle tree construction, then read into pagecache
- * again by another process reading from the file concurrently. Since
- * these pages didn't undergo verification against the file digest which
- * fs-verity now claims to be enforcing, we have to wipe the pagecache
- * to ensure that all future reads are verified.
+ * We no longer drop the inode's pagecache after enabling verity. This
+ * used to be done to try to avoid a race condition where pages could be
+ * evicted after being used in the Merkle tree construction, then
+ * re-instantiated by a concurrent read. Such pages are unverified, and
+ * the backing storage could have filled them with different content, so
+ * they shouldn't be used to fulfill reads once verity is enabled.
+ *
+ * But, dropping the pagecache has a big performance impact, and it
+ * doesn't fully solve the race condition anyway. So for those reasons,
+ * and also because this race condition isn't very important relatively
+ * speaking (especially for small-ish files, where the chance of a page
+ * being used, evicted, *and* re-instantiated all while enabling verity
+ * is quite small), we no longer drop the inode's pagecache.
*/
- filemap_write_and_wait(inode->i_mapping);
- invalidate_inode_pages2(inode->i_mapping);
/*
* allow_write_access() is needed to pair with deny_write_access().
* Regardless, the filesystem won't allow writing to verity files.
*/
-out_allow_write_access:
allow_write_access(filp);
out_drop_write:
mnt_drop_write_file(filp);
diff --git a/fs/verity/verify.c b/fs/verity/verify.c
index 0adb970..10e4188 100644
--- a/fs/verity/verify.c
+++ b/fs/verity/verify.c
@@ -279,15 +279,15 @@ EXPORT_SYMBOL_GPL(fsverity_enqueue_verify_work);
int __init fsverity_init_workqueue(void)
{
/*
- * Use an unbound workqueue to allow bios to be verified in parallel
- * even when they happen to complete on the same CPU. This sacrifices
- * locality, but it's worthwhile since hashing is CPU-intensive.
+ * Use a high-priority workqueue to prioritize verification work, which
+ * blocks reads from completing, over regular application tasks.
*
- * Also use a high-priority workqueue to prioritize verification work,
- * which blocks reads from completing, over regular application tasks.
+ * For performance reasons, don't use an unbound workqueue. Using an
+ * unbound workqueue for crypto operations causes excessive scheduler
+ * latency on ARM64.
*/
fsverity_read_workqueue = alloc_workqueue("fsverity_read_queue",
- WQ_UNBOUND | WQ_HIGHPRI,
+ WQ_HIGHPRI,
num_online_cpus());
if (!fsverity_read_workqueue)
return -ENOMEM;
diff --git a/fs/xfs/libxfs/xfs_ag.c b/fs/xfs/libxfs/xfs_ag.c
index 005abfd..aff6fb5 100644
--- a/fs/xfs/libxfs/xfs_ag.c
+++ b/fs/xfs/libxfs/xfs_ag.c
@@ -173,7 +173,6 @@ __xfs_free_perag(
struct xfs_perag *pag = container_of(head, struct xfs_perag, rcu_head);
ASSERT(!delayed_work_pending(&pag->pag_blockgc_work));
- ASSERT(atomic_read(&pag->pag_ref) == 0);
kmem_free(pag);
}
@@ -192,7 +191,7 @@ xfs_free_perag(
pag = radix_tree_delete(&mp->m_perag_tree, agno);
spin_unlock(&mp->m_perag_lock);
ASSERT(pag);
- ASSERT(atomic_read(&pag->pag_ref) == 0);
+ XFS_IS_CORRUPT(pag->pag_mount, atomic_read(&pag->pag_ref) != 0);
cancel_delayed_work_sync(&pag->pag_blockgc_work);
xfs_iunlink_destroy(pag);
diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c
index 29839548..dffe4ca 100644
--- a/fs/xfs/libxfs/xfs_btree.c
+++ b/fs/xfs/libxfs/xfs_btree.c
@@ -52,6 +52,71 @@ xfs_btree_magic(
}
/*
+ * These sibling pointer checks are optimised for null sibling pointers. This
+ * happens a lot, and we don't need to byte swap at runtime if the sibling
+ * pointer is NULL.
+ *
+ * These are explicitly marked as inline because the cost of calling them as
+ * functions instead of inlining them is about 36 bytes extra code per call site
+ * on x86-64. Yes, gcc-11 fails to inline them, and explicit inlining of these
+ * two sibling check functions reduces the compiled code size by over 300
+ * bytes.
+ */
+static inline xfs_failaddr_t
+xfs_btree_check_lblock_siblings(
+ struct xfs_mount *mp,
+ struct xfs_btree_cur *cur,
+ int level,
+ xfs_fsblock_t fsb,
+ __be64 dsibling)
+{
+ xfs_fsblock_t sibling;
+
+ if (dsibling == cpu_to_be64(NULLFSBLOCK))
+ return NULL;
+
+ sibling = be64_to_cpu(dsibling);
+ if (sibling == fsb)
+ return __this_address;
+ if (level >= 0) {
+ if (!xfs_btree_check_lptr(cur, sibling, level + 1))
+ return __this_address;
+ } else {
+ if (!xfs_verify_fsbno(mp, sibling))
+ return __this_address;
+ }
+
+ return NULL;
+}
+
+static inline xfs_failaddr_t
+xfs_btree_check_sblock_siblings(
+ struct xfs_mount *mp,
+ struct xfs_btree_cur *cur,
+ int level,
+ xfs_agnumber_t agno,
+ xfs_agblock_t agbno,
+ __be32 dsibling)
+{
+ xfs_agblock_t sibling;
+
+ if (dsibling == cpu_to_be32(NULLAGBLOCK))
+ return NULL;
+
+ sibling = be32_to_cpu(dsibling);
+ if (sibling == agbno)
+ return __this_address;
+ if (level >= 0) {
+ if (!xfs_btree_check_sptr(cur, sibling, level + 1))
+ return __this_address;
+ } else {
+ if (!xfs_verify_agbno(mp, agno, sibling))
+ return __this_address;
+ }
+ return NULL;
+}
+
+/*
* Check a long btree block header. Return the address of the failing check,
* or NULL if everything is ok.
*/
@@ -65,6 +130,8 @@ __xfs_btree_check_lblock(
struct xfs_mount *mp = cur->bc_mp;
xfs_btnum_t btnum = cur->bc_btnum;
int crc = xfs_has_crc(mp);
+ xfs_failaddr_t fa;
+ xfs_fsblock_t fsb = NULLFSBLOCK;
if (crc) {
if (!uuid_equal(&block->bb_u.l.bb_uuid, &mp->m_sb.sb_meta_uuid))
@@ -83,16 +150,16 @@ __xfs_btree_check_lblock(
if (be16_to_cpu(block->bb_numrecs) >
cur->bc_ops->get_maxrecs(cur, level))
return __this_address;
- if (block->bb_u.l.bb_leftsib != cpu_to_be64(NULLFSBLOCK) &&
- !xfs_btree_check_lptr(cur, be64_to_cpu(block->bb_u.l.bb_leftsib),
- level + 1))
- return __this_address;
- if (block->bb_u.l.bb_rightsib != cpu_to_be64(NULLFSBLOCK) &&
- !xfs_btree_check_lptr(cur, be64_to_cpu(block->bb_u.l.bb_rightsib),
- level + 1))
- return __this_address;
- return NULL;
+ if (bp)
+ fsb = XFS_DADDR_TO_FSB(mp, xfs_buf_daddr(bp));
+
+ fa = xfs_btree_check_lblock_siblings(mp, cur, level, fsb,
+ block->bb_u.l.bb_leftsib);
+ if (!fa)
+ fa = xfs_btree_check_lblock_siblings(mp, cur, level, fsb,
+ block->bb_u.l.bb_rightsib);
+ return fa;
}
/* Check a long btree block header. */
@@ -130,6 +197,9 @@ __xfs_btree_check_sblock(
struct xfs_mount *mp = cur->bc_mp;
xfs_btnum_t btnum = cur->bc_btnum;
int crc = xfs_has_crc(mp);
+ xfs_failaddr_t fa;
+ xfs_agblock_t agbno = NULLAGBLOCK;
+ xfs_agnumber_t agno = NULLAGNUMBER;
if (crc) {
if (!uuid_equal(&block->bb_u.s.bb_uuid, &mp->m_sb.sb_meta_uuid))
@@ -146,16 +216,18 @@ __xfs_btree_check_sblock(
if (be16_to_cpu(block->bb_numrecs) >
cur->bc_ops->get_maxrecs(cur, level))
return __this_address;
- if (block->bb_u.s.bb_leftsib != cpu_to_be32(NULLAGBLOCK) &&
- !xfs_btree_check_sptr(cur, be32_to_cpu(block->bb_u.s.bb_leftsib),
- level + 1))
- return __this_address;
- if (block->bb_u.s.bb_rightsib != cpu_to_be32(NULLAGBLOCK) &&
- !xfs_btree_check_sptr(cur, be32_to_cpu(block->bb_u.s.bb_rightsib),
- level + 1))
- return __this_address;
- return NULL;
+ if (bp) {
+ agbno = xfs_daddr_to_agbno(mp, xfs_buf_daddr(bp));
+ agno = xfs_daddr_to_agno(mp, xfs_buf_daddr(bp));
+ }
+
+ fa = xfs_btree_check_sblock_siblings(mp, cur, level, agno, agbno,
+ block->bb_u.s.bb_leftsib);
+ if (!fa)
+ fa = xfs_btree_check_sblock_siblings(mp, cur, level, agno,
+ agbno, block->bb_u.s.bb_rightsib);
+ return fa;
}
/* Check a short btree block header. */
@@ -373,8 +445,14 @@ xfs_btree_del_cursor(
break;
}
+ /*
+ * If we are doing a BMBT update, the number of unaccounted blocks
+ * allocated during this cursor life time should be zero. If it's not
+ * zero, then we should be shut down or on our way to shutdown due to
+ * cancelling a dirty transaction on error.
+ */
ASSERT(cur->bc_btnum != XFS_BTNUM_BMAP || cur->bc_ino.allocated == 0 ||
- xfs_is_shutdown(cur->bc_mp));
+ xfs_is_shutdown(cur->bc_mp) || error != 0);
if (unlikely(cur->bc_flags & XFS_BTREE_STAGING))
kmem_free(cur->bc_ops);
if (!(cur->bc_flags & XFS_BTREE_LONG_PTRS) && cur->bc_ag.pag)
@@ -3188,7 +3266,7 @@ xfs_btree_insrec(
struct xfs_btree_block *block; /* btree block */
struct xfs_buf *bp; /* buffer for block */
union xfs_btree_ptr nptr; /* new block ptr */
- struct xfs_btree_cur *ncur; /* new btree cursor */
+ struct xfs_btree_cur *ncur = NULL; /* new btree cursor */
union xfs_btree_key nkey; /* new block key */
union xfs_btree_key *lkey;
int optr; /* old key/record index */
@@ -3268,7 +3346,7 @@ xfs_btree_insrec(
#ifdef DEBUG
error = xfs_btree_check_block(cur, block, level, bp);
if (error)
- return error;
+ goto error0;
#endif
/*
@@ -3288,7 +3366,7 @@ xfs_btree_insrec(
for (i = numrecs - ptr; i >= 0; i--) {
error = xfs_btree_debug_check_ptr(cur, pp, i, level);
if (error)
- return error;
+ goto error0;
}
xfs_btree_shift_keys(cur, kp, 1, numrecs - ptr + 1);
@@ -3373,6 +3451,8 @@ xfs_btree_insrec(
return 0;
error0:
+ if (ncur)
+ xfs_btree_del_cursor(ncur, error);
return error;
}
@@ -4265,6 +4345,21 @@ xfs_btree_visit_block(
if (xfs_btree_ptr_is_null(cur, &rptr))
return -ENOENT;
+ /*
+ * We only visit blocks once in this walk, so we have to avoid the
+ * internal xfs_btree_lookup_get_block() optimisation where it will
+ * return the same block without checking if the right sibling points
+ * back to us and creates a cyclic reference in the btree.
+ */
+ if (cur->bc_flags & XFS_BTREE_LONG_PTRS) {
+ if (be64_to_cpu(rptr.l) == XFS_DADDR_TO_FSB(cur->bc_mp,
+ xfs_buf_daddr(bp)))
+ return -EFSCORRUPTED;
+ } else {
+ if (be32_to_cpu(rptr.s) == xfs_daddr_to_agbno(cur->bc_mp,
+ xfs_buf_daddr(bp)))
+ return -EFSCORRUPTED;
+ }
return xfs_btree_lookup_get_block(cur, level, &rptr, &block);
}
@@ -4439,20 +4534,21 @@ xfs_btree_lblock_verify(
{
struct xfs_mount *mp = bp->b_mount;
struct xfs_btree_block *block = XFS_BUF_TO_BLOCK(bp);
+ xfs_fsblock_t fsb;
+ xfs_failaddr_t fa;
/* numrecs verification */
if (be16_to_cpu(block->bb_numrecs) > max_recs)
return __this_address;
/* sibling pointer verification */
- if (block->bb_u.l.bb_leftsib != cpu_to_be64(NULLFSBLOCK) &&
- !xfs_verify_fsbno(mp, be64_to_cpu(block->bb_u.l.bb_leftsib)))
- return __this_address;
- if (block->bb_u.l.bb_rightsib != cpu_to_be64(NULLFSBLOCK) &&
- !xfs_verify_fsbno(mp, be64_to_cpu(block->bb_u.l.bb_rightsib)))
- return __this_address;
-
- return NULL;
+ fsb = XFS_DADDR_TO_FSB(mp, xfs_buf_daddr(bp));
+ fa = xfs_btree_check_lblock_siblings(mp, NULL, -1, fsb,
+ block->bb_u.l.bb_leftsib);
+ if (!fa)
+ fa = xfs_btree_check_lblock_siblings(mp, NULL, -1, fsb,
+ block->bb_u.l.bb_rightsib);
+ return fa;
}
/**
@@ -4493,7 +4589,9 @@ xfs_btree_sblock_verify(
{
struct xfs_mount *mp = bp->b_mount;
struct xfs_btree_block *block = XFS_BUF_TO_BLOCK(bp);
- xfs_agblock_t agno;
+ xfs_agnumber_t agno;
+ xfs_agblock_t agbno;
+ xfs_failaddr_t fa;
/* numrecs verification */
if (be16_to_cpu(block->bb_numrecs) > max_recs)
@@ -4501,14 +4599,13 @@ xfs_btree_sblock_verify(
/* sibling pointer verification */
agno = xfs_daddr_to_agno(mp, xfs_buf_daddr(bp));
- if (block->bb_u.s.bb_leftsib != cpu_to_be32(NULLAGBLOCK) &&
- !xfs_verify_agbno(mp, agno, be32_to_cpu(block->bb_u.s.bb_leftsib)))
- return __this_address;
- if (block->bb_u.s.bb_rightsib != cpu_to_be32(NULLAGBLOCK) &&
- !xfs_verify_agbno(mp, agno, be32_to_cpu(block->bb_u.s.bb_rightsib)))
- return __this_address;
-
- return NULL;
+ agbno = xfs_daddr_to_agbno(mp, xfs_buf_daddr(bp));
+ fa = xfs_btree_check_sblock_siblings(mp, NULL, -1, agno, agbno,
+ block->bb_u.s.bb_leftsib);
+ if (!fa)
+ fa = xfs_btree_check_sblock_siblings(mp, NULL, -1, agno, agbno,
+ block->bb_u.s.bb_rightsib);
+ return fa;
}
/*
diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c
index 1d17490..2009523 100644
--- a/fs/xfs/libxfs/xfs_inode_fork.c
+++ b/fs/xfs/libxfs/xfs_inode_fork.c
@@ -50,8 +50,13 @@ xfs_init_local_fork(
mem_size++;
if (size) {
+ /*
+ * As we round up the allocation here, we need to ensure the
+ * bytes we don't copy data into are zeroed because the log
+ * vectors still copy them into the journal.
+ */
real_size = roundup(mem_size, 4);
- ifp->if_u1.if_data = kmem_alloc(real_size, KM_NOFS);
+ ifp->if_u1.if_data = kmem_zalloc(real_size, KM_NOFS);
memcpy(ifp->if_u1.if_data, data, size);
if (zero_terminate)
ifp->if_u1.if_data[size] = '\0';
@@ -500,10 +505,11 @@ xfs_idata_realloc(
/*
* For inline data, the underlying buffer must be a multiple of 4 bytes
* in size so that it can be logged and stay on word boundaries.
- * We enforce that here.
+ * We enforce that here, and use __GFP_ZERO to ensure that size
+ * extensions always zero the unused roundup area.
*/
ifp->if_u1.if_data = krealloc(ifp->if_u1.if_data, roundup(new_size, 4),
- GFP_NOFS | __GFP_NOFAIL);
+ GFP_NOFS | __GFP_NOFAIL | __GFP_ZERO);
ifp->if_bytes = new_size;
}
diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
index e58349b..04e2a57 100644
--- a/fs/xfs/libxfs/xfs_sb.c
+++ b/fs/xfs/libxfs/xfs_sb.c
@@ -31,15 +31,66 @@
*/
/*
+ * Check that all the V4 feature bits that the V5 filesystem format requires are
+ * correctly set.
+ */
+static bool
+xfs_sb_validate_v5_features(
+ struct xfs_sb *sbp)
+{
+ /* We must not have any unknown V4 feature bits set */
+ if (sbp->sb_versionnum & ~XFS_SB_VERSION_OKBITS)
+ return false;
+
+ /*
+ * The CRC bit is considered an invalid V4 flag, so we have to add it
+ * manually to the OKBITS mask.
+ */
+ if (sbp->sb_features2 & ~(XFS_SB_VERSION2_OKBITS |
+ XFS_SB_VERSION2_CRCBIT))
+ return false;
+
+ /* Now check all the required V4 feature flags are set. */
+
+#define V5_VERS_FLAGS (XFS_SB_VERSION_NLINKBIT | \
+ XFS_SB_VERSION_ALIGNBIT | \
+ XFS_SB_VERSION_LOGV2BIT | \
+ XFS_SB_VERSION_EXTFLGBIT | \
+ XFS_SB_VERSION_DIRV2BIT | \
+ XFS_SB_VERSION_MOREBITSBIT)
+
+#define V5_FEAT_FLAGS (XFS_SB_VERSION2_LAZYSBCOUNTBIT | \
+ XFS_SB_VERSION2_ATTR2BIT | \
+ XFS_SB_VERSION2_PROJID32BIT | \
+ XFS_SB_VERSION2_CRCBIT)
+
+ if ((sbp->sb_versionnum & V5_VERS_FLAGS) != V5_VERS_FLAGS)
+ return false;
+ if ((sbp->sb_features2 & V5_FEAT_FLAGS) != V5_FEAT_FLAGS)
+ return false;
+ return true;
+}
+
+/*
* We support all XFS versions newer than a v4 superblock with V2 directories.
*/
bool
xfs_sb_good_version(
struct xfs_sb *sbp)
{
- /* all v5 filesystems are supported */
+ /*
+ * All v5 filesystems are supported, but we must check that all the
+ * required v4 feature flags are enabled correctly as the code checks
+ * those flags and not for v5 support.
+ */
if (xfs_sb_is_v5(sbp))
- return true;
+ return xfs_sb_validate_v5_features(sbp);
+
+ /* We must not have any unknown v4 feature bits set */
+ if ((sbp->sb_versionnum & ~XFS_SB_VERSION_OKBITS) ||
+ ((sbp->sb_versionnum & XFS_SB_VERSION_MOREBITSBIT) &&
+ (sbp->sb_features2 & ~XFS_SB_VERSION2_OKBITS)))
+ return false;
/* versions prior to v4 are not supported */
if (XFS_SB_VERSION_NUM(sbp) < XFS_SB_VERSION_4)
@@ -51,12 +102,6 @@ xfs_sb_good_version(
if (!(sbp->sb_versionnum & XFS_SB_VERSION_EXTFLGBIT))
return false;
- /* And must not have any unknown v4 feature bits set */
- if ((sbp->sb_versionnum & ~XFS_SB_VERSION_OKBITS) ||
- ((sbp->sb_versionnum & XFS_SB_VERSION_MOREBITSBIT) &&
- (sbp->sb_features2 & ~XFS_SB_VERSION2_OKBITS)))
- return false;
-
/* It's a supported v4 filesystem */
return true;
}
@@ -70,6 +115,8 @@ xfs_sb_version_to_features(
/* optional V4 features */
if (sbp->sb_rblocks > 0)
features |= XFS_FEAT_REALTIME;
+ if (sbp->sb_versionnum & XFS_SB_VERSION_NLINKBIT)
+ features |= XFS_FEAT_NLINK;
if (sbp->sb_versionnum & XFS_SB_VERSION_ATTRBIT)
features |= XFS_FEAT_ATTR;
if (sbp->sb_versionnum & XFS_SB_VERSION_QUOTABIT)
@@ -262,12 +309,15 @@ xfs_validate_sb_common(
bool has_dalign;
if (!xfs_verify_magic(bp, dsb->sb_magicnum)) {
- xfs_warn(mp, "bad magic number");
+ xfs_warn(mp,
+"Superblock has bad magic number 0x%x. Not an XFS filesystem?",
+ be32_to_cpu(dsb->sb_magicnum));
return -EWRONGFS;
}
if (!xfs_sb_good_version(sbp)) {
- xfs_warn(mp, "bad version");
+ xfs_warn(mp,
+"Superblock has unknown features enabled or corrupted feature masks.");
return -EWRONGFS;
}
diff --git a/fs/xfs/xfs_bmap_item.c b/fs/xfs/xfs_bmap_item.c
index 0315997..51ffdec 100644
--- a/fs/xfs/xfs_bmap_item.c
+++ b/fs/xfs/xfs_bmap_item.c
@@ -39,6 +39,7 @@ STATIC void
xfs_bui_item_free(
struct xfs_bui_log_item *buip)
{
+ kmem_free(buip->bui_item.li_lv_shadow);
kmem_cache_free(xfs_bui_zone, buip);
}
@@ -198,6 +199,7 @@ xfs_bud_item_release(
struct xfs_bud_log_item *budp = BUD_ITEM(lip);
xfs_bui_release(budp->bud_buip);
+ kmem_free(budp->bud_item.li_lv_shadow);
kmem_cache_free(xfs_bud_zone, budp);
}
diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
index 73a36b7..fd2ad6a 100644
--- a/fs/xfs/xfs_bmap_util.c
+++ b/fs/xfs/xfs_bmap_util.c
@@ -851,9 +851,6 @@ xfs_alloc_file_space(
rblocks = 0;
}
- /*
- * Allocate and setup the transaction.
- */
error = xfs_trans_alloc_inode(ip, &M_RES(mp)->tr_write,
dblocks, rblocks, false, &tp);
if (error)
@@ -870,9 +867,9 @@ xfs_alloc_file_space(
if (error)
goto error;
- /*
- * Complete the transaction
- */
+ ip->i_diflags |= XFS_DIFLAG_PREALLOC;
+ xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
+
error = xfs_trans_commit(tp);
xfs_iunlock(ip, XFS_ILOCK_EXCL);
if (error)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 240eb93..8cd0c3df 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -95,8 +95,6 @@ xfs_update_prealloc_flags(
ip->i_diflags &= ~XFS_DIFLAG_PREALLOC;
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
- if (flags & XFS_PREALLOC_SYNC)
- xfs_trans_set_sync(tp);
return xfs_trans_commit(tp);
}
@@ -911,7 +909,6 @@ xfs_file_fallocate(
struct inode *inode = file_inode(file);
struct xfs_inode *ip = XFS_I(inode);
long error;
- enum xfs_prealloc_flags flags = 0;
uint iolock = XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL;
loff_t new_size = 0;
bool do_file_insert = false;
@@ -956,6 +953,10 @@ xfs_file_fallocate(
goto out_unlock;
}
+ error = file_modified(file);
+ if (error)
+ goto out_unlock;
+
if (mode & FALLOC_FL_PUNCH_HOLE) {
error = xfs_free_file_space(ip, offset, len);
if (error)
@@ -1005,8 +1006,6 @@ xfs_file_fallocate(
}
do_file_insert = true;
} else {
- flags |= XFS_PREALLOC_SET;
-
if (!(mode & FALLOC_FL_KEEP_SIZE) &&
offset + len > i_size_read(inode)) {
new_size = offset + len;
@@ -1059,13 +1058,6 @@ xfs_file_fallocate(
}
}
- if (file->f_flags & O_DSYNC)
- flags |= XFS_PREALLOC_SYNC;
-
- error = xfs_update_prealloc_flags(ip, flags);
- if (error)
- goto out_unlock;
-
/* Change file size if needed */
if (new_size) {
struct iattr iattr;
@@ -1084,8 +1076,14 @@ xfs_file_fallocate(
* leave shifted extents past EOF and hence losing access to
* the data that is contained within them.
*/
- if (do_file_insert)
+ if (do_file_insert) {
error = xfs_insert_file_space(ip, offset, len);
+ if (error)
+ goto out_unlock;
+ }
+
+ if (file->f_flags & O_DSYNC)
+ error = xfs_log_force_inode(ip);
out_unlock:
xfs_iunlock(ip, iolock);
diff --git a/fs/xfs/xfs_icreate_item.c b/fs/xfs/xfs_icreate_item.c
index 017904a..c265ae2 100644
--- a/fs/xfs/xfs_icreate_item.c
+++ b/fs/xfs/xfs_icreate_item.c
@@ -63,6 +63,7 @@ STATIC void
xfs_icreate_item_release(
struct xfs_log_item *lip)
{
+ kmem_free(ICR_ITEM(lip)->ic_item.li_lv_shadow);
kmem_cache_free(xfs_icreate_zone, ICR_ITEM(lip));
}
diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
index a607d6a..1eb7127 100644
--- a/fs/xfs/xfs_iops.c
+++ b/fs/xfs/xfs_iops.c
@@ -634,37 +634,6 @@ xfs_vn_getattr(
return 0;
}
-static void
-xfs_setattr_mode(
- struct xfs_inode *ip,
- struct iattr *iattr)
-{
- struct inode *inode = VFS_I(ip);
- umode_t mode = iattr->ia_mode;
-
- ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
-
- inode->i_mode &= S_IFMT;
- inode->i_mode |= mode & ~S_IFMT;
-}
-
-void
-xfs_setattr_time(
- struct xfs_inode *ip,
- struct iattr *iattr)
-{
- struct inode *inode = VFS_I(ip);
-
- ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
-
- if (iattr->ia_valid & ATTR_ATIME)
- inode->i_atime = iattr->ia_atime;
- if (iattr->ia_valid & ATTR_CTIME)
- inode->i_ctime = iattr->ia_ctime;
- if (iattr->ia_valid & ATTR_MTIME)
- inode->i_mtime = iattr->ia_mtime;
-}
-
static int
xfs_vn_change_ok(
struct user_namespace *mnt_userns,
@@ -764,16 +733,6 @@ xfs_setattr_nonsize(
uid = (mask & ATTR_UID) ? iattr->ia_uid : iuid;
/*
- * CAP_FSETID overrides the following restrictions:
- *
- * The set-user-ID and set-group-ID bits of a file will be
- * cleared upon successful return from chown()
- */
- if ((inode->i_mode & (S_ISUID|S_ISGID)) &&
- !capable(CAP_FSETID))
- inode->i_mode &= ~(S_ISUID|S_ISGID);
-
- /*
* Change the ownerships and register quota modifications
* in the transaction.
*/
@@ -784,7 +743,6 @@ xfs_setattr_nonsize(
olddquot1 = xfs_qm_vop_chown(tp, ip,
&ip->i_udquot, udqp);
}
- inode->i_uid = uid;
}
if (!gid_eq(igid, gid)) {
if (XFS_IS_GQUOTA_ON(mp)) {
@@ -795,15 +753,10 @@ xfs_setattr_nonsize(
olddquot2 = xfs_qm_vop_chown(tp, ip,
&ip->i_gdquot, gdqp);
}
- inode->i_gid = gid;
}
}
- if (mask & ATTR_MODE)
- xfs_setattr_mode(ip, iattr);
- if (mask & (ATTR_ATIME|ATTR_CTIME|ATTR_MTIME))
- xfs_setattr_time(ip, iattr);
-
+ setattr_copy(mnt_userns, inode, iattr);
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
XFS_STATS_INC(mp, xs_ig_attrchg);
@@ -1028,11 +981,8 @@ xfs_setattr_size(
xfs_inode_clear_eofblocks_tag(ip);
}
- if (iattr->ia_valid & ATTR_MODE)
- xfs_setattr_mode(ip, iattr);
- if (iattr->ia_valid & (ATTR_ATIME|ATTR_CTIME|ATTR_MTIME))
- xfs_setattr_time(ip, iattr);
-
+ ASSERT(!(iattr->ia_valid & (ATTR_UID | ATTR_GID)));
+ setattr_copy(mnt_userns, inode, iattr);
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
XFS_STATS_INC(mp, xs_ig_attrchg);
diff --git a/fs/xfs/xfs_iops.h b/fs/xfs/xfs_iops.h
index 2789490..6a7909f 100644
--- a/fs/xfs/xfs_iops.h
+++ b/fs/xfs/xfs_iops.h
@@ -13,7 +13,6 @@ extern const struct file_operations xfs_dir_file_operations;
extern ssize_t xfs_vn_listxattr(struct dentry *, char *data, size_t size);
-extern void xfs_setattr_time(struct xfs_inode *ip, struct iattr *iattr);
int xfs_vn_setattr_size(struct user_namespace *mnt_userns,
struct dentry *dentry, struct iattr *vap);
diff --git a/fs/xfs/xfs_pnfs.c b/fs/xfs/xfs_pnfs.c
index 5e1d29d..3a82a13 100644
--- a/fs/xfs/xfs_pnfs.c
+++ b/fs/xfs/xfs_pnfs.c
@@ -164,10 +164,12 @@ xfs_fs_map_blocks(
* that the blocks allocated and handed out to the client are
* guaranteed to be present even after a server crash.
*/
- error = xfs_update_prealloc_flags(ip,
- XFS_PREALLOC_SET | XFS_PREALLOC_SYNC);
+ error = xfs_update_prealloc_flags(ip, XFS_PREALLOC_SET);
+ if (!error)
+ error = xfs_log_force_inode(ip);
if (error)
goto out_unlock;
+
} else {
xfs_iunlock(ip, lock_flags);
}
@@ -283,7 +285,8 @@ xfs_fs_commit_blocks(
xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);
xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);
- xfs_setattr_time(ip, iattr);
+ ASSERT(!(iattr->ia_valid & (ATTR_UID | ATTR_GID)));
+ setattr_copy(&init_user_ns, inode, iattr);
if (update_isize) {
i_size_write(inode, iattr->ia_size);
ip->i_disk_size = iattr->ia_size;
diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
index 5608066..6232446 100644
--- a/fs/xfs/xfs_qm.c
+++ b/fs/xfs/xfs_qm.c
@@ -1317,8 +1317,15 @@ xfs_qm_quotacheck(
error = xfs_iwalk_threaded(mp, 0, 0, xfs_qm_dqusage_adjust, 0, true,
NULL);
- if (error)
+ if (error) {
+ /*
+ * The inode walk may have partially populated the dquot
+ * caches. We must purge them before disabling quota and
+ * tearing down the quotainfo, or else the dquots will leak.
+ */
+ xfs_qm_dqpurge_all(mp);
goto error_return;
+ }
/*
* We've made all the changes that we need to make incore. Flush them
diff --git a/fs/xfs/xfs_refcount_item.c b/fs/xfs/xfs_refcount_item.c
index 46904b7..8ef842d 100644
--- a/fs/xfs/xfs_refcount_item.c
+++ b/fs/xfs/xfs_refcount_item.c
@@ -35,6 +35,7 @@ STATIC void
xfs_cui_item_free(
struct xfs_cui_log_item *cuip)
{
+ kmem_free(cuip->cui_item.li_lv_shadow);
if (cuip->cui_format.cui_nextents > XFS_CUI_MAX_FAST_EXTENTS)
kmem_free(cuip);
else
@@ -204,6 +205,7 @@ xfs_cud_item_release(
struct xfs_cud_log_item *cudp = CUD_ITEM(lip);
xfs_cui_release(cudp->cud_cuip);
+ kmem_free(cudp->cud_item.li_lv_shadow);
kmem_cache_free(xfs_cud_zone, cudp);
}
diff --git a/fs/xfs/xfs_rmap_item.c b/fs/xfs/xfs_rmap_item.c
index 5f06959..15e7b01 100644
--- a/fs/xfs/xfs_rmap_item.c
+++ b/fs/xfs/xfs_rmap_item.c
@@ -35,6 +35,7 @@ STATIC void
xfs_rui_item_free(
struct xfs_rui_log_item *ruip)
{
+ kmem_free(ruip->rui_item.li_lv_shadow);
if (ruip->rui_format.rui_nextents > XFS_RUI_MAX_FAST_EXTENTS)
kmem_free(ruip);
else
@@ -227,6 +228,7 @@ xfs_rud_item_release(
struct xfs_rud_log_item *rudp = RUD_ITEM(lip);
xfs_rui_release(rudp->rud_ruip);
+ kmem_free(rudp->rud_item.li_lv_shadow);
kmem_cache_free(xfs_rud_zone, rudp);
}
diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index 85a9859..9619a68 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -402,6 +402,10 @@ static int zonefs_io_error_cb(struct blk_zone *zone, unsigned int idx,
data_size = zonefs_check_zone_condition(inode, zone,
false, false);
}
+ } else if (sbi->s_mount_opts & ZONEFS_MNTOPT_ERRORS_RO &&
+ data_size > isize) {
+ /* Do not expose garbage data */
+ data_size = isize;
}
/*
@@ -732,6 +736,7 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
struct zonefs_inode_info *zi = ZONEFS_I(inode);
struct block_device *bdev = inode->i_sb->s_bdev;
unsigned int max = bdev_max_zone_append_sectors(bdev);
+ pgoff_t start, end;
struct bio *bio;
ssize_t size;
int nr_pages;
@@ -740,6 +745,19 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
max = ALIGN_DOWN(max << SECTOR_SHIFT, inode->i_sb->s_blocksize);
iov_iter_truncate(from, max);
+ /*
+ * If the inode block size (zone write granularity) is smaller than the
+ * page size, we may be appending data belonging to the last page of the
+ * inode straddling inode->i_size, with that page already cached due to
+ * a buffered read or readahead. So make sure to invalidate that page.
+ * This will always be a no-op for the case where the block size is
+ * equal to the page size.
+ */
+ start = iocb->ki_pos >> PAGE_SHIFT;
+ end = (iocb->ki_pos + iov_iter_count(from) - 1) >> PAGE_SHIFT;
+ if (invalidate_inode_pages2_range(inode->i_mapping, start, end))
+ return -EBUSY;
+
nr_pages = iov_iter_npages(from, BIO_MAX_VECS);
if (!nr_pages)
return 0;
@@ -765,6 +783,24 @@ static ssize_t zonefs_file_dio_append(struct kiocb *iocb, struct iov_iter *from)
ret = submit_bio_wait(bio);
+ /*
+ * If the file zone was written underneath the file system, the zone
+ * write pointer may not be where we expect it to be, but the zone
+ * append write can still succeed. So check manually that we wrote where
+ * we intended to, that is, at zi->i_wpoffset.
+ */
+ if (!ret) {
+ sector_t wpsector =
+ zi->i_zsector + (zi->i_wpoffset >> SECTOR_SHIFT);
+
+ if (bio->bi_iter.bi_sector != wpsector) {
+ zonefs_warn(inode->i_sb,
+ "Corrupted write pointer %llu for zone at %llu\n",
+ bio->bi_iter.bi_sector, zi->i_zsector);
+ ret = -EIO;
+ }
+ }
+
zonefs_file_write_dio_end_io(iocb, size, ret, 0);
trace_zonefs_file_dio_append(inode, size, ret);
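
Two things happen in the zonefs_file_dio_append() changes above: the byte range being appended is converted to inclusive page indexes so any cached tail page can be invalidated, and after the bio completes its start sector is compared with the zone start plus zi->i_wpoffset. The page-index arithmetic is the subtler half; a small sketch, assuming 4 KiB pages (PAGE_SHIFT of 12) and made-up offsets:

    #include <stdio.h>

    #define PAGE_SHIFT 12   /* assumed 4 KiB pages */

    int main(void)
    {
        /* A 1 KiB append starting in the middle of page 256 of the file. */
        unsigned long long pos = (256ULL << PAGE_SHIFT) + 512;
        unsigned long long count = 1024;

        /* Inclusive page indexes covering [pos, pos + count). */
        unsigned long start = pos >> PAGE_SHIFT;
        unsigned long end = (pos + count - 1) >> PAGE_SHIFT;

        /* Both are 256 here: the append stays within one cached page, and
         * that page is what gets invalidated before the zone append runs. */
        printf("invalidate pages %lu..%lu\n", start, end);
        return 0;
    }
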
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index e28792c..8471717 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -903,7 +903,12 @@
#define PRINTK_INDEX
#endif
+/*
+ * Discard .note.GNU-stack, which is emitted as PROGBITS by the compiler.
+ * Otherwise, the type of .notes section would become PROGBITS instead of NOTES.
+ */
#define NOTES \
+ /DISCARD/ : { *(.note.GNU-stack) } \
.notes : AT(ADDR(.notes) - LOAD_OFFSET) { \
__start_notes = .; \
KEEP(*(.note.*)) \
diff --git a/include/drm/drm_bridge.h b/include/drm/drm_bridge.h
index 1648ce2..c84783c 100644
--- a/include/drm/drm_bridge.h
+++ b/include/drm/drm_bridge.h
@@ -447,11 +447,11 @@ struct drm_bridge_funcs {
*
* The returned array must be allocated with kmalloc() and will be
* freed by the caller. If the allocation fails, NULL should be
- * returned. num_output_fmts must be set to the returned array size.
+ * returned. num_input_fmts must be set to the returned array size.
* Formats listed in the returned array should be listed in decreasing
* preference order (the core will try all formats until it finds one
* that works). When the format is not supported NULL should be
- * returned and num_output_fmts should be set to 0.
+ * returned and num_input_fmts should be set to 0.
*
* This method is called on all elements of the bridge chain as part of
* the bus format negotiation process that happens in
diff --git a/include/drm/drm_mipi_dsi.h b/include/drm/drm_mipi_dsi.h
index af7ba80..1d263eb 100644
--- a/include/drm/drm_mipi_dsi.h
+++ b/include/drm/drm_mipi_dsi.h
@@ -288,6 +288,10 @@ int mipi_dsi_dcs_set_display_brightness(struct mipi_dsi_device *dsi,
u16 brightness);
int mipi_dsi_dcs_get_display_brightness(struct mipi_dsi_device *dsi,
u16 *brightness);
+int mipi_dsi_dcs_set_display_brightness_large(struct mipi_dsi_device *dsi,
+ u16 brightness);
+int mipi_dsi_dcs_get_display_brightness_large(struct mipi_dsi_device *dsi,
+ u16 *brightness);
/**
* struct mipi_dsi_driver - DSI driver
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 6224b1e..a23a5ae 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -484,6 +484,7 @@ int acpi_dev_get_resources(struct acpi_device *adev, struct list_head *list,
void *preproc_data);
int acpi_dev_get_dma_resources(struct acpi_device *adev,
struct list_head *list);
+int acpi_dev_get_memory_resources(struct acpi_device *adev, struct list_head *list);
int acpi_dev_filter_resource_type(struct acpi_resource *ares,
unsigned long types);
@@ -1005,7 +1006,15 @@ void acpi_os_set_prepare_extended_sleep(int (*func)(u8 sleep_state,
acpi_status acpi_os_prepare_extended_sleep(u8 sleep_state,
u32 val_a, u32 val_b);
-
+#ifdef CONFIG_X86
+struct acpi_s2idle_dev_ops {
+ struct list_head list_node;
+ void (*prepare)(void);
+ void (*restore)(void);
+};
+int acpi_register_lps0_dev(struct acpi_s2idle_dev_ops *arg);
+void acpi_unregister_lps0_dev(struct acpi_s2idle_dev_ops *arg);
+#endif /* CONFIG_X86 */
#ifndef CONFIG_IA64
void arch_reserve_mem_area(acpi_physical_address addr, size_t size);
#else
diff --git a/include/linux/acpi_mdio.h b/include/linux/acpi_mdio.h
index 0a24ab7..8e2eefa 100644
--- a/include/linux/acpi_mdio.h
+++ b/include/linux/acpi_mdio.h
@@ -9,7 +9,14 @@
#include <linux/phy.h>
#if IS_ENABLED(CONFIG_ACPI_MDIO)
-int acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode);
+int __acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode,
+ struct module *owner);
+
+static inline int
+acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *handle)
+{
+ return __acpi_mdiobus_register(mdio, handle, THIS_MODULE);
+}
#else /* CONFIG_ACPI_MDIO */
static inline int
acpi_mdiobus_register(struct mii_bus *mdio, struct fwnode_handle *fwnode)
diff --git a/include/linux/bootconfig.h b/include/linux/bootconfig.h
index 537e1b9..5296fbb 100644
--- a/include/linux/bootconfig.h
+++ b/include/linux/bootconfig.h
@@ -49,7 +49,7 @@ struct xbc_node {
/* Maximum size of boot config is 32KB - 1 */
#define XBC_DATA_MAX (XBC_VALUE - 1)
-#define XBC_NODE_MAX 1024
+#define XBC_NODE_MAX 8192
#define XBC_KEYLEN_MAX 256
#define XBC_DEPTH_MAX 16
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index 66a1f49..025391b 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -643,6 +643,11 @@ struct gov_attr_set {
/* sysfs ops for cpufreq governors */
extern const struct sysfs_ops governor_sysfs_ops;
+static inline struct gov_attr_set *to_gov_attr_set(struct kobject *kobj)
+{
+ return container_of(kobj, struct gov_attr_set, kobj);
+}
+
void gov_attr_set_init(struct gov_attr_set *attr_set, struct list_head *list_node);
void gov_attr_set_get(struct gov_attr_set *attr_set, struct list_head *list_node);
unsigned int gov_attr_set_put(struct gov_attr_set *attr_set, struct list_head *list_node);
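
to_gov_attr_set() is the standard container_of() idiom: sysfs hands the governor code a pointer to the embedded kobject, and the helper recovers the enclosing gov_attr_set by subtracting the member offset. A self-contained userspace sketch of the same pattern, with a hand-rolled container_of() and hypothetical struct names:

    #include <stddef.h>
    #include <stdio.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct kobj { int refcount; };              /* stand-in for struct kobject */

    struct gov_attrs {                          /* stand-in for gov_attr_set   */
        int usage;
        struct kobj kobj;                       /* the embedded member         */
    };

    static struct gov_attrs *to_gov_attrs(struct kobj *k)
    {
        return container_of(k, struct gov_attrs, kobj);
    }

    int main(void)
    {
        struct gov_attrs g = { .usage = 3 };
        struct kobj *k = &g.kobj;               /* all a sysfs callback sees   */

        printf("%d\n", to_gov_attrs(k)->usage); /* 3 */
        return 0;
    }
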
diff --git a/include/linux/entry-kvm.h b/include/linux/entry-kvm.h
index 0d7865a..07c878d 100644
--- a/include/linux/entry-kvm.h
+++ b/include/linux/entry-kvm.h
@@ -75,7 +75,7 @@ static inline void xfer_to_guest_mode_prepare(void)
*/
static inline bool __xfer_to_guest_mode_work_pending(void)
{
- unsigned long ti_work = READ_ONCE(current_thread_info()->flags);
+ unsigned long ti_work = read_thread_flags();
return !!(ti_work & XFER_TO_GUEST_MODE_WORK);
}
diff --git a/include/linux/eventfd.h b/include/linux/eventfd.h
index 3cd202d..36a4865 100644
--- a/include/linux/eventfd.h
+++ b/include/linux/eventfd.h
@@ -40,6 +40,7 @@ struct file *eventfd_fget(int fd);
struct eventfd_ctx *eventfd_ctx_fdget(int fd);
struct eventfd_ctx *eventfd_ctx_fileget(struct file *file);
__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n);
+__u64 eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n, unsigned mask);
int eventfd_ctx_remove_wait_queue(struct eventfd_ctx *ctx, wait_queue_entry_t *wait,
__u64 *cnt);
void eventfd_ctx_do_read(struct eventfd_ctx *ctx, __u64 *cnt);
@@ -66,6 +67,12 @@ static inline int eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
return -ENOSYS;
}
+static inline int eventfd_signal_mask(struct eventfd_ctx *ctx, __u64 n,
+ unsigned mask)
+{
+ return -ENOSYS;
+}
+
static inline void eventfd_ctx_put(struct eventfd_ctx *ctx)
{
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 1e1ac11..23ecfec 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1941,6 +1941,8 @@ extern long compat_ptr_ioctl(struct file *file, unsigned int cmd,
void inode_init_owner(struct user_namespace *mnt_userns, struct inode *inode,
const struct inode *dir, umode_t mode);
extern bool may_open_dev(const struct path *path);
+umode_t mode_strip_sgid(struct user_namespace *mnt_userns,
+ const struct inode *dir, umode_t mode);
/*
* This is the "filldir" function type, used by readdir() to let
@@ -3131,7 +3133,7 @@ extern void __destroy_inode(struct inode *);
extern struct inode *new_inode_pseudo(struct super_block *sb);
extern struct inode *new_inode(struct super_block *sb);
extern void free_inode_nonrcu(struct inode *inode);
-extern int should_remove_suid(struct dentry *);
+extern int setattr_should_drop_suidgid(struct user_namespace *, struct inode *);
extern int file_remove_privs(struct file *);
extern void __insert_inode_hash(struct inode *, unsigned long hashval);
@@ -3569,7 +3571,7 @@ int __init list_bdev_fs_names(char *buf, size_t size);
static inline bool is_sxid(umode_t mode)
{
- return (mode & S_ISUID) || ((mode & S_ISGID) && (mode & S_IXGRP));
+ return mode & (S_ISUID | S_ISGID);
}
static inline int check_sticky(struct user_namespace *mnt_userns,
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 832e65f..afc678d 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -850,7 +850,7 @@ static inline void __ftrace_enabled_restore(int enabled)
#define CALLER_ADDR5 ((unsigned long)ftrace_return_address(5))
#define CALLER_ADDR6 ((unsigned long)ftrace_return_address(6))
-static inline unsigned long get_lock_parent_ip(void)
+static __always_inline unsigned long get_lock_parent_ip(void)
{
unsigned long addr = CALLER_ADDR0;
diff --git a/include/linux/gpio/driver.h b/include/linux/gpio/driver.h
index ad479db..11c26ae 100644
--- a/include/linux/gpio/driver.h
+++ b/include/linux/gpio/driver.h
@@ -425,7 +425,7 @@ struct gpio_chip {
void __iomem *reg_dir_in;
bool bgpio_dir_unreadable;
int bgpio_bits;
- spinlock_t bgpio_lock;
+ raw_spinlock_t bgpio_lock;
unsigned long bgpio_data;
unsigned long bgpio_dir;
#endif /* CONFIG_GPIO_GENERIC */
diff --git a/include/linux/hid.h b/include/linux/hid.h
index 26742ca..c3478e3 100644
--- a/include/linux/hid.h
+++ b/include/linux/hid.h
@@ -599,6 +599,7 @@ struct hid_device { /* device report descriptor */
unsigned long status; /* see STAT flags above */
unsigned claimed; /* Claimed by hidinput, hiddev? */
unsigned quirks; /* Various quirks the device can pull on us */
+ unsigned initial_quirks; /* Initial set of quirks supplied when creating device */
bool io_started; /* If IO has started */
struct list_head inputs; /* The list of inputs */
@@ -804,6 +805,7 @@ struct hid_driver {
* @output_report: send output report to device
* @idle: send idle request to device
* @may_wakeup: return if device may act as a wakeup source during system-suspend
+ * @max_buffer_size: over-ride maximum data buffer size (default: HID_MAX_BUFFER_SIZE)
*/
struct hid_ll_driver {
int (*start)(struct hid_device *hdev);
@@ -829,6 +831,8 @@ struct hid_ll_driver {
int (*idle)(struct hid_device *hdev, int report, int idle, int reqtype);
bool (*may_wakeup)(struct hid_device *hdev);
+
+ unsigned int max_buffer_size;
};
extern struct hid_ll_driver i2c_hid_ll_driver;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 4aa1031..de17904 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -184,7 +184,7 @@ static inline void *kmap_local_pfn(unsigned long pfn)
static inline void __kunmap_local(void *addr)
{
#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
- kunmap_flush_on_unmap(addr);
+ kunmap_flush_on_unmap(PTR_ALIGN_DOWN(addr, PAGE_SIZE));
#endif
}
@@ -211,7 +211,7 @@ static inline void *kmap_atomic_pfn(unsigned long pfn)
static inline void __kunmap_atomic(void *addr)
{
#ifdef ARCH_HAS_FLUSH_ON_KUNMAP
- kunmap_flush_on_unmap(addr);
+ kunmap_flush_on_unmap(PTR_ALIGN_DOWN(addr, PAGE_SIZE));
#endif
pagefault_enable();
if (IS_ENABLED(CONFIG_PREEMPT_RT))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index af8e704..4ede8df 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -7,6 +7,7 @@
#include <linux/fs.h>
#include <linux/hugetlb_inline.h>
#include <linux/cgroup.h>
+#include <linux/page_ref.h>
#include <linux/list.h>
#include <linux/kref.h>
#include <linux/pgtable.h>
@@ -165,7 +166,7 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
vm_flags_t vm_flags);
long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
long freed);
-bool isolate_huge_page(struct page *page, struct list_head *list);
+int isolate_hugetlb(struct page *page, struct list_head *list);
int get_hwpoison_huge_page(struct page *page, bool *hugetlb);
int get_huge_page_for_hwpoison(unsigned long pfn, int flags);
void putback_active_hugepage(struct page *page);
@@ -353,9 +354,9 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
return NULL;
}
-static inline bool isolate_huge_page(struct page *page, struct list_head *list)
+static inline int isolate_hugetlb(struct page *page, struct list_head *list)
{
- return false;
+ return -EBUSY;
}
static inline int get_hwpoison_huge_page(struct page *page, bool *hugetlb)
@@ -683,7 +684,10 @@ static inline struct hstate *hstate_sizelog(int page_size_log)
if (!page_size_log)
return &default_hstate;
- return size_to_hstate(1UL << page_size_log);
+ if (page_size_log < BITS_PER_LONG)
+ return size_to_hstate(1UL << page_size_log);
+
+ return NULL;
}
static inline struct hstate *hstate_vma(struct vm_area_struct *vma)
@@ -1099,6 +1103,18 @@ static inline __init void hugetlb_cma_check(void)
}
#endif
+#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
+static inline bool hugetlb_pmd_shared(pte_t *pte)
+{
+ return page_count(virt_to_page(pte)) > 1;
+}
+#else
+static inline bool hugetlb_pmd_shared(pte_t *pte)
+{
+ return false;
+}
+#endif
+
bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr);
#ifndef __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
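
The hstate_sizelog() hunk above guards a user-influenced shift: page_size_log comes from mmap() flags, and shifting 1UL by BITS_PER_LONG or more is undefined behaviour rather than a graceful failure. A hedged userspace sketch of the same guard:

    #include <limits.h>
    #include <stdio.h>

    #define BITS_PER_ULONG (sizeof(unsigned long) * CHAR_BIT)

    /* Page size for a log2 value, or 0 when the shift would be undefined. */
    static unsigned long size_from_log(unsigned int page_size_log)
    {
        if (page_size_log >= BITS_PER_ULONG)
            return 0;
        return 1UL << page_size_log;
    }

    int main(void)
    {
        printf("%lu\n", size_from_log(21)); /* 2097152: a 2 MiB huge page */
        printf("%lu\n", size_from_log(80)); /* 0: rejected instead of UB  */
        return 0;
    }
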
diff --git a/include/linux/ima.h b/include/linux/ima.h
index b6ab66a5..6e1bca7 100644
--- a/include/linux/ima.h
+++ b/include/linux/ima.h
@@ -21,7 +21,8 @@ extern int ima_file_check(struct file *file, int mask);
extern void ima_post_create_tmpfile(struct user_namespace *mnt_userns,
struct inode *inode);
extern void ima_file_free(struct file *file);
-extern int ima_file_mmap(struct file *file, unsigned long prot);
+extern int ima_file_mmap(struct file *file, unsigned long reqprot,
+ unsigned long prot, unsigned long flags);
extern int ima_file_mprotect(struct vm_area_struct *vma, unsigned long prot);
extern int ima_load_data(enum kernel_load_data_id id, bool contents);
extern int ima_post_load_data(char *buf, loff_t size,
@@ -91,7 +92,8 @@ static inline void ima_file_free(struct file *file)
return;
}
-static inline int ima_file_mmap(struct file *file, unsigned long prot)
+static inline int ima_file_mmap(struct file *file, unsigned long reqprot,
+ unsigned long prot, unsigned long flags)
{
return 0;
}
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 81da710..0cf0078 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -515,9 +515,6 @@ struct context_entry {
u64 hi;
};
-/* si_domain contains mulitple devices */
-#define DOMAIN_FLAG_STATIC_IDENTITY BIT(0)
-
/*
* When VT-d works in the scalable mode, it allows DMA translation to
* happen through either first level or second level page table. This
diff --git a/include/linux/irq.h b/include/linux/irq.h
index c8293c8..f9e6449 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -875,16 +875,22 @@ static inline int irq_data_get_node(struct irq_data *d)
return irq_common_data_get_node(d->common);
}
+static inline struct cpumask *irq_data_get_affinity_mask(struct irq_data *d)
+{
+ return d->common->affinity;
+}
+
+static inline void irq_data_update_affinity(struct irq_data *d,
+ const struct cpumask *m)
+{
+ cpumask_copy(d->common->affinity, m);
+}
+
static inline struct cpumask *irq_get_affinity_mask(int irq)
{
struct irq_data *d = irq_get_irq_data(irq);
- return d ? d->common->affinity : NULL;
-}
-
-static inline struct cpumask *irq_data_get_affinity_mask(struct irq_data *d)
-{
- return d->common->affinity;
+ return d ? irq_data_get_affinity_mask(d) : NULL;
}
#ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK
@@ -906,7 +912,7 @@ static inline void irq_data_update_effective_affinity(struct irq_data *d,
static inline
struct cpumask *irq_data_get_effective_affinity_mask(struct irq_data *d)
{
- return d->common->affinity;
+ return irq_data_get_affinity_mask(d);
}
#endif
diff --git a/include/linux/kernel_stat.h b/include/linux/kernel_stat.h
index 69ae6b2..f9460fb 100644
--- a/include/linux/kernel_stat.h
+++ b/include/linux/kernel_stat.h
@@ -72,7 +72,7 @@ extern unsigned int kstat_irqs_usr(unsigned int irq);
/*
* Number of interrupts per cpu, since bootup
*/
-static inline unsigned int kstat_cpu_irqs_sum(unsigned int cpu)
+static inline unsigned long kstat_cpu_irqs_sum(unsigned int cpu)
{
return kstat_cpu(cpu).irqs_sum;
}
diff --git a/include/linux/kobject.h b/include/linux/kobject.h
index ea30529..d38916e 100644
--- a/include/linux/kobject.h
+++ b/include/linux/kobject.h
@@ -116,7 +116,7 @@ extern void kobject_put(struct kobject *kobj);
extern const void *kobject_namespace(struct kobject *kobj);
extern void kobject_get_ownership(struct kobject *kobj,
kuid_t *uid, kgid_t *gid);
-extern char *kobject_get_path(struct kobject *kobj, gfp_t flag);
+extern char *kobject_get_path(const struct kobject *kobj, gfp_t flag);
/**
* kobject_has_children - Returns whether a kobject has children.
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index ef8c7ac..2cbb6a5 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -349,6 +349,8 @@ extern int proc_kprobes_optimization_handler(struct ctl_table *table,
size_t *length, loff_t *ppos);
#endif
extern void wait_for_kprobe_optimizer(void);
+bool optprobe_queued_unopt(struct optimized_kprobe *op);
+bool kprobe_disarmed(struct kprobe *p);
#else
static inline void wait_for_kprobe_optimizer(void) { }
#endif /* CONFIG_OPTPROBES */
diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 346b0f2..db47aae 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -56,6 +56,31 @@ bool kthread_is_per_cpu(struct task_struct *k);
__k; \
})
+/**
+ * kthread_run_on_cpu - create and wake a cpu bound thread.
+ * @threadfn: the function to run until signal_pending(current).
+ * @data: data ptr for @threadfn.
+ * @cpu: The cpu on which the thread should be bound.
+ * @namefmt: printf-style name for the thread. Format is restricted
+ * to "name.*%u". Code fills in cpu number.
+ *
+ * Description: Convenient wrapper for kthread_create_on_cpu()
+ * followed by wake_up_process(). Returns the kthread or
+ * ERR_PTR(-ENOMEM).
+ */
+static inline struct task_struct *
+kthread_run_on_cpu(int (*threadfn)(void *data), void *data,
+ unsigned int cpu, const char *namefmt)
+{
+ struct task_struct *p;
+
+ p = kthread_create_on_cpu(threadfn, data, cpu, namefmt);
+ if (!IS_ERR(p))
+ wake_up_process(p);
+
+ return p;
+}
+
void free_kthread_struct(struct task_struct *k);
void kthread_bind(struct task_struct *k, unsigned int cpu);
void kthread_bind_mask(struct task_struct *k, const struct cpumask *mask);
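
A sketch of how a caller might use the new kthread_run_on_cpu() wrapper; my_cpu_worker() and start_worker_on() are hypothetical names, and this is kernel-module code rather than something that runs standalone:

    #include <linux/err.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>

    /* Hypothetical per-cpu worker: runs until kthread_stop() is called. */
    static int my_cpu_worker(void *data)
    {
        while (!kthread_should_stop()) {
            /* ... per-cpu work would go here ... */
            schedule();
        }
        return 0;
    }

    static int start_worker_on(unsigned int cpu)
    {
        struct task_struct *t;

        /* Create, bind to @cpu and wake in one call; %u becomes the cpu id. */
        t = kthread_run_on_cpu(my_cpu_worker, NULL, cpu, "my_worker/%u");
        if (IS_ERR(t))
            return PTR_ERR(t);
        return 0;
    }
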
diff --git a/include/linux/lockd/xdr4.h b/include/linux/lockd/xdr4.h
index 5ae766f..025250a 100644
--- a/include/linux/lockd/xdr4.h
+++ b/include/linux/lockd/xdr4.h
@@ -24,6 +24,7 @@
+void nlm4svc_set_file_lock_range(struct file_lock *fl, u64 off, u64 len);
int nlm4svc_decode_testargs(struct svc_rqst *, __be32 *);
int nlm4svc_encode_testres(struct svc_rqst *, __be32 *);
int nlm4svc_decode_lockargs(struct svc_rqst *, __be32 *);
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 3a75d64..5b6c38f 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -275,9 +275,11 @@ struct hh_cache {
* relationship HH alignment <= LL alignment.
*/
#define LL_RESERVED_SPACE(dev) \
- ((((dev)->hard_header_len+(dev)->needed_headroom)&~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
+ ((((dev)->hard_header_len + READ_ONCE((dev)->needed_headroom)) \
+ & ~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
#define LL_RESERVED_SPACE_EXTRA(dev,extra) \
- ((((dev)->hard_header_len+(dev)->needed_headroom+(extra))&~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
+ ((((dev)->hard_header_len + READ_ONCE((dev)->needed_headroom) + (extra)) \
+ & ~(HH_DATA_MOD - 1)) + HH_DATA_MOD)
struct header_ops {
int (*create) (struct sk_buff *skb, struct net_device *dev,
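
Aside from the READ_ONCE() annotation (needed_headroom can be rewritten concurrently, for example by tunnel devices, so the lockless read must be marked), the LL_RESERVED_SPACE() arithmetic simply rounds the header sum down to an HH_DATA_MOD boundary and then adds one full HH_DATA_MOD, guaranteeing an aligned reservation larger than the raw sum. A small userspace sketch, assuming HH_DATA_MOD is 16 as in the mainline definition:

    #include <stdio.h>

    #define HH_DATA_MOD 16  /* same value the kernel uses */

    /* Same arithmetic as LL_RESERVED_SPACE(): round the header sum down to an
     * HH_DATA_MOD boundary, then add one full HH_DATA_MOD on top. */
    static unsigned int reserved(unsigned int hard_header_len,
                                 unsigned int needed_headroom)
    {
        return ((hard_header_len + needed_headroom) & ~(HH_DATA_MOD - 1))
               + HH_DATA_MOD;
    }

    int main(void)
    {
        printf("%u\n", reserved(14, 0)); /* 16: bare Ethernet header        */
        printf("%u\n", reserved(14, 8)); /* 32: extra tunnel headroom       */
        printf("%u\n", reserved(16, 0)); /* 32: aligned sums still round up */
        return 0;
    }
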
diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
index 783f871..7fcd56c 100644
--- a/include/linux/nfs_xdr.h
+++ b/include/linux/nfs_xdr.h
@@ -1806,6 +1806,8 @@ struct nfs_rpc_ops {
struct nfs_server *(*clone_server)(struct nfs_server *, struct nfs_fh *,
struct nfs_fattr *, rpc_authflavor_t);
int (*discover_trunking)(struct nfs_server *, struct nfs_fh *);
+ void (*enable_swap)(struct inode *inode);
+ void (*disable_swap)(struct inode *inode);
};
/*
diff --git a/include/linux/nospec.h b/include/linux/nospec.h
index c1e79f7..9f0af4f 100644
--- a/include/linux/nospec.h
+++ b/include/linux/nospec.h
@@ -11,6 +11,10 @@
struct task_struct;
+#ifndef barrier_nospec
+# define barrier_nospec() do { } while (0)
+#endif
+
/**
* array_index_mask_nospec() - generate a ~0 mask when index < size, 0 otherwise
* @index: array element index
diff --git a/include/linux/nvme-tcp.h b/include/linux/nvme-tcp.h
index 959e0bd..73364ae 100644
--- a/include/linux/nvme-tcp.h
+++ b/include/linux/nvme-tcp.h
@@ -114,8 +114,9 @@ struct nvme_tcp_icresp_pdu {
struct nvme_tcp_term_pdu {
struct nvme_tcp_hdr hdr;
__le16 fes;
- __le32 fei;
- __u8 rsvd[8];
+ __le16 feil;
+ __le16 feiu;
+ __u8 rsvd[10];
};
/**
diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index 87932bd..0895976 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -66,7 +66,6 @@ struct nvmem_keepout {
* @word_size: Minimum read/write access granularity.
* @stride: Minimum read/write access stride.
* @priv: User context passed to read/write callbacks.
- * @wp-gpio: Write protect pin
* @ignore_wp: Write Protect pin is managed by the provider.
*
* Note: A default "nvmem<id>" name will be assigned to the device if
@@ -81,7 +80,6 @@ struct nvmem_config {
const char *name;
int id;
struct module *owner;
- struct gpio_desc *wp_gpio;
const struct nvmem_cell_info *cells;
int ncells;
const struct nvmem_keepout *keepout;
diff --git a/include/linux/of_mdio.h b/include/linux/of_mdio.h
index da633d3..8a52ef2 100644
--- a/include/linux/of_mdio.h
+++ b/include/linux/of_mdio.h
@@ -14,9 +14,25 @@
#if IS_ENABLED(CONFIG_OF_MDIO)
bool of_mdiobus_child_is_phy(struct device_node *child);
-int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np);
-int devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
- struct device_node *np);
+int __of_mdiobus_register(struct mii_bus *mdio, struct device_node *np,
+ struct module *owner);
+
+static inline int of_mdiobus_register(struct mii_bus *mdio,
+ struct device_node *np)
+{
+ return __of_mdiobus_register(mdio, np, THIS_MODULE);
+}
+
+int __devm_of_mdiobus_register(struct device *dev, struct mii_bus *mdio,
+ struct device_node *np, struct module *owner);
+
+static inline int devm_of_mdiobus_register(struct device *dev,
+ struct mii_bus *mdio,
+ struct device_node *np)
+{
+ return __devm_of_mdiobus_register(dev, mdio, np, THIS_MODULE);
+}
+
struct mdio_device *of_mdio_find_device(struct device_node *np);
struct phy_device *of_phy_find_device(struct device_node *phy_np);
struct phy_device *
diff --git a/include/linux/panic.h b/include/linux/panic.h
index f584490..8eb5897 100644
--- a/include/linux/panic.h
+++ b/include/linux/panic.h
@@ -11,16 +11,11 @@ extern long (*panic_blink)(int state);
__printf(1, 2)
void panic(const char *fmt, ...) __noreturn __cold;
void nmi_panic(struct pt_regs *regs, const char *msg);
+void check_panic_on_warn(const char *origin);
extern void oops_enter(void);
extern void oops_exit(void);
extern bool oops_may_print(void);
-#ifdef CONFIG_SMP
-extern unsigned int sysctl_oops_all_cpu_backtrace;
-#else
-#define sysctl_oops_all_cpu_backtrace 0
-#endif /* CONFIG_SMP */
-
extern int panic_timeout;
extern unsigned long panic_print;
extern int panic_on_oops;
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 9d6e752..7e47143 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -557,6 +557,7 @@ struct pci_host_bridge {
void *release_data;
unsigned int ignore_reset_delay:1; /* For entire hierarchy */
unsigned int no_ext_tags:1; /* No Extended Tags */
+ unsigned int no_inc_mrrs:1; /* No Increase MRRS */
unsigned int native_aer:1; /* OS may use PCIe AER */
unsigned int native_pcie_hotplug:1; /* OS may use PCIe hotplug */
unsigned int native_shpc_hotplug:1; /* OS may use SHPC hotplug */
@@ -1389,6 +1390,7 @@ void pci_bus_add_resource(struct pci_bus *bus, struct resource *res,
unsigned int flags);
struct resource *pci_bus_resource_n(const struct pci_bus *bus, int n);
void pci_bus_remove_resources(struct pci_bus *bus);
+void pci_bus_remove_resource(struct pci_bus *bus, struct resource *res);
int devm_request_pci_bus_resources(struct device *dev,
struct list_head *resources);
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 04f44a4..8a1e264 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -3012,6 +3012,8 @@
#define PCI_DEVICE_ID_INTEL_VMD_9A0B 0x9a0b
#define PCI_DEVICE_ID_INTEL_S21152BB 0xb152
+#define PCI_VENDOR_ID_WANGXUN 0x8088
+
#define PCI_VENDOR_ID_SCALEMP 0x8686
#define PCI_DEVICE_ID_SCALEMP_VSMP_CTL 0x1010
@@ -3092,6 +3094,8 @@
#define PCI_VENDOR_ID_3COM_2 0xa727
+#define PCI_VENDOR_ID_SOLIDRUN 0xd063
+
#define PCI_VENDOR_ID_DIGIUM 0xd161
#define PCI_DEVICE_ID_DIGIUM_HFC4S 0xb410
diff --git a/include/linux/random.h b/include/linux/random.h
index 3feafab..ed75fb2 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -19,14 +19,14 @@ void add_input_randomness(unsigned int type, unsigned int code,
void add_interrupt_randomness(int irq) __latent_entropy;
void add_hwgenerator_randomness(const void *buf, size_t len, size_t entropy);
-#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
static inline void add_latent_entropy(void)
{
+#if defined(LATENT_ENTROPY_PLUGIN) && !defined(__CHECKER__)
add_device_randomness((const void *)&latent_entropy, sizeof(latent_entropy));
-}
#else
-static inline void add_latent_entropy(void) { }
+ add_device_randomness(NULL, 0);
#endif
+}
void get_random_bytes(void *buf, size_t len);
size_t __must_check get_random_bytes_arch(void *buf, size_t len);
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 434d12f..13bddb8 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -193,6 +193,7 @@ void synchronize_rcu_tasks_rude(void);
#define rcu_note_voluntary_context_switch(t) rcu_tasks_qs(t, false)
void exit_tasks_rcu_start(void);
+void exit_tasks_rcu_stop(void);
void exit_tasks_rcu_finish(void);
#else /* #ifdef CONFIG_TASKS_RCU_GENERIC */
#define rcu_tasks_qs(t, preempt) do { } while (0)
@@ -200,6 +201,7 @@ void exit_tasks_rcu_finish(void);
#define call_rcu_tasks call_rcu
#define synchronize_rcu_tasks synchronize_rcu
static inline void exit_tasks_rcu_start(void) { }
+static inline void exit_tasks_rcu_stop(void) { }
static inline void exit_tasks_rcu_finish(void) { }
#endif /* #else #ifdef CONFIG_TASKS_RCU_GENERIC */
@@ -311,11 +313,18 @@ static inline int rcu_read_lock_any_held(void)
* RCU_LOCKDEP_WARN - emit lockdep splat if specified condition is met
* @c: condition to check
* @s: informative message
+ *
+ * This checks debug_lockdep_rcu_enabled() before checking (c) to
+ * prevent early boot splats due to lockdep not yet being initialized,
+ * and rechecks it after checking (c) to prevent false-positive splats
+ * due to races with lockdep being disabled. See commit 3066820034b5dd
+ * ("rcu: Reject RCU_LOCKDEP_WARN() false positives") for more detail.
*/
#define RCU_LOCKDEP_WARN(c, s) \
do { \
static bool __section(".data.unlikely") __warned; \
- if ((c) && debug_lockdep_rcu_enabled() && !__warned) { \
+ if (debug_lockdep_rcu_enabled() && (c) && \
+ debug_lockdep_rcu_enabled() && !__warned) { \
__warned = true; \
lockdep_rcu_suspicious(__FILE__, __LINE__, s); \
} \
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index caae8e0..d351f1b3 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -59,6 +59,7 @@ extern void sched_post_fork(struct task_struct *p);
extern void sched_dead(struct task_struct *p);
void __noreturn do_task_dead(void);
+void __noreturn make_task_dead(int signr);
extern void proc_caches_init(void);
diff --git a/include/linux/sh_intc.h b/include/linux/sh_intc.h
index c255273..37ad810 100644
--- a/include/linux/sh_intc.h
+++ b/include/linux/sh_intc.h
@@ -97,7 +97,10 @@ struct intc_hw_desc {
unsigned int nr_subgroups;
};
-#define _INTC_ARRAY(a) a, __same_type(a, NULL) ? 0 : sizeof(a)/sizeof(*a)
+#define _INTC_SIZEOF_OR_ZERO(a) (_Generic(a, \
+ typeof(NULL): 0, \
+ default: sizeof(a)))
+#define _INTC_ARRAY(a) a, _INTC_SIZEOF_OR_ZERO(a)/sizeof(*a)
#define INTC_HW_DESC(vectors, groups, mask_regs, \
prio_regs, sense_regs, ack_regs) \
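
The rewritten _INTC_ARRAY() relies on _Generic() selecting by type, so a literal NULL argument yields 0 without sizeof ever being applied to it, while a real array still falls through to the default arm. A hedged userspace sketch, assuming GNU typeof and a NULL defined as (void *)0:

    #include <stddef.h>
    #include <stdio.h>

    #define SIZEOF_OR_ZERO(a) (_Generic(a,  \
        typeof(NULL): 0,                    \
        default: sizeof(a)))

    int main(void)
    {
        int vectors[4];

        size_t n1 = SIZEOF_OR_ZERO(vectors); /* sizeof(vectors), 16 here   */
        size_t n2 = SIZEOF_OR_ZERO(NULL);    /* 0: NULL is matched by type */

        printf("%zu %zu\n", n1, n2);
        return 0;
    }
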
diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
index 48d015e..cc338c6 100644
--- a/include/linux/stmmac.h
+++ b/include/linux/stmmac.h
@@ -251,6 +251,7 @@ struct plat_stmmacenet_data {
int rss_en;
int mac_port_sel_speed;
bool en_tx_lpi_clockgating;
+ bool rx_clk_runs_in_lpi;
int has_xgmac;
bool vlan_fail_q_en;
u8 vlan_fail_q;
diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
index fa372b4..47cf70c 100644
--- a/include/linux/sysctl.h
+++ b/include/linux/sysctl.h
@@ -206,6 +206,9 @@ struct ctl_table_header *register_sysctl_paths(const struct ctl_path *path,
void unregister_sysctl_table(struct ctl_table_header * table);
extern int sysctl_init(void);
+extern void __register_sysctl_init(const char *path, struct ctl_table *table,
+ const char *table_name);
+#define register_sysctl_init(path, table) __register_sysctl_init(path, table, #table)
void do_sysctl_args(void);
extern int pwrsw_enabled;
diff --git a/include/linux/sysfb.h b/include/linux/sysfb.h
index 8ba8b5b..c1ef5fc 100644
--- a/include/linux/sysfb.h
+++ b/include/linux/sysfb.h
@@ -70,11 +70,16 @@ static inline void sysfb_disable(void)
#ifdef CONFIG_EFI
extern struct efifb_dmi_info efifb_dmi_list[];
-void sysfb_apply_efi_quirks(struct platform_device *pd);
+void sysfb_apply_efi_quirks(void);
+void sysfb_set_efifb_fwnode(struct platform_device *pd);
#else /* CONFIG_EFI */
-static inline void sysfb_apply_efi_quirks(struct platform_device *pd)
+static inline void sysfb_apply_efi_quirks(void)
+{
+}
+
+static inline void sysfb_set_efifb_fwnode(struct platform_device *pd)
{
}
diff --git a/include/linux/thermal.h b/include/linux/thermal.h
index c3148939..b94314e 100644
--- a/include/linux/thermal.h
+++ b/include/linux/thermal.h
@@ -92,6 +92,7 @@ struct thermal_cooling_device_ops {
struct thermal_cooling_device {
int id;
char *type;
+ unsigned long max_state;
struct device device;
struct device_node *np;
void *devdata;
@@ -113,7 +114,7 @@ struct thermal_cooling_device {
* @trip_hyst_attrs: attributes for trip points for sysfs: trip hysteresis
* @mode: current mode of this thermal zone
* @devdata: private pointer for device private data
- * @trips: number of trip points the thermal zone supports
+ * @num_trips: number of trip points the thermal zone supports
* @trips_disabled; bitmap for disabled trips
* @passive_delay_jiffies: number of jiffies to wait between polls when
* performing passive cooling.
@@ -153,7 +154,7 @@ struct thermal_zone_device {
struct thermal_attr *trip_hyst_attrs;
enum thermal_device_mode mode;
void *devdata;
- int trips;
+ int num_trips;
unsigned long trips_disabled; /* bitmap for disabled trips */
unsigned long passive_delay_jiffies;
unsigned long polling_delay_jiffies;
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 0999f631..9a07353 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -118,6 +118,15 @@ static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
return test_bit(flag, (unsigned long *)&ti->flags);
}
+/*
+ * This may be used in noinstr code, and needs to be __always_inline to prevent
+ * inadvertent instrumentation.
+ */
+static __always_inline unsigned long read_ti_thread_flags(struct thread_info *ti)
+{
+ return READ_ONCE(ti->flags);
+}
+
#define set_thread_flag(flag) \
set_ti_thread_flag(current_thread_info(), flag)
#define clear_thread_flag(flag) \
@@ -130,6 +139,11 @@ static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
test_and_clear_ti_thread_flag(current_thread_info(), flag)
#define test_thread_flag(flag) \
test_ti_thread_flag(current_thread_info(), flag)
+#define read_thread_flags() \
+ read_ti_thread_flags(current_thread_info())
+
+#define read_task_thread_flags(t) \
+ read_ti_thread_flags(task_thread_info(t))
#ifdef CONFIG_GENERIC_ENTRY
#define set_syscall_work(fl) \
diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
index 28031b1..cd01da1 100644
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -231,12 +231,11 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
* not add unwanted padding between the beginning of the section and the
* structure. Force alignment to the same alignment as the section start.
*
- * When lockdep is enabled, we make sure to always do the RCU portions of
- * the tracepoint code, regardless of whether tracing is on. However,
- * don't check if the condition is false, due to interaction with idle
- * instrumentation. This lets us find RCU issues triggered with tracepoints
- * even when this tracepoint is off. This code has no purpose other than
- * poking RCU a bit.
+ * When lockdep is enabled, we make sure to always test if RCU is
+ * "watching" regardless if the tracepoint is enabled or not. Tracepoints
+ * require RCU to be active, and it should always warn at the tracepoint
+ * site if it is not watching, as it will need to be active when the
+ * tracepoint is enabled.
*/
#define __DECLARE_TRACE(name, proto, args, cond, data_proto) \
extern int __traceiter_##name(data_proto); \
@@ -249,9 +248,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
TP_ARGS(args), \
TP_CONDITION(cond), 0); \
if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) { \
- rcu_read_lock_sched_notrace(); \
- rcu_dereference_sched(__tracepoint_##name.funcs);\
- rcu_read_unlock_sched_notrace(); \
+ WARN_ON_ONCE(!rcu_is_watching()); \
} \
} \
__DECLARE_TRACE_RCU(name, PARAMS(proto), PARAMS(args), \
diff --git a/include/linux/transport_class.h b/include/linux/transport_class.h
index 63076fb..2efc271a 100644
--- a/include/linux/transport_class.h
+++ b/include/linux/transport_class.h
@@ -70,8 +70,14 @@ void transport_destroy_device(struct device *);
static inline int
transport_register_device(struct device *dev)
{
+ int ret;
+
transport_setup_device(dev);
- return transport_add_device(dev);
+ ret = transport_add_device(dev);
+ if (ret)
+ transport_destroy_device(dev);
+
+ return ret;
}
static inline void
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac03940..e1d59ca 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -338,6 +338,10 @@ copy_struct_from_user(void *dst, size_t ksize, const void __user *src,
size_t size = min(ksize, usize);
size_t rest = max(ksize, usize) - size;
+ /* Double check if ksize is larger than a known object size. */
+ if (WARN_ON_ONCE(ksize > __builtin_object_size(dst, 1)))
+ return -E2BIG;
+
/* Deal with trailing bytes. */
if (usize < ksize) {
memset(dst + size, 0, rest);
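
The new WARN_ON_ONCE() in copy_struct_from_user() compares the caller's ksize with what the compiler can prove about the destination: __builtin_object_size() returns the known object size at compile time, or (size_t)-1 when nothing is provable, so only definite overflows trip the check (which is why it lives in the inline helper, where the caller's object is visible). A rough userspace sketch of the same guard; checked_fill() is a hypothetical stand-in:

    #include <stdio.h>
    #include <string.h>

    struct args { int a; int b; };

    /* Fill dst only if the requested size cannot exceed what the compiler
     * proved about the destination; known == (size_t)-1 means "unknown". */
    static int checked_fill(void *dst, size_t ksize, size_t known)
    {
        if (known != (size_t)-1 && ksize > known)
            return -1;              /* provable overflow, refuse */
        memset(dst, 0, ksize);
        return 0;
    }

    int main(void)
    {
        struct args a;

        /* __builtin_object_size(&a, 1) is sizeof(a) here, known at compile
         * time because the object is visible at the call site. */
        printf("%d\n", checked_fill(&a, sizeof(a), __builtin_object_size(&a, 1)));
        printf("%d\n", checked_fill(&a, 64, __builtin_object_size(&a, 1)));
        return 0;
    }
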
diff --git a/include/linux/usb.h b/include/linux/usb.h
index da1329b..89f58d0 100644
--- a/include/linux/usb.h
+++ b/include/linux/usb.h
@@ -749,11 +749,14 @@ extern struct device *usb_intf_get_dma_device(struct usb_interface *intf);
extern int usb_acpi_set_power_state(struct usb_device *hdev, int index,
bool enable);
extern bool usb_acpi_power_manageable(struct usb_device *hdev, int index);
+extern int usb_acpi_port_lpm_incapable(struct usb_device *hdev, int index);
#else
static inline int usb_acpi_set_power_state(struct usb_device *hdev, int index,
bool enable) { return 0; }
static inline bool usb_acpi_power_manageable(struct usb_device *hdev, int index)
{ return true; }
+static inline int usb_acpi_port_lpm_incapable(struct usb_device *hdev, int index)
+ { return 0; }
#endif
/* USB autosuspend and autoresume */
diff --git a/include/linux/util_macros.h b/include/linux/util_macros.h
index 72299f2..43db6e4 100644
--- a/include/linux/util_macros.h
+++ b/include/linux/util_macros.h
@@ -38,4 +38,16 @@
*/
#define find_closest_descending(x, a, as) __find_closest(x, a, as, >=)
+/**
+ * is_insidevar - check if the @ptr points inside the @var memory range.
+ * @ptr: the pointer to a memory address.
+ * @var: the variable which address and size identify the memory range.
+ *
+ * Evaluates to true if the address in @ptr lies within the memory
+ * range allocated to @var.
+ */
+#define is_insidevar(ptr, var) \
+ ((uintptr_t)(ptr) >= (uintptr_t)(var) && \
+ (uintptr_t)(ptr) < (uintptr_t)(var) + sizeof(var))
+
#endif
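
A minimal usage sketch of the new is_insidevar() helper, re-declaring the macro locally so the example stands alone; buf and other are arbitrary local variables:

    #include <stdint.h>
    #include <stdio.h>

    /* Same check as the new helper: does ptr point inside var's storage? */
    #define is_insidevar(ptr, var)                       \
        ((uintptr_t)(ptr) >= (uintptr_t)(var) &&         \
         (uintptr_t)(ptr) <  (uintptr_t)(var) + sizeof(var))

    int main(void)
    {
        char buf[32];
        int other;

        printf("%d\n", is_insidevar(&buf[8], buf)); /* 1: &buf[8] lies in buf */
        printf("%d\n", is_insidevar(&other, buf));  /* 0: a different object  */
        return 0;
    }
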
diff --git a/include/net/netfilter/nf_tproxy.h b/include/net/netfilter/nf_tproxy.h
index 82d0e41..faa108b 100644
--- a/include/net/netfilter/nf_tproxy.h
+++ b/include/net/netfilter/nf_tproxy.h
@@ -17,6 +17,13 @@ static inline bool nf_tproxy_sk_is_transparent(struct sock *sk)
return false;
}
+static inline void nf_tproxy_twsk_deschedule_put(struct inet_timewait_sock *tw)
+{
+ local_bh_disable();
+ inet_twsk_deschedule_put(tw);
+ local_bh_enable();
+}
+
/* assign a socket to the skb -- consumes sk */
static inline void nf_tproxy_assign_sock(struct sk_buff *skb, struct sock *sk)
{
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 891b44d..6906da5 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -1335,4 +1335,11 @@ void mini_qdisc_pair_block_init(struct mini_Qdisc_pair *miniqp,
int sch_frag_xmit_hook(struct sk_buff *skb, int (*xmit)(struct sk_buff *skb));
+/* Make sure qdisc is no longer in SCHED state. */
+static inline void qdisc_synchronize(const struct Qdisc *q)
+{
+ while (test_bit(__QDISC_STATE_SCHED, &q->state))
+ msleep(1);
+}
+
#endif
diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
index 8d2c3dd..790252c 100644
--- a/include/net/sctp/structs.h
+++ b/include/net/sctp/structs.h
@@ -1420,6 +1420,7 @@ struct sctp_stream_priorities {
/* The next stream in line */
struct sctp_stream_out_ext *next;
__u16 prio;
+ __u16 users;
};
struct sctp_stream_out_ext {
diff --git a/include/net/sock.h b/include/net/sock.h
index e1a303e..3a4e813 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -323,7 +323,7 @@ struct bpf_local_storage;
* @sk_tskey: counter to disambiguate concurrent tstamp requests
* @sk_zckey: counter to order MSG_ZEROCOPY notifications
* @sk_socket: Identd and reporting IO signals
- * @sk_user_data: RPC layer private data
+ * @sk_user_data: RPC layer private data. Write-protected by @sk_callback_lock.
* @sk_frag: cached page frag
* @sk_peek_off: current peek_offset value
* @sk_send_head: front of stuff to transmit
@@ -1847,7 +1847,12 @@ void sk_common_release(struct sock *sk);
* Default socket callbacks and setup code
*/
-/* Initialise core socket variables */
+/* Initialise core socket variables using an explicit uid. */
+void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid);
+
+/* Initialise core socket variables.
+ * Assumes struct socket *sock is embedded in a struct socket_alloc.
+ */
void sock_init_data(struct socket *sock, struct sock *sk);
/*
@@ -2315,6 +2320,19 @@ static inline __must_check bool skb_set_owner_sk_safe(struct sk_buff *skb, struc
return false;
}
+static inline struct sk_buff *skb_clone_and_charge_r(struct sk_buff *skb, struct sock *sk)
+{
+ skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC));
+ if (skb) {
+ if (sk_rmem_schedule(sk, skb, skb->truesize)) {
+ skb_set_owner_r(skb, sk);
+ return skb;
+ }
+ __kfree_skb(skb);
+ }
+ return NULL;
+}
+
static inline void skb_prepare_for_gro(struct sk_buff *skb)
{
if (skb->destructor != sock_wfree) {
diff --git a/include/net/tc_act/tc_pedit.h b/include/net/tc_act/tc_pedit.h
index 3e02709a..83fe399 100644
--- a/include/net/tc_act/tc_pedit.h
+++ b/include/net/tc_act/tc_pedit.h
@@ -4,22 +4,29 @@
#include <net/act_api.h>
#include <linux/tc_act/tc_pedit.h>
+#include <linux/types.h>
struct tcf_pedit_key_ex {
enum pedit_header_type htype;
enum pedit_cmd cmd;
};
-struct tcf_pedit {
- struct tc_action common;
- unsigned char tcfp_nkeys;
- unsigned char tcfp_flags;
- u32 tcfp_off_max_hint;
+struct tcf_pedit_parms {
struct tc_pedit_key *tcfp_keys;
struct tcf_pedit_key_ex *tcfp_keys_ex;
+ u32 tcfp_off_max_hint;
+ unsigned char tcfp_nkeys;
+ unsigned char tcfp_flags;
+ struct rcu_head rcu;
+};
+
+struct tcf_pedit {
+ struct tc_action common;
+ struct tcf_pedit_parms __rcu *parms;
};
#define to_pedit(a) ((struct tcf_pedit *)a)
+#define to_pedit_parms(a) (rcu_dereference(to_pedit(a)->parms))
static inline bool is_tcf_pedit(const struct tc_action *a)
{
@@ -32,37 +39,81 @@ static inline bool is_tcf_pedit(const struct tc_action *a)
static inline int tcf_pedit_nkeys(const struct tc_action *a)
{
- return to_pedit(a)->tcfp_nkeys;
+ struct tcf_pedit_parms *parms;
+ int nkeys;
+
+ rcu_read_lock();
+ parms = to_pedit_parms(a);
+ nkeys = parms->tcfp_nkeys;
+ rcu_read_unlock();
+
+ return nkeys;
}
static inline u32 tcf_pedit_htype(const struct tc_action *a, int index)
{
- if (to_pedit(a)->tcfp_keys_ex)
- return to_pedit(a)->tcfp_keys_ex[index].htype;
+ u32 htype = TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK;
+ struct tcf_pedit_parms *parms;
- return TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK;
+ rcu_read_lock();
+ parms = to_pedit_parms(a);
+ if (parms->tcfp_keys_ex)
+ htype = parms->tcfp_keys_ex[index].htype;
+ rcu_read_unlock();
+
+ return htype;
}
static inline u32 tcf_pedit_cmd(const struct tc_action *a, int index)
{
- if (to_pedit(a)->tcfp_keys_ex)
- return to_pedit(a)->tcfp_keys_ex[index].cmd;
+ struct tcf_pedit_parms *parms;
+ u32 cmd = __PEDIT_CMD_MAX;
- return __PEDIT_CMD_MAX;
+ rcu_read_lock();
+ parms = to_pedit_parms(a);
+ if (parms->tcfp_keys_ex)
+ cmd = parms->tcfp_keys_ex[index].cmd;
+ rcu_read_unlock();
+
+ return cmd;
}
static inline u32 tcf_pedit_mask(const struct tc_action *a, int index)
{
- return to_pedit(a)->tcfp_keys[index].mask;
+ struct tcf_pedit_parms *parms;
+ u32 mask;
+
+ rcu_read_lock();
+ parms = to_pedit_parms(a);
+ mask = parms->tcfp_keys[index].mask;
+ rcu_read_unlock();
+
+ return mask;
}
static inline u32 tcf_pedit_val(const struct tc_action *a, int index)
{
- return to_pedit(a)->tcfp_keys[index].val;
+ struct tcf_pedit_parms *parms;
+ u32 val;
+
+ rcu_read_lock();
+ parms = to_pedit_parms(a);
+ val = parms->tcfp_keys[index].val;
+ rcu_read_unlock();
+
+ return val;
}
static inline u32 tcf_pedit_offset(const struct tc_action *a, int index)
{
- return to_pedit(a)->tcfp_keys[index].off;
+ struct tcf_pedit_parms *parms;
+ u32 off;
+
+ rcu_read_lock();
+ parms = to_pedit_parms(a);
+ off = parms->tcfp_keys[index].off;
+ rcu_read_unlock();
+
+ return off;
}
#endif /* __NET_TC_PED_H */
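
The tcf_pedit rework above is the usual RCU parameter-swap pattern: every tunable moves behind a single __rcu pointer, readers snapshot it inside rcu_read_lock()/rcu_read_unlock(), and writers publish a whole new object and free the old one only after a grace period. A kernel-style sketch of that pattern with a hypothetical demo_obj (not the real tcf_pedit, which additionally serializes writers under the action lock; the literal 1 passed to rcu_replace_pointer() stands in for that lockdep condition):

    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    /* Hypothetical object whose tunables sit behind one RCU-protected pointer. */
    struct demo_parms {
        u32 limit;
        struct rcu_head rcu;
    };

    struct demo_obj {
        struct demo_parms __rcu *parms;
    };

    static u32 demo_get_limit(struct demo_obj *obj)
    {
        struct demo_parms *p;
        u32 limit;

        rcu_read_lock();
        p = rcu_dereference(obj->parms);    /* snapshot while readers are safe */
        limit = p->limit;
        rcu_read_unlock();

        return limit;
    }

    static void demo_set_limit(struct demo_obj *obj, u32 limit)
    {
        struct demo_parms *nparms, *oparms;

        nparms = kzalloc(sizeof(*nparms), GFP_KERNEL);
        if (!nparms)
            return;
        nparms->limit = limit;

        /* Publish the new parameters, then free the old ones only after a
         * grace period so concurrent readers never see freed memory. */
        oparms = rcu_replace_pointer(obj->parms, nparms, 1);
        if (oparms)
            kfree_rcu(oparms, rcu);
    }
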
diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
index 5cf8422..c7ee527 100644
--- a/include/scsi/libiscsi.h
+++ b/include/scsi/libiscsi.h
@@ -414,6 +414,8 @@ extern int iscsi_host_get_max_scsi_cmds(struct Scsi_Host *shost,
extern struct iscsi_cls_session *
iscsi_session_setup(struct iscsi_transport *, struct Scsi_Host *shost,
uint16_t, int, int, uint32_t, unsigned int);
+void iscsi_session_remove(struct iscsi_cls_session *cls_session);
+void iscsi_session_free(struct iscsi_cls_session *cls_session);
extern void iscsi_session_teardown(struct iscsi_cls_session *);
extern void iscsi_session_recovery_timedout(struct iscsi_cls_session *);
extern int iscsi_set_param(struct iscsi_cls_conn *cls_conn,
diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
index 0f2f149a..304ccf1 100644
--- a/include/scsi/scsi_transport_iscsi.h
+++ b/include/scsi/scsi_transport_iscsi.h
@@ -236,6 +236,14 @@ enum {
ISCSI_SESSION_FREE,
};
+enum {
+ ISCSI_SESSION_TARGET_UNBOUND,
+ ISCSI_SESSION_TARGET_ALLOCATED,
+ ISCSI_SESSION_TARGET_SCANNED,
+ ISCSI_SESSION_TARGET_UNBINDING,
+ ISCSI_SESSION_TARGET_MAX,
+};
+
#define ISCSI_MAX_TARGET -1
struct iscsi_cls_session {
@@ -262,6 +270,7 @@ struct iscsi_cls_session {
*/
pid_t creator;
int state;
+ int target_state; /* session target bind state */
int sid; /* session id */
void *dd_data; /* LLD private data */
struct device dev; /* sysfs transport/container device */
diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h
index c3039e9..32e93d5 100644
--- a/include/sound/soc-dapm.h
+++ b/include/sound/soc-dapm.h
@@ -16,6 +16,7 @@
#include <sound/asoc.h>
struct device;
+struct snd_pcm_substream;
struct snd_soc_pcm_runtime;
struct soc_enum;
diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
index 8f58fd9..9271b5d 100644
--- a/include/trace/events/btrfs.h
+++ b/include/trace/events/btrfs.h
@@ -96,7 +96,7 @@ struct btrfs_space_info;
EM( FLUSH_DELALLOC_WAIT, "FLUSH_DELALLOC_WAIT") \
EM( FLUSH_DELALLOC_FULL, "FLUSH_DELALLOC_FULL") \
EM( FLUSH_DELAYED_REFS_NR, "FLUSH_DELAYED_REFS_NR") \
- EM( FLUSH_DELAYED_REFS, "FLUSH_ELAYED_REFS") \
+ EM( FLUSH_DELAYED_REFS, "FLUSH_DELAYED_REFS") \
EM( ALLOC_CHUNK, "ALLOC_CHUNK") \
EM( ALLOC_CHUNK_FORCE, "ALLOC_CHUNK_FORCE") \
EM( RUN_DELAYED_IPUTS, "RUN_DELAYED_IPUTS") \
diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
index 670e417..fec252b 100644
--- a/include/trace/events/rcu.h
+++ b/include/trace/events/rcu.h
@@ -768,7 +768,7 @@ TRACE_EVENT_RCU(rcu_torture_read,
TP_ARGS(rcutorturename, rhp, secs, c_old, c),
TP_STRUCT__entry(
- __field(char, rcutorturename[RCUTORTURENAME_LEN])
+ __array(char, rcutorturename, RCUTORTURENAME_LEN)
__field(struct rcu_head *, rhp)
__field(unsigned long, secs)
__field(unsigned long, c_old)
diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
index a77b690..7f0b91d 100644
--- a/include/trace/trace_events.h
+++ b/include/trace/trace_events.h
@@ -479,7 +479,7 @@ static struct trace_event_functions trace_event_type_funcs_##call = { \
#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
-#define ALIGN_STRUCTFIELD(type) ((int)(offsetof(struct {char a; type b;}, b)))
+#define ALIGN_STRUCTFIELD(type) ((int)(__alignof__(struct {type b;})))
#undef __field_ext
#define __field_ext(_type, _item, _filter_type) { \
diff --git a/include/uapi/linux/eventpoll.h b/include/uapi/linux/eventpoll.h
index 8a3432d..e687658 100644
--- a/include/uapi/linux/eventpoll.h
+++ b/include/uapi/linux/eventpoll.h
@@ -41,6 +41,12 @@
#define EPOLLMSG (__force __poll_t)0x00000400
#define EPOLLRDHUP (__force __poll_t)0x00002000
+/*
+ * Internal flag - wakeup generated by io_uring, used to detect recursion back
+ * into the io_uring poll handler.
+ */
+#define EPOLL_URING_WAKE ((__force __poll_t)(1U << 27))
+
/* Set exclusive wakeup mode for the target file descriptor */
#define EPOLLEXCLUSIVE ((__force __poll_t)(1U << 28))
diff --git a/include/uapi/linux/ip.h b/include/uapi/linux/ip.h
index d2f1433..860bbf6 100644
--- a/include/uapi/linux/ip.h
+++ b/include/uapi/linux/ip.h
@@ -18,6 +18,7 @@
#ifndef _UAPI_LINUX_IP_H
#define _UAPI_LINUX_IP_H
#include <linux/types.h>
+#include <linux/stddef.h>
#include <asm/byteorder.h>
#define IPTOS_TOS_MASK 0x1E
diff --git a/include/uapi/linux/ipv6.h b/include/uapi/linux/ipv6.h
index 62e5e16..39c6add 100644
--- a/include/uapi/linux/ipv6.h
+++ b/include/uapi/linux/ipv6.h
@@ -4,6 +4,7 @@
#include <linux/libc-compat.h>
#include <linux/types.h>
+#include <linux/stddef.h>
#include <linux/in6.h>
#include <asm/byteorder.h>
diff --git a/include/uapi/linux/netfilter/nf_conntrack_sctp.h b/include/uapi/linux/netfilter/nf_conntrack_sctp.h
index edc6dda..2d6f80d 100644
--- a/include/uapi/linux/netfilter/nf_conntrack_sctp.h
+++ b/include/uapi/linux/netfilter/nf_conntrack_sctp.h
@@ -15,7 +15,7 @@ enum sctp_conntrack {
SCTP_CONNTRACK_SHUTDOWN_RECD,
SCTP_CONNTRACK_SHUTDOWN_ACK_SENT,
SCTP_CONNTRACK_HEARTBEAT_SENT,
- SCTP_CONNTRACK_HEARTBEAT_ACKED,
+ SCTP_CONNTRACK_HEARTBEAT_ACKED, /* no longer used */
SCTP_CONNTRACK_MAX
};
diff --git a/include/uapi/linux/netfilter/nfnetlink_cttimeout.h b/include/uapi/linux/netfilter/nfnetlink_cttimeout.h
index 6b20fb2..aa805e6 100644
--- a/include/uapi/linux/netfilter/nfnetlink_cttimeout.h
+++ b/include/uapi/linux/netfilter/nfnetlink_cttimeout.h
@@ -94,7 +94,7 @@ enum ctattr_timeout_sctp {
CTA_TIMEOUT_SCTP_SHUTDOWN_RECD,
CTA_TIMEOUT_SCTP_SHUTDOWN_ACK_SENT,
CTA_TIMEOUT_SCTP_HEARTBEAT_SENT,
- CTA_TIMEOUT_SCTP_HEARTBEAT_ACKED,
+ CTA_TIMEOUT_SCTP_HEARTBEAT_ACKED, /* no longer used */
__CTA_TIMEOUT_SCTP_MAX
};
#define CTA_TIMEOUT_SCTP_MAX (__CTA_TIMEOUT_SCTP_MAX - 1)
diff --git a/include/uapi/linux/usb/video.h b/include/uapi/linux/usb/video.h
index bfdae12..c58854f 100644
--- a/include/uapi/linux/usb/video.h
+++ b/include/uapi/linux/usb/video.h
@@ -179,6 +179,36 @@
#define UVC_CONTROL_CAP_AUTOUPDATE (1 << 3)
#define UVC_CONTROL_CAP_ASYNCHRONOUS (1 << 4)
+/* 3.9.2.6 Color Matching Descriptor Values */
+enum uvc_color_primaries_values {
+ UVC_COLOR_PRIMARIES_UNSPECIFIED,
+ UVC_COLOR_PRIMARIES_BT_709_SRGB,
+ UVC_COLOR_PRIMARIES_BT_470_2_M,
+ UVC_COLOR_PRIMARIES_BT_470_2_B_G,
+ UVC_COLOR_PRIMARIES_SMPTE_170M,
+ UVC_COLOR_PRIMARIES_SMPTE_240M,
+};
+
+enum uvc_transfer_characteristics_values {
+ UVC_TRANSFER_CHARACTERISTICS_UNSPECIFIED,
+ UVC_TRANSFER_CHARACTERISTICS_BT_709,
+ UVC_TRANSFER_CHARACTERISTICS_BT_470_2_M,
+ UVC_TRANSFER_CHARACTERISTICS_BT_470_2_B_G,
+ UVC_TRANSFER_CHARACTERISTICS_SMPTE_170M,
+ UVC_TRANSFER_CHARACTERISTICS_SMPTE_240M,
+ UVC_TRANSFER_CHARACTERISTICS_LINEAR,
+ UVC_TRANSFER_CHARACTERISTICS_SRGB,
+};
+
+enum uvc_matrix_coefficients {
+ UVC_MATRIX_COEFFICIENTS_UNSPECIFIED,
+ UVC_MATRIX_COEFFICIENTS_BT_709,
+ UVC_MATRIX_COEFFICIENTS_FCC,
+ UVC_MATRIX_COEFFICIENTS_BT_470_2_B_G,
+ UVC_MATRIX_COEFFICIENTS_SMPTE_170M,
+ UVC_MATRIX_COEFFICIENTS_SMPTE_240M,
+};
+
/* ------------------------------------------------------------------------
* UVC structures
*/
diff --git a/include/uapi/linux/uvcvideo.h b/include/uapi/linux/uvcvideo.h
index 8288137..a9d0a64 100644
--- a/include/uapi/linux/uvcvideo.h
+++ b/include/uapi/linux/uvcvideo.h
@@ -86,7 +86,7 @@ struct uvc_xu_control_query {
* struct. The first two fields are added by the driver, they can be used for
* clock synchronisation. The rest is an exact copy of a UVC payload header.
* Only complete objects with complete buffers are included. Therefore it's
- * always sizeof(meta->ts) + sizeof(meta->sof) + meta->length bytes large.
+ * always sizeof(meta->ns) + sizeof(meta->sof) + meta->length bytes large.
*/
struct uvc_meta_buf {
__u64 ns;
diff --git a/include/xen/interface/platform.h b/include/xen/interface/platform.h
index 732efb0..744bc41 100644
--- a/include/xen/interface/platform.h
+++ b/include/xen/interface/platform.h
@@ -500,6 +500,8 @@ struct xenpf_symdata {
};
DEFINE_GUEST_HANDLE_STRUCT(xenpf_symdata);
+#define XENPF_get_dom0_console 64
+
struct xen_platform_op {
uint32_t cmd;
uint32_t interface_version; /* XENPF_INTERFACE_VERSION */
@@ -523,6 +525,7 @@ struct xen_platform_op {
struct xenpf_mem_hotadd mem_add;
struct xenpf_core_parking core_parking;
struct xenpf_symdata symdata;
+ struct dom0_vga_console_info dom0_console;
uint8_t pad[128];
} u;
};
diff --git a/init/Kconfig b/init/Kconfig
index a414439..dafc3ba 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -91,6 +91,10 @@
config CC_HAS_NO_PROFILE_FN_ATTR
def_bool $(success,echo '__attribute__((no_profile_instrument_function)) int x();' | $(CC) -x c - -c -o /dev/null -Werror)
+config PAHOLE_VERSION
+ int
+ default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+
config CONSTRUCTORS
bool
diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
index 87bc38b..81485c1 100644
--- a/io_uring/io-wq.c
+++ b/io_uring/io-wq.c
@@ -513,7 +513,7 @@ static struct io_wq_work *io_get_next_work(struct io_wqe_acct *acct,
static bool io_flush_signals(void)
{
- if (unlikely(test_thread_flag(TIF_NOTIFY_SIGNAL))) {
+ if (test_thread_flag(TIF_NOTIFY_SIGNAL) || current->task_works) {
__set_current_state(TASK_RUNNING);
tracehook_notify_signal();
return true;
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 48707ae..2d6f275 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -486,6 +486,7 @@ struct io_poll_iocb {
struct file *file;
struct wait_queue_head *head;
__poll_t events;
+ int retries;
struct wait_queue_entry wait;
};
@@ -578,6 +579,7 @@ struct io_sr_msg {
int msg_flags;
int bgid;
size_t len;
+ size_t done_io;
struct io_buffer *kbuf;
};
@@ -739,6 +741,7 @@ enum {
REQ_F_CREDS_BIT,
REQ_F_REFCOUNT_BIT,
REQ_F_ARM_LTIMEOUT_BIT,
+ REQ_F_PARTIAL_IO_BIT,
/* keep async read/write and isreg together and in order */
REQ_F_NOWAIT_READ_BIT,
REQ_F_NOWAIT_WRITE_BIT,
@@ -794,6 +797,8 @@ enum {
REQ_F_REFCOUNT = BIT(REQ_F_REFCOUNT_BIT),
/* there is a linked timeout that has to be armed */
REQ_F_ARM_LTIMEOUT = BIT(REQ_F_ARM_LTIMEOUT_BIT),
+ /* request has already done partial IO */
+ REQ_F_PARTIAL_IO = BIT(REQ_F_PARTIAL_IO_BIT),
};
struct async_poll {
@@ -1630,13 +1635,15 @@ static void io_cqring_ev_posted(struct io_ring_ctx *ctx)
* wake as many waiters as we need to.
*/
if (wq_has_sleeper(&ctx->cq_wait))
- wake_up_all(&ctx->cq_wait);
+ __wake_up(&ctx->cq_wait, TASK_NORMAL, 0,
+ poll_to_key(EPOLL_URING_WAKE | EPOLLIN));
if (ctx->sq_data && waitqueue_active(&ctx->sq_data->wait))
wake_up(&ctx->sq_data->wait);
if (io_should_trigger_evfd(ctx))
- eventfd_signal(ctx->cq_ev_fd, 1);
+ eventfd_signal_mask(ctx->cq_ev_fd, 1, EPOLL_URING_WAKE);
if (waitqueue_active(&ctx->poll_wait))
- wake_up_interruptible(&ctx->poll_wait);
+ __wake_up(&ctx->poll_wait, TASK_INTERRUPTIBLE, 0,
+ poll_to_key(EPOLL_URING_WAKE | EPOLLIN));
}
static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
@@ -1646,12 +1653,14 @@ static void io_cqring_ev_posted_iopoll(struct io_ring_ctx *ctx)
if (ctx->flags & IORING_SETUP_SQPOLL) {
if (waitqueue_active(&ctx->cq_wait))
- wake_up_all(&ctx->cq_wait);
+ __wake_up(&ctx->cq_wait, TASK_NORMAL, 0,
+ poll_to_key(EPOLL_URING_WAKE | EPOLLIN));
}
if (io_should_trigger_evfd(ctx))
- eventfd_signal(ctx->cq_ev_fd, 1);
+ eventfd_signal_mask(ctx->cq_ev_fd, 1, EPOLL_URING_WAKE);
if (waitqueue_active(&ctx->poll_wait))
- wake_up_interruptible(&ctx->poll_wait);
+ __wake_up(&ctx->poll_wait, TASK_INTERRUPTIBLE, 0,
+ poll_to_key(EPOLL_URING_WAKE | EPOLLIN));
}
/* Returns true if there are no backlogged entries after the flush */
@@ -2455,6 +2464,15 @@ static inline unsigned int io_put_rw_kbuf(struct io_kiocb *req)
static inline bool io_run_task_work(void)
{
+ /*
+ * PF_IO_WORKER never returns to userspace, so check here if we have
+ * notify work that needs processing.
+ */
+ if (current->flags & PF_IO_WORKER &&
+ test_thread_flag(TIF_NOTIFY_RESUME)) {
+ __set_current_state(TASK_RUNNING);
+ tracehook_notify_resume(NULL);
+ }
if (test_thread_flag(TIF_NOTIFY_SIGNAL) || current->task_works) {
__set_current_state(TASK_RUNNING);
tracehook_notify_signal();
@@ -2696,17 +2714,32 @@ static bool io_rw_should_reissue(struct io_kiocb *req)
}
#endif
-static bool __io_complete_rw_common(struct io_kiocb *req, long res)
+/*
+ * Trigger the notifications after having done some IO, and finish the write
+ * accounting, if any.
+ */
+static void io_req_io_end(struct io_kiocb *req)
{
- if (req->rw.kiocb.ki_flags & IOCB_WRITE) {
+ struct io_rw *rw = &req->rw;
+
+ if (rw->kiocb.ki_flags & IOCB_WRITE) {
kiocb_end_write(req);
fsnotify_modify(req->file);
} else {
fsnotify_access(req->file);
}
+}
+
+static bool __io_complete_rw_common(struct io_kiocb *req, long res)
+{
if (res != req->result) {
if ((res == -EAGAIN || res == -EOPNOTSUPP) &&
io_rw_should_reissue(req)) {
+ /*
+ * Reissue will start accounting again, finish the
+ * current cycle.
+ */
+ io_req_io_end(req);
req->flags |= REQ_F_REISSUE;
return true;
}
@@ -2748,12 +2781,10 @@ static void io_req_task_complete(struct io_kiocb *req, bool *locked)
}
}
-static void __io_complete_rw(struct io_kiocb *req, long res, long res2,
- unsigned int issue_flags)
+static void io_req_rw_complete(struct io_kiocb *req, bool *locked)
{
- if (__io_complete_rw_common(req, res))
- return;
- __io_req_complete(req, issue_flags, io_fixup_rw_res(req, res), io_put_rw_kbuf(req));
+ io_req_io_end(req);
+ io_req_task_complete(req, locked);
}
static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
@@ -2763,7 +2794,7 @@ static void io_complete_rw(struct kiocb *kiocb, long res, long res2)
if (__io_complete_rw_common(req, res))
return;
req->result = io_fixup_rw_res(req, res);
- req->io_task_work.func = io_req_task_complete;
+ req->io_task_work.func = io_req_rw_complete;
io_req_task_work_add(req);
}
@@ -2915,14 +2946,6 @@ static int io_prep_rw(struct io_kiocb *req, const struct io_uring_sqe *sqe,
req->flags |= REQ_F_ISREG;
kiocb->ki_pos = READ_ONCE(sqe->off);
- if (kiocb->ki_pos == -1) {
- if (!(file->f_mode & FMODE_STREAM)) {
- req->flags |= REQ_F_CUR_POS;
- kiocb->ki_pos = file->f_pos;
- } else {
- kiocb->ki_pos = 0;
- }
- }
kiocb->ki_hint = ki_hint_validate(file_write_hint(kiocb->ki_filp));
kiocb->ki_flags = iocb_flags(kiocb->ki_filp);
ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));
@@ -3004,6 +3027,23 @@ static inline void io_rw_done(struct kiocb *kiocb, ssize_t ret)
}
}
+static inline loff_t *io_kiocb_update_pos(struct io_kiocb *req)
+{
+ struct kiocb *kiocb = &req->rw.kiocb;
+
+ if (kiocb->ki_pos != -1)
+ return &kiocb->ki_pos;
+
+ if (!(req->file->f_mode & FMODE_STREAM)) {
+ req->flags |= REQ_F_CUR_POS;
+ kiocb->ki_pos = req->file->f_pos;
+ return &kiocb->ki_pos;
+ }
+
+ kiocb->ki_pos = 0;
+ return NULL;
+}
+
static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
unsigned int issue_flags)
{
@@ -3011,10 +3051,20 @@ static void kiocb_done(struct kiocb *kiocb, ssize_t ret,
if (req->flags & REQ_F_CUR_POS)
req->file->f_pos = kiocb->ki_pos;
- if (ret >= 0 && (kiocb->ki_complete == io_complete_rw))
- __io_complete_rw(req, ret, 0, issue_flags);
- else
+ if (ret >= 0 && (kiocb->ki_complete == io_complete_rw)) {
+ if (!__io_complete_rw_common(req, ret)) {
+ /*
+ * Safe to call io_end from here as we're inline
+ * from the submission path.
+ */
+ io_req_io_end(req);
+ __io_req_complete(req, issue_flags,
+ io_fixup_rw_res(req, ret),
+ io_put_rw_kbuf(req));
+ }
+ } else {
io_rw_done(kiocb, ret);
+ }
if (req->flags & REQ_F_REISSUE) {
req->flags &= ~REQ_F_REISSUE;
@@ -3296,6 +3346,7 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
struct kiocb *kiocb = &req->rw.kiocb;
struct file *file = req->file;
ssize_t ret = 0;
+ loff_t *ppos;
/*
* Don't support polled IO through this interface, and we can't
@@ -3307,6 +3358,8 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
if (kiocb->ki_flags & IOCB_NOWAIT)
return -EAGAIN;
+ ppos = io_kiocb_ppos(kiocb);
+
while (iov_iter_count(iter)) {
struct iovec iovec;
ssize_t nr;
@@ -3320,10 +3373,10 @@ static ssize_t loop_rw_iter(int rw, struct io_kiocb *req, struct iov_iter *iter)
if (rw == READ) {
nr = file->f_op->read(file, iovec.iov_base,
- iovec.iov_len, io_kiocb_ppos(kiocb));
+ iovec.iov_len, ppos);
} else {
nr = file->f_op->write(file, iovec.iov_base,
- iovec.iov_len, io_kiocb_ppos(kiocb));
+ iovec.iov_len, ppos);
}
if (nr < 0) {
@@ -3524,6 +3577,7 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
struct iov_iter_state __state, *state;
ssize_t ret, ret2;
+ loff_t *ppos;
if (rw) {
iter = &rw->iter;
@@ -3556,7 +3610,9 @@ static int io_read(struct io_kiocb *req, unsigned int issue_flags)
return ret ?: -EAGAIN;
}
- ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), req->result);
+ ppos = io_kiocb_update_pos(req);
+
+ ret = rw_verify_area(READ, req->file, ppos, req->result);
if (unlikely(ret)) {
kfree(iovec);
return ret;
@@ -3660,6 +3716,7 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
struct iov_iter_state __state, *state;
ssize_t ret, ret2;
+ loff_t *ppos;
if (rw) {
iter = &rw->iter;
@@ -3690,7 +3747,9 @@ static int io_write(struct io_kiocb *req, unsigned int issue_flags)
(req->flags & REQ_F_ISREG))
goto copy_iov;
- ret = rw_verify_area(WRITE, req->file, io_kiocb_ppos(kiocb), req->result);
+ ppos = io_kiocb_update_pos(req);
+
+ ret = rw_verify_area(WRITE, req->file, ppos, req->result);
if (unlikely(ret))
goto out_free;
@@ -4770,6 +4829,13 @@ static int io_sync_file_range(struct io_kiocb *req, unsigned int issue_flags)
}
#if defined(CONFIG_NET)
+static bool io_net_retry(struct socket *sock, int flags)
+{
+ if (!(flags & MSG_WAITALL))
+ return false;
+ return sock->type == SOCK_STREAM || sock->type == SOCK_SEQPACKET;
+}
+
static int io_setup_async_msg(struct io_kiocb *req,
struct io_async_msghdr *kmsg)
{
@@ -4787,8 +4853,10 @@ static int io_setup_async_msg(struct io_kiocb *req,
if (async_msg->msg.msg_name)
async_msg->msg.msg_name = &async_msg->addr;
/* if were using fast_iov, set it to the new one */
- if (!async_msg->free_iov)
- async_msg->msg.msg_iter.iov = async_msg->fast_iov;
+ if (!kmsg->free_iov) {
+ size_t fast_idx = kmsg->msg.msg_iter.iov - kmsg->fast_iov;
+ async_msg->msg.msg_iter.iov = &async_msg->fast_iov[fast_idx];
+ }
return -EAGAIN;
}
@@ -4833,12 +4901,14 @@ static int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
if (req->ctx->compat)
sr->msg_flags |= MSG_CMSG_COMPAT;
#endif
+ sr->done_io = 0;
return 0;
}
static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_async_msghdr iomsg, *kmsg;
+ struct io_sr_msg *sr = &req->sr_msg;
struct socket *sock;
unsigned flags;
int min_ret = 0;
@@ -4863,17 +4933,27 @@ static int io_sendmsg(struct io_kiocb *req, unsigned int issue_flags)
min_ret = iov_iter_count(&kmsg->msg.msg_iter);
ret = __sys_sendmsg_sock(sock, &kmsg->msg, flags);
- if ((issue_flags & IO_URING_F_NONBLOCK) && ret == -EAGAIN)
- return io_setup_async_msg(req, kmsg);
- if (ret == -ERESTARTSYS)
- ret = -EINTR;
+ if (ret < min_ret) {
+ if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
+ return io_setup_async_msg(req, kmsg);
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
+ if (ret > 0 && io_net_retry(sock, flags)) {
+ sr->done_io += ret;
+ req->flags |= REQ_F_PARTIAL_IO;
+ return io_setup_async_msg(req, kmsg);
+ }
+ req_set_fail(req);
+ }
/* fast path, check for non-NULL to avoid function call */
if (kmsg->free_iov)
kfree(kmsg->free_iov);
req->flags &= ~REQ_F_NEED_CLEANUP;
- if (ret < min_ret)
- req_set_fail(req);
+ if (ret >= 0)
+ ret += sr->done_io;
+ else if (sr->done_io)
+ ret = sr->done_io;
__io_req_complete(req, issue_flags, ret, 0);
return 0;
}
@@ -4909,13 +4989,24 @@ static int io_send(struct io_kiocb *req, unsigned int issue_flags)
msg.msg_flags = flags;
ret = sock_sendmsg(sock, &msg);
- if ((issue_flags & IO_URING_F_NONBLOCK) && ret == -EAGAIN)
- return -EAGAIN;
- if (ret == -ERESTARTSYS)
- ret = -EINTR;
-
- if (ret < min_ret)
+ if (ret < min_ret) {
+ if (ret == -EAGAIN && (issue_flags & IO_URING_F_NONBLOCK))
+ return -EAGAIN;
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
+ if (ret > 0 && io_net_retry(sock, flags)) {
+ sr->len -= ret;
+ sr->buf += ret;
+ sr->done_io += ret;
+ req->flags |= REQ_F_PARTIAL_IO;
+ return -EAGAIN;
+ }
req_set_fail(req);
+ }
+ if (ret >= 0)
+ ret += sr->done_io;
+ else if (sr->done_io)
+ ret = sr->done_io;
__io_req_complete(req, issue_flags, ret, 0);
return 0;
}
@@ -5051,7 +5142,7 @@ static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));
sr->len = READ_ONCE(sqe->len);
sr->bgid = READ_ONCE(sqe->buf_group);
- sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
+ sr->msg_flags = READ_ONCE(sqe->msg_flags);
if (sr->msg_flags & MSG_DONTWAIT)
req->flags |= REQ_F_NOWAIT;
@@ -5059,12 +5150,14 @@ static int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
if (req->ctx->compat)
sr->msg_flags |= MSG_CMSG_COMPAT;
#endif
+ sr->done_io = 0;
return 0;
}
static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
{
struct io_async_msghdr iomsg, *kmsg;
+ struct io_sr_msg *sr = &req->sr_msg;
struct socket *sock;
struct io_buffer *kbuf;
unsigned flags;
@@ -5102,10 +5195,20 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
ret = __sys_recvmsg_sock(sock, &kmsg->msg, req->sr_msg.umsg,
kmsg->uaddr, flags);
- if (force_nonblock && ret == -EAGAIN)
- return io_setup_async_msg(req, kmsg);
- if (ret == -ERESTARTSYS)
- ret = -EINTR;
+ if (ret < min_ret) {
+ if (ret == -EAGAIN && force_nonblock)
+ return io_setup_async_msg(req, kmsg);
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
+ if (ret > 0 && io_net_retry(sock, flags)) {
+ sr->done_io += ret;
+ req->flags |= REQ_F_PARTIAL_IO;
+ return io_setup_async_msg(req, kmsg);
+ }
+ req_set_fail(req);
+ } else if ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
+ req_set_fail(req);
+ }
if (req->flags & REQ_F_BUFFER_SELECTED)
cflags = io_put_recv_kbuf(req);
@@ -5113,8 +5216,10 @@ static int io_recvmsg(struct io_kiocb *req, unsigned int issue_flags)
if (kmsg->free_iov)
kfree(kmsg->free_iov);
req->flags &= ~REQ_F_NEED_CLEANUP;
- if (ret < min_ret || ((flags & MSG_WAITALL) && (kmsg->msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
- req_set_fail(req);
+ if (ret >= 0)
+ ret += sr->done_io;
+ else if (sr->done_io)
+ ret = sr->done_io;
__io_req_complete(req, issue_flags, ret, cflags);
return 0;
}
@@ -5161,15 +5266,29 @@ static int io_recv(struct io_kiocb *req, unsigned int issue_flags)
min_ret = iov_iter_count(&msg.msg_iter);
ret = sock_recvmsg(sock, &msg, flags);
- if (force_nonblock && ret == -EAGAIN)
- return -EAGAIN;
- if (ret == -ERESTARTSYS)
- ret = -EINTR;
+ if (ret < min_ret) {
+ if (ret == -EAGAIN && force_nonblock)
+ return -EAGAIN;
+ if (ret == -ERESTARTSYS)
+ ret = -EINTR;
+ if (ret > 0 && io_net_retry(sock, flags)) {
+ sr->len -= ret;
+ sr->buf += ret;
+ sr->done_io += ret;
+ req->flags |= REQ_F_PARTIAL_IO;
+ return -EAGAIN;
+ }
+ req_set_fail(req);
+ } else if ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
out_free:
+ req_set_fail(req);
+ }
if (req->flags & REQ_F_BUFFER_SELECTED)
cflags = io_put_recv_kbuf(req);
- if (ret < min_ret || ((flags & MSG_WAITALL) && (msg.msg_flags & (MSG_TRUNC | MSG_CTRUNC))))
- req_set_fail(req);
+ if (ret >= 0)
+ ret += sr->done_io;
+ else if (sr->done_io)
+ ret = sr->done_io;
__io_req_complete(req, issue_flags, ret, cflags);
return 0;
}
@@ -5207,9 +5326,6 @@ static int io_accept(struct io_kiocb *req, unsigned int issue_flags)
struct file *file;
int ret, fd;
- if (req->file->f_flags & O_NONBLOCK)
- req->flags |= REQ_F_NOWAIT;
-
if (!fixed) {
fd = __get_unused_fd_flags(accept->flags, accept->nofile);
if (unlikely(fd < 0))
@@ -5221,6 +5337,8 @@ static int io_accept(struct io_kiocb *req, unsigned int issue_flags)
if (!fixed)
put_unused_fd(fd);
ret = PTR_ERR(file);
+ /* safe to retry */
+ req->flags |= REQ_F_PARTIAL_IO;
if (ret == -EAGAIN && force_nonblock)
return -EAGAIN;
if (ret == -ERESTARTSYS)
@@ -5637,8 +5755,17 @@ static int io_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
if (mask && !(mask & poll->events))
return 0;
- if (io_poll_get_ownership(req))
+ if (io_poll_get_ownership(req)) {
+ /*
+ * If we trigger a multishot poll off our own wakeup path,
+ * disable multishot as there is a circular dependency between
+ * CQ posting and triggering the event.
+ */
+ if (mask & EPOLL_URING_WAKE)
+ poll->events |= EPOLLONESHOT;
+
__io_poll_execute(req, mask);
+ }
return 1;
}
@@ -5768,6 +5895,14 @@ enum {
IO_APOLL_READY
};
+/*
+ * We can't reliably detect loops in repeated poll triggers and issue
+ * subsequently failing. But rather than fail these immediately, allow a
+ * certain amount of retries before we give up. Given that this condition
+ * should _rarely_ trigger even once, we should be fine with a larger value.
+ */
+#define APOLL_MAX_RETRY 128
+
static int io_arm_poll_handler(struct io_kiocb *req)
{
const struct io_op_def *def = &io_op_defs[req->opcode];
@@ -5779,8 +5914,6 @@ static int io_arm_poll_handler(struct io_kiocb *req)
if (!req->file || !file_can_poll(req->file))
return IO_APOLL_ABORTED;
- if (req->flags & REQ_F_POLLED)
- return IO_APOLL_ABORTED;
if (!def->pollin && !def->pollout)
return IO_APOLL_ABORTED;
@@ -5795,9 +5928,19 @@ static int io_arm_poll_handler(struct io_kiocb *req)
mask |= POLLOUT | POLLWRNORM;
}
- apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
- if (unlikely(!apoll))
- return IO_APOLL_ABORTED;
+ if (req->flags & REQ_F_POLLED) {
+ apoll = req->apoll;
+ kfree(apoll->double_poll);
+ if (unlikely(!--apoll->poll.retries)) {
+ apoll->double_poll = NULL;
+ return IO_APOLL_ABORTED;
+ }
+ } else {
+ apoll = kmalloc(sizeof(*apoll), GFP_ATOMIC);
+ if (unlikely(!apoll))
+ return IO_APOLL_ABORTED;
+ apoll->poll.retries = APOLL_MAX_RETRY;
+ }
apoll->double_poll = NULL;
req->apoll = apoll;
req->flags |= REQ_F_POLLED;
@@ -9097,14 +9240,17 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, struct iovec *iov,
pret = pin_user_pages(ubuf, nr_pages, FOLL_WRITE | FOLL_LONGTERM,
pages, vmas);
if (pret == nr_pages) {
+ struct file *file = vmas[0]->vm_file;
+
/* don't support file backed memory */
for (i = 0; i < nr_pages; i++) {
- struct vm_area_struct *vma = vmas[i];
-
- if (vma_is_shmem(vma))
+ if (vmas[i]->vm_file != file) {
+ ret = -EINVAL;
+ break;
+ }
+ if (!file)
continue;
- if (vma->vm_file &&
- !is_file_hugepages(vma->vm_file)) {
+ if (!vma_is_shmem(vmas[i]) && !is_file_hugepages(file)) {
ret = -EOPNOTSUPP;
break;
}
@@ -9615,6 +9761,10 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
/* if we failed setting up the ctx, we might not have any rings */
io_iopoll_try_reap_events(ctx);
+ /* drop cached put refs after potentially doing completions */
+ if (current->io_uring)
+ io_uring_drop_tctx_refs(current);
+
INIT_WORK(&ctx->exit_work, io_ring_exit_work);
/*
* Use system_unbound_wq to avoid spawning tons of event kworkers
@@ -9726,6 +9876,7 @@ static void io_uring_try_cancel_requests(struct io_ring_ctx *ctx,
while (!list_empty_careful(&ctx->iopoll_list)) {
io_iopoll_try_reap_events(ctx);
ret = true;
+ cond_resched();
}
}
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 0c2fa93..6c7126d 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3655,6 +3655,7 @@ static int btf_datasec_resolve(struct btf_verifier_env *env,
struct btf *btf = env->btf;
u16 i;
+ env->resolve_mode = RESOLVE_TBD;
for_each_vsi_from(i, v->next_member, v->t, vsi) {
u32 var_type_id = vsi->type, type_id, type_size = 0;
const struct btf_type *var_type = btf_type_by_id(env->btf,
@@ -4468,6 +4469,7 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
if (!ctx_struct)
/* should not happen */
return NULL;
+again:
ctx_tname = btf_name_by_offset(btf_vmlinux, ctx_struct->name_off);
if (!ctx_tname) {
/* should not happen */
@@ -4481,8 +4483,16 @@ btf_get_prog_ctx_type(struct bpf_verifier_log *log, const struct btf *btf,
* int socket_filter_bpf_prog(struct __sk_buff *skb)
* { // no fields of skb are ever used }
*/
- if (strcmp(ctx_tname, tname))
- return NULL;
+ if (strcmp(ctx_tname, tname)) {
+ /* bpf_user_pt_regs_t is a typedef, so resolve it to
+ * underlying struct and check name again
+ */
+ if (!btf_type_is_modifier(ctx_struct))
+ return NULL;
+ while (btf_type_is_modifier(ctx_struct))
+ ctx_struct = btf_type_by_id(btf_vmlinux, ctx_struct->type);
+ goto again;
+ }
return ctx_type;
}
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 4ce500e..f7c27c1 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -32,6 +32,7 @@
#include <linux/perf_event.h>
#include <linux/extable.h>
#include <linux/log2.h>
+#include <linux/nospec.h>
#include <asm/barrier.h>
#include <asm/unaligned.h>
@@ -828,7 +829,7 @@ static int __init bpf_jit_charge_init(void)
{
/* Only used as heuristic here to derive limit. */
bpf_jit_limit_max = bpf_jit_alloc_exec_limit();
- bpf_jit_limit = min_t(u64, round_up(bpf_jit_limit_max >> 2,
+ bpf_jit_limit = min_t(u64, round_up(bpf_jit_limit_max >> 1,
PAGE_SIZE), LONG_MAX);
return 0;
}
@@ -1648,9 +1649,7 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
* reuse preexisting logic from Spectre v1 mitigation that
* happens to produce the required code on x86 for v4 as well.
*/
-#ifdef CONFIG_X86
barrier_nospec();
-#endif
CONT;
#define LDST(SIZEOP, SIZE) \
STX_MEM_##SIZEOP: \
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index e7f45a9..10b3777 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -163,7 +163,7 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
unsigned long flags;
bool use_raw_lock;
- hash = hash & HASHTAB_MAP_LOCK_MASK;
+ hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
use_raw_lock = htab_use_raw_lock(htab);
if (use_raw_lock)
@@ -194,7 +194,7 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
{
bool use_raw_lock = htab_use_raw_lock(htab);
- hash = hash & HASHTAB_MAP_LOCK_MASK;
+ hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
if (use_raw_lock)
raw_spin_unlock_irqrestore(&b->raw_lock, flags);
else
diff --git a/kernel/bpf/offload.c b/kernel/bpf/offload.c
index bd09290..fcdd282 100644
--- a/kernel/bpf/offload.c
+++ b/kernel/bpf/offload.c
@@ -216,9 +216,6 @@ static void __bpf_prog_offload_destroy(struct bpf_prog *prog)
if (offload->dev_state)
offload->offdev->ops->destroy(prog);
- /* Make sure BPF_PROG_GET_NEXT_ID can't find this dead program */
- bpf_prog_free_id(prog, true);
-
list_del_init(&offload->offloads);
kfree(offload);
prog->aux->offload = NULL;
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index aea9852..ad41b82 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -1695,7 +1695,7 @@ static void bpf_audit_prog(const struct bpf_prog *prog, unsigned int op)
return;
if (audit_enabled == AUDIT_OFF)
return;
- if (op == BPF_AUDIT_LOAD)
+ if (!in_irq() && !irqs_disabled())
ctx = audit_context();
ab = audit_log_start(ctx, GFP_ATOMIC, AUDIT_BPF);
if (unlikely(!ab))
@@ -1790,6 +1790,7 @@ static void bpf_prog_put_deferred(struct work_struct *work)
prog = aux->prog;
perf_event_bpf_event(prog, PERF_BPF_EVENT_PROG_UNLOAD, 0);
bpf_audit_prog(prog, BPF_AUDIT_UNLOAD);
+ bpf_prog_free_id(prog, true);
__bpf_prog_put_noref(prog, true);
}
@@ -1798,9 +1799,6 @@ static void __bpf_prog_put(struct bpf_prog *prog, bool do_idr_lock)
struct bpf_prog_aux *aux = prog->aux;
if (atomic64_dec_and_test(&aux->refcnt)) {
- /* bpf_prog_free_id() must be called first */
- bpf_prog_free_id(prog, do_idr_lock);
-
if (in_irq() || irqs_disabled()) {
INIT_WORK(&aux->work, bpf_prog_put_deferred);
schedule_work(&aux->work);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 488225b..1c95d97 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -614,6 +614,12 @@ static bool is_spilled_reg(const struct bpf_stack_state *stack)
return stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL;
}
+static void scrub_spilled_slot(u8 *stype)
+{
+ if (*stype != STACK_INVALID)
+ *stype = STACK_MISC;
+}
+
static void print_verifier_state(struct bpf_verifier_env *env,
const struct bpf_func_state *state)
{
@@ -2218,8 +2224,6 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
*/
if (insn->src_reg != BPF_REG_FP)
return 0;
- if (BPF_SIZE(insn->code) != BPF_DW)
- return 0;
/* dreg = *(u64 *)[fp - off] was a fill from the stack.
* that [fp - off] slot contains scalar that needs to be
@@ -2242,8 +2246,6 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
/* scalars can only be spilled into stack */
if (insn->dst_reg != BPF_REG_FP)
return 0;
- if (BPF_SIZE(insn->code) != BPF_DW)
- return 0;
spi = (-insn->off - 1) / BPF_REG_SIZE;
if (spi >= 64) {
verbose(env, "BUG spi %d\n", spi);
@@ -2259,6 +2261,12 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
if (opcode == BPF_CALL) {
if (insn->src_reg == BPF_PSEUDO_CALL)
return -ENOTSUPP;
+ /* kfunc with imm==0 is invalid and fixup_kfunc_call will
+ * catch this error later. Make backtracking conservative
+ * with ENOTSUPP.
+ */
+ if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL && insn->imm == 0)
+ return -ENOTSUPP;
/* regular helper call sets R0 */
*reg_mask &= ~1;
if (*reg_mask & 0x3f) {
@@ -2607,16 +2615,33 @@ static bool __is_pointer_value(bool allow_ptr_leaks,
return reg->type != SCALAR_VALUE;
}
+/* Copy src state preserving dst->parent and dst->live fields */
+static void copy_register_state(struct bpf_reg_state *dst, const struct bpf_reg_state *src)
+{
+ struct bpf_reg_state *parent = dst->parent;
+ enum bpf_reg_liveness live = dst->live;
+
+ *dst = *src;
+ dst->parent = parent;
+ dst->live = live;
+}
+
static void save_register_state(struct bpf_func_state *state,
- int spi, struct bpf_reg_state *reg)
+ int spi, struct bpf_reg_state *reg,
+ int size)
{
int i;
- state->stack[spi].spilled_ptr = *reg;
- state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
+ copy_register_state(&state->stack[spi].spilled_ptr, reg);
+ if (size == BPF_REG_SIZE)
+ state->stack[spi].spilled_ptr.live |= REG_LIVE_WRITTEN;
- for (i = 0; i < BPF_REG_SIZE; i++)
- state->stack[spi].slot_type[i] = STACK_SPILL;
+ for (i = BPF_REG_SIZE; i > BPF_REG_SIZE - size; i--)
+ state->stack[spi].slot_type[i - 1] = STACK_SPILL;
+
+ /* size < 8 bytes spill */
+ for (; i; i--)
+ scrub_spilled_slot(&state->stack[spi].slot_type[i - 1]);
}
/* check_stack_{read,write}_fixed_off functions track spill/fill of registers,
@@ -2653,7 +2678,9 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
bool sanitize = reg && is_spillable_regtype(reg->type);
for (i = 0; i < size; i++) {
- if (state->stack[spi].slot_type[i] == STACK_INVALID) {
+ u8 type = state->stack[spi].slot_type[i];
+
+ if (type != STACK_MISC && type != STACK_ZERO) {
sanitize = true;
break;
}
@@ -2663,7 +2690,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
env->insn_aux_data[insn_idx].sanitize_stack_spill = true;
}
- if (reg && size == BPF_REG_SIZE && register_is_bounded(reg) &&
+ if (reg && !(off % BPF_REG_SIZE) && register_is_bounded(reg) &&
!register_is_null(reg) && env->bpf_capable) {
if (dst_reg != BPF_REG_FP) {
/* The backtracking logic can only recognize explicit
@@ -2676,7 +2703,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
if (err)
return err;
}
- save_register_state(state, spi, reg);
+ save_register_state(state, spi, reg, size);
} else if (reg && is_spillable_regtype(reg->type)) {
/* register containing pointer is being spilled into stack */
if (size != BPF_REG_SIZE) {
@@ -2688,7 +2715,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
verbose(env, "cannot spill pointers to stack into stack frame of the caller\n");
return -EINVAL;
}
- save_register_state(state, spi, reg);
+ save_register_state(state, spi, reg, size);
} else {
u8 type = STACK_MISC;
@@ -2697,7 +2724,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
/* Mark slots as STACK_MISC if they belonged to spilled ptr. */
if (is_spilled_reg(&state->stack[spi]))
for (i = 0; i < BPF_REG_SIZE; i++)
- state->stack[spi].slot_type[i] = STACK_MISC;
+ scrub_spilled_slot(&state->stack[spi].slot_type[i]);
/* only mark the slot as written if all 8 bytes were written
* otherwise read propagation may incorrectly stop too soon
@@ -2903,35 +2930,56 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
struct bpf_func_state *state = vstate->frame[vstate->curframe];
int i, slot = -off - 1, spi = slot / BPF_REG_SIZE;
struct bpf_reg_state *reg;
- u8 *stype;
+ u8 *stype, type;
stype = reg_state->stack[spi].slot_type;
reg = &reg_state->stack[spi].spilled_ptr;
if (is_spilled_reg(&reg_state->stack[spi])) {
- if (size != BPF_REG_SIZE) {
+ u8 spill_size = 1;
+
+ for (i = BPF_REG_SIZE - 1; i > 0 && stype[i - 1] == STACK_SPILL; i--)
+ spill_size++;
+
+ if (size != BPF_REG_SIZE || spill_size != BPF_REG_SIZE) {
if (reg->type != SCALAR_VALUE) {
verbose_linfo(env, env->insn_idx, "; ");
verbose(env, "invalid size of register fill\n");
return -EACCES;
}
- if (dst_regno >= 0) {
- mark_reg_unknown(env, state->regs, dst_regno);
- state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
- }
+
mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
- return 0;
- }
- for (i = 1; i < BPF_REG_SIZE; i++) {
- if (stype[(slot - i) % BPF_REG_SIZE] != STACK_SPILL) {
- verbose(env, "corrupted spill memory\n");
- return -EACCES;
+ if (dst_regno < 0)
+ return 0;
+
+ if (!(off % BPF_REG_SIZE) && size == spill_size) {
+ /* The earlier check_reg_arg() has decided the
+ * subreg_def for this insn. Save it first.
+ */
+ s32 subreg_def = state->regs[dst_regno].subreg_def;
+
+ copy_register_state(&state->regs[dst_regno], reg);
+ state->regs[dst_regno].subreg_def = subreg_def;
+ } else {
+ for (i = 0; i < size; i++) {
+ type = stype[(slot - i) % BPF_REG_SIZE];
+ if (type == STACK_SPILL)
+ continue;
+ if (type == STACK_MISC)
+ continue;
+ verbose(env, "invalid read from stack off %d+%d size %d\n",
+ off, i, size);
+ return -EACCES;
+ }
+ mark_reg_unknown(env, state->regs, dst_regno);
}
+ state->regs[dst_regno].live |= REG_LIVE_WRITTEN;
+ return 0;
}
if (dst_regno >= 0) {
/* restore register state from stack */
- state->regs[dst_regno] = *reg;
+ copy_register_state(&state->regs[dst_regno], reg);
/* mark reg as written since spilled pointer state likely
* has its liveness marks cleared by is_state_visited()
* which resets stack/reg liveness for state transitions
@@ -2950,8 +2998,6 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
}
mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
} else {
- u8 type;
-
for (i = 0; i < size; i++) {
type = stype[(slot - i) % BPF_REG_SIZE];
if (type == STACK_MISC)
@@ -4557,7 +4603,7 @@ static int check_stack_range_initialized(
if (clobber) {
__mark_reg_unknown(env, &state->stack[spi].spilled_ptr);
for (j = 0; j < BPF_REG_SIZE; j++)
- state->stack[spi].slot_type[j] = STACK_MISC;
+ scrub_spilled_slot(&state->stack[spi].slot_type[j]);
}
goto mark;
}
@@ -6865,7 +6911,7 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
*/
if (!ptr_is_dst_reg) {
tmp = *dst_reg;
- *dst_reg = *ptr_reg;
+ copy_register_state(dst_reg, ptr_reg);
}
ret = sanitize_speculative_path(env, NULL, env->insn_idx + 1,
env->insn_idx);
@@ -8121,7 +8167,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
* to propagate min/max range.
*/
src_reg->id = ++env->id_gen;
- *dst_reg = *src_reg;
+ copy_register_state(dst_reg, src_reg);
dst_reg->live |= REG_LIVE_WRITTEN;
dst_reg->subreg_def = DEF_NOT_SUBREG;
} else {
@@ -8132,7 +8178,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
insn->src_reg);
return -EACCES;
} else if (src_reg->type == SCALAR_VALUE) {
- *dst_reg = *src_reg;
+ copy_register_state(dst_reg, src_reg);
/* Make sure ID is cleared otherwise
* dst_reg min/max could be incorrectly
* propagated into src_reg by find_equal_scalars()
@@ -8928,7 +8974,7 @@ static void find_equal_scalars(struct bpf_verifier_state *vstate,
bpf_for_each_reg_in_vstate(vstate, state, reg, ({
if (reg->type == SCALAR_VALUE && reg->id == known_reg->id)
- *reg = *known_reg;
+ copy_register_state(reg, known_reg);
}));
}
diff --git a/kernel/compat.c b/kernel/compat.c
index 5555198..fb50f29 100644
--- a/kernel/compat.c
+++ b/kernel/compat.c
@@ -152,7 +152,7 @@ COMPAT_SYSCALL_DEFINE3(sched_getaffinity, compat_pid_t, pid, unsigned int, len,
if (len & (sizeof(compat_ulong_t)-1))
return -EINVAL;
- if (!alloc_cpumask_var(&mask, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
return -ENOMEM;
ret = sched_getaffinity(pid, mask);
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 4874209..0cbdd4a 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -189,7 +189,7 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
/* Check if any of the above work has queued a deferred wakeup */
tick_nohz_user_enter_prepare();
- ti_work = READ_ONCE(current_thread_info()->flags);
+ ti_work = read_thread_flags();
}
/* Return the latest work state for arch_exit_to_user_mode() */
@@ -198,13 +198,14 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
static void exit_to_user_mode_prepare(struct pt_regs *regs)
{
- unsigned long ti_work = READ_ONCE(current_thread_info()->flags);
+ unsigned long ti_work;
lockdep_assert_irqs_disabled();
/* Flush pending rcuog wakeup before the last need_resched() check */
tick_nohz_user_enter_prepare();
+ ti_work = read_thread_flags();
if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK))
ti_work = exit_to_user_mode_loop(regs, ti_work);
diff --git a/kernel/entry/kvm.c b/kernel/entry/kvm.c
index 49972ee..96d476e 100644
--- a/kernel/entry/kvm.c
+++ b/kernel/entry/kvm.c
@@ -26,7 +26,7 @@ static int xfer_to_guest_mode_work(struct kvm_vcpu *vcpu, unsigned long ti_work)
if (ret)
return ret;
- ti_work = READ_ONCE(current_thread_info()->flags);
+ ti_work = read_thread_flags();
} while (ti_work & XFER_TO_GUEST_MODE_WORK || need_resched());
return 0;
}
@@ -43,7 +43,7 @@ int xfer_to_guest_mode_handle_work(struct kvm_vcpu *vcpu)
* disabled in the inner loop before going into guest mode. No need
* to disable interrupts here.
*/
- ti_work = READ_ONCE(current_thread_info()->flags);
+ ti_work = read_thread_flags();
if (!(ti_work & XFER_TO_GUEST_MODE_WORK))
return 0;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index d879503..dc57835 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2252,7 +2252,7 @@ static void perf_group_detach(struct perf_event *event)
/* Inherit group flags from the previous leader */
sibling->group_caps = event->group_caps;
- if (!RB_EMPTY_NODE(&event->group_node)) {
+ if (sibling->attach_state & PERF_ATTACH_CONTEXT) {
add_event_to_groups(sibling, event->ctx);
if (sibling->state == PERF_EVENT_STATE_ACTIVE)
@@ -3909,7 +3909,7 @@ ctx_sched_in(struct perf_event_context *ctx,
if (likely(!ctx->nr_events))
return;
- if (is_active ^ EVENT_TIME) {
+ if (!(is_active & EVENT_TIME)) {
/* start ctx time */
__update_context_time(ctx, false);
perf_cgroup_set_timestamp(task, ctx);
@@ -9056,7 +9056,7 @@ static void perf_event_bpf_output(struct perf_event *event, void *data)
perf_event_header__init_id(&bpf_event->event_id.header,
&sample, event);
- ret = perf_output_begin(&handle, data, event,
+ ret = perf_output_begin(&handle, &sample, event,
bpf_event->event_id.header.size);
if (ret)
return;
@@ -12024,7 +12024,7 @@ perf_event_set_output(struct perf_event *event, struct perf_event *output_event)
/*
* If its not a per-cpu rb, it must be the same task.
*/
- if (output_event->cpu == -1 && output_event->ctx != event->ctx)
+ if (output_event->cpu == -1 && output_event->hw.target != event->hw.target)
goto out;
/*
diff --git a/kernel/exit.c b/kernel/exit.c
index bd3ca2e..e5895cb 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -65,11 +65,58 @@
#include <linux/compat.h>
#include <linux/io_uring.h>
#include <linux/security.h>
+#include <linux/sysfs.h>
#include <linux/uaccess.h>
#include <asm/unistd.h>
#include <asm/mmu_context.h>
+/*
+ * The default value should be high enough to not crash a system that randomly
+ * crashes its kernel from time to time, but low enough to at least not permit
+ * overflowing 32-bit refcounts or the ldsem writer count.
+ */
+static unsigned int oops_limit = 10000;
+
+#ifdef CONFIG_SYSCTL
+static struct ctl_table kern_exit_table[] = {
+ {
+ .procname = "oops_limit",
+ .data = &oops_limit,
+ .maxlen = sizeof(oops_limit),
+ .mode = 0644,
+ .proc_handler = proc_douintvec,
+ },
+ { }
+};
+
+static __init int kernel_exit_sysctls_init(void)
+{
+ register_sysctl_init("kernel", kern_exit_table);
+ return 0;
+}
+late_initcall(kernel_exit_sysctls_init);
+#endif
+
+static atomic_t oops_count = ATOMIC_INIT(0);
+
+#ifdef CONFIG_SYSFS
+static ssize_t oops_count_show(struct kobject *kobj, struct kobj_attribute *attr,
+ char *page)
+{
+ return sysfs_emit(page, "%d\n", atomic_read(&oops_count));
+}
+
+static struct kobj_attribute oops_count_attr = __ATTR_RO(oops_count);
+
+static __init int kernel_exit_sysfs_init(void)
+{
+ sysfs_add_file_to_group(kernel_kobj, &oops_count_attr.attr, NULL);
+ return 0;
+}
+late_initcall(kernel_exit_sysfs_init);
+#endif
+
static void __unhash_process(struct task_struct *p, bool group_dead)
{
nr_threads--;
@@ -880,6 +927,31 @@ void __noreturn do_exit(long code)
}
EXPORT_SYMBOL_GPL(do_exit);
+void __noreturn make_task_dead(int signr)
+{
+ /*
+ * Take the task off the cpu after something catastrophic has
+ * happened.
+ */
+ unsigned int limit;
+
+ /*
+ * Every time the system oopses, if the oops happens while a reference
+ * to an object was held, the reference leaks.
+ * If the oops doesn't also leak memory, repeated oopsing can cause
+ * reference counters to wrap around (if they're not using refcount_t).
+ * This means that repeated oopsing can make unexploitable-looking bugs
+ * exploitable through repeated oopsing.
+ * To make sure this can't happen, place an upper bound on how often the
+ * kernel may oops without panic().
+ */
+ limit = READ_ONCE(oops_limit);
+ if (atomic_inc_return(&oops_count) >= limit && limit)
+ panic("Oopsed too often (kernel.oops_limit is %d)", limit);
+
+ do_exit(signr);
+}
+
void complete_and_exit(struct completion *comp, long code)
{
if (comp)
diff --git a/kernel/fail_function.c b/kernel/fail_function.c
index 60dc825..d81ec84 100644
--- a/kernel/fail_function.c
+++ b/kernel/fail_function.c
@@ -163,10 +163,7 @@ static void fei_debugfs_add_attr(struct fei_attr *attr)
static void fei_debugfs_remove_attr(struct fei_attr *attr)
{
- struct dentry *dir;
-
- dir = debugfs_lookup(attr->kp.symbol_name, fei_debugfs_dir);
- debugfs_remove_recursive(dir);
+ debugfs_lookup_and_remove(attr->kp.symbol_name, fei_debugfs_dir);
}
static int fei_kprobe_handler(struct kprobe *kp, struct pt_regs *regs)
diff --git a/kernel/fork.c b/kernel/fork.c
index 7bb10c6..7b80e54 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2829,7 +2829,7 @@ static bool clone3_args_valid(struct kernel_clone_args *kargs)
* - make the CLONE_DETACHED bit reusable for clone3
* - make the CSIGNAL bits reusable for clone3
*/
- if (kargs->flags & (CLONE_DETACHED | CSIGNAL))
+ if (kargs->flags & (CLONE_DETACHED | (CSIGNAL & (~CLONE_NEWTIME))))
return false;
if ((kargs->flags & (CLONE_SIGHAND | CLONE_CLEAR_SIGHAND)) ==
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index 035e303..e0b6778 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -25,6 +25,9 @@ static DEFINE_MUTEX(irq_domain_mutex);
static struct irq_domain *irq_default_domain;
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+ unsigned int nr_irqs, int node, void *arg,
+ bool realloc, const struct irq_affinity_desc *affinity);
static void irq_domain_check_hierarchy(struct irq_domain *domain);
struct irqchip_fwid {
@@ -123,23 +126,12 @@ void irq_domain_free_fwnode(struct fwnode_handle *fwnode)
}
EXPORT_SYMBOL_GPL(irq_domain_free_fwnode);
-/**
- * __irq_domain_add() - Allocate a new irq_domain data structure
- * @fwnode: firmware node for the interrupt controller
- * @size: Size of linear map; 0 for radix mapping only
- * @hwirq_max: Maximum number of interrupts supported by controller
- * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
- * direct mapping
- * @ops: domain callbacks
- * @host_data: Controller private data pointer
- *
- * Allocates and initializes an irq_domain structure.
- * Returns pointer to IRQ domain, or NULL on failure.
- */
-struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
- irq_hw_number_t hwirq_max, int direct_max,
- const struct irq_domain_ops *ops,
- void *host_data)
+static struct irq_domain *__irq_domain_create(struct fwnode_handle *fwnode,
+ unsigned int size,
+ irq_hw_number_t hwirq_max,
+ int direct_max,
+ const struct irq_domain_ops *ops,
+ void *host_data)
{
struct irqchip_fwid *fwid;
struct irq_domain *domain;
@@ -227,12 +219,44 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
irq_domain_check_hierarchy(domain);
+ return domain;
+}
+
+static void __irq_domain_publish(struct irq_domain *domain)
+{
mutex_lock(&irq_domain_mutex);
debugfs_add_domain_dir(domain);
list_add(&domain->link, &irq_domain_list);
mutex_unlock(&irq_domain_mutex);
pr_debug("Added domain %s\n", domain->name);
+}
+
+/**
+ * __irq_domain_add() - Allocate a new irq_domain data structure
+ * @fwnode: firmware node for the interrupt controller
+ * @size: Size of linear map; 0 for radix mapping only
+ * @hwirq_max: Maximum number of interrupts supported by controller
+ * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no
+ * direct mapping
+ * @ops: domain callbacks
+ * @host_data: Controller private data pointer
+ *
+ * Allocates and initializes an irq_domain structure.
+ * Returns pointer to IRQ domain, or NULL on failure.
+ */
+struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int size,
+ irq_hw_number_t hwirq_max, int direct_max,
+ const struct irq_domain_ops *ops,
+ void *host_data)
+{
+ struct irq_domain *domain;
+
+ domain = __irq_domain_create(fwnode, size, hwirq_max, direct_max,
+ ops, host_data);
+ if (domain)
+ __irq_domain_publish(domain);
+
return domain;
}
EXPORT_SYMBOL_GPL(__irq_domain_add);
@@ -538,6 +562,9 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
return;
hwirq = irq_data->hwirq;
+
+ mutex_lock(&irq_domain_mutex);
+
irq_set_status_flags(irq, IRQ_NOREQUEST);
/* remove chip and handler */
@@ -557,10 +584,12 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
/* Clear reverse map for this hwirq */
irq_domain_clear_mapping(domain, hwirq);
+
+ mutex_unlock(&irq_domain_mutex);
}
-int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
- irq_hw_number_t hwirq)
+static int irq_domain_associate_locked(struct irq_domain *domain, unsigned int virq,
+ irq_hw_number_t hwirq)
{
struct irq_data *irq_data = irq_get_irq_data(virq);
int ret;
@@ -573,7 +602,6 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
if (WARN(irq_data->domain, "error: virq%i is already associated", virq))
return -EINVAL;
- mutex_lock(&irq_domain_mutex);
irq_data->hwirq = hwirq;
irq_data->domain = domain;
if (domain->ops->map) {
@@ -590,7 +618,6 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
}
irq_data->domain = NULL;
irq_data->hwirq = 0;
- mutex_unlock(&irq_domain_mutex);
return ret;
}
@@ -601,12 +628,23 @@ int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
domain->mapcount++;
irq_domain_set_mapping(domain, hwirq, irq_data);
- mutex_unlock(&irq_domain_mutex);
irq_clear_status_flags(virq, IRQ_NOREQUEST);
return 0;
}
+
+int irq_domain_associate(struct irq_domain *domain, unsigned int virq,
+ irq_hw_number_t hwirq)
+{
+ int ret;
+
+ mutex_lock(&irq_domain_mutex);
+ ret = irq_domain_associate_locked(domain, virq, hwirq);
+ mutex_unlock(&irq_domain_mutex);
+
+ return ret;
+}
EXPORT_SYMBOL_GPL(irq_domain_associate);
void irq_domain_associate_many(struct irq_domain *domain, unsigned int irq_base,
@@ -668,6 +706,34 @@ unsigned int irq_create_direct_mapping(struct irq_domain *domain)
EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
#endif
+static unsigned int irq_create_mapping_affinity_locked(struct irq_domain *domain,
+ irq_hw_number_t hwirq,
+ const struct irq_affinity_desc *affinity)
+{
+ struct device_node *of_node = irq_domain_get_of_node(domain);
+ int virq;
+
+ pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
+
+ /* Allocate a virtual interrupt number */
+ virq = irq_domain_alloc_descs(-1, 1, hwirq, of_node_to_nid(of_node),
+ affinity);
+ if (virq <= 0) {
+ pr_debug("-> virq allocation failed\n");
+ return 0;
+ }
+
+ if (irq_domain_associate_locked(domain, virq, hwirq)) {
+ irq_free_desc(virq);
+ return 0;
+ }
+
+ pr_debug("irq %lu on domain %s mapped to virtual irq %u\n",
+ hwirq, of_node_full_name(of_node), virq);
+
+ return virq;
+}
+
/**
* irq_create_mapping_affinity() - Map a hardware interrupt into linux irq space
* @domain: domain owning this hardware interrupt or NULL for default domain
@@ -680,14 +746,11 @@ EXPORT_SYMBOL_GPL(irq_create_direct_mapping);
* on the number returned from that call.
*/
unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
- irq_hw_number_t hwirq,
- const struct irq_affinity_desc *affinity)
+ irq_hw_number_t hwirq,
+ const struct irq_affinity_desc *affinity)
{
- struct device_node *of_node;
int virq;
- pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
-
/* Look for default domain if necessary */
if (domain == NULL)
domain = irq_default_domain;
@@ -695,32 +758,19 @@ unsigned int irq_create_mapping_affinity(struct irq_domain *domain,
WARN(1, "%s(, %lx) called with NULL domain\n", __func__, hwirq);
return 0;
}
- pr_debug("-> using domain @%p\n", domain);
- of_node = irq_domain_get_of_node(domain);
+ mutex_lock(&irq_domain_mutex);
/* Check if mapping already exists */
virq = irq_find_mapping(domain, hwirq);
if (virq) {
- pr_debug("-> existing mapping on virq %d\n", virq);
- return virq;
+ pr_debug("existing mapping on virq %d\n", virq);
+ goto out;
}
- /* Allocate a virtual interrupt number */
- virq = irq_domain_alloc_descs(-1, 1, hwirq, of_node_to_nid(of_node),
- affinity);
- if (virq <= 0) {
- pr_debug("-> virq allocation failed\n");
- return 0;
- }
-
- if (irq_domain_associate(domain, virq, hwirq)) {
- irq_free_desc(virq);
- return 0;
- }
-
- pr_debug("irq %lu on domain %s mapped to virtual irq %u\n",
- hwirq, of_node_full_name(of_node), virq);
+ virq = irq_create_mapping_affinity_locked(domain, hwirq, affinity);
+out:
+ mutex_unlock(&irq_domain_mutex);
return virq;
}
@@ -789,6 +839,8 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
if (WARN_ON(type & ~IRQ_TYPE_SENSE_MASK))
type &= IRQ_TYPE_SENSE_MASK;
+ mutex_lock(&irq_domain_mutex);
+
/*
* If we've already configured this interrupt,
* don't do it again, or hell will break loose.
@@ -801,7 +853,7 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
* interrupt number.
*/
if (type == IRQ_TYPE_NONE || type == irq_get_trigger_type(virq))
- return virq;
+ goto out;
/*
* If the trigger type has not been set yet, then set
@@ -809,40 +861,45 @@ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
*/
if (irq_get_trigger_type(virq) == IRQ_TYPE_NONE) {
irq_data = irq_get_irq_data(virq);
- if (!irq_data)
- return 0;
+ if (!irq_data) {
+ virq = 0;
+ goto out;
+ }
irqd_set_trigger_type(irq_data, type);
- return virq;
+ goto out;
}
pr_warn("type mismatch, failed to map hwirq-%lu for %s!\n",
hwirq, of_node_full_name(to_of_node(fwspec->fwnode)));
- return 0;
+ virq = 0;
+ goto out;
}
if (irq_domain_is_hierarchy(domain)) {
- virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
- if (virq <= 0)
- return 0;
+ virq = irq_domain_alloc_irqs_locked(domain, -1, 1, NUMA_NO_NODE,
+ fwspec, false, NULL);
+ if (virq <= 0) {
+ virq = 0;
+ goto out;
+ }
} else {
/* Create mapping */
- virq = irq_create_mapping(domain, hwirq);
+ virq = irq_create_mapping_affinity_locked(domain, hwirq, NULL);
if (!virq)
- return virq;
+ goto out;
}
irq_data = irq_get_irq_data(virq);
- if (!irq_data) {
- if (irq_domain_is_hierarchy(domain))
- irq_domain_free_irqs(virq, 1);
- else
- irq_dispose_mapping(virq);
- return 0;
+ if (WARN_ON(!irq_data)) {
+ virq = 0;
+ goto out;
}
/* Store trigger type */
irqd_set_trigger_type(irq_data, type);
+out:
+ mutex_unlock(&irq_domain_mutex);
return virq;
}
@@ -1102,12 +1159,15 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
struct irq_domain *domain;
if (size)
- domain = irq_domain_create_linear(fwnode, size, ops, host_data);
+ domain = __irq_domain_create(fwnode, size, size, 0, ops, host_data);
else
- domain = irq_domain_create_tree(fwnode, ops, host_data);
+ domain = __irq_domain_create(fwnode, 0, ~0, 0, ops, host_data);
+
if (domain) {
domain->parent = parent;
domain->flags |= flags;
+
+ __irq_domain_publish(domain);
}
return domain;
@@ -1425,6 +1485,52 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
return domain->ops->alloc(domain, irq_base, nr_irqs, arg);
}
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+ unsigned int nr_irqs, int node, void *arg,
+ bool realloc, const struct irq_affinity_desc *affinity)
+{
+ int i, ret, virq;
+
+ if (realloc && irq_base >= 0) {
+ virq = irq_base;
+ } else {
+ virq = irq_domain_alloc_descs(irq_base, nr_irqs, 0, node,
+ affinity);
+ if (virq < 0) {
+ pr_debug("cannot allocate IRQ(base %d, count %d)\n",
+ irq_base, nr_irqs);
+ return virq;
+ }
+ }
+
+ if (irq_domain_alloc_irq_data(domain, virq, nr_irqs)) {
+ pr_debug("cannot allocate memory for IRQ%d\n", virq);
+ ret = -ENOMEM;
+ goto out_free_desc;
+ }
+
+ ret = irq_domain_alloc_irqs_hierarchy(domain, virq, nr_irqs, arg);
+ if (ret < 0)
+ goto out_free_irq_data;
+
+ for (i = 0; i < nr_irqs; i++) {
+ ret = irq_domain_trim_hierarchy(virq + i);
+ if (ret)
+ goto out_free_irq_data;
+ }
+
+ for (i = 0; i < nr_irqs; i++)
+ irq_domain_insert_irq(virq + i);
+
+ return virq;
+
+out_free_irq_data:
+ irq_domain_free_irq_data(virq, nr_irqs);
+out_free_desc:
+ irq_free_descs(virq, nr_irqs);
+ return ret;
+}
+
/**
* __irq_domain_alloc_irqs - Allocate IRQs from domain
* @domain: domain to allocate from
@@ -1451,7 +1557,7 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
unsigned int nr_irqs, int node, void *arg,
bool realloc, const struct irq_affinity_desc *affinity)
{
- int i, ret, virq;
+ int ret;
if (domain == NULL) {
domain = irq_default_domain;
@@ -1459,49 +1565,11 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
return -EINVAL;
}
- if (realloc && irq_base >= 0) {
- virq = irq_base;
- } else {
- virq = irq_domain_alloc_descs(irq_base, nr_irqs, 0, node,
- affinity);
- if (virq < 0) {
- pr_debug("cannot allocate IRQ(base %d, count %d)\n",
- irq_base, nr_irqs);
- return virq;
- }
- }
-
- if (irq_domain_alloc_irq_data(domain, virq, nr_irqs)) {
- pr_debug("cannot allocate memory for IRQ%d\n", virq);
- ret = -ENOMEM;
- goto out_free_desc;
- }
-
mutex_lock(&irq_domain_mutex);
- ret = irq_domain_alloc_irqs_hierarchy(domain, virq, nr_irqs, arg);
- if (ret < 0) {
- mutex_unlock(&irq_domain_mutex);
- goto out_free_irq_data;
- }
-
- for (i = 0; i < nr_irqs; i++) {
- ret = irq_domain_trim_hierarchy(virq + i);
- if (ret) {
- mutex_unlock(&irq_domain_mutex);
- goto out_free_irq_data;
- }
- }
-
- for (i = 0; i < nr_irqs; i++)
- irq_domain_insert_irq(virq + i);
+ ret = irq_domain_alloc_irqs_locked(domain, irq_base, nr_irqs, node, arg,
+ realloc, affinity);
mutex_unlock(&irq_domain_mutex);
- return virq;
-
-out_free_irq_data:
- irq_domain_free_irq_data(virq, nr_irqs);
-out_free_desc:
- irq_free_descs(virq, nr_irqs);
return ret;
}
@@ -1863,6 +1931,13 @@ void irq_domain_set_info(struct irq_domain *domain, unsigned int virq,
irq_set_handler_data(virq, handler_data);
}
+static int irq_domain_alloc_irqs_locked(struct irq_domain *domain, int irq_base,
+ unsigned int nr_irqs, int node, void *arg,
+ bool realloc, const struct irq_affinity_desc *affinity)
+{
+ return -EINVAL;
+}
+
static void irq_domain_check_hierarchy(struct irq_domain *domain)
{
}
@@ -1913,7 +1988,7 @@ static void debugfs_add_domain_dir(struct irq_domain *d)
static void debugfs_remove_domain_dir(struct irq_domain *d)
{
- debugfs_remove(debugfs_lookup(d->name, domain_dir));
+ debugfs_lookup_and_remove(d->name, domain_dir);
}
void __init irq_domain_debugfs_init(struct dentry *root)
diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile
index c2bb07f..c4ddd18 100644
--- a/kernel/kcsan/Makefile
+++ b/kernel/kcsan/Makefile
@@ -13,5 +13,6 @@
obj-y := core.o debugfs.o report.o
obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o
-CFLAGS_kcsan_test.o := $(CFLAGS_KCSAN) -g -fno-omit-frame-pointer
+CFLAGS_kcsan_test.o := $(CFLAGS_KCSAN) -fno-omit-frame-pointer
+CFLAGS_kcsan_test.o += $(DISABLE_STRUCTLEAK_PLUGIN)
obj-$(CONFIG_KCSAN_KUNIT_TEST) += kcsan_test.o
diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c
index dc55fd5..8b176ae 100644
--- a/kernel/kcsan/kcsan_test.c
+++ b/kernel/kcsan/kcsan_test.c
@@ -151,7 +151,7 @@ static bool report_matches(const struct expect_report *r)
const bool is_assert = (r->access[0].type | r->access[1].type) & KCSAN_ACCESS_ASSERT;
bool ret = false;
unsigned long flags;
- typeof(observed.lines) expect;
+ typeof(*observed.lines) *expect;
const char *end;
char *cur;
int i;
@@ -160,6 +160,10 @@ static bool report_matches(const struct expect_report *r)
if (!report_available())
return false;
+ expect = kmalloc(sizeof(observed.lines), GFP_KERNEL);
+ if (WARN_ON(!expect))
+ return false;
+
/* Generate expected report contents. */
/* Title */
@@ -243,6 +247,7 @@ static bool report_matches(const struct expect_report *r)
strstr(observed.lines[2], expect[1])));
out:
spin_unlock_irqrestore(&observed.lock, flags);
+ kfree(expect);
return ret;
}
diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c
index 2113792..b88d5d5 100644
--- a/kernel/kcsan/report.c
+++ b/kernel/kcsan/report.c
@@ -432,8 +432,7 @@ static void print_report(enum kcsan_value_change value_change,
dump_stack_print_info(KERN_DEFAULT);
pr_err("==================================================================\n");
- if (panic_on_warn)
- panic("panic_on_warn set ...\n");
+ check_panic_on_warn("KCSAN");
}
static void release_report(unsigned long *flags, struct other_info *other_info)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 23af2f8..7e9fa1b 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -18,6 +18,9 @@
* <jkenisto@us.ibm.com> and Prasanna S Panchamukhi
* <prasanna@in.ibm.com> added function-return probes.
*/
+
+#define pr_fmt(fmt) "kprobes: " fmt
+
#include <linux/kprobes.h>
#include <linux/hash.h>
#include <linux/init.h>
@@ -448,8 +451,8 @@ static inline int kprobe_optready(struct kprobe *p)
return 0;
}
-/* Return true(!0) if the kprobe is disarmed. Note: p must be on hash list */
-static inline int kprobe_disarmed(struct kprobe *p)
+/* Return true if the kprobe is disarmed. Note: p must be on hash list */
+bool kprobe_disarmed(struct kprobe *p)
{
struct optimized_kprobe *op;
@@ -653,7 +656,7 @@ void wait_for_kprobe_optimizer(void)
mutex_unlock(&kprobe_mutex);
}
-static bool optprobe_queued_unopt(struct optimized_kprobe *op)
+bool optprobe_queued_unopt(struct optimized_kprobe *op)
{
struct optimized_kprobe *_op;
@@ -892,7 +895,7 @@ static void optimize_all_kprobes(void)
optimize_kprobe(p);
}
cpus_read_unlock();
- printk(KERN_INFO "Kprobes globally optimized\n");
+ pr_info("kprobe jump-optimization is enabled. All kprobes are optimized if possible.\n");
out:
mutex_unlock(&kprobe_mutex);
}
@@ -925,7 +928,7 @@ static void unoptimize_all_kprobes(void)
/* Wait for unoptimizing completion */
wait_for_kprobe_optimizer();
- printk(KERN_INFO "Kprobes globally unoptimized\n");
+ pr_info("kprobe jump-optimization is disabled. All kprobes are based on software breakpoint.\n");
}
static DEFINE_MUTEX(kprobe_sysctl_mutex);
@@ -1003,7 +1006,7 @@ static int reuse_unused_kprobe(struct kprobe *ap)
* unregistered.
* Thus there should be no chance to reuse unused kprobe.
*/
- printk(KERN_ERR "Error: There should be no unused kprobe here.\n");
+ WARN_ON_ONCE(1);
return -EINVAL;
}
@@ -1049,18 +1052,13 @@ static int __arm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
int ret = 0;
ret = ftrace_set_filter_ip(ops, (unsigned long)p->addr, 0, 0);
- if (ret) {
- pr_debug("Failed to arm kprobe-ftrace at %pS (%d)\n",
- p->addr, ret);
+ if (WARN_ONCE(ret < 0, "Failed to arm kprobe-ftrace at %pS (error %d)\n", p->addr, ret))
return ret;
- }
if (*cnt == 0) {
ret = register_ftrace_function(ops);
- if (ret) {
- pr_debug("Failed to init kprobe-ftrace (%d)\n", ret);
+ if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret))
goto err_ftrace;
- }
}
(*cnt)++;
@@ -1092,14 +1090,14 @@ static int __disarm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
if (*cnt == 1) {
ret = unregister_ftrace_function(ops);
- if (WARN(ret < 0, "Failed to unregister kprobe-ftrace (%d)\n", ret))
+ if (WARN(ret < 0, "Failed to unregister kprobe-ftrace (error %d)\n", ret))
return ret;
}
(*cnt)--;
ret = ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0);
- WARN_ONCE(ret < 0, "Failed to disarm kprobe-ftrace at %pS (%d)\n",
+ WARN_ONCE(ret < 0, "Failed to disarm kprobe-ftrace at %pS (error %d)\n",
p->addr, ret);
return ret;
}
@@ -1894,7 +1892,7 @@ unsigned long __kretprobe_trampoline_handler(struct pt_regs *regs,
node = node->next;
}
- pr_err("Oops! Kretprobe fails to find correct return address.\n");
+ pr_err("kretprobe: Return address not found, not execute handler. Maybe there is a bug in the kernel.\n");
BUG_ON(1);
found:
@@ -2229,8 +2227,7 @@ EXPORT_SYMBOL_GPL(enable_kprobe);
/* Caller must NOT call this in usual path. This is only for critical case */
void dump_kprobe(struct kprobe *kp)
{
- pr_err("Dumping kprobe:\n");
- pr_err("Name: %s\nOffset: %x\nAddress: %pS\n",
+ pr_err("Dump kprobe:\n.symbol_name = %s, .offset = %x, .addr = %pS\n",
kp->symbol_name, kp->offset, kp->addr);
}
NOKPROBE_SYMBOL(dump_kprobe);
@@ -2493,8 +2490,7 @@ static int __init init_kprobes(void)
err = populate_kprobe_blacklist(__start_kprobe_blacklist,
__stop_kprobe_blacklist);
if (err) {
- pr_err("kprobes: failed to populate blacklist: %d\n", err);
- pr_err("Please take care of using kprobes.\n");
+ pr_err("Failed to populate blacklist (error %d), kprobes not restricted, be careful using them!\n", err);
}
if (kretprobe_blacklist_size) {
@@ -2503,7 +2499,7 @@ static int __init init_kprobes(void)
kretprobe_blacklist[i].addr =
kprobe_lookup_name(kretprobe_blacklist[i].name, 0);
if (!kretprobe_blacklist[i].addr)
- printk("kretprobe: lookup failed: %s\n",
+ pr_err("Failed to lookup symbol '%s' for kretprobe blacklist. Maybe the target function is removed or renamed.\n",
kretprobe_blacklist[i].name);
}
}
@@ -2707,7 +2703,7 @@ static int arm_all_kprobes(void)
}
if (errors)
- pr_warn("Kprobes globally enabled, but failed to arm %d out of %d probes\n",
+ pr_warn("Kprobes globally enabled, but failed to enable %d out of %d probes. Please check which kprobes are kept disabled via debugfs.\n",
errors, total);
else
pr_info("Kprobes globally enabled\n");
@@ -2750,7 +2746,7 @@ static int disarm_all_kprobes(void)
}
if (errors)
- pr_warn("Kprobes globally disabled, but failed to disarm %d out of %d probes\n",
+ pr_warn("Kprobes globally disabled, but failed to disable %d out of %d probes. Please check which kprobes are kept enabled via debugfs.\n",
errors, total);
else
pr_info("Kprobes globally disabled\n");
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 94e2b12..a704ffa 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -523,6 +523,7 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
to_kthread(p)->cpu = cpu;
return p;
}
+EXPORT_SYMBOL(kthread_create_on_cpu);
void kthread_set_per_cpu(struct task_struct *k, int cpu)
{
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index ea5a701..fcd9ad3 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -855,8 +855,9 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task,
* then we need to wake the new top waiter up to try
* to get the lock.
*/
- if (prerequeue_top_waiter != rt_mutex_top_waiter(lock))
- wake_up_state(waiter->task, waiter->wake_state);
+ top_waiter = rt_mutex_top_waiter(lock);
+ if (prerequeue_top_waiter != top_waiter)
+ wake_up_state(top_waiter->task, top_waiter->wake_state);
raw_spin_unlock_irq(&lock->wait_lock);
return 0;
}
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 4cc73e6..de375fe 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -586,18 +586,16 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
*/
if (first->handoff_set && (waiter != first))
return false;
-
- /*
- * First waiter can inherit a previously set handoff
- * bit and spin on rwsem if lock acquisition fails.
- */
- if (waiter == first)
- waiter->handoff_set = true;
}
new = count;
if (count & RWSEM_LOCK_MASK) {
+ /*
+ * A waiter (first or not) can set the handoff bit
+ * if it is an RT task or wait in the wait queue
+ * for too long.
+ */
if (has_handoff || (!rt_task(waiter->task) &&
!time_after(jiffies, waiter->timeout)))
return false;
@@ -613,11 +611,12 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
} while (!atomic_long_try_cmpxchg_acquire(&sem->count, &count, new));
/*
- * We have either acquired the lock with handoff bit cleared or
- * set the handoff bit.
+ * We have either acquired the lock with handoff bit cleared or set
+ * the handoff bit. Only the first waiter can have its handoff_set
+ * set here to enable optimistic spinning in slowpath loop.
*/
if (new & RWSEM_FLAG_HANDOFF) {
- waiter->handoff_set = true;
+ first->handoff_set = true;
lockevent_inc(rwsem_wlock_handoff);
return false;
}
@@ -1045,7 +1044,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
/* Ordered by sem->wait_lock against rwsem_mark_wake(). */
break;
}
- schedule();
+ schedule_preempt_disabled();
lockevent_inc(rwsem_sleep_reader);
}
@@ -1224,14 +1223,20 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
*/
static inline int __down_read_common(struct rw_semaphore *sem, int state)
{
+ int ret = 0;
long count;
+ preempt_disable();
if (!rwsem_read_trylock(sem, &count)) {
- if (IS_ERR(rwsem_down_read_slowpath(sem, count, state)))
- return -EINTR;
+ if (IS_ERR(rwsem_down_read_slowpath(sem, count, state))) {
+ ret = -EINTR;
+ goto out;
+ }
DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
}
- return 0;
+out:
+ preempt_enable();
+ return ret;
}
static inline void __down_read(struct rw_semaphore *sem)
@@ -1251,22 +1256,23 @@ static inline int __down_read_killable(struct rw_semaphore *sem)
static inline int __down_read_trylock(struct rw_semaphore *sem)
{
+ int ret = 0;
long tmp;
DEBUG_RWSEMS_WARN_ON(sem->magic != sem, sem);
- /*
- * Optimize for the case when the rwsem is not locked at all.
- */
- tmp = RWSEM_UNLOCKED_VALUE;
- do {
+ preempt_disable();
+ tmp = atomic_long_read(&sem->count);
+ while (!(tmp & RWSEM_READ_FAILED_MASK)) {
if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
- tmp + RWSEM_READER_BIAS)) {
+ tmp + RWSEM_READER_BIAS)) {
rwsem_set_reader_owned(sem);
- return 1;
+ ret = 1;
+ break;
}
- } while (!(tmp & RWSEM_READ_FAILED_MASK));
- return 0;
+ }
+ preempt_enable();
+ return ret;
}
/*
@@ -1308,6 +1314,7 @@ static inline void __up_read(struct rw_semaphore *sem)
DEBUG_RWSEMS_WARN_ON(sem->magic != sem, sem);
DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
+ preempt_disable();
rwsem_clear_reader_owned(sem);
tmp = atomic_long_add_return_release(-RWSEM_READER_BIAS, &sem->count);
DEBUG_RWSEMS_WARN_ON(tmp < 0, sem);
@@ -1316,6 +1323,7 @@ static inline void __up_read(struct rw_semaphore *sem)
clear_nonspinnable(sem);
rwsem_wake(sem);
}
+ preempt_enable();
}
/*
@@ -1633,6 +1641,12 @@ void down_read_non_owner(struct rw_semaphore *sem)
{
might_sleep();
__down_read(sem);
+ /*
+ * The owner value for a reader-owned lock is mostly for debugging
+ * purpose only and is not critical to the correct functioning of
+ * rwsem. So it is perfectly fine to set it in a preempt-enabled
+ * context here.
+ */
__rwsem_set_reader_owned(sem, NULL);
}
EXPORT_SYMBOL(down_read_non_owner);
diff --git a/kernel/module.c b/kernel/module.c
index ef79f4d..8a1766c 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -3665,7 +3665,8 @@ static bool finished_loading(const char *name)
sched_annotate_sleep();
mutex_lock(&module_mutex);
mod = find_module_all(name, strlen(name), true);
- ret = !mod || mod->state == MODULE_STATE_LIVE;
+ ret = !mod || mod->state == MODULE_STATE_LIVE
+ || mod->state == MODULE_STATE_GOING;
mutex_unlock(&module_mutex);
return ret;
@@ -3835,20 +3836,35 @@ static int add_unformed_module(struct module *mod)
mod->state = MODULE_STATE_UNFORMED;
-again:
mutex_lock(&module_mutex);
old = find_module_all(mod->name, strlen(mod->name), true);
if (old != NULL) {
- if (old->state != MODULE_STATE_LIVE) {
+ if (old->state == MODULE_STATE_COMING
+ || old->state == MODULE_STATE_UNFORMED) {
/* Wait in case it fails to load. */
mutex_unlock(&module_mutex);
err = wait_event_interruptible(module_wq,
finished_loading(mod->name));
if (err)
goto out_unlocked;
- goto again;
+
+ /* The module might have gone in the meantime. */
+ mutex_lock(&module_mutex);
+ old = find_module_all(mod->name, strlen(mod->name),
+ true);
}
- err = -EEXIST;
+
+ /*
+ * We are here only when the same module was being loaded. Do
+ * not try to load it again right now. It prevents long delays
+ * caused by serialized module load failures. It might happen
+ * when more devices of the same type trigger load of
+ * a particular module.
+ */
+ if (old && old->state == MODULE_STATE_LIVE)
+ err = -EEXIST;
+ else
+ err = -EBUSY;
goto out;
}
mod_update_bounds(mod);
diff --git a/kernel/panic.c b/kernel/panic.c
index cefd7d8..47933d4 100644
--- a/kernel/panic.c
+++ b/kernel/panic.c
@@ -32,6 +32,7 @@
#include <linux/bug.h>
#include <linux/ratelimit.h>
#include <linux/debugfs.h>
+#include <linux/sysfs.h>
#include <asm/sections.h>
#define PANIC_TIMER_STEP 100
@@ -42,7 +43,9 @@
* Should we dump all CPUs backtraces in an oops event?
* Defaults to 0, can be changed via sysctl.
*/
-unsigned int __read_mostly sysctl_oops_all_cpu_backtrace;
+static unsigned int __read_mostly sysctl_oops_all_cpu_backtrace;
+#else
+#define sysctl_oops_all_cpu_backtrace 0
#endif /* CONFIG_SMP */
int panic_on_oops = CONFIG_PANIC_ON_OOPS_VALUE;
@@ -55,6 +58,7 @@ bool crash_kexec_post_notifiers;
int panic_on_warn __read_mostly;
unsigned long panic_on_taint;
bool panic_on_taint_nousertaint = false;
+static unsigned int warn_limit __read_mostly;
int panic_timeout = CONFIG_PANIC_TIMEOUT;
EXPORT_SYMBOL_GPL(panic_timeout);
@@ -71,6 +75,56 @@ ATOMIC_NOTIFIER_HEAD(panic_notifier_list);
EXPORT_SYMBOL(panic_notifier_list);
+#ifdef CONFIG_SYSCTL
+static struct ctl_table kern_panic_table[] = {
+#ifdef CONFIG_SMP
+ {
+ .procname = "oops_all_cpu_backtrace",
+ .data = &sysctl_oops_all_cpu_backtrace,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
+ },
+#endif
+ {
+ .procname = "warn_limit",
+ .data = &warn_limit,
+ .maxlen = sizeof(warn_limit),
+ .mode = 0644,
+ .proc_handler = proc_douintvec,
+ },
+ { }
+};
+
+static __init int kernel_panic_sysctls_init(void)
+{
+ register_sysctl_init("kernel", kern_panic_table);
+ return 0;
+}
+late_initcall(kernel_panic_sysctls_init);
+#endif
+
+static atomic_t warn_count = ATOMIC_INIT(0);
+
+#ifdef CONFIG_SYSFS
+static ssize_t warn_count_show(struct kobject *kobj, struct kobj_attribute *attr,
+ char *page)
+{
+ return sysfs_emit(page, "%d\n", atomic_read(&warn_count));
+}
+
+static struct kobj_attribute warn_count_attr = __ATTR_RO(warn_count);
+
+static __init int kernel_panic_sysfs_init(void)
+{
+ sysfs_add_file_to_group(kernel_kobj, &warn_count_attr.attr, NULL);
+ return 0;
+}
+late_initcall(kernel_panic_sysfs_init);
+#endif
+
static long no_blink(int state)
{
return 0;
@@ -167,6 +221,19 @@ static void panic_print_sys_info(void)
ftrace_dump(DUMP_ALL);
}
+void check_panic_on_warn(const char *origin)
+{
+ unsigned int limit;
+
+ if (panic_on_warn)
+ panic("%s: panic_on_warn set ...\n", origin);
+
+ limit = READ_ONCE(warn_limit);
+ if (atomic_inc_return(&warn_count) >= limit && limit)
+ panic("%s: system warned too often (kernel.warn_limit is %d)",
+ origin, limit);
+}
+
/**
* panic - halt the system
* @fmt: The text string to print
@@ -184,6 +251,16 @@ void panic(const char *fmt, ...)
int old_cpu, this_cpu;
bool _crash_kexec_post_notifiers = crash_kexec_post_notifiers;
+ if (panic_on_warn) {
+ /*
+ * This thread may hit another WARN() in the panic path.
+ * Resetting this prevents additional WARN() from panicking the
+ * system on this thread. Other threads are blocked by the
+ * panic_mutex in panic().
+ */
+ panic_on_warn = 0;
+ }
+
/*
* Disable local interrupts. This will prevent panic_smp_self_stop
* from deadlocking the first cpu that invokes the panic, since
@@ -592,16 +669,7 @@ void __warn(const char *file, int line, void *caller, unsigned taint,
if (regs)
show_regs(regs);
- if (panic_on_warn) {
- /*
- * This thread may hit another WARN() in the panic path.
- * Resetting this prevents additional WARN() from panicking the
- * system on this thread. Other threads are blocked by the
- * panic_mutex in panic().
- */
- panic_on_warn = 0;
- panic("panic_on_warn set ...\n");
- }
+ check_panic_on_warn("kernel");
if (!regs)
dump_stack();
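
The panic.c hunk above replaces the open-coded panic_on_warn check with check_panic_on_warn() and adds a kernel.warn_limit sysctl plus a /sys/kernel/warn_count counter. A minimal userspace sketch of just the counting logic follows; the function name, the chosen limit, and the exit() stand-in for panic() are illustrative, not the kernel API, and the real helper also panics immediately when panic_on_warn is set.

/* Sketch of the warn_limit accounting: every warning bumps a counter and
 * the "system" panics once the counter reaches a non-zero limit. */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static atomic_uint warn_count;
static unsigned int warn_limit = 3;	/* kernel default is 0, i.e. disabled */

static void check_panic_on_warn_demo(const char *origin)
{
	unsigned int limit = warn_limit;

	/* mirrors: atomic_inc_return(&warn_count) >= limit && limit */
	if (atomic_fetch_add(&warn_count, 1) + 1 >= limit && limit) {
		fprintf(stderr, "%s: system warned too often (warn_limit is %u)\n",
			origin, limit);
		exit(1);		/* stands in for panic() */
	}
}

int main(void)
{
	for (int i = 0; i < 5; i++)	/* third call trips the limit */
		check_panic_on_warn_demo("demo");
	return 0;
}

With the table registered above, the limit would be tuned through the kernel.warn_limit sysctl and the running count read back from /sys/kernel/warn_count.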
diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c
index a46a372..259fc4c 100644
--- a/kernel/pid_namespace.c
+++ b/kernel/pid_namespace.c
@@ -244,7 +244,24 @@ void zap_pid_ns_processes(struct pid_namespace *pid_ns)
set_current_state(TASK_INTERRUPTIBLE);
if (pid_ns->pid_allocated == init_pids)
break;
+ /*
+ * Release tasks_rcu_exit_srcu to avoid following deadlock:
+ *
+ * 1) TASK A unshare(CLONE_NEWPID)
+ * 2) TASK A fork() twice -> TASK B (child reaper for new ns)
+ * and TASK C
+ * 3) TASK B exits, kills TASK C, waits for TASK A to reap it
+ * 4) TASK A calls synchronize_rcu_tasks()
+ * -> synchronize_srcu(tasks_rcu_exit_srcu)
+ * 5) *DEADLOCK*
+ *
+ * It is considered safe to release tasks_rcu_exit_srcu here
+ * because we assume the current task can not be concurrently
+ * reaped at this point.
+ */
+ exit_tasks_rcu_stop();
schedule();
+ exit_tasks_rcu_start();
}
__set_current_state(TASK_RUNNING);
diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
index 97e6246..1b902f9 100644
--- a/kernel/power/energy_model.c
+++ b/kernel/power/energy_model.c
@@ -85,10 +85,7 @@ static void em_debug_create_pd(struct device *dev)
static void em_debug_remove_pd(struct device *dev)
{
- struct dentry *debug_dir;
-
- debug_dir = debugfs_lookup(dev_name(dev), rootdir);
- debugfs_remove_recursive(debug_dir);
+ debugfs_lookup_and_remove(dev_name(dev), rootdir);
}
static int __init em_debug_init(void)
diff --git a/kernel/printk/index.c b/kernel/printk/index.c
index d370940..d23b8f8 100644
--- a/kernel/printk/index.c
+++ b/kernel/printk/index.c
@@ -146,7 +146,7 @@ static void pi_create_file(struct module *mod)
#ifdef CONFIG_MODULES
static void pi_remove_file(struct module *mod)
{
- debugfs_remove(debugfs_lookup(pi_get_module_name(mod), dfs_index));
+ debugfs_lookup_and_remove(pi_get_module_name(mod), dfs_index);
}
static int pi_module_notify(struct notifier_block *nb, unsigned long op,
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 4bd07cc..94b8ee8 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -171,8 +171,9 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
static void synchronize_rcu_tasks_generic(struct rcu_tasks *rtp)
{
/* Complain if the scheduler has not started. */
- WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
- "synchronize_rcu_tasks called too soon");
+ if (WARN_ONCE(rcu_scheduler_active == RCU_SCHEDULER_INACTIVE,
+ "synchronize_%s() called too soon", rtp->name))
+ return;
/* Wait for the grace period. */
wait_rcu_gp(rtp->call_func);
@@ -451,11 +452,21 @@ static void rcu_tasks_pertask(struct task_struct *t, struct list_head *hop)
static void rcu_tasks_postscan(struct list_head *hop)
{
/*
- * Wait for tasks that are in the process of exiting. This
- * does only part of the job, ensuring that all tasks that were
- * previously exiting reach the point where they have disabled
- * preemption, allowing the later synchronize_rcu() to finish
- * the job.
+ * Exiting tasks may escape the tasklist scan. Those are vulnerable
+ * until their final schedule() with TASK_DEAD state. To cope with
+ * this, divide the fragile exit path part in two intersecting
+ * read side critical sections:
+ *
+ * 1) An _SRCU_ read side starting before calling exit_notify(),
+ * which may remove the task from the tasklist, and ending after
+ * the final preempt_disable() call in do_exit().
+ *
+ * 2) An _RCU_ read side starting with the final preempt_disable()
+ * call in do_exit() and ending with the final call to schedule()
+ * with TASK_DEAD state.
+ *
+ * This handles the part 1). And postgp will handle part 2) with a
+ * call to synchronize_rcu().
*/
synchronize_srcu(&tasks_rcu_exit_srcu);
}
@@ -522,7 +533,10 @@ static void rcu_tasks_postgp(struct rcu_tasks *rtp)
*
* In addition, this synchronize_rcu() waits for exiting tasks
* to complete their final preempt_disable() region of execution,
- * cleaning up after the synchronize_srcu() above.
+ * cleaning up after synchronize_srcu(&tasks_rcu_exit_srcu),
+ * enforcing the whole region before tasklist removal until
+ * the final schedule() with TASK_DEAD state to be an RCU TASKS
+ * read side critical section.
*/
synchronize_rcu();
}
@@ -612,27 +626,42 @@ void show_rcu_tasks_classic_gp_kthread(void)
EXPORT_SYMBOL_GPL(show_rcu_tasks_classic_gp_kthread);
#endif // !defined(CONFIG_TINY_RCU)
-/* Do the srcu_read_lock() for the above synchronize_srcu(). */
+/*
+ * Contribute to protect against tasklist scan blind spot while the
+ * task is exiting and may be removed from the tasklist. See
+ * corresponding synchronize_srcu() for further details.
+ */
void exit_tasks_rcu_start(void) __acquires(&tasks_rcu_exit_srcu)
{
- preempt_disable();
current->rcu_tasks_idx = __srcu_read_lock(&tasks_rcu_exit_srcu);
- preempt_enable();
}
-/* Do the srcu_read_unlock() for the above synchronize_srcu(). */
-void exit_tasks_rcu_finish(void) __releases(&tasks_rcu_exit_srcu)
+/*
+ * Contribute to protect against tasklist scan blind spot while the
+ * task is exiting and may be removed from the tasklist. See
+ * corresponding synchronize_srcu() for further details.
+ */
+void exit_tasks_rcu_stop(void) __releases(&tasks_rcu_exit_srcu)
{
struct task_struct *t = current;
- preempt_disable();
__srcu_read_unlock(&tasks_rcu_exit_srcu, t->rcu_tasks_idx);
- preempt_enable();
- exit_tasks_rcu_finish_trace(t);
+}
+
+/*
+ * Contribute to protect against tasklist scan blind spot while the
+ * task is exiting and may be removed from the tasklist. See
+ * corresponding synchronize_srcu() for further details.
+ */
+void exit_tasks_rcu_finish(void)
+{
+ exit_tasks_rcu_stop();
+ exit_tasks_rcu_finish_trace(current);
}
#else /* #ifdef CONFIG_TASKS_RCU */
void exit_tasks_rcu_start(void) { }
+void exit_tasks_rcu_stop(void) { }
void exit_tasks_rcu_finish(void) { exit_tasks_rcu_finish_trace(current); }
#endif /* #else #ifdef CONFIG_TASKS_RCU */
@@ -660,9 +689,6 @@ static void rcu_tasks_be_rude(struct work_struct *work)
// Wait for one rude RCU-tasks grace period.
static void rcu_tasks_rude_wait_gp(struct rcu_tasks *rtp)
{
- if (num_online_cpus() <= 1)
- return; // Fastpath for only one CPU.
-
rtp->n_ipis += cpumask_weight(cpu_online_mask);
schedule_on_each_cpu(rcu_tasks_be_rude);
}
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 16f9411..f9fb279 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -565,7 +565,9 @@ static void synchronize_rcu_expedited_wait(void)
mask = leaf_node_cpu_bit(rnp, cpu);
if (!(READ_ONCE(rnp->expmask) & mask))
continue;
+ preempt_disable(); // For smp_processor_id() in dump_cpu_task().
dump_cpu_task(cpu);
+ preempt_enable();
}
}
jiffies_stall = 3 * rcu_jiffies_till_stall_check() + 3;
diff --git a/kernel/resource.c b/kernel/resource.c
index 20e10e4..cb441e3 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -1325,20 +1325,6 @@ void release_mem_region_adjustable(resource_size_t start, resource_size_t size)
continue;
}
- /*
- * All memory regions added from memory-hotplug path have the
- * flag IORESOURCE_SYSTEM_RAM. If the resource does not have
- * this flag, we know that we are dealing with a resource coming
- * from HMM/devm. HMM/devm use another mechanism to add/release
- * a resource. This goes via devm_request_mem_region and
- * devm_release_mem_region.
- * HMM/devm take care to release their resources when they want,
- * so if we are dealing with them, let us just back off here.
- */
- if (!(res->flags & IORESOURCE_SYSRAM)) {
- break;
- }
-
if (!(res->flags & IORESOURCE_MEM))
break;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a6703ff..181a897 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1999,6 +1999,9 @@ static inline void dequeue_task(struct rq *rq, struct task_struct *p, int flags)
void activate_task(struct rq *rq, struct task_struct *p, int flags)
{
+ if (task_on_rq_migrating(p))
+ flags |= ENQUEUE_MIGRATED;
+
enqueue_task(rq, p, flags);
p->on_rq = TASK_ON_RQ_QUEUED;
@@ -5565,8 +5568,7 @@ static noinline void __schedule_bug(struct task_struct *prev)
pr_err("Preemption disabled at:");
print_ip_sym(KERN_ERR, preempt_disable_ip);
}
- if (panic_on_warn)
- panic("scheduling while atomic\n");
+ check_panic_on_warn("scheduling while atomic");
dump_stack();
add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
@@ -8188,14 +8190,14 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
if (len & (sizeof(unsigned long)-1))
return -EINVAL;
- if (!alloc_cpumask_var(&mask, GFP_KERNEL))
+ if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
return -ENOMEM;
ret = sched_getaffinity(pid, mask);
if (ret == 0) {
unsigned int retlen = min(len, cpumask_size());
- if (copy_to_user(user_mask_ptr, mask, retlen))
+ if (copy_to_user(user_mask_ptr, cpumask_bits(mask), retlen))
ret = -EFAULT;
else
ret = retlen;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 2a2f32e..226c814 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1846,8 +1846,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
deadline_queue_push_tasks(rq);
}
-static struct sched_dl_entity *pick_next_dl_entity(struct rq *rq,
- struct dl_rq *dl_rq)
+static struct sched_dl_entity *pick_next_dl_entity(struct dl_rq *dl_rq)
{
struct rb_node *left = rb_first_cached(&dl_rq->root);
@@ -1866,7 +1865,7 @@ static struct task_struct *pick_task_dl(struct rq *rq)
if (!sched_dl_runnable(rq))
return NULL;
- dl_se = pick_next_dl_entity(rq, dl_rq);
+ dl_se = pick_next_dl_entity(dl_rq);
BUG_ON(!dl_se);
p = dl_task_of(dl_se);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f25f86b..f6692b2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4327,6 +4327,29 @@ static void check_spread(struct cfs_rq *cfs_rq, struct sched_entity *se)
#endif
}
+static inline bool entity_is_long_sleeper(struct sched_entity *se)
+{
+ struct cfs_rq *cfs_rq;
+ u64 sleep_time;
+
+ if (se->exec_start == 0)
+ return false;
+
+ cfs_rq = cfs_rq_of(se);
+
+ sleep_time = rq_clock_task(rq_of(cfs_rq));
+
+ /* Happen while migrating because of clock task divergence */
+ if (sleep_time <= se->exec_start)
+ return false;
+
+ sleep_time -= se->exec_start;
+ if (sleep_time > ((1ULL << 63) / scale_load_down(NICE_0_LOAD)))
+ return true;
+
+ return false;
+}
+
static void
place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
{
@@ -4355,8 +4378,29 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
vruntime -= thresh;
}
- /* ensure we never gain time by being placed backwards. */
- se->vruntime = max_vruntime(se->vruntime, vruntime);
+ /*
+ * Pull vruntime of the entity being placed to the base level of
+ * cfs_rq, to prevent boosting it if placed backwards.
+ * However, min_vruntime can advance much faster than real time, with
+ * the extreme being when an entity with the minimal weight always runs
+ * on the cfs_rq. If the waking entity slept for a long time, its
+ * vruntime difference from min_vruntime may overflow s64 and their
+ * comparison may get inversed, so ignore the entity's original
+ * vruntime in that case.
+ * The maximal vruntime speedup is given by the ratio of normal to
+ * minimal weight: scale_load_down(NICE_0_LOAD) / MIN_SHARES.
+ * When placing a migrated waking entity, its exec_start has been set
+ * from a different rq. In order to take into account a possible
+ * divergence between new and prev rq's clocks task because of irq and
+ * stolen time, we take an additional margin.
+ * So, cutting off on the sleep time of
+ * 2^63 / scale_load_down(NICE_0_LOAD) ~ 104 days
+ * should be safe.
+ */
+ if (entity_is_long_sleeper(se))
+ se->vruntime = vruntime;
+ else
+ se->vruntime = max_vruntime(se->vruntime, vruntime);
}
static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
@@ -4452,6 +4496,9 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
if (flags & ENQUEUE_WAKEUP)
place_entity(cfs_rq, se, 0);
+ /* Entity has migrated, no longer consider this task hot */
+ if (flags & ENQUEUE_MIGRATED)
+ se->exec_start = 0;
check_schedstat_required();
update_stats_enqueue(cfs_rq, se, flags);
@@ -7181,9 +7228,6 @@ static void migrate_task_rq_fair(struct task_struct *p, int new_cpu)
/* Tell new CPU we are migrated */
p->se.avg.last_update_time = 0;
- /* We have migrated, no longer consider this task hot */
- p->se.exec_start = 0;
-
update_scan_period(p, new_cpu);
}
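
The place_entity() comment above quotes a cutoff of "2^63 / scale_load_down(NICE_0_LOAD) ~ 104 days" for treating a task as a long sleeper. A quick arithmetic check of that figure is below; the value 1024 for scale_load_down(NICE_0_LOAD) is the usual 64-bit configuration and is assumed here purely for the demonstration.

/* Back-of-the-envelope check of the ~104 day long-sleeper cutoff. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t nice_0_load_down = 1024;		/* assumed scale_load_down(NICE_0_LOAD) */
	uint64_t cutoff_ns = (1ULL << 63) / nice_0_load_down;
	double days = (double)cutoff_ns / 1e9 / 3600.0 / 24.0;

	printf("cutoff = %llu ns (~%.1f days)\n",
	       (unsigned long long)cutoff_ns, days);
	return 0;
}

This prints roughly 9.0e15 ns, about 104 days, matching the comment: only entities that slept longer than that have their old vruntime discarded.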
diff --git a/kernel/sched/psi.c b/kernel/sched/psi.c
index cad2a1b..fa88bf6 100644
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -1169,10 +1169,11 @@ void psi_trigger_destroy(struct psi_trigger *t)
group = t->group;
/*
- * Wakeup waiters to stop polling. Can happen if cgroup is deleted
- * from under a polling process.
+ * Wakeup waiters to stop polling and clear the queue to prevent it from
+ * being accessed later. Can happen if cgroup is deleted from under a
+ * polling process.
*/
- wake_up_interruptible(&t->event_wait);
+ wake_up_pollfree(&t->event_wait);
mutex_lock(&group->trigger_lock);
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index add67f8..08af607 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1614,8 +1614,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
rt_queue_push_tasks(rq);
}
-static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
- struct rt_rq *rt_rq)
+static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
{
struct rt_prio_array *array = &rt_rq->active;
struct sched_rt_entity *next = NULL;
@@ -1626,6 +1625,8 @@ static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
BUG_ON(idx >= MAX_RT_PRIO);
queue = array->queue + idx;
+ if (SCHED_WARN_ON(list_empty(queue)))
+ return NULL;
next = list_entry(queue->next, struct sched_rt_entity, run_list);
return next;
@@ -1637,8 +1638,9 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
struct rt_rq *rt_rq = &rq->rt;
do {
- rt_se = pick_next_rt_entity(rq, rt_rq);
- BUG_ON(!rt_se);
+ rt_se = pick_next_rt_entity(rt_rq);
+ if (unlikely(!rt_se))
+ return NULL;
rt_rq = group_rt_rq(rt_se);
} while (rt_rq);
diff --git a/kernel/sys.c b/kernel/sys.c
index 3e4e893..2d2bc63 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -1575,6 +1575,8 @@ int do_prlimit(struct task_struct *tsk, unsigned int resource,
if (resource >= RLIM_NLIMITS)
return -EINVAL;
+ resource = array_index_nospec(resource, RLIM_NLIMITS);
+
if (new_rlim) {
if (new_rlim->rlim_cur > new_rlim->rlim_max)
return -EINVAL;
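
The do_prlimit() hunk above clamps the resource index with array_index_nospec() after the bounds check, so a mispredicted branch cannot speculatively index past the rlimit array. The userspace sketch below mirrors the kernel's generic mask trick; mask_nospec() and index_nospec() are hypothetical stand-ins, and the code assumes arithmetic right shift of negative values, as the kernel helper does.

/* Branch-free clamp of an attacker-controlled index after its bounds check. */
#include <stdint.h>
#include <stdio.h>

/* ~0 when index < size, 0 otherwise, computed without a branch. */
static inline uint64_t mask_nospec(uint64_t index, uint64_t size)
{
	return ~(uint64_t)((int64_t)(index | (size - 1 - index)) >> 63);
}

static inline uint64_t index_nospec(uint64_t index, uint64_t size)
{
	return index & mask_nospec(index, size);
}

int main(void)
{
	uint64_t limits[16] = { 0 };
	uint64_t resource = 5;			/* imagine this came from userspace */

	if (resource >= 16)			/* architectural bounds check */
		return 1;
	resource = index_nospec(resource, 16);	/* keeps speculation in bounds too */
	printf("limit[%llu] = %llu\n",
	       (unsigned long long)resource, (unsigned long long)limits[resource]);
	return 0;
}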
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 34ce595..928798f8 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -2220,17 +2220,6 @@ static struct ctl_table kern_table[] = {
.proc_handler = proc_dointvec,
},
#endif
-#ifdef CONFIG_SMP
- {
- .procname = "oops_all_cpu_backtrace",
- .data = &sysctl_oops_all_cpu_backtrace,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec_minmax,
- .extra1 = SYSCTL_ZERO,
- .extra2 = SYSCTL_ONE,
- },
-#endif /* CONFIG_SMP */
{
.procname = "pid_max",
.data = &pid_max,
diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c
index 5897828..7e5dff6 100644
--- a/kernel/time/alarmtimer.c
+++ b/kernel/time/alarmtimer.c
@@ -470,11 +470,35 @@ u64 alarm_forward(struct alarm *alarm, ktime_t now, ktime_t interval)
}
EXPORT_SYMBOL_GPL(alarm_forward);
-u64 alarm_forward_now(struct alarm *alarm, ktime_t interval)
+static u64 __alarm_forward_now(struct alarm *alarm, ktime_t interval, bool throttle)
{
struct alarm_base *base = &alarm_bases[alarm->type];
+ ktime_t now = base->get_ktime();
- return alarm_forward(alarm, base->get_ktime(), interval);
+ if (IS_ENABLED(CONFIG_HIGH_RES_TIMERS) && throttle) {
+ /*
+ * Same issue as with posix_timer_fn(). Timers which are
+ * periodic but the signal is ignored can starve the system
+ * with a very small interval. The real fix which was
+ * promised in the context of posix_timer_fn() never
+ * materialized, but someone should really work on it.
+ *
+ * To prevent DOS fake @now to be 1 jiffie out which keeps
+ * the overrun accounting correct but creates an
+ * inconsistency vs. timer_gettime(2).
+ */
+ ktime_t kj = NSEC_PER_SEC / HZ;
+
+ if (interval < kj)
+ now = ktime_add(now, kj);
+ }
+
+ return alarm_forward(alarm, now, interval);
+}
+
+u64 alarm_forward_now(struct alarm *alarm, ktime_t interval)
+{
+ return __alarm_forward_now(alarm, interval, false);
}
EXPORT_SYMBOL_GPL(alarm_forward_now);
@@ -551,9 +575,10 @@ static enum alarmtimer_restart alarm_handle_timer(struct alarm *alarm,
if (posix_timer_event(ptr, si_private) && ptr->it_interval) {
/*
* Handle ignored signals and rearm the timer. This will go
- * away once we handle ignored signals proper.
+ * away once we handle ignored signals proper. Ensure that
+ * small intervals cannot starve the system.
*/
- ptr->it_overrun += alarm_forward_now(alarm, ptr->it_interval);
+ ptr->it_overrun += __alarm_forward_now(alarm, ptr->it_interval, true);
++ptr->it_requeue_pending;
ptr->it_active = 1;
result = ALARMTIMER_RESTART;
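
The alarmtimer hunk above throttles periodic alarms whose signal is ignored: if the interval is shorter than one jiffie, "now" is faked one jiffie into the future before forwarding, so rearming cannot flood the system. A tiny sketch of that clamp follows; HZ=250 is assumed only for the demonstration.

/* Illustration of the one-jiffie clamp applied to very small alarm intervals. */
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL
#define HZ 250				/* assumed config value */

int main(void)
{
	uint64_t now = 1000 * NSEC_PER_SEC;	/* arbitrary clock reading */
	uint64_t interval = 50000;		/* 50us period, far below a jiffie */
	uint64_t kj = NSEC_PER_SEC / HZ;	/* one jiffie in ns, 4ms at HZ=250 */

	if (interval < kj)
		now += kj;			/* fake @now one jiffie out */

	printf("one jiffie = %llu ns, now advanced by %llu ns\n",
	       (unsigned long long)kj,
	       (unsigned long long)(interval < kj ? kj : 0));
	return 0;
}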
diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index bcad1a1..97ec980 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -378,6 +378,15 @@ void clocksource_verify_percpu(struct clocksource *cs)
}
EXPORT_SYMBOL_GPL(clocksource_verify_percpu);
+static inline void clocksource_reset_watchdog(void)
+{
+ struct clocksource *cs;
+
+ list_for_each_entry(cs, &watchdog_list, wd_list)
+ cs->flags &= ~CLOCK_SOURCE_WATCHDOG;
+}
+
+
static void clocksource_watchdog(struct timer_list *unused)
{
u64 csnow, wdnow, cslast, wdlast, delta;
@@ -385,6 +394,7 @@ static void clocksource_watchdog(struct timer_list *unused)
int64_t wd_nsec, cs_nsec;
struct clocksource *cs;
enum wd_read_status read_ret;
+ unsigned long extra_wait = 0;
u32 md;
spin_lock(&watchdog_lock);
@@ -404,13 +414,30 @@ static void clocksource_watchdog(struct timer_list *unused)
read_ret = cs_watchdog_read(cs, &csnow, &wdnow);
- if (read_ret != WD_READ_SUCCESS) {
- if (read_ret == WD_READ_UNSTABLE)
- /* Clock readout unreliable, so give it up. */
- __clocksource_unstable(cs);
+ if (read_ret == WD_READ_UNSTABLE) {
+ /* Clock readout unreliable, so give it up. */
+ __clocksource_unstable(cs);
continue;
}
+ /*
+ * When WD_READ_SKIP is returned, it means the system is likely
+ * under very heavy load, where the latency of reading
+ * watchdog/clocksource is very big, and affect the accuracy of
+ * watchdog check. So give system some space and suspend the
+ * watchdog check for 5 minutes.
+ */
+ if (read_ret == WD_READ_SKIP) {
+ /*
+ * As the watchdog timer will be suspended, and
+ * cs->last could keep unchanged for 5 minutes, reset
+ * the counters.
+ */
+ clocksource_reset_watchdog();
+ extra_wait = HZ * 300;
+ break;
+ }
+
/* Clocksource initialized ? */
if (!(cs->flags & CLOCK_SOURCE_WATCHDOG) ||
atomic_read(&watchdog_reset_pending)) {
@@ -506,7 +533,7 @@ static void clocksource_watchdog(struct timer_list *unused)
* pair clocksource_stop_watchdog() clocksource_start_watchdog().
*/
if (!timer_pending(&watchdog_timer)) {
- watchdog_timer.expires += WATCHDOG_INTERVAL;
+ watchdog_timer.expires += WATCHDOG_INTERVAL + extra_wait;
add_timer_on(&watchdog_timer, next_cpu);
}
out:
@@ -531,14 +558,6 @@ static inline void clocksource_stop_watchdog(void)
watchdog_running = 0;
}
-static inline void clocksource_reset_watchdog(void)
-{
- struct clocksource *cs;
-
- list_for_each_entry(cs, &watchdog_list, wd_list)
- cs->flags &= ~CLOCK_SOURCE_WATCHDOG;
-}
-
static void clocksource_resume_watchdog(void)
{
atomic_inc(&watchdog_reset_pending);
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 23af5ec..9740958 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -2126,6 +2126,7 @@ SYSCALL_DEFINE2(nanosleep, struct __kernel_timespec __user *, rqtp,
if (!timespec64_valid(&tu))
return -EINVAL;
+ current->restart_block.fn = do_no_restart_syscall;
current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
current->restart_block.nanosleep.rmtp = rmtp;
return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL,
@@ -2147,6 +2148,7 @@ SYSCALL_DEFINE2(nanosleep_time32, struct old_timespec32 __user *, rqtp,
if (!timespec64_valid(&tu))
return -EINVAL;
+ current->restart_block.fn = do_no_restart_syscall;
current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
current->restart_block.nanosleep.compat_rmtp = rmtp;
return hrtimer_nanosleep(timespec64_to_ktime(tu), HRTIMER_MODE_REL,
diff --git a/kernel/time/posix-stubs.c b/kernel/time/posix-stubs.c
index fcb3b21..3783d07 100644
--- a/kernel/time/posix-stubs.c
+++ b/kernel/time/posix-stubs.c
@@ -146,6 +146,7 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
return -EINVAL;
if (flags & TIMER_ABSTIME)
rmtp = NULL;
+ current->restart_block.fn = do_no_restart_syscall;
current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
current->restart_block.nanosleep.rmtp = rmtp;
texp = timespec64_to_ktime(t);
@@ -239,6 +240,7 @@ SYSCALL_DEFINE4(clock_nanosleep_time32, clockid_t, which_clock, int, flags,
return -EINVAL;
if (flags & TIMER_ABSTIME)
rmtp = NULL;
+ current->restart_block.fn = do_no_restart_syscall;
current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
current->restart_block.nanosleep.compat_rmtp = rmtp;
texp = timespec64_to_ktime(t);
diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
index 5dead89..0c8a87a 100644
--- a/kernel/time/posix-timers.c
+++ b/kernel/time/posix-timers.c
@@ -1270,6 +1270,7 @@ SYSCALL_DEFINE4(clock_nanosleep, const clockid_t, which_clock, int, flags,
return -EINVAL;
if (flags & TIMER_ABSTIME)
rmtp = NULL;
+ current->restart_block.fn = do_no_restart_syscall;
current->restart_block.nanosleep.type = rmtp ? TT_NATIVE : TT_NONE;
current->restart_block.nanosleep.rmtp = rmtp;
@@ -1297,6 +1298,7 @@ SYSCALL_DEFINE4(clock_nanosleep_time32, clockid_t, which_clock, int, flags,
return -EINVAL;
if (flags & TIMER_ABSTIME)
rmtp = NULL;
+ current->restart_block.fn = do_no_restart_syscall;
current->restart_block.nanosleep.type = rmtp ? TT_COMPAT : TT_NONE;
current->restart_block.nanosleep.compat_rmtp = rmtp;
diff --git a/kernel/time/test_udelay.c b/kernel/time/test_udelay.c
index 13b11eb..20d5df6 100644
--- a/kernel/time/test_udelay.c
+++ b/kernel/time/test_udelay.c
@@ -149,7 +149,7 @@ module_init(udelay_test_init);
static void __exit udelay_test_exit(void)
{
mutex_lock(&udelay_test_lock);
- debugfs_remove(debugfs_lookup(DEBUGFS_FILENAME, NULL));
+ debugfs_lookup_and_remove(DEBUGFS_FILENAME, NULL);
mutex_unlock(&udelay_test_lock);
}
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 16b0d3f..e6d03cf 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -319,8 +319,8 @@ static void blk_trace_free(struct request_queue *q, struct blk_trace *bt)
* under 'q->debugfs_dir', thus lookup and remove them.
*/
if (!bt->dir) {
- debugfs_remove(debugfs_lookup("dropped", q->debugfs_dir));
- debugfs_remove(debugfs_lookup("msg", q->debugfs_dir));
+ debugfs_lookup_and_remove("dropped", q->debugfs_dir);
+ debugfs_lookup_and_remove("msg", q->debugfs_dir);
} else {
debugfs_remove(bt->dir);
}
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index c289010..b314e71 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -776,6 +776,7 @@ static void do_bpf_send_signal(struct irq_work *entry)
work = container_of(entry, struct send_signal_irq_work, irq_work);
group_send_sig_info(work->sig, SEND_SIG_PRIV, work->task, work->type);
+ put_task_struct(work->task);
}
static int bpf_send_signal_common(u32 sig, enum pid_type type)
@@ -793,6 +794,9 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type)
return -EPERM;
if (unlikely(!nmi_uaccess_okay()))
return -EPERM;
+ /* Task should not be pid=1 to avoid kernel panic. */
+ if (unlikely(is_global_init(current)))
+ return -EPERM;
if (irqs_disabled()) {
/* Do an early check on signal validity. Otherwise,
@@ -809,7 +813,7 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type)
* to the irq_work. The current task may change when queued
* irq works get executed.
*/
- work->task = current;
+ work->task = get_task_struct(current);
work->sig = sig;
work->type = type;
irq_work_queue(&work->irq_work);
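
The bpf_trace.c hunk above pins the task with get_task_struct() before queuing the irq_work and drops the reference in the handler, so the task cannot be freed while the deferred signal is still pending. The userspace analogue below shows the same submit-with-reference pattern using a pthread as the deferred worker; the struct and helper names are illustrative only.

/* Take a reference before handing an object to deferred work; the worker drops it. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct task {
	atomic_int refcount;
	int pid;
};

static struct task *get_task(struct task *t)
{
	atomic_fetch_add(&t->refcount, 1);
	return t;
}

static void put_task(struct task *t)
{
	if (atomic_fetch_sub(&t->refcount, 1) == 1)
		free(t);			/* last reference gone */
}

static void *deferred_work(void *arg)
{
	struct task *t = arg;

	printf("signalling pid %d\n", t->pid);
	put_task(t);				/* mirrors put_task_struct() in do_bpf_send_signal() */
	return NULL;
}

int main(void)
{
	struct task *t = calloc(1, sizeof(*t));
	pthread_t worker;

	if (!t)
		return 1;
	atomic_init(&t->refcount, 1);
	t->pid = 42;

	/* mirrors work->task = get_task_struct(current) before irq_work_queue() */
	pthread_create(&worker, NULL, deferred_work, get_task(t));
	put_task(t);				/* submitter's own reference may go away safely */
	pthread_join(worker, NULL);
	return 0;
}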
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 731f25a..06927f7 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1538,7 +1538,8 @@ static struct dyn_ftrace *lookup_rec(unsigned long start, unsigned long end)
key.flags = end; /* overload flags, as it is unsigned long */
for (pg = ftrace_pages_start; pg; pg = pg->next) {
- if (end < pg->records[0].ip ||
+ if (pg->index == 0 ||
+ end < pg->records[0].ip ||
start >= (pg->records[pg->index - 1].ip + MCOUNT_INSN_SIZE))
continue;
rec = bsearch(&key, pg->records, pg->index,
@@ -5382,12 +5383,15 @@ int modify_ftrace_direct(unsigned long ip,
ret = 0;
}
- if (unlikely(ret && new_direct)) {
- direct->count++;
- list_del_rcu(&new_direct->next);
- synchronize_rcu_tasks();
- kfree(new_direct);
- ftrace_direct_func_count--;
+ if (ret) {
+ direct->addr = old_addr;
+ if (unlikely(new_direct)) {
+ direct->count++;
+ list_del_rcu(&new_direct->next);
+ synchronize_rcu_tasks();
+ kfree(new_direct);
+ ftrace_direct_func_count--;
+ }
}
out_unlock:
diff --git a/kernel/trace/kprobe_event_gen_test.c b/kernel/trace/kprobe_event_gen_test.c
index c736487..e0c420e 100644
--- a/kernel/trace/kprobe_event_gen_test.c
+++ b/kernel/trace/kprobe_event_gen_test.c
@@ -146,7 +146,7 @@ static int __init test_gen_kprobe_cmd(void)
if (trace_event_file_is_valid(gen_kprobe_test))
gen_kprobe_test = NULL;
/* We got an error after creating the event, delete it */
- ret = kprobe_event_delete("gen_kprobe_test");
+ kprobe_event_delete("gen_kprobe_test");
goto out;
}
@@ -211,7 +211,7 @@ static int __init test_gen_kretprobe_cmd(void)
if (trace_event_file_is_valid(gen_kretprobe_test))
gen_kretprobe_test = NULL;
/* We got an error after creating the event, delete it */
- ret = kprobe_event_delete("gen_kretprobe_test");
+ kprobe_event_delete("gen_kretprobe_test");
goto out;
}
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index ffc8696..c75c81f 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1545,19 +1545,6 @@ static int rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
}
/**
- * rb_check_list - make sure a pointer to a list has the last bits zero
- */
-static int rb_check_list(struct ring_buffer_per_cpu *cpu_buffer,
- struct list_head *list)
-{
- if (RB_WARN_ON(cpu_buffer, rb_list_head(list->prev) != list->prev))
- return 1;
- if (RB_WARN_ON(cpu_buffer, rb_list_head(list->next) != list->next))
- return 1;
- return 0;
-}
-
-/**
* rb_check_pages - integrity check of buffer pages
* @cpu_buffer: CPU buffer with pages to test
*
@@ -1566,36 +1553,27 @@ static int rb_check_list(struct ring_buffer_per_cpu *cpu_buffer,
*/
static int rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
{
- struct list_head *head = cpu_buffer->pages;
- struct buffer_page *bpage, *tmp;
+ struct list_head *head = rb_list_head(cpu_buffer->pages);
+ struct list_head *tmp;
- /* Reset the head page if it exists */
- if (cpu_buffer->head_page)
- rb_set_head_page(cpu_buffer);
-
- rb_head_page_deactivate(cpu_buffer);
-
- if (RB_WARN_ON(cpu_buffer, head->next->prev != head))
- return -1;
- if (RB_WARN_ON(cpu_buffer, head->prev->next != head))
+ if (RB_WARN_ON(cpu_buffer,
+ rb_list_head(rb_list_head(head->next)->prev) != head))
return -1;
- if (rb_check_list(cpu_buffer, head))
+ if (RB_WARN_ON(cpu_buffer,
+ rb_list_head(rb_list_head(head->prev)->next) != head))
return -1;
- list_for_each_entry_safe(bpage, tmp, head, list) {
+ for (tmp = rb_list_head(head->next); tmp != head; tmp = rb_list_head(tmp->next)) {
if (RB_WARN_ON(cpu_buffer,
- bpage->list.next->prev != &bpage->list))
+ rb_list_head(rb_list_head(tmp->next)->prev) != tmp))
return -1;
+
if (RB_WARN_ON(cpu_buffer,
- bpage->list.prev->next != &bpage->list))
- return -1;
- if (rb_check_list(cpu_buffer, &bpage->list))
+ rb_list_head(rb_list_head(tmp->prev)->next) != tmp))
return -1;
}
- rb_head_page_activate(cpu_buffer);
-
return 0;
}
@@ -3063,6 +3041,10 @@ rb_set_commit_to_write(struct ring_buffer_per_cpu *cpu_buffer)
if (RB_WARN_ON(cpu_buffer,
rb_is_reader_page(cpu_buffer->tail_page)))
return;
+ /*
+ * No need for a memory barrier here, as the update
+ * of the tail_page did it for this page.
+ */
local_set(&cpu_buffer->commit_page->page->commit,
rb_page_write(cpu_buffer->commit_page));
rb_inc_page(&cpu_buffer->commit_page);
@@ -3072,6 +3054,8 @@ rb_set_commit_to_write(struct ring_buffer_per_cpu *cpu_buffer)
while (rb_commit_index(cpu_buffer) !=
rb_page_write(cpu_buffer->commit_page)) {
+ /* Make sure the readers see the content of what is committed. */
+ smp_wmb();
local_set(&cpu_buffer->commit_page->page->commit,
rb_page_write(cpu_buffer->commit_page));
RB_WARN_ON(cpu_buffer,
@@ -4654,7 +4638,12 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
/*
* Make sure we see any padding after the write update
- * (see rb_reset_tail())
+ * (see rb_reset_tail()).
+ *
+ * In addition, a writer may be writing on the reader page
+ * if the page has not been fully filled, so the read barrier
+ * is also needed to make sure we see the content of what is
+ * committed by the writer (see rb_set_commit_to_write()).
*/
smp_rmb();
@@ -5568,11 +5557,16 @@ EXPORT_SYMBOL_GPL(ring_buffer_alloc_read_page);
*/
void ring_buffer_free_read_page(struct trace_buffer *buffer, int cpu, void *data)
{
- struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];
+ struct ring_buffer_per_cpu *cpu_buffer;
struct buffer_data_page *bpage = data;
struct page *page = virt_to_page(bpage);
unsigned long flags;
+ if (!buffer || !buffer->buffers || !buffer->buffers[cpu])
+ return;
+
+ cpu_buffer = buffer->buffers[cpu];
+
/* If the page is still in use someplace else, we can't reuse it */
if (page_ref_count(page) > 1)
goto out;
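
The ring_buffer.c hunk above pairs an smp_wmb() in rb_set_commit_to_write() with the existing smp_rmb() in rb_get_reader_page(): the writer publishes the page contents before the commit index, and the reader orders the index load before reading the contents. A minimal C11 analogue of that publish/consume pairing is sketched below using explicit fences; it is illustrative only, not the ring buffer code.

/* Writer publishes payload before the flag; reader orders the flag before the payload. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;
static atomic_int commit;

static void *writer(void *arg)
{
	payload = 42;					/* fill the "page" */
	atomic_thread_fence(memory_order_release);	/* plays the role of smp_wmb() */
	atomic_store_explicit(&commit, 1, memory_order_relaxed);
	return NULL;
}

static void *reader(void *arg)
{
	while (!atomic_load_explicit(&commit, memory_order_relaxed))
		;					/* wait for the commit flag */
	atomic_thread_fence(memory_order_acquire);	/* plays the role of smp_rmb() */
	printf("read %d\n", payload);			/* guaranteed to observe 42 */
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}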
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 114d94d..dc097bd 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -5093,6 +5093,8 @@ loff_t tracing_lseek(struct file *file, loff_t offset, int whence)
static const struct file_operations tracing_fops = {
.open = tracing_open,
.read = seq_read,
+ .read_iter = seq_read_iter,
+ .splice_read = generic_file_splice_read,
.write = tracing_write_stub,
.llseek = tracing_lseek,
.release = tracing_release,
@@ -9089,9 +9091,6 @@ buffer_percent_write(struct file *filp, const char __user *ubuf,
if (val > 100)
return -EINVAL;
- if (!val)
- val = 1;
-
tr->buffer_percent = val;
(*ppos)++;
@@ -9417,6 +9416,7 @@ static int __remove_instance(struct trace_array *tr)
tracefs_remove(tr->dir);
free_percpu(tr->last_func_repeats);
free_trace_buffers(tr);
+ clear_tracing_err_log(tr);
for (i = 0; i < tr->nr_topts; i++) {
kfree(tr->topts[i].topts);
@@ -10217,6 +10217,8 @@ void __init early_trace_init(void)
static_key_enable(&tracepoint_printk_key.key);
}
tracer_alloc_buffers();
+
+ init_events();
}
void __init trace_init(void)
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 9fc5981..66b6c83 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1500,6 +1500,7 @@ extern void trace_event_enable_cmd_record(bool enable);
extern void trace_event_enable_tgid_record(bool enable);
extern int event_trace_init(void);
+extern int init_events(void);
extern int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr);
extern int event_trace_del_tracer(struct trace_array *tr);
extern void __trace_early_add_events(struct trace_array *tr);
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index cc1a078..80e6528 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -1127,6 +1127,9 @@ static const char *hist_field_name(struct hist_field *field,
{
const char *field_name = "";
+ if (WARN_ON_ONCE(!field))
+ return field_name;
+
if (level > 1)
return field_name;
@@ -1699,6 +1702,8 @@ static struct hist_field *create_hist_field(struct hist_trigger_data *hist_data,
hist_field->fn = flags & HIST_FIELD_FL_LOG2 ? hist_field_log2 :
hist_field_bucket;
hist_field->operands[0] = create_hist_field(hist_data, field, fl, NULL);
+ if (!hist_field->operands[0])
+ goto free;
hist_field->size = hist_field->operands[0]->size;
hist_field->type = kstrdup_const(hist_field->operands[0]->type, GFP_KERNEL);
if (!hist_field->type)
diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
index d0a730d..9ec032f 100644
--- a/kernel/trace/trace_hwlat.c
+++ b/kernel/trace/trace_hwlat.c
@@ -339,7 +339,7 @@ static void move_to_next_cpu(void)
cpumask_clear(current_mask);
cpumask_set_cpu(next_cpu, current_mask);
- sched_setaffinity(0, current_mask);
+ set_cpus_allowed_ptr(current, current_mask);
return;
change_mode:
@@ -446,7 +446,7 @@ static int start_single_kthread(struct trace_array *tr)
}
- sched_setaffinity(kthread->pid, current_mask);
+ set_cpus_allowed_ptr(kthread, current_mask);
kdata->kthread = kthread;
wake_up_process(kthread);
@@ -491,18 +491,18 @@ static void stop_per_cpu_kthreads(void)
static int start_cpu_kthread(unsigned int cpu)
{
struct task_struct *kthread;
- char comm[24];
- snprintf(comm, 24, "hwlatd/%d", cpu);
+ /* Do not start a new hwlatd thread if it is already running */
+ if (per_cpu(hwlat_per_cpu_data, cpu).kthread)
+ return 0;
- kthread = kthread_create_on_cpu(kthread_fn, NULL, cpu, comm);
+ kthread = kthread_run_on_cpu(kthread_fn, NULL, cpu, "hwlatd/%u");
if (IS_ERR(kthread)) {
pr_err(BANNER "could not start sampling thread\n");
return -ENOMEM;
}
per_cpu(hwlat_per_cpu_data, cpu).kthread = kthread;
- wake_up_process(kthread);
return 0;
}
@@ -588,9 +588,6 @@ static int start_per_cpu_kthreads(struct trace_array *tr)
*/
cpumask_and(current_mask, cpu_online_mask, tr->tracing_cpumask);
- for_each_online_cpu(cpu)
- per_cpu(hwlat_per_cpu_data, cpu).kthread = NULL;
-
for_each_cpu(cpu, current_mask) {
retval = start_cpu_kthread(cpu);
if (retval)
diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index c2ca40e..6b4d3f3 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -1569,7 +1569,7 @@ static struct trace_event *events[] __initdata = {
NULL
};
-__init static int init_events(void)
+__init int init_events(void)
{
struct trace_event *event;
int i, ret;
@@ -1587,4 +1587,3 @@ __init static int init_events(void)
return 0;
}
-early_initcall(init_events);
diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
index 1059ef6..54cbaa9 100644
--- a/kernel/watch_queue.c
+++ b/kernel/watch_queue.c
@@ -274,6 +274,7 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
if (ret < 0)
goto error;
+ ret = -ENOMEM;
pages = kcalloc(sizeof(struct page *), nr_pages, GFP_KERNEL);
if (!pages)
goto error;
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index f71db0cc..dbbd243 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -302,7 +302,7 @@
config DEBUG_INFO_DWARF5
bool "Generate DWARF Version 5 debuginfo"
depends on !CC_IS_CLANG || AS_IS_LLVM || (AS_IS_GNU && AS_VERSION >= 23502 && AS_HAS_NON_CONST_LEB128)
- depends on !DEBUG_INFO_BTF
+ depends on !DEBUG_INFO_BTF || PAHOLE_VERSION >= 121
help
Generate DWARF v5 debug info. Requires binutils 2.35.2, gcc 5.0+ (gcc
5.0+ accepts the -gdwarf-5 flag but only had partial support for some
@@ -328,7 +328,7 @@
DWARF type info into equivalent deduplicated BTF type info.
config PAHOLE_HAS_SPLIT_BTF
- def_bool $(success, test `$(PAHOLE) --version | sed -E 's/v([0-9]+)\.([0-9]+)/\1\2/'` -ge "119")
+ def_bool PAHOLE_VERSION >= 119
config DEBUG_INFO_BTF_MODULES
def_bool y
diff --git a/lib/errname.c b/lib/errname.c
index 05cbf73..67739b1 100644
--- a/lib/errname.c
+++ b/lib/errname.c
@@ -21,6 +21,7 @@ static const char *names_0[] = {
E(EADDRNOTAVAIL),
E(EADV),
E(EAFNOSUPPORT),
+ E(EAGAIN), /* EWOULDBLOCK */
E(EALREADY),
E(EBADE),
E(EBADF),
@@ -31,15 +32,17 @@ static const char *names_0[] = {
E(EBADSLT),
E(EBFONT),
E(EBUSY),
-#ifdef ECANCELLED
- E(ECANCELLED),
-#endif
+ E(ECANCELED), /* ECANCELLED */
E(ECHILD),
E(ECHRNG),
E(ECOMM),
E(ECONNABORTED),
+ E(ECONNREFUSED), /* EREFUSED */
E(ECONNRESET),
+ E(EDEADLK), /* EDEADLOCK */
+#if EDEADLK != EDEADLOCK /* mips, sparc, powerpc */
E(EDEADLOCK),
+#endif
E(EDESTADDRREQ),
E(EDOM),
E(EDOTDOT),
@@ -166,14 +169,17 @@ static const char *names_0[] = {
E(EUSERS),
E(EXDEV),
E(EXFULL),
-
- E(ECANCELED), /* ECANCELLED */
- E(EAGAIN), /* EWOULDBLOCK */
- E(ECONNREFUSED), /* EREFUSED */
- E(EDEADLK), /* EDEADLOCK */
};
#undef E
+#ifdef EREFUSED /* parisc */
+static_assert(EREFUSED == ECONNREFUSED);
+#endif
+#ifdef ECANCELLED /* parisc */
+static_assert(ECANCELLED == ECANCELED);
+#endif
+static_assert(EAGAIN == EWOULDBLOCK); /* everywhere */
+
#define E(err) [err - 512 + BUILD_BUG_ON_ZERO(err < 512 || err > 550)] = "-" #err
static const char *names_512[] = {
E(ERESTARTSYS),
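
The errname.c hunk above drops the duplicate alias entries and instead asserts at build time that the aliases really collapse to the canonical values (EDEADLOCK being the exception on mips/sparc/powerpc). A tiny userspace check of the same idea, assuming a typical Linux toolchain:

/* Compile-time confirmation that EWOULDBLOCK aliases EAGAIN, as the table assumes. */
#include <assert.h>
#include <errno.h>

static_assert(EAGAIN == EWOULDBLOCK, "EWOULDBLOCK aliases EAGAIN on Linux");

int main(void)
{
	return 0;
}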
diff --git a/lib/kobject.c b/lib/kobject.c
index ea53b30..184a3da 100644
--- a/lib/kobject.c
+++ b/lib/kobject.c
@@ -126,10 +126,10 @@ static int create_dir(struct kobject *kobj)
return 0;
}
-static int get_kobj_path_length(struct kobject *kobj)
+static int get_kobj_path_length(const struct kobject *kobj)
{
int length = 1;
- struct kobject *parent = kobj;
+ const struct kobject *parent = kobj;
/* walk up the ancestors until we hit the one pointing to the
* root.
@@ -144,21 +144,25 @@ static int get_kobj_path_length(struct kobject *kobj)
return length;
}
-static void fill_kobj_path(struct kobject *kobj, char *path, int length)
+static int fill_kobj_path(const struct kobject *kobj, char *path, int length)
{
- struct kobject *parent;
+ const struct kobject *parent;
--length;
for (parent = kobj; parent; parent = parent->parent) {
int cur = strlen(kobject_name(parent));
/* back up enough to print this name with '/' */
length -= cur;
+ if (length <= 0)
+ return -EINVAL;
memcpy(path + length, kobject_name(parent), cur);
*(path + --length) = '/';
}
pr_debug("kobject: '%s' (%p): %s: path = '%s'\n", kobject_name(kobj),
kobj, __func__, path);
+
+ return 0;
}
/**
@@ -168,18 +172,22 @@ static void fill_kobj_path(struct kobject *kobj, char *path, int length)
*
* Return: The newly allocated memory, caller must free with kfree().
*/
-char *kobject_get_path(struct kobject *kobj, gfp_t gfp_mask)
+char *kobject_get_path(const struct kobject *kobj, gfp_t gfp_mask)
{
char *path;
int len;
+retry:
len = get_kobj_path_length(kobj);
if (len == 0)
return NULL;
path = kzalloc(len, gfp_mask);
if (!path)
return NULL;
- fill_kobj_path(kobj, path, len);
+ if (fill_kobj_path(kobj, path, len)) {
+ kfree(path);
+ goto retry;
+ }
return path;
}
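
The kobject.c hunk above makes fill_kobj_path() fail instead of overrunning the buffer when an ancestor is renamed between measuring the path length and filling it, and kobject_get_path() simply remeasures and retries. The userspace sketch below shows the same measure/allocate/fill/retry shape; the names and the single-component "path" are illustrative only.

/* Measure, allocate, fill; if the name no longer fits, free and start over. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static const char *object_name = "devices";	/* imagine this can be renamed concurrently */

static int fill_path(char *buf, size_t len)
{
	size_t need = strlen(object_name) + 2;	/* '/' + name + NUL */

	if (need > len)
		return -1;			/* no longer fits: caller must retry */
	snprintf(buf, len, "/%s", object_name);
	return 0;
}

int main(void)
{
	char *path;
	size_t len;

retry:
	len = strlen(object_name) + 2;
	path = calloc(1, len);
	if (!path)
		return 1;
	if (fill_path(path, len)) {
		free(path);
		goto retry;			/* mirrors the new retry in kobject_get_path() */
	}
	puts(path);
	free(path);
	return 0;
}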
diff --git a/lib/lockref.c b/lib/lockref.c
index 5b34bbd..81ac5f3 100644
--- a/lib/lockref.c
+++ b/lib/lockref.c
@@ -24,7 +24,6 @@
} \
if (!--retry) \
break; \
- cpu_relax(); \
} \
} while (0)
diff --git a/lib/mpi/mpicoder.c b/lib/mpi/mpicoder.c
index 39c4c67..3cb6bd1 100644
--- a/lib/mpi/mpicoder.c
+++ b/lib/mpi/mpicoder.c
@@ -504,7 +504,8 @@ MPI mpi_read_raw_from_sgl(struct scatterlist *sgl, unsigned int nbytes)
while (sg_miter_next(&miter)) {
buff = miter.addr;
- len = miter.length;
+ len = min_t(unsigned, miter.length, nbytes);
+ nbytes -= len;
for (x = 0; x < len; x++) {
a <<= 8;
diff --git a/lib/nlattr.c b/lib/nlattr.c
index 86029ad..73635bd 100644
--- a/lib/nlattr.c
+++ b/lib/nlattr.c
@@ -10,6 +10,7 @@
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
+#include <linux/nospec.h>
#include <linux/skbuff.h>
#include <linux/string.h>
#include <linux/types.h>
@@ -369,6 +370,7 @@ static int validate_nla(const struct nlattr *nla, int maxtype,
if (type <= 0 || type > maxtype)
return 0;
+ type = array_index_nospec(type, maxtype + 1);
pt = &policy[type];
BUG_ON(pt->type > NLA_TYPE_MAX);
@@ -584,6 +586,7 @@ static int __nla_validate_parse(const struct nlattr *head, int len, int maxtype,
}
continue;
}
+ type = array_index_nospec(type, maxtype + 1);
if (policy) {
int err = validate_nla(nla, maxtype, policy,
validate, extack, depth);
diff --git a/lib/ubsan.c b/lib/ubsan.c
index bdc380f..60c7099 100644
--- a/lib/ubsan.c
+++ b/lib/ubsan.c
@@ -154,16 +154,7 @@ static void ubsan_epilogue(void)
current->in_ubsan--;
- if (panic_on_warn) {
- /*
- * This thread may hit another WARN() in the panic path.
- * Resetting this prevents additional WARN() from panicking the
- * system on this thread. Other threads are blocked by the
- * panic_mutex in panic().
- */
- panic_on_warn = 0;
- panic("panic_on_warn set ...\n");
- }
+ check_panic_on_warn("UBSAN");
}
void __ubsan_handle_divrem_overflow(void *_data, void *lhs, void *rhs)
diff --git a/lib/usercopy.c b/lib/usercopy.c
index 7413dd3..7ee63df 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -3,6 +3,7 @@
#include <linux/fault-inject-usercopy.h>
#include <linux/instrumented.h>
#include <linux/uaccess.h>
+#include <linux/nospec.h>
/* out-of-line parts */
@@ -12,6 +13,12 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
unsigned long res = n;
might_fault();
if (!should_fail_usercopy() && likely(access_ok(from, n))) {
+ /*
+ * Ensure that bad access_ok() speculation will not
+ * lead to nasty side effects *after* the copy is
+ * finished:
+ */
+ barrier_nospec();
instrument_copy_from_user(to, from, n);
res = raw_copy_from_user(to, from, n);
}
diff --git a/mm/filemap.c b/mm/filemap.c
index 30c9c2a..81e2872 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2538,18 +2538,19 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter,
struct page *page;
int err = 0;
+ /* "last_index" is the index of the page beyond the end of the read */
last_index = DIV_ROUND_UP(iocb->ki_pos + iter->count, PAGE_SIZE);
retry:
if (fatal_signal_pending(current))
return -EINTR;
- filemap_get_read_batch(mapping, index, last_index, pvec);
+ filemap_get_read_batch(mapping, index, last_index - 1, pvec);
if (!pagevec_count(pvec)) {
if (iocb->ki_flags & IOCB_NOIO)
return -EAGAIN;
page_cache_sync_readahead(mapping, ra, filp, index,
last_index - index);
- filemap_get_read_batch(mapping, index, last_index, pvec);
+ filemap_get_read_batch(mapping, index, last_index - 1, pvec);
}
if (!pagevec_count(pvec)) {
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
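The filemap hunk makes the convention explicit: last_index is exclusive (the first page beyond the read), so the batch lookup has to stop at last_index - 1. A quick arithmetic check, assuming a 4096-byte page size:

/* Back-of-the-envelope check of the exclusive last_index convention. A read
 * of 5 bytes at offset 8190 touches pages 1 and 2, so last_index (exclusive)
 * is 3 and the last page handed to the batch is 2. */
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned long pos = 8190, count = 5;
        unsigned long index = pos / PAGE_SIZE;                           /* 1 */
        unsigned long last_index = DIV_ROUND_UP(pos + count, PAGE_SIZE); /* 3 */

        printf("first=%lu last_inclusive=%lu\n", index, last_index - 1);
        return 0;
}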
diff --git a/mm/gup.c b/mm/gup.c
index 2370565..0a1839b3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1877,7 +1877,7 @@ static long check_and_migrate_movable_pages(unsigned long nr_pages,
*/
if (!is_pinnable_page(head)) {
if (PageHuge(head)) {
- if (!isolate_huge_page(head, &movable_page_list))
+ if (isolate_hugetlb(head, &movable_page_list))
isolation_error_count++;
} else {
if (!PageLRU(head) && drain_allow) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 07941a1..98ff57c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1926,7 +1926,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
{
struct mm_struct *mm = vma->vm_mm;
pgtable_t pgtable;
- pmd_t _pmd;
+ pmd_t _pmd, old_pmd;
int i;
/*
@@ -1937,7 +1937,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
*
* See Documentation/vm/mmu_notifier.rst
*/
- pmdp_huge_clear_flush(vma, haddr, pmd);
+ old_pmd = pmdp_huge_clear_flush(vma, haddr, pmd);
pgtable = pgtable_trans_huge_withdraw(mm, pmd);
pmd_populate(mm, &_pmd, pgtable);
@@ -1946,6 +1946,8 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
pte_t *pte, entry;
entry = pfn_pte(my_zero_pfn(haddr), vma->vm_page_prot);
entry = pte_mkspecial(entry);
+ if (pmd_uffd_wp(old_pmd))
+ entry = pte_mkuffd_wp(entry);
pte = pte_offset_map(&_pmd, haddr);
VM_BUG_ON(!pte_none(*pte));
set_pte_at(mm, haddr, pte, entry);
@@ -2778,6 +2780,9 @@ void deferred_split_huge_page(struct page *page)
if (PageSwapCache(page))
return;
+ if (!list_empty(page_deferred_list(page)))
+ return;
+
spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
if (list_empty(page_deferred_list(page))) {
count_vm_event(THP_DEFERRED_SPLIT_PAGE);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e7bd42f..2f5c1b2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -82,6 +82,8 @@ struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
/* Forward declaration */
static int hugetlb_acct_memory(struct hstate *h, long delta);
+static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end);
static inline bool subpool_is_free(struct hugepage_subpool *spool)
{
@@ -2654,8 +2656,7 @@ static int alloc_and_dissolve_huge_page(struct hstate *h, struct page *old_page,
* Fail with -EBUSY if not possible.
*/
spin_unlock_irq(&hugetlb_lock);
- if (!isolate_huge_page(old_page, list))
- ret = -EBUSY;
+ ret = isolate_hugetlb(old_page, list);
spin_lock_irq(&hugetlb_lock);
goto free_new;
} else if (!HPageFreed(old_page)) {
@@ -2731,7 +2732,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
if (hstate_is_gigantic(h))
return -ENOMEM;
- if (page_count(head) && isolate_huge_page(head, list))
+ if (page_count(head) && !isolate_hugetlb(head, list))
ret = 0;
else if (!page_count(head))
ret = alloc_and_dissolve_huge_page(h, head, list);
@@ -4164,6 +4165,25 @@ static int hugetlb_vm_op_split(struct vm_area_struct *vma, unsigned long addr)
{
if (addr & ~(huge_page_mask(hstate_vma(vma))))
return -EINVAL;
+
+ /*
+ * PMD sharing is only possible for PUD_SIZE-aligned address ranges
+ * in HugeTLB VMAs. If we will lose PUD_SIZE alignment due to this
+ * split, unshare PMDs in the PUD_SIZE interval surrounding addr now.
+ */
+ if (addr & ~PUD_MASK) {
+ /*
+ * hugetlb_vm_op_split is called right before we attempt to
+ * split the VMA. We will need to unshare PMDs in the old and
+ * new VMAs, so let's unshare before we split.
+ */
+ unsigned long floor = addr & PUD_MASK;
+ unsigned long ceil = floor + PUD_SIZE;
+
+ if (floor >= vma->vm_start && ceil <= vma->vm_end)
+ hugetlb_unshare_pmds(vma, floor, ceil);
+ }
+
return 0;
}
@@ -6256,15 +6276,15 @@ follow_huge_pgd(struct mm_struct *mm, unsigned long address, pgd_t *pgd, int fla
return pte_page(*(pte_t *)pgd) + ((address & ~PGDIR_MASK) >> PAGE_SHIFT);
}
-bool isolate_huge_page(struct page *page, struct list_head *list)
+int isolate_hugetlb(struct page *page, struct list_head *list)
{
- bool ret = true;
+ int ret = 0;
spin_lock_irq(&hugetlb_lock);
if (!PageHeadHuge(page) ||
!HPageMigratable(page) ||
!get_page_unless_zero(page)) {
- ret = false;
+ ret = -EBUSY;
goto unlock;
}
ClearHPageMigratable(page);
@@ -6349,26 +6369,21 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
}
}
-/*
- * This function will unconditionally remove all the shared pmd pgtable entries
- * within the specific vma for a hugetlbfs memory range.
- */
-void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
+static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
+ unsigned long start,
+ unsigned long end)
{
struct hstate *h = hstate_vma(vma);
unsigned long sz = huge_page_size(h);
struct mm_struct *mm = vma->vm_mm;
struct mmu_notifier_range range;
- unsigned long address, start, end;
+ unsigned long address;
spinlock_t *ptl;
pte_t *ptep;
if (!(vma->vm_flags & VM_MAYSHARE))
return;
- start = ALIGN(vma->vm_start, PUD_SIZE);
- end = ALIGN_DOWN(vma->vm_end, PUD_SIZE);
-
if (start >= end)
return;
@@ -6400,6 +6415,16 @@ void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
mmu_notifier_invalidate_range_end(&range);
}
+/*
+ * This function will unconditionally remove all the shared pmd pgtable entries
+ * within the specific vma for a hugetlbfs memory range.
+ */
+void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
+{
+ hugetlb_unshare_pmds(vma, ALIGN(vma->vm_start, PUD_SIZE),
+ ALIGN_DOWN(vma->vm_end, PUD_SIZE));
+}
+
#ifdef CONFIG_CMA
static bool cma_reserve_called __initdata;
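Across this series isolate_huge_page() is renamed to isolate_hugetlb() and switched from returning a bool to returning 0 or -EBUSY, so the success test inverts at every call site (the gup hunk above and the memory-failure, memory-hotplug, mempolicy and migrate hunks below). The following user-space sketch only shows how call sites change when a predicate becomes an errno-style function; isolate_thing() and friends are hypothetical names.

/* Illustrative only: converting a bool-returning predicate into an
 * errno-style function flips the sense of every caller's test. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static bool busy = true;

/* Old style: true on success. */
static bool isolate_thing_old(void)
{
        return !busy;
}

/* New style: 0 on success, -EBUSY on failure. */
static int isolate_thing(void)
{
        return busy ? -EBUSY : 0;
}

int main(void)
{
        int failures = 0;

        if (!isolate_thing_old())       /* old call site: "if (!ok)" */
                failures++;

        if (isolate_thing())            /* new call site: "if (ret)" */
                failures++;

        printf("failures=%d\n", failures); /* both report the same failure */
        return 0;
}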
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 884a950..887af87 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -117,16 +117,8 @@ static void end_report(unsigned long *flags, unsigned long addr)
pr_err("==================================================================\n");
add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
spin_unlock_irqrestore(&report_lock, *flags);
- if (panic_on_warn && !test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags)) {
- /*
- * This thread may hit another WARN() in the panic path.
- * Resetting this prevents additional WARN() from panicking the
- * system on this thread. Other threads are blocked by the
- * panic_mutex in panic().
- */
- panic_on_warn = 0;
- panic("panic_on_warn set ...\n");
- }
+ if (!test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
+ check_panic_on_warn("KASAN");
if (kasan_arg_fault == KASAN_ARG_FAULT_PANIC)
panic("kasan.fault=panic set ...\n");
kasan_enable_current();
diff --git a/mm/kfence/Makefile b/mm/kfence/Makefile
index 6872cd5..cb2bcf7 100644
--- a/mm/kfence/Makefile
+++ b/mm/kfence/Makefile
@@ -2,5 +2,5 @@
obj-$(CONFIG_KFENCE) := core.o report.o
-CFLAGS_kfence_test.o := -g -fno-omit-frame-pointer -fno-optimize-sibling-calls
+CFLAGS_kfence_test.o := -fno-omit-frame-pointer -fno-optimize-sibling-calls
obj-$(CONFIG_KFENCE_KUNIT_TEST) += kfence_test.o
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index d252027..3eab72f 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -678,10 +678,14 @@ static const struct file_operations objects_fops = {
.release = seq_release,
};
-static int __init kfence_debugfs_init(void)
+static int kfence_debugfs_init(void)
{
- struct dentry *kfence_dir = debugfs_create_dir("kfence", NULL);
+ struct dentry *kfence_dir;
+ if (!READ_ONCE(kfence_enabled))
+ return 0;
+
+ kfence_dir = debugfs_create_dir("kfence", NULL);
debugfs_create_file("stats", 0444, kfence_dir, NULL, &stats_fops);
debugfs_create_file("objects", 0400, kfence_dir, NULL, &objects_fops);
return 0;
diff --git a/mm/kfence/report.c b/mm/kfence/report.c
index 37e140e..cbd9456 100644
--- a/mm/kfence/report.c
+++ b/mm/kfence/report.c
@@ -267,8 +267,7 @@ void kfence_report_error(unsigned long address, bool is_write, struct pt_regs *r
lockdep_on();
- if (panic_on_warn)
- panic("panic_on_warn set ...\n");
+ check_panic_on_warn("KFENCE");
/* We encountered a memory safety error, taint the kernel! */
add_taint(TAINT_BAD_PAGE, LOCKDEP_STILL_OK);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fd25d12..3afcb14 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1458,14 +1458,6 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
if (!hugepage_vma_check(vma, vma->vm_flags | VM_HUGEPAGE))
return;
- /*
- * Symmetry with retract_page_tables(): Exclude MAP_PRIVATE mappings
- * that got written to. Without this, we'd have to also lock the
- * anon_vma if one exists.
- */
- if (vma->anon_vma)
- return;
-
hpage = find_lock_page(vma->vm_file->f_mapping,
linear_page_index(vma, haddr));
if (!hpage)
@@ -1537,6 +1529,10 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
}
/* step 4: collapse pmd */
+ /* we make no change to anon, but protect concurrent anon page lookup */
+ if (vma->anon_vma)
+ anon_vma_lock_write(vma->anon_vma);
+
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm, haddr,
haddr + HPAGE_PMD_SIZE);
mmu_notifier_invalidate_range_start(&range);
@@ -1546,6 +1542,8 @@ void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr)
mmu_notifier_invalidate_range_end(&range);
pte_free(mm, pmd_pgtable(_pmd));
+ if (vma->anon_vma)
+ anon_vma_unlock_write(vma->anon_vma);
i_mmap_unlock_write(vma->vm_file->f_mapping);
drop_hpage:
diff --git a/mm/memblock.c b/mm/memblock.c
index 838d59a..2b73977 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1615,13 +1615,7 @@ void __init __memblock_free_late(phys_addr_t base, phys_addr_t size)
end = PFN_DOWN(base + size);
for (; cursor < end; cursor++) {
- /*
- * Reserved pages are always initialized by the end of
- * memblock_free_all() (by memmap_init() and, if deferred
- * initialization is enabled, memmap_init_reserved_pages()), so
- * these pages can be released directly to the buddy allocator.
- */
- __free_pages_core(pfn_to_page(cursor), 0);
+ memblock_free_pages(pfn_to_page(cursor), cursor, 0);
totalram_pages_inc();
}
}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3d3364c..b68b2fe 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3872,6 +3872,10 @@ static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ pr_warn_once("Cgroup memory moving (move_charge_at_immigrate) is deprecated. "
+ "Please report your usecase to linux-mm@kvack.org if you "
+ "depend on this functionality.\n");
+
if (val & ~MOVE_MASK)
return -EINVAL;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 2ad0f45..9f9dd96 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2106,7 +2106,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
bool lru = PageLRU(page);
if (PageHuge(page)) {
- isolated = isolate_huge_page(page, pagelist);
+ isolated = !isolate_hugetlb(page, pagelist);
} else {
if (lru)
isolated = !isolate_lru_page(page);
diff --git a/mm/memory.c b/mm/memory.c
index a4d0f74..8d71a82 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3462,8 +3462,21 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
struct vm_area_struct *vma = vmf->vma;
struct mmu_notifier_range range;
- if (!lock_page_or_retry(page, vma->vm_mm, vmf->flags))
+ /*
+ * We need a reference to lock the page because we don't hold
+ * the PTL so a racing thread can remove the device-exclusive
+ * entry and unmap it. If the page is free the entry must
+ * have been removed already. If it happens to have already
+ * been re-allocated after being freed all we do is lock and
+ * unlock it.
+ */
+ if (!get_page_unless_zero(page))
+ return 0;
+
+ if (!lock_page_or_retry(page, vma->vm_mm, vmf->flags)) {
+ put_page(page);
return VM_FAULT_RETRY;
+ }
mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
vma->vm_mm, vmf->address & PAGE_MASK,
(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3476,6 +3489,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
pte_unmap_unlock(vmf->pte, vmf->ptl);
unlock_page(page);
+ put_page(page);
mmu_notifier_invalidate_range_end(&range);
return 0;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9fd0be3..81f2a97 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1704,7 +1704,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
if (PageHuge(page)) {
pfn = page_to_pfn(head) + compound_nr(head) - 1;
- isolate_huge_page(head, &source);
+ isolate_hugetlb(head, &source);
continue;
} else if (PageTransHuge(page))
pfn = page_to_pfn(head) + thp_nr_pages(page) - 1;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 4472be6..8187536 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -603,8 +603,9 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
/* With MPOL_MF_MOVE, we migrate only unshared hugepage. */
if (flags & (MPOL_MF_MOVE_ALL) ||
- (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) {
- if (!isolate_huge_page(page, qp->pagelist) &&
+ (flags & MPOL_MF_MOVE && page_mapcount(page) == 1 &&
+ !hugetlb_pmd_shared(pte))) {
+ if (isolate_hugetlb(page, qp->pagelist) &&
(flags & MPOL_MF_STRICT))
/*
* Failed to isolate page but allow migrating pages
diff --git a/mm/migrate.c b/mm/migrate.c
index 7da052c..dd50b1c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -134,7 +134,7 @@ static void putback_movable_page(struct page *page)
*
* This function shall be used whenever the isolated pageset has been
* built from lru, balloon, hugetlbfs page. See isolate_migratepages_range()
- * and isolate_huge_page().
+ * and isolate_hugetlb().
*/
void putback_movable_pages(struct list_head *l)
{
@@ -1722,8 +1722,9 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
if (PageHuge(page)) {
if (PageHead(page)) {
- isolate_huge_page(page, pagelist);
- err = 1;
+ err = isolate_hugetlb(page, pagelist);
+ if (!err)
+ err = 1;
}
} else {
struct page *head;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a71722b..c929357 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5490,9 +5490,12 @@ EXPORT_SYMBOL(get_zeroed_page);
*/
void __free_pages(struct page *page, unsigned int order)
{
+ /* get PageHead before we drop reference */
+ int head = PageHead(page);
+
if (put_page_testzero(page))
free_the_page(page, order);
- else if (!PageHead(page))
+ else if (!head)
while (order-- > 0)
free_the_page(page + (1 << order), order);
}
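The __free_pages() hunk samples PageHead() before dropping the reference, because once put_page_testzero() fails (another owner holds the last reference) the page may be freed and reused, and reading its flags afterwards would be a use-after-free. A user-space sketch of "capture what you need before you release the reference", with invented types:

/* Sketch of sampling state before dropping a reference. After the drop the
 * object may be freed by another owner, so its fields must not be re-read.
 * struct obj and obj_put() are invented for illustration. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
        int refcount;
        bool is_head;
};

/* Returns true when the caller dropped the last reference and freed it. */
static bool obj_put(struct obj *o)
{
        if (--o->refcount == 0) {
                free(o);
                return true;
        }
        return false;
}

int main(void)
{
        struct obj *o = malloc(sizeof(*o));

        if (!o)
                return 1;
        o->refcount = 2;        /* someone else also holds a reference */
        o->is_head = true;

        bool head = o->is_head; /* capture BEFORE the reference drop */

        if (obj_put(o))
                puts("freed whole object");
        else if (!head)
                puts("would free tail parts here");
        else
                puts("head still owned elsewhere; nothing to do");
        return 0;
}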
diff --git a/mm/slab.c b/mm/slab.c
index 1bd283e..f5b2246 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -855,7 +855,7 @@ static int init_cache_node(struct kmem_cache *cachep, int node, gfp_t gfp)
return 0;
}
-#if (defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)) || defined(CONFIG_SMP)
+#if defined(CONFIG_NUMA) || defined(CONFIG_SMP)
/*
* Allocates and initializes node for a node on each slab cache, used for
* either memory or cpu hotplug. If memory is being hot-added, the kmem_cache_node
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 22d10f7..b7e1620a 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -673,6 +673,7 @@ static void __del_from_avail_list(struct swap_info_struct *p)
{
int nid;
+ assert_spin_locked(&p->lock);
for_each_node(nid)
plist_del(&p->avail_lists[nid], &swap_avail_heads[nid]);
}
@@ -1093,6 +1094,7 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
goto check_out;
pr_debug("scan_swap_map of si %d failed to find offset\n",
si->type);
+ cond_resched();
spin_lock(&swap_avail_lock);
nextsi:
@@ -2564,8 +2566,8 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
spin_unlock(&swap_lock);
goto out_dput;
}
- del_from_avail_list(p);
spin_lock(&p->lock);
+ del_from_avail_list(p);
if (p->prio < 0) {
struct swap_info_struct *si = p;
int nid;
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8375eec..3e48220 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2927,9 +2927,11 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
* allocation request, free them via __vfree() if any.
*/
if (area->nr_pages != nr_small_pages) {
- warn_alloc(gfp_mask, NULL,
- "vmalloc error: size %lu, page order %u, failed to allocate pages",
- area->nr_pages * PAGE_SIZE, page_order);
+ /* vm_area_alloc_pages() can also fail due to a fatal signal */
+ if (!fatal_signal_pending(current))
+ warn_alloc(gfp_mask, NULL,
+ "vmalloc error: size %lu, page order %u, failed to allocate pages",
+ area->nr_pages * PAGE_SIZE, page_order);
goto fail;
}
diff --git a/net/9p/client.c b/net/9p/client.c
index 08e0c99..c4c1e44 100644
--- a/net/9p/client.c
+++ b/net/9p/client.c
@@ -1315,7 +1315,7 @@ int p9_client_create_dotl(struct p9_fid *ofid, const char *name, u32 flags,
qid->type, qid->path, qid->version, iounit);
memmove(&ofid->qid, qid, sizeof(struct p9_qid));
- ofid->mode = mode;
+ ofid->mode = flags;
ofid->iounit = iounit;
free_and_error:
diff --git a/net/9p/trans_rdma.c b/net/9p/trans_rdma.c
index f6d1458..e5bfe8d 100644
--- a/net/9p/trans_rdma.c
+++ b/net/9p/trans_rdma.c
@@ -388,6 +388,7 @@ post_recv(struct p9_client *client, struct p9_rdma_context *c)
struct p9_trans_rdma *rdma = client->trans;
struct ib_recv_wr wr;
struct ib_sge sge;
+ int ret;
c->busa = ib_dma_map_single(rdma->cm_id->device,
c->rc.sdata, client->msize,
@@ -405,7 +406,12 @@ post_recv(struct p9_client *client, struct p9_rdma_context *c)
wr.wr_cqe = &c->cqe;
wr.sg_list = &sge;
wr.num_sge = 1;
- return ib_post_recv(rdma->qp, &wr, NULL);
+
+ ret = ib_post_recv(rdma->qp, &wr, NULL);
+ if (ret)
+ ib_dma_unmap_single(rdma->cm_id->device, c->busa,
+ client->msize, DMA_FROM_DEVICE);
+ return ret;
error:
p9_debug(P9_DEBUG_ERROR, "EIO\n");
@@ -502,7 +508,7 @@ static int rdma_request(struct p9_client *client, struct p9_req_t *req)
if (down_interruptible(&rdma->sq_sem)) {
err = -EINTR;
- goto send_error;
+ goto dma_unmap;
}
/* Mark request as `sent' *before* we actually send it,
@@ -512,11 +518,14 @@ static int rdma_request(struct p9_client *client, struct p9_req_t *req)
req->status = REQ_STATUS_SENT;
err = ib_post_send(rdma->qp, &wr, NULL);
if (err)
- goto send_error;
+ goto dma_unmap;
/* Success */
return 0;
+dma_unmap:
+ ib_dma_unmap_single(rdma->cm_id->device, c->busa,
+ c->req->tc.size, DMA_TO_DEVICE);
/* Handle errors that happened during or while preparing the send: */
send_error:
req->status = REQ_STATUS_ERROR;
diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index 4255f2a..9e4da8c 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -393,19 +393,24 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
return ret;
}
-static int xen_9pfs_front_probe(struct xenbus_device *dev,
- const struct xenbus_device_id *id)
+static int xen_9pfs_front_init(struct xenbus_device *dev)
{
int ret, i;
struct xenbus_transaction xbt;
- struct xen_9pfs_front_priv *priv = NULL;
- char *versions;
+ struct xen_9pfs_front_priv *priv = dev_get_drvdata(&dev->dev);
+ char *versions, *v;
unsigned int max_rings, max_ring_order, len = 0;
versions = xenbus_read(XBT_NIL, dev->otherend, "versions", &len);
if (IS_ERR(versions))
return PTR_ERR(versions);
- if (strcmp(versions, "1")) {
+ for (v = versions; *v; v++) {
+ if (simple_strtoul(v, &v, 10) == 1) {
+ v = NULL;
+ break;
+ }
+ }
+ if (v) {
kfree(versions);
return -EINVAL;
}
@@ -420,11 +425,6 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
if (p9_xen_trans.maxsize > XEN_FLEX_RING_SIZE(max_ring_order))
p9_xen_trans.maxsize = XEN_FLEX_RING_SIZE(max_ring_order) / 2;
- priv = kzalloc(sizeof(*priv), GFP_KERNEL);
- if (!priv)
- return -ENOMEM;
-
- priv->dev = dev;
priv->num_rings = XEN_9PFS_NUM_RINGS;
priv->rings = kcalloc(priv->num_rings, sizeof(*priv->rings),
GFP_KERNEL);
@@ -483,23 +483,35 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
goto error;
}
- write_lock(&xen_9pfs_lock);
- list_add_tail(&priv->list, &xen_9pfs_devs);
- write_unlock(&xen_9pfs_lock);
- dev_set_drvdata(&dev->dev, priv);
- xenbus_switch_state(dev, XenbusStateInitialised);
-
return 0;
error_xenbus:
xenbus_transaction_end(xbt, 1);
xenbus_dev_fatal(dev, ret, "writing xenstore");
error:
- dev_set_drvdata(&dev->dev, NULL);
xen_9pfs_front_free(priv);
return ret;
}
+static int xen_9pfs_front_probe(struct xenbus_device *dev,
+ const struct xenbus_device_id *id)
+{
+ struct xen_9pfs_front_priv *priv = NULL;
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ priv->dev = dev;
+ dev_set_drvdata(&dev->dev, priv);
+
+ write_lock(&xen_9pfs_lock);
+ list_add_tail(&priv->list, &xen_9pfs_devs);
+ write_unlock(&xen_9pfs_lock);
+
+ return 0;
+}
+
static int xen_9pfs_front_resume(struct xenbus_device *dev)
{
dev_warn(&dev->dev, "suspend/resume unsupported\n");
@@ -518,6 +530,8 @@ static void xen_9pfs_front_changed(struct xenbus_device *dev,
break;
case XenbusStateInitWait:
+ if (!xen_9pfs_front_init(dev))
+ xenbus_switch_state(dev, XenbusStateInitialised);
break;
case XenbusStateConnected:
diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
index a41b4dc..cabe8eb 100644
--- a/net/bluetooth/hci_core.c
+++ b/net/bluetooth/hci_core.c
@@ -1632,6 +1632,7 @@ static int hci_dev_do_open(struct hci_dev *hdev)
hdev->flush(hdev);
if (hdev->sent_cmd) {
+ cancel_delayed_work_sync(&hdev->cmd_timer);
kfree_skb(hdev->sent_cmd);
hdev->sent_cmd = NULL;
}
diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
index 2337e92..9f82fe0 100644
--- a/net/bluetooth/hci_event.c
+++ b/net/bluetooth/hci_event.c
@@ -4415,6 +4415,19 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev,
struct hci_ev_sync_conn_complete *ev = (void *) skb->data;
struct hci_conn *conn;
+ switch (ev->link_type) {
+ case SCO_LINK:
+ case ESCO_LINK:
+ break;
+ default:
+ /* As per Core 5.3 Vol 4 Part E 7.7.35 (p.2219), Link_Type
+ * for HCI_Synchronous_Connection_Complete is limited to
+ * either SCO or eSCO
+ */
+ bt_dev_err(hdev, "Ignoring connect complete event for invalid link type");
+ return;
+ }
+
BT_DBG("%s status 0x%2.2x", hdev->name, ev->status);
hci_dev_lock(hdev);
diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
index f1128c2..3f92a21 100644
--- a/net/bluetooth/hci_sock.c
+++ b/net/bluetooth/hci_sock.c
@@ -888,10 +888,6 @@ static int hci_sock_release(struct socket *sock)
}
sock_orphan(sk);
-
- skb_queue_purge(&sk->sk_receive_queue);
- skb_queue_purge(&sk->sk_write_queue);
-
release_sock(sk);
sock_put(sk);
return 0;
@@ -2012,6 +2008,12 @@ static int hci_sock_getsockopt(struct socket *sock, int level, int optname,
return err;
}
+static void hci_sock_destruct(struct sock *sk)
+{
+ skb_queue_purge(&sk->sk_receive_queue);
+ skb_queue_purge(&sk->sk_write_queue);
+}
+
static const struct proto_ops hci_sock_ops = {
.family = PF_BLUETOOTH,
.owner = THIS_MODULE,
@@ -2065,6 +2067,7 @@ static int hci_sock_create(struct net *net, struct socket *sock, int protocol,
sock->state = SS_UNCONNECTED;
sk->sk_state = BT_OPEN;
+ sk->sk_destruct = hci_sock_destruct;
bt_sock_link(&hci_sk_list, sk);
return 0;
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index e15fcf7..0194c25 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -708,6 +708,17 @@ void l2cap_chan_del(struct l2cap_chan *chan, int err)
}
EXPORT_SYMBOL_GPL(l2cap_chan_del);
+static void __l2cap_chan_list_id(struct l2cap_conn *conn, u16 id,
+ l2cap_chan_func_t func, void *data)
+{
+ struct l2cap_chan *chan, *l;
+
+ list_for_each_entry_safe(chan, l, &conn->chan_l, list) {
+ if (chan->ident == id)
+ func(chan, data);
+ }
+}
+
static void __l2cap_chan_list(struct l2cap_conn *conn, l2cap_chan_func_t func,
void *data)
{
@@ -775,23 +786,9 @@ static void l2cap_chan_le_connect_reject(struct l2cap_chan *chan)
static void l2cap_chan_ecred_connect_reject(struct l2cap_chan *chan)
{
- struct l2cap_conn *conn = chan->conn;
- struct l2cap_ecred_conn_rsp rsp;
- u16 result;
-
- if (test_bit(FLAG_DEFER_SETUP, &chan->flags))
- result = L2CAP_CR_LE_AUTHORIZATION;
- else
- result = L2CAP_CR_LE_BAD_PSM;
-
l2cap_state_change(chan, BT_DISCONN);
- memset(&rsp, 0, sizeof(rsp));
-
- rsp.result = cpu_to_le16(result);
-
- l2cap_send_cmd(conn, chan->ident, L2CAP_LE_CONN_RSP, sizeof(rsp),
- &rsp);
+ __l2cap_ecred_conn_rsp_defer(chan);
}
static void l2cap_chan_connect_reject(struct l2cap_chan *chan)
@@ -846,7 +843,7 @@ void l2cap_chan_close(struct l2cap_chan *chan, int reason)
break;
case L2CAP_MODE_EXT_FLOWCTL:
l2cap_chan_ecred_connect_reject(chan);
- break;
+ return;
}
}
}
@@ -2683,14 +2680,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len)
if (IS_ERR(skb))
return PTR_ERR(skb);
- /* Channel lock is released before requesting new skb and then
- * reacquired thus we need to recheck channel state.
- */
- if (chan->state != BT_CONNECTED) {
- kfree_skb(skb);
- return -ENOTCONN;
- }
-
l2cap_do_send(chan, skb);
return len;
}
@@ -2735,14 +2724,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len)
if (IS_ERR(skb))
return PTR_ERR(skb);
- /* Channel lock is released before requesting new skb and then
- * reacquired thus we need to recheck channel state.
- */
- if (chan->state != BT_CONNECTED) {
- kfree_skb(skb);
- return -ENOTCONN;
- }
-
l2cap_do_send(chan, skb);
err = len;
break;
@@ -2763,14 +2744,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len)
*/
err = l2cap_segment_sdu(chan, &seg_queue, msg, len);
- /* The channel could have been closed while segmenting,
- * check that it is still connected.
- */
- if (chan->state != BT_CONNECTED) {
- __skb_queue_purge(&seg_queue);
- err = -ENOTCONN;
- }
-
if (err)
break;
@@ -3962,43 +3935,86 @@ void __l2cap_le_connect_rsp_defer(struct l2cap_chan *chan)
&rsp);
}
-void __l2cap_ecred_conn_rsp_defer(struct l2cap_chan *chan)
+static void l2cap_ecred_list_defer(struct l2cap_chan *chan, void *data)
{
- struct {
- struct l2cap_ecred_conn_rsp rsp;
- __le16 dcid[5];
- } __packed pdu;
- struct l2cap_conn *conn = chan->conn;
- u16 ident = chan->ident;
- int i = 0;
+ int *result = data;
- if (!ident)
+ if (*result || test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags))
return;
- BT_DBG("chan %p ident %d", chan, ident);
-
- pdu.rsp.mtu = cpu_to_le16(chan->imtu);
- pdu.rsp.mps = cpu_to_le16(chan->mps);
- pdu.rsp.credits = cpu_to_le16(chan->rx_credits);
- pdu.rsp.result = cpu_to_le16(L2CAP_CR_LE_SUCCESS);
-
- mutex_lock(&conn->chan_lock);
-
- list_for_each_entry(chan, &conn->chan_l, list) {
- if (chan->ident != ident)
- continue;
-
- /* Reset ident so only one response is sent */
- chan->ident = 0;
-
- /* Include all channels pending with the same ident */
- pdu.dcid[i++] = cpu_to_le16(chan->scid);
+ switch (chan->state) {
+ case BT_CONNECT2:
+ /* If channel still pending accept add to result */
+ (*result)++;
+ return;
+ case BT_CONNECTED:
+ return;
+ default:
+ /* If not connected or pending accept it has been refused */
+ *result = -ECONNREFUSED;
+ return;
}
+}
- mutex_unlock(&conn->chan_lock);
+struct l2cap_ecred_rsp_data {
+ struct {
+ struct l2cap_ecred_conn_rsp rsp;
+ __le16 scid[L2CAP_ECRED_MAX_CID];
+ } __packed pdu;
+ int count;
+};
- l2cap_send_cmd(conn, ident, L2CAP_ECRED_CONN_RSP,
- sizeof(pdu.rsp) + i * sizeof(__le16), &pdu);
+static void l2cap_ecred_rsp_defer(struct l2cap_chan *chan, void *data)
+{
+ struct l2cap_ecred_rsp_data *rsp = data;
+
+ if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags))
+ return;
+
+ /* Reset ident so only one response is sent */
+ chan->ident = 0;
+
+ /* Include all channels pending with the same ident */
+ if (!rsp->pdu.rsp.result)
+ rsp->pdu.rsp.dcid[rsp->count++] = cpu_to_le16(chan->scid);
+ else
+ l2cap_chan_del(chan, ECONNRESET);
+}
+
+void __l2cap_ecred_conn_rsp_defer(struct l2cap_chan *chan)
+{
+ struct l2cap_conn *conn = chan->conn;
+ struct l2cap_ecred_rsp_data data;
+ u16 id = chan->ident;
+ int result = 0;
+
+ if (!id)
+ return;
+
+ BT_DBG("chan %p id %d", chan, id);
+
+ memset(&data, 0, sizeof(data));
+
+ data.pdu.rsp.mtu = cpu_to_le16(chan->imtu);
+ data.pdu.rsp.mps = cpu_to_le16(chan->mps);
+ data.pdu.rsp.credits = cpu_to_le16(chan->rx_credits);
+ data.pdu.rsp.result = cpu_to_le16(L2CAP_CR_LE_SUCCESS);
+
+ /* Verify that all channels are ready */
+ __l2cap_chan_list_id(conn, id, l2cap_ecred_list_defer, &result);
+
+ if (result > 0)
+ return;
+
+ if (result < 0)
+ data.pdu.rsp.result = cpu_to_le16(L2CAP_CR_LE_AUTHORIZATION);
+
+ /* Build response */
+ __l2cap_chan_list_id(conn, id, l2cap_ecred_rsp_defer, &data);
+
+ l2cap_send_cmd(conn, id, L2CAP_ECRED_CONN_RSP,
+ sizeof(data.pdu.rsp) + (data.count * sizeof(__le16)),
+ &data.pdu);
}
void __l2cap_connect_rsp_defer(struct l2cap_chan *chan)
@@ -6102,6 +6118,7 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
__set_chan_timer(chan, chan->ops->get_sndtimeo(chan));
chan->ident = cmd->ident;
+ chan->mode = L2CAP_MODE_EXT_FLOWCTL;
if (test_bit(FLAG_DEFER_SETUP, &chan->flags)) {
l2cap_state_change(chan, BT_CONNECT2);
diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
index d2c6785..a267c9b 100644
--- a/net/bluetooth/l2cap_sock.c
+++ b/net/bluetooth/l2cap_sock.c
@@ -1623,6 +1623,14 @@ static struct sk_buff *l2cap_sock_alloc_skb_cb(struct l2cap_chan *chan,
if (!skb)
return ERR_PTR(err);
+ /* Channel lock is released before requesting new skb and then
+ * reacquired thus we need to recheck channel state.
+ */
+ if (chan->state != BT_CONNECTED) {
+ kfree_skb(skb);
+ return ERR_PTR(-ENOTCONN);
+ }
+
skb->priority = sk->sk_priority;
bt_cb(skb)->l2cap.chan = chan;
diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
index 21e24da..4397e14 100644
--- a/net/bluetooth/rfcomm/sock.c
+++ b/net/bluetooth/rfcomm/sock.c
@@ -391,6 +391,7 @@ static int rfcomm_sock_connect(struct socket *sock, struct sockaddr *addr, int a
addr->sa_family != AF_BLUETOOTH)
return -EINVAL;
+ sock_hold(sk);
lock_sock(sk);
if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) {
@@ -410,14 +411,18 @@ static int rfcomm_sock_connect(struct socket *sock, struct sockaddr *addr, int a
d->sec_level = rfcomm_pi(sk)->sec_level;
d->role_switch = rfcomm_pi(sk)->role_switch;
+ /* Drop sock lock to avoid potential deadlock with the RFCOMM lock */
+ release_sock(sk);
err = rfcomm_dlc_open(d, &rfcomm_pi(sk)->src, &sa->rc_bdaddr,
sa->rc_channel);
- if (!err)
+ lock_sock(sk);
+ if (!err && !sock_flag(sk, SOCK_ZAPPED))
err = bt_sock_wait_state(sk, BT_CONNECTED,
sock_sndtimeo(sk, flags & O_NONBLOCK));
done:
release_sock(sk);
+ sock_put(sk);
return err;
}
diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
index a718204..f3c7cfb 100644
--- a/net/bridge/br_netfilter_hooks.c
+++ b/net/bridge/br_netfilter_hooks.c
@@ -871,6 +871,7 @@ static unsigned int ip_sabotage_in(void *priv,
if (nf_bridge && !nf_bridge->in_prerouting &&
!netif_is_l3_master(skb->dev) &&
!netif_is_l3_slave(skb->dev)) {
+ nf_bridge_info_free(skb);
state->okfn(state->net, state->sk, skb);
return NF_STOLEN;
}
diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
index 1677455..a09b2fc 100644
--- a/net/bridge/netfilter/ebtables.c
+++ b/net/bridge/netfilter/ebtables.c
@@ -1090,7 +1090,7 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
audit_log_nfcfg(repl->name, AF_BRIDGE, repl->nentries,
AUDIT_XT_OP_REPLACE, GFP_KERNEL);
- return ret;
+ return 0;
free_unlock:
mutex_unlock(&ebt_mutex);
diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c
index e12fd3c..997c4eb 100644
--- a/net/caif/caif_socket.c
+++ b/net/caif/caif_socket.c
@@ -1020,6 +1020,7 @@ static void caif_sock_destructor(struct sock *sk)
return;
}
sk_stream_kill_queues(&cf_sk->sk);
+ WARN_ON(sk->sk_forward_alloc);
caif_free_client(&cf_sk->layer);
}
diff --git a/net/caif/caif_usb.c b/net/caif/caif_usb.c
index b02e129..24488a4 100644
--- a/net/caif/caif_usb.c
+++ b/net/caif/caif_usb.c
@@ -134,6 +134,9 @@ static int cfusbl_device_notify(struct notifier_block *me, unsigned long what,
struct usb_device *usbdev;
int res;
+ if (what == NETDEV_UNREGISTER && dev->reg_state >= NETREG_UNREGISTERED)
+ return 0;
+
/* Check whether we have a NCM device, and find its VID/PID. */
if (!(dev->dev.parent && dev->dev.parent->driver &&
strcmp(dev->dev.parent->driver->name, "cdc_ncm") == 0))
diff --git a/net/can/bcm.c b/net/can/bcm.c
index aab3a18..5727a07 100644
--- a/net/can/bcm.c
+++ b/net/can/bcm.c
@@ -936,6 +936,8 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
cf = op->frames + op->cfsiz * i;
err = memcpy_from_msg((u8 *)cf, msg, op->cfsiz);
+ if (err < 0)
+ goto free_op;
if (op->flags & CAN_FD_FRAME) {
if (cf->len > 64)
@@ -945,12 +947,8 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
err = -EINVAL;
}
- if (err < 0) {
- if (op->frames != &op->sframe)
- kfree(op->frames);
- kfree(op);
- return err;
- }
+ if (err < 0)
+ goto free_op;
if (msg_head->flags & TX_CP_CAN_ID) {
/* copy can_id into frame */
@@ -1021,6 +1019,12 @@ static int bcm_tx_setup(struct bcm_msg_head *msg_head, struct msghdr *msg,
bcm_tx_start_timer(op);
return msg_head->nframes * op->cfsiz + MHSIZ;
+
+free_op:
+ if (op->frames != &op->sframe)
+ kfree(op->frames);
+ kfree(op);
+ return err;
}
/*
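The bcm_tx_setup() hunks funnel the duplicated "free op->frames unless it is the embedded sframe, then free op" error handling behind a single free_op label, and route the memcpy_from_msg() failure through the same path. A generic user-space sketch of that goto-based unwinding idiom, with hypothetical resources:

/* Minimal sketch of the single-exit goto cleanup idiom. setup() and its
 * resources are invented for illustration. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int setup(int fail_step)
{
        char *a = NULL, *b = NULL;
        int err = 0;

        a = malloc(16);
        if (!a) {
                err = -ENOMEM;
                goto out;
        }
        if (fail_step == 1) {
                err = -EINVAL;
                goto free_a;            /* every later failure funnels here */
        }
        b = malloc(16);
        if (!b) {
                err = -ENOMEM;
                goto free_a;
        }
        free(b);
free_a:
        free(a);
out:
        return err;
}

int main(void)
{
        printf("ok=%d fail=%d\n", setup(0), setup(1));
        return 0;
}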
diff --git a/net/can/isotp.c b/net/can/isotp.c
index 2682148..2a891f4 100644
--- a/net/can/isotp.c
+++ b/net/can/isotp.c
@@ -1482,6 +1482,21 @@ static int isotp_init(struct sock *sk)
return 0;
}
+static __poll_t isotp_poll(struct file *file, struct socket *sock, poll_table *wait)
+{
+ struct sock *sk = sock->sk;
+ struct isotp_sock *so = isotp_sk(sk);
+
+ __poll_t mask = datagram_poll(file, sock, wait);
+ poll_wait(file, &so->wait, wait);
+
+ /* Check for false positives due to TX state */
+ if ((mask & EPOLLWRNORM) && (so->tx.state != ISOTP_IDLE))
+ mask &= ~(EPOLLOUT | EPOLLWRNORM);
+
+ return mask;
+}
+
static int isotp_sock_no_ioctlcmd(struct socket *sock, unsigned int cmd,
unsigned long arg)
{
@@ -1497,7 +1512,7 @@ static const struct proto_ops isotp_ops = {
.socketpair = sock_no_socketpair,
.accept = sock_no_accept,
.getname = isotp_getname,
- .poll = datagram_poll,
+ .poll = isotp_poll,
.ioctl = isotp_sock_no_ioctlcmd,
.gettstamp = sock_gettstamp,
.listen = sock_no_listen,
diff --git a/net/can/j1939/address-claim.c b/net/can/j1939/address-claim.c
index f33c473..ca4ad6c 100644
--- a/net/can/j1939/address-claim.c
+++ b/net/can/j1939/address-claim.c
@@ -165,6 +165,46 @@ static void j1939_ac_process(struct j1939_priv *priv, struct sk_buff *skb)
* leaving this function.
*/
ecu = j1939_ecu_get_by_name_locked(priv, name);
+
+ if (ecu && ecu->addr == skcb->addr.sa) {
+ /* The ISO 11783-5 standard, in "4.5.2 - Address claim
+ * requirements", states:
+ * d) No CF shall begin, or resume, transmission on the
+ * network until 250 ms after it has successfully claimed
+ * an address except when responding to a request for
+ * address-claimed.
+ *
+ * But "Figure 6" and "Figure 7" in "4.5.4.2 - Address-claim
+ * prioritization" show that the CF begins the transmission
+ * after 250 ms from the first AC (address-claimed) message
+ * even if it sends another AC message during that time window
+ * to resolve the address contention with another CF.
+ *
+ * As stated in "4.4.2.3 - Address-claimed message":
+ * In order to successfully claim an address, the CF sending
+ * an address claimed message shall not receive a contending
+ * claim from another CF for at least 250 ms.
+ *
+ * As stated in "4.4.3.2 - NAME management (NM) message":
+ * 1) A commanding CF can
+ * d) request that a CF with a specified NAME transmit
+ * the address-claimed message with its current NAME.
+ * 2) A target CF shall
+ * d) send an address-claimed message in response to a
+ * request for a matching NAME
+ *
+ * Taking the above arguments into account, the 250 ms wait is
+ * requested only during network initialization.
+ *
+ * Do not restart the timer on AC message if both the NAME and
+ * the address match and so if the address has already been
+ * claimed (timer has expired) or the AC message has been sent
+ * to resolve the contention with another CF (timer is still
+ * running).
+ */
+ goto out_ecu_put;
+ }
+
if (!ecu && j1939_address_is_unicast(skcb->addr.sa))
ecu = j1939_ecu_create_locked(priv, name);
diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
index 55f29c9..bd8ec24 100644
--- a/net/can/j1939/transport.c
+++ b/net/can/j1939/transport.c
@@ -604,7 +604,10 @@ sk_buff *j1939_tp_tx_dat_new(struct j1939_priv *priv,
/* reserve CAN header */
skb_reserve(skb, offsetof(struct can_frame, data));
- memcpy(skb->cb, re_skcb, sizeof(skb->cb));
+ /* skb->cb must be large enough to hold a j1939_sk_buff_cb structure */
+ BUILD_BUG_ON(sizeof(skb->cb) < sizeof(*re_skcb));
+
+ memcpy(skb->cb, re_skcb, sizeof(*re_skcb));
skcb = j1939_skb_to_cb(skb);
if (swap_src_dst)
j1939_skbcb_swap(skcb);
@@ -1092,10 +1095,6 @@ static bool j1939_session_deactivate(struct j1939_session *session)
bool active;
j1939_session_list_lock(priv);
- /* This function should be called with a session ref-count of at
- * least 2.
- */
- WARN_ON_ONCE(kref_read(&session->kref) < 2);
active = j1939_session_deactivate_locked(session);
j1939_session_list_unlock(priv);
@@ -1128,8 +1127,6 @@ static void __j1939_session_cancel(struct j1939_session *session,
if (session->sk)
j1939_sk_send_loop_abort(session->sk, session->err);
- else
- j1939_sk_errqueue(session, J1939_ERRQUEUE_RX_ABORT);
}
static void j1939_session_cancel(struct j1939_session *session,
@@ -1144,6 +1141,9 @@ static void j1939_session_cancel(struct j1939_session *session,
}
j1939_session_list_unlock(session->priv);
+
+ if (!session->sk)
+ j1939_sk_errqueue(session, J1939_ERRQUEUE_RX_ABORT);
}
static enum hrtimer_restart j1939_tp_txtimer(struct hrtimer *hrtimer)
@@ -1257,6 +1257,9 @@ static enum hrtimer_restart j1939_tp_rxtimer(struct hrtimer *hrtimer)
__j1939_session_cancel(session, J1939_XTP_ABORT_TIMEOUT);
}
j1939_session_list_unlock(session->priv);
+
+ if (!session->sk)
+ j1939_sk_errqueue(session, J1939_ERRQUEUE_RX_ABORT);
}
j1939_session_put(session);
diff --git a/net/core/dev.c b/net/core/dev.c
index 33d6b69..7fc8ae7 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3113,8 +3113,10 @@ void __dev_kfree_skb_any(struct sk_buff *skb, enum skb_free_reason reason)
{
if (in_hardirq() || irqs_disabled())
__dev_kfree_skb_irq(skb, reason);
+ else if (unlikely(reason == SKB_REASON_DROPPED))
+ kfree_skb(skb);
else
- dev_kfree_skb(skb);
+ consume_skb(skb);
}
EXPORT_SYMBOL(__dev_kfree_skb_any);
@@ -10646,7 +10648,7 @@ void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
BUILD_BUG_ON(n > sizeof(*stats64) / sizeof(u64));
for (i = 0; i < n; i++)
- dst[i] = atomic_long_read(&src[i]);
+ dst[i] = (unsigned long)atomic_long_read(&src[i]);
/* zero out counters that only exist in rtnl_link_stats64 */
memset((char *)stats64 + n * sizeof(u64), 0,
sizeof(*stats64) - n * sizeof(u64));
diff --git a/net/core/filter.c b/net/core/filter.c
index b203114..519315a 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -5506,7 +5506,7 @@ static int bpf_ipv4_fib_lookup(struct net *net, struct bpf_fib_lookup *params,
neigh = __ipv6_neigh_lookup_noref_stub(dev, dst);
}
- if (!neigh)
+ if (!neigh || !(neigh->nud_state & NUD_VALID))
return BPF_FIB_LKUP_RET_NO_NEIGH;
return bpf_fib_set_fwd_params(params, neigh, dev, mtu);
@@ -5621,7 +5621,7 @@ static int bpf_ipv6_fib_lookup(struct net *net, struct bpf_fib_lookup *params,
* not needed here.
*/
neigh = __ipv6_neigh_lookup_noref_stub(dev, dst);
- if (!neigh)
+ if (!neigh || !(neigh->nud_state & NUD_VALID))
return BPF_FIB_LKUP_RET_NO_NEIGH;
return bpf_fib_set_fwd_params(params, neigh, dev, mtu);
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 0e22ecb..95f588b 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -241,7 +241,7 @@ static int neigh_forced_gc(struct neigh_table *tbl)
(n->nud_state == NUD_NOARP) ||
(tbl->is_multicast &&
tbl->is_multicast(n->primary_key)) ||
- time_after(tref, n->updated))
+ !time_in_range(n->updated, tref, jiffies))
remove = true;
write_unlock(&n->lock);
@@ -261,7 +261,17 @@ static int neigh_forced_gc(struct neigh_table *tbl)
static void neigh_add_timer(struct neighbour *n, unsigned long when)
{
+ /* Use safe distance from the jiffies - LONG_MAX point while timer
+ * is running in DELAY/PROBE state but still show to user space
+ * large times in the past.
+ */
+ unsigned long mint = jiffies - (LONG_MAX - 86400 * HZ);
+
neigh_hold(n);
+ if (!time_in_range(n->confirmed, mint, jiffies))
+ n->confirmed = mint;
+ if (time_before(n->used, n->confirmed))
+ n->used = n->confirmed;
if (unlikely(mod_timer(&n->timer, when))) {
printk("NEIGH: BUG, double timer add, state is %x\n",
n->nud_state);
@@ -943,12 +953,14 @@ static void neigh_periodic_work(struct work_struct *work)
goto next_elt;
}
- if (time_before(n->used, n->confirmed))
+ if (time_before(n->used, n->confirmed) &&
+ time_is_before_eq_jiffies(n->confirmed))
n->used = n->confirmed;
if (refcount_read(&n->refcnt) == 1 &&
(state == NUD_FAILED ||
- time_after(jiffies, n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) {
+ !time_in_range_open(jiffies, n->used,
+ n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) {
*np = n->next;
neigh_mark_dead(n);
write_unlock(&n->lock);
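The neighbour hunks replace bare time_after()/time_before() tests with windowed time_in_range() style checks, so confirmed/used stamps that ended up far in the past (or ahead of jiffies) cannot wedge the timer or garbage collection once the wrapping comparison flips sign. The user-space sketch below reproduces the wrapping arithmetic with a 32-bit counter; the window size is an assumption for illustration only.

/* Sketch of wrapping-counter comparisons with 32-bit "jiffies". A stamp more
 * than 2^31 ticks in the past looks like the future to time_after(), while a
 * windowed check against [oldest_valid, now] rejects it. */
#include <stdint.h>
#include <stdio.h>

static int time_after(uint32_t a, uint32_t b)           /* is a after b? */
{
        return (int32_t)(b - a) < 0;
}

static int time_in_range(uint32_t a, uint32_t b, uint32_t c)
{
        return !time_after(b, a) && !time_after(a, c);   /* b <= a <= c */
}

int main(void)
{
        uint32_t now = 1000;
        uint32_t stale = now - 0x90000000u;     /* ~2.4e9 ticks "ago": wrapped */
        uint32_t oldest_valid = now - 3600;     /* assumed validity window */

        printf("time_after(now, stale)    = %d\n", time_after(now, stale));
        printf("in_range(stale, old, now) = %d\n",
               time_in_range(stale, oldest_valid, now));
        return 0;
}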
diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
index 982d063..dcddc54 100644
--- a/net/core/net_namespace.c
+++ b/net/core/net_namespace.c
@@ -137,12 +137,12 @@ static int ops_init(const struct pernet_operations *ops, struct net *net)
return 0;
if (ops->id && ops->size) {
-cleanup:
ng = rcu_dereference_protected(net->gen,
lockdep_is_held(&pernet_ops_rwsem));
ng->ptr[*ops->id] = NULL;
}
+cleanup:
kfree(data);
out:
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index edfc0f8..bd75086 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -137,6 +137,20 @@ static void queue_process(struct work_struct *work)
}
}
+static int netif_local_xmit_active(struct net_device *dev)
+{
+ int i;
+
+ for (i = 0; i < dev->num_tx_queues; i++) {
+ struct netdev_queue *txq = netdev_get_tx_queue(dev, i);
+
+ if (READ_ONCE(txq->xmit_lock_owner) == smp_processor_id())
+ return 1;
+ }
+
+ return 0;
+}
+
static void poll_one_napi(struct napi_struct *napi)
{
int work;
@@ -183,7 +197,10 @@ void netpoll_poll_dev(struct net_device *dev)
if (!ni || down_trylock(&ni->dev_lock))
return;
- if (!netif_running(dev)) {
+ /* Some drivers will take the same locks in poll and xmit,
+ * we can't poll if local CPU is already in xmit.
+ */
+ if (!netif_running(dev) || netif_local_xmit_active(dev)) {
up(&ni->dev_lock);
return;
}
diff --git a/net/core/scm.c b/net/core/scm.c
index 5c356f0..acb7d77 100644
--- a/net/core/scm.c
+++ b/net/core/scm.c
@@ -229,6 +229,8 @@ int put_cmsg(struct msghdr * msg, int level, int type, int len, void *data)
if (msg->msg_control_is_user) {
struct cmsghdr __user *cm = msg->msg_control_user;
+ check_object_size(data, cmlen - sizeof(*cm), true);
+
if (!user_write_access_begin(cm, cmlen))
goto efault;
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 058ec2f..2d3f82b 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3884,7 +3884,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
skb_shinfo(skb)->frag_list = NULL;
- do {
+ while (list_skb) {
nskb = list_skb;
list_skb = list_skb->next;
@@ -3930,8 +3930,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
if (skb_needs_linearize(nskb, features) &&
__skb_linearize(nskb))
goto err_linearize;
-
- } while (list_skb);
+ }
skb->truesize = skb->truesize - delta_truesize;
skb->data_len = skb->data_len - delta_len;
diff --git a/net/core/sock.c b/net/core/sock.c
index b7ac53e..ff7e8fc 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3118,7 +3118,7 @@ void sk_stop_timer_sync(struct sock *sk, struct timer_list *timer)
}
EXPORT_SYMBOL(sk_stop_timer_sync);
-void sock_init_data(struct socket *sock, struct sock *sk)
+void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid)
{
sk_init_common(sk);
sk->sk_send_head = NULL;
@@ -3137,11 +3137,10 @@ void sock_init_data(struct socket *sock, struct sock *sk)
sk->sk_type = sock->type;
RCU_INIT_POINTER(sk->sk_wq, &sock->wq);
sock->sk = sk;
- sk->sk_uid = SOCK_INODE(sock)->i_uid;
} else {
RCU_INIT_POINTER(sk->sk_wq, NULL);
- sk->sk_uid = make_kuid(sock_net(sk)->user_ns, 0);
}
+ sk->sk_uid = uid;
rwlock_init(&sk->sk_callback_lock);
if (sk->sk_kern_sock)
@@ -3199,6 +3198,16 @@ void sock_init_data(struct socket *sock, struct sock *sk)
refcount_set(&sk->sk_refcnt, 1);
atomic_set(&sk->sk_drops, 0);
}
+EXPORT_SYMBOL(sock_init_data_uid);
+
+void sock_init_data(struct socket *sock, struct sock *sk)
+{
+ kuid_t uid = sock ?
+ SOCK_INODE(sock)->i_uid :
+ make_kuid(sock_net(sk)->user_ns, 0);
+
+ sock_init_data_uid(sock, sk, uid);
+}
EXPORT_SYMBOL(sock_init_data);
void lock_sock_nested(struct sock *sk, int subclass)
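sock_init_data() keeps its signature but becomes a thin wrapper around the new sock_init_data_uid(), which lets callers that have no struct socket inode pass an explicit owner uid. A sketch of the "extend the function, keep the old name as a wrapper" pattern, using invented widget names rather than the socket API:

/* Sketch of extending an init function with an extra parameter while keeping
 * the old entry point as a compatibility wrapper. widget_init*() are
 * hypothetical names. */
#include <stdio.h>

struct widget {
        int owner;
        int state;
};

static void widget_init_owner(struct widget *w, int owner)
{
        w->state = 0;
        w->owner = owner;               /* new: caller chooses the owner */
}

static void widget_init(struct widget *w)
{
        widget_init_owner(w, 0);        /* old behaviour: default owner */
}

int main(void)
{
        struct widget a, b;

        widget_init(&a);
        widget_init_owner(&b, 1000);
        printf("a.owner=%d b.owner=%d\n", a.owner, b.owner);
        return 0;
}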
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index ae6013a..86b4e89 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -1514,15 +1514,16 @@ void sock_map_unhash(struct sock *sk)
psock = sk_psock(sk);
if (unlikely(!psock)) {
rcu_read_unlock();
- if (sk->sk_prot->unhash)
- sk->sk_prot->unhash(sk);
- return;
+ saved_unhash = READ_ONCE(sk->sk_prot)->unhash;
+ } else {
+ saved_unhash = psock->saved_unhash;
+ sock_map_remove_links(sk, psock);
+ rcu_read_unlock();
}
-
- saved_unhash = psock->saved_unhash;
- sock_map_remove_links(sk, psock);
- rcu_read_unlock();
- saved_unhash(sk);
+ if (WARN_ON_ONCE(saved_unhash == sock_map_unhash))
+ return;
+ if (saved_unhash)
+ saved_unhash(sk);
}
EXPORT_SYMBOL_GPL(sock_map_unhash);
@@ -1535,17 +1536,18 @@ void sock_map_destroy(struct sock *sk)
psock = sk_psock_get(sk);
if (unlikely(!psock)) {
rcu_read_unlock();
- if (sk->sk_prot->destroy)
- sk->sk_prot->destroy(sk);
- return;
+ saved_destroy = READ_ONCE(sk->sk_prot)->destroy;
+ } else {
+ saved_destroy = psock->saved_destroy;
+ sock_map_remove_links(sk, psock);
+ rcu_read_unlock();
+ sk_psock_stop(psock);
+ sk_psock_put(sk, psock);
}
-
- saved_destroy = psock->saved_destroy;
- sock_map_remove_links(sk, psock);
- rcu_read_unlock();
- sk_psock_stop(psock);
- sk_psock_put(sk, psock);
- saved_destroy(sk);
+ if (WARN_ON_ONCE(saved_destroy == sock_map_destroy))
+ return;
+ if (saved_destroy)
+ saved_destroy(sk);
}
EXPORT_SYMBOL_GPL(sock_map_destroy);
@@ -1560,16 +1562,21 @@ void sock_map_close(struct sock *sk, long timeout)
if (unlikely(!psock)) {
rcu_read_unlock();
release_sock(sk);
- return sk->sk_prot->close(sk, timeout);
+ saved_close = READ_ONCE(sk->sk_prot)->close;
+ } else {
+ saved_close = psock->saved_close;
+ sock_map_remove_links(sk, psock);
+ rcu_read_unlock();
+ sk_psock_stop(psock);
+ release_sock(sk);
+ cancel_work_sync(&psock->work);
+ sk_psock_put(sk, psock);
}
-
- saved_close = psock->saved_close;
- sock_map_remove_links(sk, psock);
- rcu_read_unlock();
- sk_psock_stop(psock);
- release_sock(sk);
- cancel_work_sync(&psock->work);
- sk_psock_put(sk, psock);
+ /* Make sure we do not recurse. This is a bug.
+ * Leak the socket instead of crashing on a stack overflow.
+ */
+ if (WARN_ON_ONCE(saved_close == sock_map_close))
+ return;
saved_close(sk, timeout);
}
EXPORT_SYMBOL_GPL(sock_map_close);
diff --git a/net/core/stream.c b/net/core/stream.c
index d7c5413..cd60746 100644
--- a/net/core/stream.c
+++ b/net/core/stream.c
@@ -209,7 +209,6 @@ void sk_stream_kill_queues(struct sock *sk)
sk_mem_reclaim(sk);
WARN_ON(sk->sk_wmem_queued);
- WARN_ON(sk->sk_forward_alloc);
/* It is _impossible_ for the backlog to contain anything
* when we get here. All user references to this socket
diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
index 0716206..a28536a 100644
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -551,11 +551,9 @@ static struct sock *dccp_v6_request_recv_sock(const struct sock *sk,
*own_req = inet_ehash_nolisten(newsk, req_to_sk(req_unhash), NULL);
/* Clone pktoptions received with SYN, if we own the req */
if (*own_req && ireq->pktopts) {
- newnp->pktoptions = skb_clone(ireq->pktopts, GFP_ATOMIC);
+ newnp->pktoptions = skb_clone_and_charge_r(ireq->pktopts, newsk);
consume_skb(ireq->pktopts);
ireq->pktopts = NULL;
- if (newnp->pktoptions)
- skb_set_owner_r(newnp->pktoptions, newsk);
}
return newsk;
@@ -615,7 +613,7 @@ static int dccp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
--ANK (980728)
*/
if (np->rxopt.all)
- opt_skb = skb_clone(skb, GFP_ATOMIC);
+ opt_skb = skb_clone_and_charge_r(skb, sk);
if (sk->sk_state == DCCP_OPEN) { /* Fast path */
if (dccp_rcv_established(sk, skb, dccp_hdr(skb), skb->len))
@@ -679,7 +677,6 @@ static int dccp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb));
if (ipv6_opt_accepted(sk, opt_skb,
&DCCP_SKB_CB(opt_skb)->header.h6)) {
- skb_set_owner_r(opt_skb, sk);
memmove(IP6CB(opt_skb),
&DCCP_SKB_CB(opt_skb)->header.h6,
sizeof(struct inet6_skb_parm));
diff --git a/net/dsa/tag_brcm.c b/net/dsa/tag_brcm.c
index 96dbb8e..ed5f68c 100644
--- a/net/dsa/tag_brcm.c
+++ b/net/dsa/tag_brcm.c
@@ -7,6 +7,7 @@
#include <linux/dsa/brcm.h>
#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
#include <linux/list.h>
#include <linux/slab.h>
@@ -248,6 +249,7 @@ static struct sk_buff *brcm_leg_tag_xmit(struct sk_buff *skb,
static struct sk_buff *brcm_leg_tag_rcv(struct sk_buff *skb,
struct net_device *dev)
{
+ int len = BRCM_LEG_TAG_LEN;
int source_port;
u8 *brcm_tag;
@@ -262,12 +264,16 @@ static struct sk_buff *brcm_leg_tag_rcv(struct sk_buff *skb,
if (!skb->dev)
return NULL;
+ /* VLAN tag is added by BCM63xx internal switch */
+ if (netdev_uses_dsa(skb->dev))
+ len += VLAN_HLEN;
+
/* Remove Broadcom tag and update checksum */
- skb_pull_rcsum(skb, BRCM_LEG_TAG_LEN);
+ skb_pull_rcsum(skb, len);
dsa_default_offload_fwd_mark(skb);
- dsa_strip_etype_header(skb, BRCM_LEG_TAG_LEN);
+ dsa_strip_etype_header(skb, len);
return skb;
}
diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
index 6991d77..939c63d 100644
--- a/net/ethtool/ioctl.c
+++ b/net/ethtool/ioctl.c
@@ -2074,7 +2074,8 @@ static int ethtool_get_phy_stats(struct net_device *dev, void __user *useraddr)
return n_stats;
if (n_stats > S32_MAX / sizeof(u64))
return -ENOMEM;
- WARN_ON_ONCE(!n_stats);
+ if (WARN_ON_ONCE(!n_stats))
+ return -EOPNOTSUPP;
if (copy_from_user(&stats, useraddr, sizeof(stats)))
return -EFAULT;
diff --git a/net/ethtool/linkmodes.c b/net/ethtool/linkmodes.c
index f9eda59..3d05b9bf 100644
--- a/net/ethtool/linkmodes.c
+++ b/net/ethtool/linkmodes.c
@@ -277,11 +277,12 @@ static int ethnl_update_linkmodes(struct genl_info *info, struct nlattr **tb,
"lanes configuration not supported by device");
return -EOPNOTSUPP;
}
- } else if (!lsettings->autoneg) {
- /* If autoneg is off and lanes parameter is not passed from user,
- * set the lanes parameter to 0.
+ } else if (!lsettings->autoneg && ksettings->lanes) {
+ /* If autoneg is off and lanes parameter is not passed from user but
+ * it was defined previously then set the lanes parameter to 0.
*/
ksettings->lanes = 0;
+ *mod = true;
}
ret = ethnl_update_bitset(ksettings->link_modes.advertising,
diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
index 414bf4d..8c9a20e 100644
--- a/net/hsr/hsr_framereg.c
+++ b/net/hsr/hsr_framereg.c
@@ -385,7 +385,7 @@ void hsr_addr_subst_dest(struct hsr_node *node_src, struct sk_buff *skb,
node_dst = find_node_by_addr_A(&port->hsr->node_db,
eth_hdr(skb)->h_dest);
if (!node_dst) {
- if (net_ratelimit())
+ if (port->hsr->prot_version != PRP_V1 && net_ratelimit())
netdev_err(skb->dev, "%s: Unknown node\n", __func__);
return;
}
diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
index 75c88d4..c21d57f 100644
--- a/net/ipv4/fib_frontend.c
+++ b/net/ipv4/fib_frontend.c
@@ -573,6 +573,9 @@ static int rtentry_to_fib_config(struct net *net, int cmd, struct rtentry *rt,
cfg->fc_scope = RT_SCOPE_UNIVERSE;
}
+ if (!cfg->fc_table)
+ cfg->fc_table = RT_TABLE_MAIN;
+
if (cmd == SIOCDELRT)
return 0;
diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
index 250af6e..607a4f8 100644
--- a/net/ipv4/fib_semantics.c
+++ b/net/ipv4/fib_semantics.c
@@ -30,6 +30,7 @@
#include <linux/slab.h>
#include <linux/netlink.h>
#include <linux/hash.h>
+#include <linux/nospec.h>
#include <net/arp.h>
#include <net/ip.h>
@@ -1020,6 +1021,7 @@ bool fib_metrics_match(struct fib_config *cfg, struct fib_info *fi)
if (type > RTAX_MAX)
return false;
+ type = array_index_nospec(type, RTAX_MAX + 1);
if (type == RTAX_CC_ALGO) {
char tmp[TCP_CA_NAME_MAX];
bool ecn_ca = false;
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
index 609c4ff..7b749a9 100644
--- a/net/ipv4/icmp.c
+++ b/net/ipv4/icmp.c
@@ -755,6 +755,11 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
room = 576;
room -= sizeof(struct iphdr) + icmp_param.replyopts.opt.opt.optlen;
room -= sizeof(struct icmphdr);
+ /* Guard against tiny mtu. We need to include at least one
+ * IP network header for this message to make any sense.
+ */
+ if (room <= (int)sizeof(struct iphdr))
+ goto ende;
icmp_param.data_len = skb_in->len - icmp_param.offset;
if (icmp_param.data_len > room)
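The __icmp_send() hunk bails out once the space left after the reply's IP header, IP options and ICMP header is no larger than another IP header, since a quoted packet that cannot even carry its own network header is useless. Quick arithmetic for the classic 576-byte bound, assuming a 20-byte IPv4 header, an 8-byte ICMP header and no options:

/* Back-of-the-envelope check of the ICMP "room" calculation under the
 * assumptions stated above. */
#include <stdio.h>

int main(void)
{
        int room = 576;
        int iphdr = 20, icmphdr = 8, optlen = 0;

        room -= iphdr + optlen;         /* outer IP header of the reply */
        room -= icmphdr;                /* ICMP header */
        printf("payload room = %d bytes\n", room);       /* 548 */
        printf("too small?     %s\n", room <= iphdr ? "yes" : "no");
        return 0;
}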
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index 0d378da..39b3db5b 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -571,8 +571,20 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
spin_lock(lock);
if (osk) {
WARN_ON_ONCE(sk->sk_hash != osk->sk_hash);
- ret = sk_nulls_del_node_init_rcu(osk);
- } else if (found_dup_sk) {
+ ret = sk_hashed(osk);
+ if (ret) {
+ /* Before deleting the node, we insert a new one to make
+ * sure that the look-up-sk process would not miss either
+ * of them and that at least one node would exist in ehash
+ * table all the time. Otherwise there's a tiny chance
+ * that lookup process could find nothing in ehash table.
+ */
+ __sk_nulls_add_node_tail_rcu(sk, list);
+ sk_nulls_del_node_init_rcu(osk);
+ }
+ goto unlock;
+ }
+ if (found_dup_sk) {
*found_dup_sk = inet_ehash_lookup_by_sk(sk, list);
if (*found_dup_sk)
ret = false;
@@ -581,6 +593,7 @@ bool inet_ehash_insert(struct sock *sk, struct sock *osk, bool *found_dup_sk)
if (ret)
__sk_nulls_add_node_rcu(sk, list);
+unlock:
spin_unlock(lock);
return ret;
@@ -747,17 +760,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
u32 index;
if (port) {
- head = &hinfo->bhash[inet_bhashfn(net, port,
- hinfo->bhash_size)];
- tb = inet_csk(sk)->icsk_bind_hash;
- spin_lock_bh(&head->lock);
- if (sk_head(&tb->owners) == sk && !sk->sk_bind_node.next) {
- inet_ehash_nolisten(sk, NULL, NULL);
- spin_unlock_bh(&head->lock);
- return 0;
- }
- spin_unlock(&head->lock);
- /* No definite answer... Walk to established hash table */
+ local_bh_disable();
ret = check_established(death_row, sk, port, NULL);
local_bh_enable();
return ret;
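The inet_ehash_insert() hunk adds the new socket to the chain before removing the old one, so a concurrent lookup always finds at least one of the two; the timewait change just below switches to tail insertion for the same reason. The single-threaded sketch below only demonstrates the ordering property, namely that a traversal between the two steps still finds the key; RCU, the nulls markers and the tail placement are not modelled.

/* Single-threaded illustration of "insert the replacement before unlinking
 * the original" on a plain singly linked list. */
#include <stdio.h>
#include <stdlib.h>

struct node {
        int key;
        struct node *next;
};

static struct node *head;

static void add_tail(struct node *n)
{
        struct node **p = &head;

        while (*p)
                p = &(*p)->next;
        n->next = NULL;
        *p = n;
}

static void del(struct node *n)
{
        struct node **p = &head;

        while (*p && *p != n)
                p = &(*p)->next;
        if (*p)
                *p = n->next;
}

static int find(int key)
{
        for (struct node *n = head; n; n = n->next)
                if (n->key == key)
                        return 1;
        return 0;
}

int main(void)
{
        struct node oldn = { .key = 42 }, newn = { .key = 42 };

        add_tail(&oldn);
        add_tail(&newn);                /* step 1: new entry goes in first */
        printf("mid-swap lookup: %d\n", find(42));
        del(&oldn);                     /* step 2: only now unlink the old one */
        printf("post-swap lookup: %d\n", find(42));
        return 0;
}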
diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 437afe3..fe6340c 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -81,10 +81,10 @@ void inet_twsk_put(struct inet_timewait_sock *tw)
}
EXPORT_SYMBOL_GPL(inet_twsk_put);
-static void inet_twsk_add_node_rcu(struct inet_timewait_sock *tw,
- struct hlist_nulls_head *list)
+static void inet_twsk_add_node_tail_rcu(struct inet_timewait_sock *tw,
+ struct hlist_nulls_head *list)
{
- hlist_nulls_add_head_rcu(&tw->tw_node, list);
+ hlist_nulls_add_tail_rcu(&tw->tw_node, list);
}
static void inet_twsk_add_bind_node(struct inet_timewait_sock *tw,
@@ -120,7 +120,7 @@ void inet_twsk_hashdance(struct inet_timewait_sock *tw, struct sock *sk,
spin_lock(lock);
- inet_twsk_add_node_rcu(tw, &ehead->chain);
+ inet_twsk_add_node_tail_rcu(tw, &ehead->chain);
/* Step 3: Remove SK from hash chain */
if (__sk_nulls_del_node_init_rcu(sk))
diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
index 454c435..c094963 100644
--- a/net/ipv4/ip_gre.c
+++ b/net/ipv4/ip_gre.c
@@ -552,7 +552,7 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
truncate = true;
}
- nhoff = skb_network_header(skb) - skb_mac_header(skb);
+ nhoff = skb_network_offset(skb);
if (skb->protocol == htons(ETH_P_IP) &&
(ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
truncate = true;
@@ -561,7 +561,7 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev)
int thoff;
if (skb_transport_header_was_set(skb))
- thoff = skb_transport_header(skb) - skb_mac_header(skb);
+ thoff = skb_transport_offset(skb);
else
thoff = nhoff + sizeof(struct ipv6hdr);
if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
index fe9101d..426dc91 100644
--- a/net/ipv4/ip_tunnel.c
+++ b/net/ipv4/ip_tunnel.c
@@ -613,10 +613,10 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
}
headroom += LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len;
- if (headroom > dev->needed_headroom)
- dev->needed_headroom = headroom;
+ if (headroom > READ_ONCE(dev->needed_headroom))
+ WRITE_ONCE(dev->needed_headroom, headroom);
- if (skb_cow_head(skb, dev->needed_headroom)) {
+ if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
ip_rt_put(rt);
goto tx_dropped;
}
@@ -797,10 +797,10 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr)
+ rt->dst.header_len + ip_encap_hlen(&tunnel->encap);
- if (max_headroom > dev->needed_headroom)
- dev->needed_headroom = max_headroom;
+ if (max_headroom > READ_ONCE(dev->needed_headroom))
+ WRITE_ONCE(dev->needed_headroom, max_headroom);
- if (skb_cow_head(skb, dev->needed_headroom)) {
+ if (skb_cow_head(skb, READ_ONCE(dev->needed_headroom))) {
ip_rt_put(rt);
dev->stats.tx_dropped++;
kfree_skb(skb);
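Both tunnel transmit paths above now go through READ_ONCE()/WRITE_ONCE() because dev->needed_headroom can be grown concurrently from other transmit contexts without a lock. A simplified userspace illustration of why each decision should be based on one non-torn load and one plain store; the macros below are stand-ins for, not copies of, the kernel definitions.

/* Illustrative only: volatile-cast stand-ins for READ_ONCE()/WRITE_ONCE(). */
#include <stdio.h>

#define READ_ONCE(x)		(*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))

static unsigned int needed_headroom = 16;	/* assumed initial value */

static void xmit_path(unsigned int required)
{
	/* Grow the published value only when needed, with a single store. */
	if (required > READ_ONCE(needed_headroom))
		WRITE_ONCE(needed_headroom, required);

	/* Take one snapshot for the allocation decision instead of reloading
	 * a field another CPU may be rewriting in the meantime.
	 */
	printf("reserving %u bytes of headroom\n", READ_ONCE(needed_headroom));
}

int main(void)
{
	xmit_path(64);
	xmit_path(32);
	return 0;
}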
diff --git a/net/ipv4/metrics.c b/net/ipv4/metrics.c
index 25ea6ac..6a14279 100644
--- a/net/ipv4/metrics.c
+++ b/net/ipv4/metrics.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
#include <linux/netlink.h>
+#include <linux/nospec.h>
#include <linux/rtnetlink.h>
#include <linux/types.h>
#include <net/ip.h>
@@ -28,6 +29,7 @@ static int ip_metrics_convert(struct net *net, struct nlattr *fc_mx,
return -EINVAL;
}
+ type = array_index_nospec(type, RTAX_MAX + 1);
if (type == RTAX_CC_ALGO) {
char tmp[TCP_CA_NAME_MAX];
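array_index_nospec() clamps the metric type again after the bounds check so that a mispredicted branch cannot index past RTAX_MAX under speculation. A simplified, standalone illustration of the clamping idea; the kernel's generic helper uses different, carefully constructed branch-free arithmetic and architectures may override it.

/* Illustration of speculative-bounds clamping; not the kernel helper. */
#include <stdio.h>
#include <stddef.h>

static size_t index_nospec(size_t index, size_t size)
{
	/* All-ones mask when index < size, zero otherwise (unsigned wrap). */
	size_t mask = ~(size_t)0 + (size_t)(index >= size);

	return index & mask;
}

int main(void)
{
	const int table[4] = { 10, 20, 30, 40 };
	size_t idx = 2;

	if (idx < 4) {
		idx = index_nospec(idx, 4);
		printf("table[%zu] = %d\n", idx, table[idx]);
	}
	return 0;
}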
diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
index c53f14b..71bf3ae 100644
--- a/net/ipv4/netfilter/arp_tables.c
+++ b/net/ipv4/netfilter/arp_tables.c
@@ -1524,6 +1524,10 @@ int arpt_register_table(struct net *net,
new_table = xt_register_table(net, table, &bootstrap, newinfo);
if (IS_ERR(new_table)) {
+ struct arpt_entry *iter;
+
+ xt_entry_foreach(iter, loc_cpu_entry, newinfo->size)
+ cleanup_entry(iter, net);
xt_free_table_info(newinfo);
return PTR_ERR(new_table);
}
diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
index 13acb687..a748a1e 100644
--- a/net/ipv4/netfilter/ip_tables.c
+++ b/net/ipv4/netfilter/ip_tables.c
@@ -1044,7 +1044,6 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
struct xt_counters *counters;
struct ipt_entry *iter;
- ret = 0;
counters = xt_counters_alloc(num_counters);
if (!counters) {
ret = -ENOMEM;
@@ -1090,7 +1089,7 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
net_warn_ratelimited("iptables: counters copy to user failed while replacing table\n");
}
vfree(counters);
- return ret;
+ return 0;
put_module:
module_put(t->me);
@@ -1741,6 +1740,10 @@ int ipt_register_table(struct net *net, const struct xt_table *table,
new_table = xt_register_table(net, table, &bootstrap, newinfo);
if (IS_ERR(new_table)) {
+ struct ipt_entry *iter;
+
+ xt_entry_foreach(iter, loc_cpu_entry, newinfo->size)
+ cleanup_entry(iter, net);
xt_free_table_info(newinfo);
return PTR_ERR(new_table);
}
diff --git a/net/ipv4/netfilter/nf_tproxy_ipv4.c b/net/ipv4/netfilter/nf_tproxy_ipv4.c
index b2bae0b..61cb234 100644
--- a/net/ipv4/netfilter/nf_tproxy_ipv4.c
+++ b/net/ipv4/netfilter/nf_tproxy_ipv4.c
@@ -38,7 +38,7 @@ nf_tproxy_handle_time_wait4(struct net *net, struct sk_buff *skb,
hp->source, lport ? lport : hp->dest,
skb->dev, NF_TPROXY_LOOKUP_LISTENER);
if (sk2) {
- inet_twsk_deschedule_put(inet_twsk(sk));
+ nf_tproxy_twsk_deschedule_put(inet_twsk(sk));
sk = sk2;
}
}
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index fe1972a..51f3456 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -439,6 +439,7 @@ void tcp_init_sock(struct sock *sk)
/* There's a bubble in the pipe until at least the first ACK. */
tp->app_limited = ~0U;
+ tp->rate_app_limited = 1;
/* See draft-stevens-tcpca-spec-01 for discussion of the
* initialization of these values.
@@ -3066,6 +3067,7 @@ int tcp_disconnect(struct sock *sk, int flags)
tp->last_oow_ack_time = 0;
/* There's a bubble in the pipe until at least the first ACK. */
tp->app_limited = ~0U;
+ tp->rate_app_limited = 1;
tp->rack.mstamp = 0;
tp->rack.advanced = 0;
tp->rack.reo_wnd_steps = 1;
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index b4b642e..20ad554 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -6,6 +6,7 @@
#include <linux/bpf.h>
#include <linux/init.h>
#include <linux/wait.h>
+#include <linux/util_macros.h>
#include <net/inet_common.h>
#include <net/tls.h>
@@ -186,6 +187,9 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
if (unlikely(flags & MSG_ERRQUEUE))
return inet_recv_error(sk, msg, len, addr_len);
+ if (!len)
+ return 0;
+
psock = sk_psock_get(sk);
if (unlikely(!psock))
return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
@@ -244,6 +248,9 @@ static int tcp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
if (unlikely(flags & MSG_ERRQUEUE))
return inet_recv_error(sk, msg, len, addr_len);
+ if (!len)
+ return 0;
+
psock = sk_psock_get(sk);
if (unlikely(!psock))
return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
@@ -640,10 +647,9 @@ EXPORT_SYMBOL_GPL(tcp_bpf_update_proto);
*/
void tcp_bpf_clone(const struct sock *sk, struct sock *newsk)
{
- int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
struct proto *prot = newsk->sk_prot;
- if (prot == &tcp_bpf_prots[family][TCP_BPF_BASE])
+ if (is_insidevar(prot, tcp_bpf_prots))
newsk->sk_prot = sk->sk_prot_creator;
}
#endif /* CONFIG_BPF_SYSCALL */
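The tcp_bpf_clone() change above swaps an exact-slot comparison for an is_insidevar()-style range check, so any entry of the two-dimensional tcp_bpf_prots array matches, not just the one hard-coded [family][TCP_BPF_BASE] slot. A userspace sketch of such a range check; the macro below is illustrative and not a copy of <linux/util_macros.h>.

/* Does a pointer land anywhere inside the storage of a given variable? */
#include <stdint.h>
#include <stdio.h>

#define is_insidevar(ptr, var)					\
	((uintptr_t)(ptr) >= (uintptr_t)&(var) &&		\
	 (uintptr_t)(ptr) <  (uintptr_t)&(var) + sizeof(var))

struct proto {
	const char *name;
};

static struct proto bpf_prots[2][4];	/* stands in for tcp_bpf_prots[][] */
static struct proto plain_prot;

int main(void)
{
	struct proto *p = &bpf_prots[1][2];

	printf("inside the array: %d\n", is_insidevar(p, bpf_prots));		/* 1 */
	printf("unrelated proto : %d\n", is_insidevar(&plain_prot, bpf_prots));	/* 0 */
	return 0;
}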
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 41368e7..aa67d5a 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -565,6 +565,9 @@ EXPORT_SYMBOL(tcp_create_openreq_child);
* validation and inside tcp_v4_reqsk_send_ack(). Can we do better?
*
* We don't need to initialize tmp_opt.sack_ok as we don't use the results
+ *
+ * Note: If @fastopen is true, this can be called from process context.
+ * Otherwise, this is from BH context.
*/
struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
@@ -717,7 +720,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
&tcp_rsk(req)->last_oow_ack_time))
req->rsk_ops->send_ack(sk, skb, req);
if (paws_reject)
- __NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
return NULL;
}
@@ -736,7 +739,7 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
* "fourth, check the SYN bit"
*/
if (flg & (TCP_FLAG_RST|TCP_FLAG_SYN)) {
- __TCP_INC_STATS(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
+ TCP_INC_STATS(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
goto embryonic_reset;
}
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 33ba126..1f39b56 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3610,7 +3610,7 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
th->window = htons(min(req->rsk_rcv_wnd, 65535U));
tcp_options_write((__be32 *)(th + 1), NULL, &opts);
th->doff = (tcp_header_size >> 2);
- __TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);
+ TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);
#ifdef CONFIG_TCP_MD5SIG
/* Okay, we have all we need - do the md5 hash if needed */
diff --git a/net/ipv4/tcp_ulp.c b/net/ipv4/tcp_ulp.c
index b5d707a..8e135af 100644
--- a/net/ipv4/tcp_ulp.c
+++ b/net/ipv4/tcp_ulp.c
@@ -136,7 +136,7 @@ static int __tcp_set_ulp(struct sock *sk, const struct tcp_ulp_ops *ulp_ops)
if (icsk->icsk_ulp_ops)
goto out_err;
- err = -EINVAL;
+ err = -ENOTCONN;
if (!ulp_ops->clone && sk->sk_state == TCP_LISTEN)
goto out_err;
diff --git a/net/ipv4/udp_bpf.c b/net/ipv4/udp_bpf.c
index bbe6569..56e1047 100644
--- a/net/ipv4/udp_bpf.c
+++ b/net/ipv4/udp_bpf.c
@@ -69,6 +69,9 @@ static int udp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
if (unlikely(flags & MSG_ERRQUEUE))
return inet_recv_error(sk, msg, len, addr_len);
+ if (!len)
+ return 0;
+
psock = sk_psock_get(sk);
if (unlikely(!psock))
return sk_udp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 8800987..6ba34f5 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -3129,17 +3129,17 @@ static void add_v4_addrs(struct inet6_dev *idev)
offset = sizeof(struct in6_addr) - 4;
memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4);
- if (idev->dev->flags&IFF_POINTOPOINT) {
+ if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) {
+ scope = IPV6_ADDR_COMPATv4;
+ plen = 96;
+ pflags |= RTF_NONEXTHOP;
+ } else {
if (idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_NONE)
return;
addr.s6_addr32[0] = htonl(0xfe800000);
scope = IFA_LINK;
plen = 64;
- } else {
- scope = IPV6_ADDR_COMPATv4;
- plen = 96;
- pflags |= RTF_NONEXTHOP;
}
if (addr.s6_addr32[3]) {
@@ -3447,6 +3447,30 @@ static void addrconf_gre_config(struct net_device *dev)
}
#endif
+static void addrconf_init_auto_addrs(struct net_device *dev)
+{
+ switch (dev->type) {
+#if IS_ENABLED(CONFIG_IPV6_SIT)
+ case ARPHRD_SIT:
+ addrconf_sit_config(dev);
+ break;
+#endif
+#if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE)
+ case ARPHRD_IP6GRE:
+ case ARPHRD_IPGRE:
+ addrconf_gre_config(dev);
+ break;
+#endif
+ case ARPHRD_LOOPBACK:
+ init_loopback(dev);
+ break;
+
+ default:
+ addrconf_dev_config(dev);
+ break;
+ }
+}
+
static int fixup_permanent_addr(struct net *net,
struct inet6_dev *idev,
struct inet6_ifaddr *ifp)
@@ -3611,26 +3635,7 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event,
run_pending = 1;
}
- switch (dev->type) {
-#if IS_ENABLED(CONFIG_IPV6_SIT)
- case ARPHRD_SIT:
- addrconf_sit_config(dev);
- break;
-#endif
-#if IS_ENABLED(CONFIG_NET_IPGRE) || IS_ENABLED(CONFIG_IPV6_GRE)
- case ARPHRD_IP6GRE:
- case ARPHRD_IPGRE:
- addrconf_gre_config(dev);
- break;
-#endif
- case ARPHRD_LOOPBACK:
- init_loopback(dev);
- break;
-
- default:
- addrconf_dev_config(dev);
- break;
- }
+ addrconf_init_auto_addrs(dev);
if (!IS_ERR_OR_NULL(idev)) {
if (run_pending)
@@ -6385,7 +6390,7 @@ static int addrconf_sysctl_addr_gen_mode(struct ctl_table *ctl, int write,
if (idev->cnf.addr_gen_mode != new_val) {
idev->cnf.addr_gen_mode = new_val;
- addrconf_dev_config(idev->dev);
+ addrconf_init_auto_addrs(idev->dev);
}
} else if (&net->ipv6.devconf_all->addr_gen_mode == ctl->data) {
struct net_device *dev;
@@ -6396,7 +6401,7 @@ static int addrconf_sysctl_addr_gen_mode(struct ctl_table *ctl, int write,
if (idev &&
idev->cnf.addr_gen_mode != new_val) {
idev->cnf.addr_gen_mode = new_val;
- addrconf_dev_config(idev->dev);
+ addrconf_init_auto_addrs(idev->dev);
}
}
}
diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
index f4559e5..a30ff5d 100644
--- a/net/ipv6/datagram.c
+++ b/net/ipv6/datagram.c
@@ -51,7 +51,7 @@ static void ip6_datagram_flow_key_init(struct flowi6 *fl6, struct sock *sk)
fl6->flowi6_mark = sk->sk_mark;
fl6->fl6_dport = inet->inet_dport;
fl6->fl6_sport = inet->inet_sport;
- fl6->flowlabel = np->flow_label;
+ fl6->flowlabel = ip6_make_flowinfo(np->tclass, np->flow_label);
fl6->flowi6_uid = sk->sk_uid;
if (!fl6->flowi6_oif)
diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
index a1ac0e3..1636685 100644
--- a/net/ipv6/ila/ila_xlat.c
+++ b/net/ipv6/ila/ila_xlat.c
@@ -477,6 +477,7 @@ int ila_xlat_nl_cmd_get_mapping(struct sk_buff *skb, struct genl_info *info)
rcu_read_lock();
+ ret = -ESRCH;
ila = ila_lookup_by_params(&xp, ilan);
if (ila) {
ret = ila_dump_info(ila,
diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
index 13b1748..a91f93e 100644
--- a/net/ipv6/ip6_gre.c
+++ b/net/ipv6/ip6_gre.c
@@ -959,7 +959,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
truncate = true;
}
- nhoff = skb_network_header(skb) - skb_mac_header(skb);
+ nhoff = skb_network_offset(skb);
if (skb->protocol == htons(ETH_P_IP) &&
(ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff))
truncate = true;
@@ -968,7 +968,7 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb,
int thoff;
if (skb_transport_header_was_set(skb))
- thoff = skb_transport_header(skb) - skb_mac_header(skb);
+ thoff = skb_transport_offset(skb);
else
thoff = nhoff + sizeof(struct ipv6hdr);
if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index 675a80d..be63929 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -527,7 +527,20 @@ int ip6_forward(struct sk_buff *skb)
pneigh_lookup(&nd_tbl, net, &hdr->daddr, skb->dev, 0)) {
int proxied = ip6_forward_proxy_check(skb);
if (proxied > 0) {
- hdr->hop_limit--;
+ /* It's tempting to decrease the hop limit
+ * here by 1, as we do at the end of the
+ * function too.
+ *
+ * But that would be incorrect, as proxying is
+ * not forwarding. The ip6_input function
+ * will handle this packet locally, and it
+ * depends on the hop limit being unchanged.
+ *
+ * One example is the NDP hop limit, which
+ * always has to stay 255; others would be
+ * similar checks around RA packets, where the
+ * user can even change the desired limit.
+ */
return ip6_input(skb);
} else if (proxied < 0) {
__IP6_INC_STATS(net, idev, IPSTATS_MIB_INDISCARDS);
@@ -1900,8 +1913,13 @@ struct sk_buff *__ip6_make_skb(struct sock *sk,
IP6_UPD_PO_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUT, skb->len);
if (proto == IPPROTO_ICMPV6) {
struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb));
+ u8 icmp6_type;
- ICMP6MSGOUT_INC_STATS(net, idev, icmp6_hdr(skb)->icmp6_type);
+ if (sk->sk_socket->type == SOCK_RAW && !inet_sk(sk)->hdrincl)
+ icmp6_type = fl6->fl6_icmp_type;
+ else
+ icmp6_type = icmp6_hdr(skb)->icmp6_type;
+ ICMP6MSGOUT_INC_STATS(net, idev, icmp6_type);
ICMP6_INC_STATS(net, idev, ICMP6_MIB_OUTMSGS);
}
diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
index ea50779..bc5d318 100644
--- a/net/ipv6/ip6_tunnel.c
+++ b/net/ipv6/ip6_tunnel.c
@@ -1237,8 +1237,8 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
*/
max_headroom = LL_RESERVED_SPACE(dst->dev) + sizeof(struct ipv6hdr)
+ dst->header_len + t->hlen;
- if (max_headroom > dev->needed_headroom)
- dev->needed_headroom = max_headroom;
+ if (max_headroom > READ_ONCE(dev->needed_headroom))
+ WRITE_ONCE(dev->needed_headroom, max_headroom);
err = ip6_tnl_encap(skb, t, &proto, fl6);
if (err)
diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
index a579ea1..277a5ee 100644
--- a/net/ipv6/netfilter/ip6_tables.c
+++ b/net/ipv6/netfilter/ip6_tables.c
@@ -1062,7 +1062,6 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
struct xt_counters *counters;
struct ip6t_entry *iter;
- ret = 0;
counters = xt_counters_alloc(num_counters);
if (!counters) {
ret = -ENOMEM;
@@ -1108,7 +1107,7 @@ __do_replace(struct net *net, const char *name, unsigned int valid_hooks,
net_warn_ratelimited("ip6tables: counters copy to user failed while replacing table\n");
}
vfree(counters);
- return ret;
+ return 0;
put_module:
module_put(t->me);
@@ -1751,6 +1750,10 @@ int ip6t_register_table(struct net *net, const struct xt_table *table,
new_table = xt_register_table(net, table, &bootstrap, newinfo);
if (IS_ERR(new_table)) {
+ struct ip6t_entry *iter;
+
+ xt_entry_foreach(iter, loc_cpu_entry, newinfo->size)
+ cleanup_entry(iter, net);
xt_free_table_info(newinfo);
return PTR_ERR(new_table);
}
diff --git a/net/ipv6/netfilter/nf_tproxy_ipv6.c b/net/ipv6/netfilter/nf_tproxy_ipv6.c
index 6bac68f..3fe4f15 100644
--- a/net/ipv6/netfilter/nf_tproxy_ipv6.c
+++ b/net/ipv6/netfilter/nf_tproxy_ipv6.c
@@ -63,7 +63,7 @@ nf_tproxy_handle_time_wait6(struct sk_buff *skb, int tproto, int thoff,
lport ? lport : hp->dest,
skb->dev, NF_TPROXY_LOOKUP_LISTENER);
if (sk2) {
- inet_twsk_deschedule_put(inet_twsk(sk));
+ nf_tproxy_twsk_deschedule_put(inet_twsk(sk));
sk = sk2;
}
}
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 0655fd8..7b26882 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -5555,16 +5555,17 @@ static size_t rt6_nlmsg_size(struct fib6_info *f6i)
nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size,
&nexthop_len);
} else {
+ struct fib6_info *sibling, *next_sibling;
struct fib6_nh *nh = f6i->fib6_nh;
nexthop_len = 0;
if (f6i->fib6_nsiblings) {
- nexthop_len = nla_total_size(0) /* RTA_MULTIPATH */
- + NLA_ALIGN(sizeof(struct rtnexthop))
- + nla_total_size(16) /* RTA_GATEWAY */
- + lwtunnel_get_encap_size(nh->fib_nh_lws);
+ rt6_nh_nlmsg_size(nh, &nexthop_len);
- nexthop_len *= f6i->fib6_nsiblings;
+ list_for_each_entry_safe(sibling, next_sibling,
+ &f6i->fib6_siblings, fib6_siblings) {
+ rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
+ }
}
nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
}
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 93b3e7c..3f33145 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -269,6 +269,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
fl6.flowi6_proto = IPPROTO_TCP;
fl6.daddr = sk->sk_v6_daddr;
fl6.saddr = saddr ? *saddr : np->saddr;
+ fl6.flowlabel = ip6_make_flowinfo(np->tclass, np->flow_label);
fl6.flowi6_oif = sk->sk_bound_dev_if;
fl6.flowi6_mark = sk->sk_mark;
fl6.fl6_dport = usin->sin6_port;
@@ -1429,14 +1430,11 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
/* Clone pktoptions received with SYN, if we own the req */
if (ireq->pktopts) {
- newnp->pktoptions = skb_clone(ireq->pktopts,
- sk_gfp_mask(sk, GFP_ATOMIC));
+ newnp->pktoptions = skb_clone_and_charge_r(ireq->pktopts, newsk);
consume_skb(ireq->pktopts);
ireq->pktopts = NULL;
- if (newnp->pktoptions) {
+ if (newnp->pktoptions)
tcp_v6_restore_cb(newnp->pktoptions);
- skb_set_owner_r(newnp->pktoptions, newsk);
- }
}
} else {
if (!req_unhash && found_dup_sk) {
@@ -1506,7 +1504,7 @@ static int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
--ANK (980728)
*/
if (np->rxopt.all)
- opt_skb = skb_clone(skb, sk_gfp_mask(sk, GFP_ATOMIC));
+ opt_skb = skb_clone_and_charge_r(skb, sk);
if (sk->sk_state == TCP_ESTABLISHED) { /* Fast path */
struct dst_entry *dst;
@@ -1590,7 +1588,6 @@ static int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb)
if (np->repflow)
np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb));
if (ipv6_opt_accepted(sk, opt_skb, &TCP_SKB_CB(opt_skb)->header.h6)) {
- skb_set_owner_r(opt_skb, sk);
tcp_v6_restore_cb(opt_skb);
opt_skb = xchg(&np->pktoptions, opt_skb);
} else {
diff --git a/net/iucv/iucv.c b/net/iucv/iucv.c
index f3343a8..8efc369 100644
--- a/net/iucv/iucv.c
+++ b/net/iucv/iucv.c
@@ -83,7 +83,7 @@ struct iucv_irq_data {
u16 ippathid;
u8 ipflags1;
u8 iptype;
- u32 res2[8];
+ u32 res2[9];
};
struct iucv_irq_list {
diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index 93271a2..a2b13e2 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -104,9 +104,9 @@ static struct workqueue_struct *l2tp_wq;
/* per-net private data for this module */
static unsigned int l2tp_net_id;
struct l2tp_net {
- struct list_head l2tp_tunnel_list;
- /* Lock for write access to l2tp_tunnel_list */
- spinlock_t l2tp_tunnel_list_lock;
+ /* Lock for write access to l2tp_tunnel_idr */
+ spinlock_t l2tp_tunnel_idr_lock;
+ struct idr l2tp_tunnel_idr;
struct hlist_head l2tp_session_hlist[L2TP_HASH_SIZE_2];
/* Lock for write access to l2tp_session_hlist */
spinlock_t l2tp_session_hlist_lock;
@@ -208,13 +208,10 @@ struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id)
struct l2tp_tunnel *tunnel;
rcu_read_lock_bh();
- list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
- if (tunnel->tunnel_id == tunnel_id &&
- refcount_inc_not_zero(&tunnel->ref_count)) {
- rcu_read_unlock_bh();
-
- return tunnel;
- }
+ tunnel = idr_find(&pn->l2tp_tunnel_idr, tunnel_id);
+ if (tunnel && refcount_inc_not_zero(&tunnel->ref_count)) {
+ rcu_read_unlock_bh();
+ return tunnel;
}
rcu_read_unlock_bh();
@@ -224,13 +221,14 @@ EXPORT_SYMBOL_GPL(l2tp_tunnel_get);
struct l2tp_tunnel *l2tp_tunnel_get_nth(const struct net *net, int nth)
{
- const struct l2tp_net *pn = l2tp_pernet(net);
+ struct l2tp_net *pn = l2tp_pernet(net);
+ unsigned long tunnel_id, tmp;
struct l2tp_tunnel *tunnel;
int count = 0;
rcu_read_lock_bh();
- list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
- if (++count > nth &&
+ idr_for_each_entry_ul(&pn->l2tp_tunnel_idr, tunnel, tmp, tunnel_id) {
+ if (tunnel && ++count > nth &&
refcount_inc_not_zero(&tunnel->ref_count)) {
rcu_read_unlock_bh();
return tunnel;
@@ -1043,7 +1041,7 @@ static int l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb, uns
IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | IPSKB_REROUTED);
nf_reset_ct(skb);
- bh_lock_sock(sk);
+ bh_lock_sock_nested(sk);
if (sock_owned_by_user(sk)) {
kfree_skb(skb);
ret = NET_XMIT_DROP;
@@ -1150,8 +1148,10 @@ static void l2tp_tunnel_destruct(struct sock *sk)
}
/* Remove hooks into tunnel socket */
+ write_lock_bh(&sk->sk_callback_lock);
sk->sk_destruct = tunnel->old_sk_destruct;
sk->sk_user_data = NULL;
+ write_unlock_bh(&sk->sk_callback_lock);
/* Call the original destructor */
if (sk->sk_destruct)
@@ -1227,6 +1227,15 @@ static void l2tp_udp_encap_destroy(struct sock *sk)
l2tp_tunnel_delete(tunnel);
}
+static void l2tp_tunnel_remove(struct net *net, struct l2tp_tunnel *tunnel)
+{
+ struct l2tp_net *pn = l2tp_pernet(net);
+
+ spin_lock_bh(&pn->l2tp_tunnel_idr_lock);
+ idr_remove(&pn->l2tp_tunnel_idr, tunnel->tunnel_id);
+ spin_unlock_bh(&pn->l2tp_tunnel_idr_lock);
+}
+
/* Workqueue tunnel deletion function */
static void l2tp_tunnel_del_work(struct work_struct *work)
{
@@ -1234,7 +1243,6 @@ static void l2tp_tunnel_del_work(struct work_struct *work)
del_work);
struct sock *sk = tunnel->sock;
struct socket *sock = sk->sk_socket;
- struct l2tp_net *pn;
l2tp_tunnel_closeall(tunnel);
@@ -1248,12 +1256,7 @@ static void l2tp_tunnel_del_work(struct work_struct *work)
}
}
- /* Remove the tunnel struct from the tunnel list */
- pn = l2tp_pernet(tunnel->l2tp_net);
- spin_lock_bh(&pn->l2tp_tunnel_list_lock);
- list_del_rcu(&tunnel->list);
- spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
-
+ l2tp_tunnel_remove(tunnel->l2tp_net, tunnel);
/* drop initial ref */
l2tp_tunnel_dec_refcount(tunnel);
@@ -1384,8 +1387,6 @@ static int l2tp_tunnel_sock_create(struct net *net,
return err;
}
-static struct lock_class_key l2tp_socket_class;
-
int l2tp_tunnel_create(int fd, int version, u32 tunnel_id, u32 peer_tunnel_id,
struct l2tp_tunnel_cfg *cfg, struct l2tp_tunnel **tunnelp)
{
@@ -1455,12 +1456,19 @@ static int l2tp_validate_socket(const struct sock *sk, const struct net *net,
int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
struct l2tp_tunnel_cfg *cfg)
{
- struct l2tp_tunnel *tunnel_walk;
- struct l2tp_net *pn;
+ struct l2tp_net *pn = l2tp_pernet(net);
+ u32 tunnel_id = tunnel->tunnel_id;
struct socket *sock;
struct sock *sk;
int ret;
+ spin_lock_bh(&pn->l2tp_tunnel_idr_lock);
+ ret = idr_alloc_u32(&pn->l2tp_tunnel_idr, NULL, &tunnel_id, tunnel_id,
+ GFP_ATOMIC);
+ spin_unlock_bh(&pn->l2tp_tunnel_idr_lock);
+ if (ret)
+ return ret == -ENOSPC ? -EEXIST : ret;
+
if (tunnel->fd < 0) {
ret = l2tp_tunnel_sock_create(net, tunnel->tunnel_id,
tunnel->peer_tunnel_id, cfg,
@@ -1471,30 +1479,16 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
sock = sockfd_lookup(tunnel->fd, &ret);
if (!sock)
goto err;
-
- ret = l2tp_validate_socket(sock->sk, net, tunnel->encap);
- if (ret < 0)
- goto err_sock;
}
- tunnel->l2tp_net = net;
- pn = l2tp_pernet(net);
-
sk = sock->sk;
- sock_hold(sk);
- tunnel->sock = sk;
-
- spin_lock_bh(&pn->l2tp_tunnel_list_lock);
- list_for_each_entry(tunnel_walk, &pn->l2tp_tunnel_list, list) {
- if (tunnel_walk->tunnel_id == tunnel->tunnel_id) {
- spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
- sock_put(sk);
- ret = -EEXIST;
- goto err_sock;
- }
- }
- list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list);
- spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+ lock_sock(sk);
+ write_lock_bh(&sk->sk_callback_lock);
+ ret = l2tp_validate_socket(sk, net, tunnel->encap);
+ if (ret < 0)
+ goto err_inval_sock;
+ rcu_assign_sk_user_data(sk, tunnel);
+ write_unlock_bh(&sk->sk_callback_lock);
if (tunnel->encap == L2TP_ENCAPTYPE_UDP) {
struct udp_tunnel_sock_cfg udp_cfg = {
@@ -1505,15 +1499,20 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
};
setup_udp_tunnel_sock(net, sock, &udp_cfg);
- } else {
- sk->sk_user_data = tunnel;
}
tunnel->old_sk_destruct = sk->sk_destruct;
sk->sk_destruct = &l2tp_tunnel_destruct;
- lockdep_set_class_and_name(&sk->sk_lock.slock, &l2tp_socket_class,
- "l2tp_sock");
sk->sk_allocation = GFP_ATOMIC;
+ release_sock(sk);
+
+ sock_hold(sk);
+ tunnel->sock = sk;
+ tunnel->l2tp_net = net;
+
+ spin_lock_bh(&pn->l2tp_tunnel_idr_lock);
+ idr_replace(&pn->l2tp_tunnel_idr, tunnel, tunnel->tunnel_id);
+ spin_unlock_bh(&pn->l2tp_tunnel_idr_lock);
trace_register_tunnel(tunnel);
@@ -1522,12 +1521,16 @@ int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
return 0;
-err_sock:
+err_inval_sock:
+ write_unlock_bh(&sk->sk_callback_lock);
+ release_sock(sk);
+
if (tunnel->fd < 0)
sock_release(sock);
else
sockfd_put(sock);
err:
+ l2tp_tunnel_remove(net, tunnel);
return ret;
}
EXPORT_SYMBOL_GPL(l2tp_tunnel_register);
@@ -1641,8 +1644,8 @@ static __net_init int l2tp_init_net(struct net *net)
struct l2tp_net *pn = net_generic(net, l2tp_net_id);
int hash;
- INIT_LIST_HEAD(&pn->l2tp_tunnel_list);
- spin_lock_init(&pn->l2tp_tunnel_list_lock);
+ idr_init(&pn->l2tp_tunnel_idr);
+ spin_lock_init(&pn->l2tp_tunnel_idr_lock);
for (hash = 0; hash < L2TP_HASH_SIZE_2; hash++)
INIT_HLIST_HEAD(&pn->l2tp_session_hlist[hash]);
@@ -1656,11 +1659,13 @@ static __net_exit void l2tp_exit_net(struct net *net)
{
struct l2tp_net *pn = l2tp_pernet(net);
struct l2tp_tunnel *tunnel = NULL;
+ unsigned long tunnel_id, tmp;
int hash;
rcu_read_lock_bh();
- list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
- l2tp_tunnel_delete(tunnel);
+ idr_for_each_entry_ul(&pn->l2tp_tunnel_idr, tunnel, tmp, tunnel_id) {
+ if (tunnel)
+ l2tp_tunnel_delete(tunnel);
}
rcu_read_unlock_bh();
@@ -1670,6 +1675,7 @@ static __net_exit void l2tp_exit_net(struct net *net)
for (hash = 0; hash < L2TP_HASH_SIZE_2; hash++)
WARN_ON_ONCE(!hlist_empty(&pn->l2tp_session_hlist[hash]));
+ idr_destroy(&pn->l2tp_tunnel_idr);
}
static struct pernet_operations l2tp_net_ops = {
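l2tp_tunnel_register() now reserves the tunnel ID in the IDR with a NULL placeholder up front (a duplicate ID fails immediately, mapped to -EEXIST), finishes validating and configuring the socket, and only then publishes the tunnel via idr_replace(); every error path drops the reservation with l2tp_tunnel_remove(). A toy model of that reserve-then-publish sequence, with a plain array and a sentinel standing in for the IDR and its spinlock.

/* Toy reserve/publish/remove map; the kernel uses an IDR under a spinlock. */
#include <stdio.h>
#include <errno.h>
#include <stddef.h>

#define MAX_ID 8

static void *map[MAX_ID];
static char reserved_tag;
#define RESERVED ((void *)&reserved_tag)	/* slot taken but not yet published */

static int id_reserve(unsigned int id)
{
	if (id >= MAX_ID)
		return -EINVAL;
	if (map[id])
		return -EEXIST;		/* someone already owns this tunnel id */
	map[id] = RESERVED;
	return 0;
}

static void id_publish(unsigned int id, void *obj)
{
	map[id] = obj;			/* lookups may return the object from now on */
}

static void id_remove(unsigned int id)
{
	map[id] = NULL;
}

struct tunnel {
	unsigned int id;
};

int main(void)
{
	struct tunnel t = { .id = 3 };
	int err = id_reserve(t.id);

	if (err) {
		printf("register failed: %d\n", err);
		return 1;
	}

	/* ... socket validation and setup would run here; on failure the
	 * error path would call id_remove(t.id) instead of publishing ...
	 */
	id_publish(t.id, &t);
	printf("tunnel %u published\n", t.id);
	id_remove(t.id);
	return 0;
}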
diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
index bf35710..9cef8e0 100644
--- a/net/l2tp/l2tp_ppp.c
+++ b/net/l2tp/l2tp_ppp.c
@@ -651,6 +651,65 @@ static int pppol2tp_tunnel_mtu(const struct l2tp_tunnel *tunnel)
return mtu - PPPOL2TP_HEADER_OVERHEAD;
}
+static struct l2tp_tunnel *pppol2tp_tunnel_get(struct net *net,
+ const struct l2tp_connect_info *info,
+ bool *new_tunnel)
+{
+ struct l2tp_tunnel *tunnel;
+ int error;
+
+ *new_tunnel = false;
+
+ tunnel = l2tp_tunnel_get(net, info->tunnel_id);
+
+ /* Special case: create tunnel context if session_id and
+ * peer_session_id is 0. Otherwise look up tunnel using supplied
+ * tunnel id.
+ */
+ if (!info->session_id && !info->peer_session_id) {
+ if (!tunnel) {
+ struct l2tp_tunnel_cfg tcfg = {
+ .encap = L2TP_ENCAPTYPE_UDP,
+ };
+
+ /* Prevent l2tp_tunnel_register() from trying to set up
+ * a kernel socket.
+ */
+ if (info->fd < 0)
+ return ERR_PTR(-EBADF);
+
+ error = l2tp_tunnel_create(info->fd,
+ info->version,
+ info->tunnel_id,
+ info->peer_tunnel_id, &tcfg,
+ &tunnel);
+ if (error < 0)
+ return ERR_PTR(error);
+
+ l2tp_tunnel_inc_refcount(tunnel);
+ error = l2tp_tunnel_register(tunnel, net, &tcfg);
+ if (error < 0) {
+ kfree(tunnel);
+ return ERR_PTR(error);
+ }
+
+ *new_tunnel = true;
+ }
+ } else {
+ /* Error if we can't find the tunnel */
+ if (!tunnel)
+ return ERR_PTR(-ENOENT);
+
+ /* Error if socket is not prepped */
+ if (!tunnel->sock) {
+ l2tp_tunnel_dec_refcount(tunnel);
+ return ERR_PTR(-ENOENT);
+ }
+ }
+
+ return tunnel;
+}
+
/* connect() handler. Attach a PPPoX socket to a tunnel UDP socket
*/
static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
@@ -664,7 +723,6 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
struct pppol2tp_session *ps;
struct l2tp_session_cfg cfg = { 0, };
bool drop_refcnt = false;
- bool drop_tunnel = false;
bool new_session = false;
bool new_tunnel = false;
int error;
@@ -673,6 +731,14 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
if (error < 0)
return error;
+ /* Don't bind if tunnel_id is 0 */
+ if (!info.tunnel_id)
+ return -EINVAL;
+
+ tunnel = pppol2tp_tunnel_get(sock_net(sk), &info, &new_tunnel);
+ if (IS_ERR(tunnel))
+ return PTR_ERR(tunnel);
+
lock_sock(sk);
/* Check for already bound sockets */
@@ -685,62 +751,6 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
if (sk->sk_user_data)
goto end; /* socket is already attached */
- /* Don't bind if tunnel_id is 0 */
- error = -EINVAL;
- if (!info.tunnel_id)
- goto end;
-
- tunnel = l2tp_tunnel_get(sock_net(sk), info.tunnel_id);
- if (tunnel)
- drop_tunnel = true;
-
- /* Special case: create tunnel context if session_id and
- * peer_session_id is 0. Otherwise look up tunnel using supplied
- * tunnel id.
- */
- if (!info.session_id && !info.peer_session_id) {
- if (!tunnel) {
- struct l2tp_tunnel_cfg tcfg = {
- .encap = L2TP_ENCAPTYPE_UDP,
- };
-
- /* Prevent l2tp_tunnel_register() from trying to set up
- * a kernel socket.
- */
- if (info.fd < 0) {
- error = -EBADF;
- goto end;
- }
-
- error = l2tp_tunnel_create(info.fd,
- info.version,
- info.tunnel_id,
- info.peer_tunnel_id, &tcfg,
- &tunnel);
- if (error < 0)
- goto end;
-
- l2tp_tunnel_inc_refcount(tunnel);
- error = l2tp_tunnel_register(tunnel, sock_net(sk),
- &tcfg);
- if (error < 0) {
- kfree(tunnel);
- goto end;
- }
- drop_tunnel = true;
- new_tunnel = true;
- }
- } else {
- /* Error if we can't find the tunnel */
- error = -ENOENT;
- if (!tunnel)
- goto end;
-
- /* Error if socket is not prepped */
- if (!tunnel->sock)
- goto end;
- }
-
if (tunnel->peer_tunnel_id == 0)
tunnel->peer_tunnel_id = info.peer_tunnel_id;
@@ -841,8 +851,7 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
}
if (drop_refcnt)
l2tp_session_dec_refcount(session);
- if (drop_tunnel)
- l2tp_tunnel_dec_refcount(tunnel);
+ l2tp_tunnel_dec_refcount(tunnel);
release_sock(sk);
return error;
diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
index 1deb3d8..a4d3fa1 100644
--- a/net/mac80211/agg-tx.c
+++ b/net/mac80211/agg-tx.c
@@ -491,7 +491,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
{
struct tid_ampdu_tx *tid_tx;
struct ieee80211_local *local = sta->local;
- struct ieee80211_sub_if_data *sdata = sta->sdata;
+ struct ieee80211_sub_if_data *sdata;
struct ieee80211_ampdu_params params = {
.sta = &sta->sta,
.action = IEEE80211_AMPDU_TX_START,
@@ -521,6 +521,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
*/
synchronize_net();
+ sdata = sta->sdata;
params.ssn = sta->tid_seq[tid] >> 4;
ret = drv_ampdu_action(local, sdata, &params);
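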
tid_tx->ssn = params.ssn;
@@ -534,6 +535,9 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
*/
set_bit(HT_AGG_STATE_DRV_READY, &tid_tx->state);
} else if (ret) {
+ if (!sdata)
+ return;
+
ht_dbg(sdata,
"BA request denied - HW unavailable for %pM tid %d\n",
sta->sta.addr, tid);
diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c
index 48322e4..120bd9c 100644
--- a/net/mac80211/driver-ops.c
+++ b/net/mac80211/driver-ops.c
@@ -331,6 +331,9 @@ int drv_ampdu_action(struct ieee80211_local *local,
might_sleep();
+ if (!sdata)
+ return -EIO;
+
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
return -EIO;
diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
index a3347f2..041859b 100644
--- a/net/mac80211/iface.c
+++ b/net/mac80211/iface.c
@@ -2059,7 +2059,6 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
ret = cfg80211_register_netdevice(ndev);
if (ret) {
- ieee80211_if_free(ndev);
free_netdev(ndev);
return ret;
}
diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
index f1e263b..e10bcfa2 100644
--- a/net/mac80211/sta_info.c
+++ b/net/mac80211/sta_info.c
@@ -1036,7 +1036,8 @@ static int __must_check __sta_info_destroy_part1(struct sta_info *sta)
list_del_rcu(&sta->list);
sta->removed = true;
- drv_sta_pre_rcu_remove(local, sta->sdata, sta);
+ if (sta->uploaded)
+ drv_sta_pre_rcu_remove(local, sta->sdata, sta);
if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN &&
rcu_access_pointer(sdata->u.vlan.sta) == sta)
@@ -2190,7 +2191,7 @@ static void sta_stats_decode_rate(struct ieee80211_local *local, u32 rate,
static int sta_set_rate_info_rx(struct sta_info *sta, struct rate_info *rinfo)
{
- u16 rate = READ_ONCE(sta_get_last_rx_stats(sta)->last_rate);
+ u32 rate = READ_ONCE(sta_get_last_rx_stats(sta)->last_rate);
if (rate == STA_STATS_RATE_INVALID)
return -EINVAL;
diff --git a/net/mac80211/wme.c b/net/mac80211/wme.c
index d50480b..9369130 100644
--- a/net/mac80211/wme.c
+++ b/net/mac80211/wme.c
@@ -143,12 +143,14 @@ u16 ieee80211_select_queue_80211(struct ieee80211_sub_if_data *sdata,
u16 __ieee80211_select_queue(struct ieee80211_sub_if_data *sdata,
struct sta_info *sta, struct sk_buff *skb)
{
+ const struct ethhdr *eth = (void *)skb->data;
struct mac80211_qos_map *qos_map;
bool qos;
/* all mesh/ocb stations are required to support WME */
- if (sta && (sdata->vif.type == NL80211_IFTYPE_MESH_POINT ||
- sdata->vif.type == NL80211_IFTYPE_OCB))
+ if ((sdata->vif.type == NL80211_IFTYPE_MESH_POINT &&
+ !is_multicast_ether_addr(eth->h_dest)) ||
+ (sdata->vif.type == NL80211_IFTYPE_OCB && sta))
qos = true;
else if (sta)
qos = sta->sta.wme;
diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
index cbbde0f..77137a8 100644
--- a/net/mctp/af_mctp.c
+++ b/net/mctp/af_mctp.c
@@ -288,11 +288,17 @@ static void mctp_sk_unhash(struct sock *sk)
kfree_rcu(key, rcu);
}
+ sock_set_flag(sk, SOCK_DEAD);
spin_unlock_irqrestore(&net->mctp.keys_lock, flags);
synchronize_rcu();
}
+static void mctp_sk_destruct(struct sock *sk)
+{
+ skb_queue_purge(&sk->sk_receive_queue);
+}
+
static struct proto mctp_proto = {
.name = "MCTP",
.owner = THIS_MODULE,
@@ -329,6 +335,7 @@ static int mctp_pf_create(struct net *net, struct socket *sock,
return -ENOMEM;
sock_init_data(sock, sk);
+ sk->sk_destruct = mctp_sk_destruct;
rc = 0;
if (sk->sk_prot->init)
diff --git a/net/mctp/route.c b/net/mctp/route.c
index 6aebb4a..89e6739 100644
--- a/net/mctp/route.c
+++ b/net/mctp/route.c
@@ -135,6 +135,11 @@ static int mctp_key_add(struct mctp_sk_key *key, struct mctp_sock *msk)
spin_lock_irqsave(&net->mctp.keys_lock, flags);
+ if (sock_flag(&msk->sk, SOCK_DEAD)) {
+ rc = -EINVAL;
+ goto out_unlock;
+ }
+
hlist_for_each_entry(tmp, &net->mctp.keys, hlist) {
if (mctp_key_match(tmp, key->local_addr, key->peer_addr,
key->tag)) {
@@ -148,6 +153,7 @@ static int mctp_key_add(struct mctp_sk_key *key, struct mctp_sock *msk)
hlist_add_head(&key->sklist, &msk->keys);
}
+out_unlock:
spin_unlock_irqrestore(&net->mctp.keys_lock, flags);
return rc;
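The mctp changes close an unhash-versus-bind race by marking the socket dead while still holding the keys_lock that also guards key insertion, and by re-checking that flag in mctp_key_add() under the same lock; the new destructor then purges any queued skbs. A minimal sketch of the flag-under-lock pattern, with pthread locking standing in for the kernel's spin_lock_irqsave() on net->mctp.keys_lock.

/* Flag set and checked under one lock: no insert can follow teardown. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <errno.h>

static pthread_mutex_t keys_lock = PTHREAD_MUTEX_INITIALIZER;
static bool sock_dead;
static int nr_keys;

static int key_add(void)
{
	int rc = 0;

	pthread_mutex_lock(&keys_lock);
	if (sock_dead)
		rc = -EINVAL;	/* socket is going away, refuse new keys */
	else
		nr_keys++;
	pthread_mutex_unlock(&keys_lock);
	return rc;
}

static void sk_unhash(void)
{
	pthread_mutex_lock(&keys_lock);
	nr_keys = 0;		/* drop existing keys ... */
	sock_dead = true;	/* ... and forbid new ones, still under the lock */
	pthread_mutex_unlock(&keys_lock);
}

int main(void)
{
	printf("add before unhash: %d\n", key_add());	/* 0 */
	sk_unhash();
	printf("add after unhash : %d\n", key_add());	/* -EINVAL */
	return 0;
}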
diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
index 58a7075..e69bed9 100644
--- a/net/mpls/af_mpls.c
+++ b/net/mpls/af_mpls.c
@@ -1428,6 +1428,7 @@ static int mpls_dev_sysctl_register(struct net_device *dev,
free:
kfree(table);
out:
+ mdev->sysctl = NULL;
return -ENOBUFS;
}
@@ -1437,6 +1438,9 @@ static void mpls_dev_sysctl_unregister(struct net_device *dev,
struct net *net = dev_net(dev);
struct ctl_table *table;
+ if (!mdev->sysctl)
+ return;
+
table = mdev->sysctl->ctl_table_arg;
unregister_net_sysctl_table(mdev->sysctl);
kfree(table);
diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 2b1b401..935f351 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -886,13 +886,17 @@ static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet,
return ret;
}
+static struct lock_class_key mptcp_slock_keys[2];
+static struct lock_class_key mptcp_keys[2];
+
static int mptcp_pm_nl_create_listen_socket(struct sock *sk,
struct mptcp_pm_addr_entry *entry)
{
+ bool is_ipv6 = sk->sk_family == AF_INET6;
int addrlen = sizeof(struct sockaddr_in);
struct sockaddr_storage addr;
- struct mptcp_sock *msk;
struct socket *ssock;
+ struct sock *newsk;
int backlog = 1024;
int err;
@@ -901,13 +905,27 @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk,
if (err)
return err;
- msk = mptcp_sk(entry->lsk->sk);
- if (!msk) {
+ newsk = entry->lsk->sk;
+ if (!newsk) {
err = -EINVAL;
goto out;
}
- ssock = __mptcp_nmpc_socket(msk);
+ /* The subflow socket lock is acquired in a nested to the msk one
+ * in several places, even by the TCP stack, and this msk is a kernel
+ * socket: lockdep complains. Instead of propagating the _nested
+ * modifiers in several places, re-init the lock class for the msk
+ * socket to an mptcp specific one.
+ */
+ sock_lock_init_class_and_name(newsk,
+ is_ipv6 ? "mlock-AF_INET6" : "mlock-AF_INET",
+ &mptcp_slock_keys[is_ipv6],
+ is_ipv6 ? "msk_lock-AF_INET6" : "msk_lock-AF_INET",
+ &mptcp_keys[is_ipv6]);
+
+ lock_sock(newsk);
+ ssock = __mptcp_nmpc_socket(mptcp_sk(newsk));
+ release_sock(newsk);
if (!ssock) {
err = -EINVAL;
goto out;
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 47f359d..5d05d85 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2726,6 +2726,7 @@ static void mptcp_close(struct sock *sk, long timeout)
{
struct mptcp_subflow_context *subflow;
bool do_cancel_work = false;
+ int subflows_alive = 0;
lock_sock(sk);
sk->sk_shutdown = SHUTDOWN_MASK;
@@ -2747,11 +2748,19 @@ static void mptcp_close(struct sock *sk, long timeout)
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
bool slow = lock_sock_fast_nested(ssk);
+ subflows_alive += ssk->sk_state != TCP_CLOSE;
+
sock_orphan(ssk);
unlock_sock_fast(ssk, slow);
}
sock_orphan(sk);
+ /* all the subflows are closed, only timeout can change the msk
+ * state, let's not keep resources busy for no reason
+ */
+ if (subflows_alive == 0)
+ inet_sk_state_store(sk, TCP_CLOSE);
+
sock_hold(sk);
pr_debug("msk=%p state=%d", sk, sk->sk_state);
if (sk->sk_state == TCP_CLOSE) {
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index ba4241d..232f437 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -358,7 +358,6 @@ void mptcp_subflow_reset(struct sock *ssk)
/* must hold: tcp_done() could drop last reference on parent */
sock_hold(sk);
- tcp_set_state(ssk, TCP_CLOSE);
tcp_send_active_reset(ssk, GFP_ATOMIC);
tcp_done(ssk);
if (!test_and_set_bit(MPTCP_WORK_CLOSE_SUBFLOW, &mptcp_sk(sk)->flags) &&
@@ -514,7 +513,7 @@ static struct request_sock_ops mptcp_subflow_v6_request_sock_ops __ro_after_init
static struct tcp_request_sock_ops subflow_request_sock_ipv6_ops __ro_after_init;
static struct inet_connection_sock_af_ops subflow_v6_specific __ro_after_init;
static struct inet_connection_sock_af_ops subflow_v6m_specific __ro_after_init;
-static struct proto tcpv6_prot_override;
+static struct proto tcpv6_prot_override __ro_after_init;
static int subflow_v6_conn_request(struct sock *sk, struct sk_buff *skb)
{
@@ -817,7 +816,7 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
}
static struct inet_connection_sock_af_ops subflow_specific __ro_after_init;
-static struct proto tcp_prot_override;
+static struct proto tcp_prot_override __ro_after_init;
enum mapping_status {
MAPPING_OK,
@@ -1284,6 +1283,7 @@ void __mptcp_error_report(struct sock *sk)
mptcp_for_each_subflow(msk, subflow) {
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
int err = sock_error(ssk);
+ int ssk_state;
if (!err)
continue;
@@ -1294,7 +1294,14 @@ void __mptcp_error_report(struct sock *sk)
if (sk->sk_state != TCP_SYN_SENT && !__mptcp_check_fallback(msk))
continue;
- inet_sk_state_store(sk, inet_sk_state_load(ssk));
+ /* We need to propagate only transition to CLOSE state.
+ * Orphaned socket will see such state change via
+ * subflow_sched_work_if_closed() and that path will properly
+ * destroy the msk as needed.
+ */
+ ssk_state = inet_sk_state_load(ssk);
+ if (ssk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DEAD))
+ inet_sk_state_store(sk, ssk_state);
sk->sk_err = -err;
/* This barrier is coupled with smp_rmb() in mptcp_poll() */
@@ -1308,6 +1315,13 @@ static void subflow_error_report(struct sock *ssk)
{
struct sock *sk = mptcp_subflow_ctx(ssk)->conn;
+ /* bail early if this is a no-op, so that we avoid introducing a
+ * problematic lockdep dependency between TCP accept queue lock
+ * and msk socket spinlock
+ */
+ if (!sk->sk_socket)
+ return;
+
mptcp_data_lock(sk);
if (!sock_owned_by_user(sk))
__mptcp_error_report(sk);
@@ -1562,7 +1576,7 @@ int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock)
if (err)
return err;
- lock_sock(sf->sk);
+ lock_sock_nested(sf->sk, SINGLE_DEPTH_NESTING);
/* the newly created socket has to be in the same cgroup as its parent */
mptcp_attach_cgroup(sk, sf->sk);
diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
index 2cc6092..8776e75 100644
--- a/net/netfilter/nf_conntrack_netlink.c
+++ b/net/netfilter/nf_conntrack_netlink.c
@@ -322,11 +322,12 @@ ctnetlink_dump_timestamp(struct sk_buff *skb, const struct nf_conn *ct)
}
#ifdef CONFIG_NF_CONNTRACK_MARK
-static int ctnetlink_dump_mark(struct sk_buff *skb, const struct nf_conn *ct)
+static int ctnetlink_dump_mark(struct sk_buff *skb, const struct nf_conn *ct,
+ bool dump)
{
u32 mark = READ_ONCE(ct->mark);
- if (!mark)
+ if (!mark && !dump)
return 0;
if (nla_put_be32(skb, CTA_MARK, htonl(mark)))
@@ -337,7 +338,7 @@ static int ctnetlink_dump_mark(struct sk_buff *skb, const struct nf_conn *ct)
return -1;
}
#else
-#define ctnetlink_dump_mark(a, b) (0)
+#define ctnetlink_dump_mark(a, b, c) (0)
#endif
#ifdef CONFIG_NF_CONNTRACK_SECMARK
@@ -542,7 +543,7 @@ static int ctnetlink_dump_extinfo(struct sk_buff *skb,
static int ctnetlink_dump_info(struct sk_buff *skb, struct nf_conn *ct)
{
if (ctnetlink_dump_status(skb, ct) < 0 ||
- ctnetlink_dump_mark(skb, ct) < 0 ||
+ ctnetlink_dump_mark(skb, ct, true) < 0 ||
ctnetlink_dump_secctx(skb, ct) < 0 ||
ctnetlink_dump_id(skb, ct) < 0 ||
ctnetlink_dump_use(skb, ct) < 0 ||
@@ -825,8 +826,7 @@ ctnetlink_conntrack_event(unsigned int events, const struct nf_ct_event *item)
}
#ifdef CONFIG_NF_CONNTRACK_MARK
- if (events & (1 << IPCT_MARK) &&
- ctnetlink_dump_mark(skb, ct) < 0)
+ if (ctnetlink_dump_mark(skb, ct, events & (1 << IPCT_MARK)))
goto nla_put_failure;
#endif
nlmsg_end(skb, nlh);
@@ -2396,12 +2396,15 @@ ctnetlink_create_conntrack(struct net *net,
err = nf_conntrack_hash_check_insert(ct);
if (err < 0)
- goto err2;
+ goto err3;
rcu_read_unlock();
return ct;
+err3:
+ if (ct->master)
+ nf_ct_put(ct->master);
err2:
rcu_read_unlock();
err1:
@@ -2756,7 +2759,7 @@ static int __ctnetlink_glue_build(struct sk_buff *skb, struct nf_conn *ct)
goto nla_put_failure;
#ifdef CONFIG_NF_CONNTRACK_MARK
- if (ctnetlink_dump_mark(skb, ct) < 0)
+ if (ctnetlink_dump_mark(skb, ct, true) < 0)
goto nla_put_failure;
#endif
if (ctnetlink_dump_labels(skb, ct) < 0)
diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
index 5a93633..895e0ca 100644
--- a/net/netfilter/nf_conntrack_proto_sctp.c
+++ b/net/netfilter/nf_conntrack_proto_sctp.c
@@ -27,22 +27,16 @@
#include <net/netfilter/nf_conntrack_ecache.h>
#include <net/netfilter/nf_conntrack_timeout.h>
-/* FIXME: Examine ipfilter's timeouts and conntrack transitions more
- closely. They're more complex. --RR
-
- And so for me for SCTP :D -Kiran */
-
static const char *const sctp_conntrack_names[] = {
- "NONE",
- "CLOSED",
- "COOKIE_WAIT",
- "COOKIE_ECHOED",
- "ESTABLISHED",
- "SHUTDOWN_SENT",
- "SHUTDOWN_RECD",
- "SHUTDOWN_ACK_SENT",
- "HEARTBEAT_SENT",
- "HEARTBEAT_ACKED",
+ [SCTP_CONNTRACK_NONE] = "NONE",
+ [SCTP_CONNTRACK_CLOSED] = "CLOSED",
+ [SCTP_CONNTRACK_COOKIE_WAIT] = "COOKIE_WAIT",
+ [SCTP_CONNTRACK_COOKIE_ECHOED] = "COOKIE_ECHOED",
+ [SCTP_CONNTRACK_ESTABLISHED] = "ESTABLISHED",
+ [SCTP_CONNTRACK_SHUTDOWN_SENT] = "SHUTDOWN_SENT",
+ [SCTP_CONNTRACK_SHUTDOWN_RECD] = "SHUTDOWN_RECD",
+ [SCTP_CONNTRACK_SHUTDOWN_ACK_SENT] = "SHUTDOWN_ACK_SENT",
+ [SCTP_CONNTRACK_HEARTBEAT_SENT] = "HEARTBEAT_SENT",
};
#define SECS * HZ
@@ -54,12 +48,11 @@ static const unsigned int sctp_timeouts[SCTP_CONNTRACK_MAX] = {
[SCTP_CONNTRACK_CLOSED] = 10 SECS,
[SCTP_CONNTRACK_COOKIE_WAIT] = 3 SECS,
[SCTP_CONNTRACK_COOKIE_ECHOED] = 3 SECS,
- [SCTP_CONNTRACK_ESTABLISHED] = 5 DAYS,
+ [SCTP_CONNTRACK_ESTABLISHED] = 210 SECS,
[SCTP_CONNTRACK_SHUTDOWN_SENT] = 300 SECS / 1000,
[SCTP_CONNTRACK_SHUTDOWN_RECD] = 300 SECS / 1000,
[SCTP_CONNTRACK_SHUTDOWN_ACK_SENT] = 3 SECS,
[SCTP_CONNTRACK_HEARTBEAT_SENT] = 30 SECS,
- [SCTP_CONNTRACK_HEARTBEAT_ACKED] = 210 SECS,
};
#define SCTP_FLAG_HEARTBEAT_VTAG_FAILED 1
@@ -73,7 +66,6 @@ static const unsigned int sctp_timeouts[SCTP_CONNTRACK_MAX] = {
#define sSR SCTP_CONNTRACK_SHUTDOWN_RECD
#define sSA SCTP_CONNTRACK_SHUTDOWN_ACK_SENT
#define sHS SCTP_CONNTRACK_HEARTBEAT_SENT
-#define sHA SCTP_CONNTRACK_HEARTBEAT_ACKED
#define sIV SCTP_CONNTRACK_MAX
/*
@@ -96,9 +88,6 @@ SHUTDOWN_ACK_SENT - We have seen a SHUTDOWN_ACK chunk in the direction opposite
CLOSED - We have seen a SHUTDOWN_COMPLETE chunk in the direction of
the SHUTDOWN chunk. Connection is closed.
HEARTBEAT_SENT - We have seen a HEARTBEAT in a new flow.
-HEARTBEAT_ACKED - We have seen a HEARTBEAT-ACK in the direction opposite to
- that of the HEARTBEAT chunk. Secondary connection is
- established.
*/
/* TODO
@@ -115,33 +104,33 @@ cookie echoed to closed.
static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = {
{
/* ORIGINAL */
-/* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */
-/* init */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW, sHA},
-/* init_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA},
-/* abort */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
-/* shutdown */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL, sSS},
-/* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA, sHA},
-/* error */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA},/* Can't have Stale cookie*/
-/* cookie_echo */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL, sHA},/* 5.2.4 - Big TODO */
-/* cookie_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL, sHA},/* Can't come in orig dir */
-/* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL, sHA},
-/* heartbeat */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA},
-/* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA}
+/* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS */
+/* init */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCW},
+/* init_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},
+/* abort */ {sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sCL},
+/* shutdown */ {sCL, sCL, sCW, sCE, sSS, sSS, sSR, sSA, sCL},
+/* shutdown_ack */ {sSA, sCL, sCW, sCE, sES, sSA, sSA, sSA, sSA},
+/* error */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't have Stale cookie*/
+/* cookie_echo */ {sCL, sCL, sCE, sCE, sES, sSS, sSR, sSA, sCL},/* 5.2.4 - Big TODO */
+/* cookie_ack */ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sCL},/* Can't come in orig dir */
+/* shutdown_comp*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sCL, sCL},
+/* heartbeat */ {sHS, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
+/* heartbeat_ack*/ {sCL, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
},
{
/* REPLY */
-/* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA */
-/* init */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},/* INIT in sCL Big TODO */
-/* init_ack */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},
-/* abort */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV, sCL},
-/* shutdown */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV, sSR},
-/* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV, sHA},
-/* error */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV, sHA},
-/* cookie_echo */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV, sHA},/* Can't come in reply dir */
-/* cookie_ack */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV, sHA},
-/* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV, sHA},
-/* heartbeat */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS, sHA},
-/* heartbeat_ack*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHA, sHA}
+/* sNO, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS */
+/* init */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* INIT in sCL Big TODO */
+/* init_ack */ {sIV, sCW, sCW, sCE, sES, sSS, sSR, sSA, sIV},
+/* abort */ {sIV, sCL, sCL, sCL, sCL, sCL, sCL, sCL, sIV},
+/* shutdown */ {sIV, sCL, sCW, sCE, sSR, sSS, sSR, sSA, sIV},
+/* shutdown_ack */ {sIV, sCL, sCW, sCE, sES, sSA, sSA, sSA, sIV},
+/* error */ {sIV, sCL, sCW, sCL, sES, sSS, sSR, sSA, sIV},
+/* cookie_echo */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sIV},/* Can't come in reply dir */
+/* cookie_ack */ {sIV, sCL, sCW, sES, sES, sSS, sSR, sSA, sIV},
+/* shutdown_comp*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sCL, sIV},
+/* heartbeat */ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sHS},
+/* heartbeat_ack*/ {sIV, sCL, sCW, sCE, sES, sSS, sSR, sSA, sES},
}
};
@@ -412,22 +401,29 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
for_each_sctp_chunk (skb, sch, _sch, offset, dataoff, count) {
/* Special cases of Verification tag check (Sec 8.5.1) */
if (sch->type == SCTP_CID_INIT) {
- /* Sec 8.5.1 (A) */
+ /* (A) vtag MUST be zero */
if (sh->vtag != 0)
goto out_unlock;
} else if (sch->type == SCTP_CID_ABORT) {
- /* Sec 8.5.1 (B) */
- if (sh->vtag != ct->proto.sctp.vtag[dir] &&
- sh->vtag != ct->proto.sctp.vtag[!dir])
+ /* (B) vtag MUST match own vtag if T flag is unset OR
+ * MUST match peer's vtag if T flag is set
+ */
+ if ((!(sch->flags & SCTP_CHUNK_FLAG_T) &&
+ sh->vtag != ct->proto.sctp.vtag[dir]) ||
+ ((sch->flags & SCTP_CHUNK_FLAG_T) &&
+ sh->vtag != ct->proto.sctp.vtag[!dir]))
goto out_unlock;
} else if (sch->type == SCTP_CID_SHUTDOWN_COMPLETE) {
- /* Sec 8.5.1 (C) */
- if (sh->vtag != ct->proto.sctp.vtag[dir] &&
- sh->vtag != ct->proto.sctp.vtag[!dir] &&
- sch->flags & SCTP_CHUNK_FLAG_T)
+ /* (C) vtag MUST match own vtag if T flag is unset OR
+ * MUST match peer's vtag if T flag is set
+ */
+ if ((!(sch->flags & SCTP_CHUNK_FLAG_T) &&
+ sh->vtag != ct->proto.sctp.vtag[dir]) ||
+ ((sch->flags & SCTP_CHUNK_FLAG_T) &&
+ sh->vtag != ct->proto.sctp.vtag[!dir]))
goto out_unlock;
} else if (sch->type == SCTP_CID_COOKIE_ECHO) {
- /* Sec 8.5.1 (D) */
+ /* (D) vtag must be same as init_vtag as found in INIT_ACK */
if (sh->vtag != ct->proto.sctp.vtag[dir])
goto out_unlock;
} else if (sch->type == SCTP_CID_HEARTBEAT) {
@@ -501,8 +497,12 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
}
ct->proto.sctp.state = new_state;
- if (old_state != new_state)
+ if (old_state != new_state) {
nf_conntrack_event_cache(IPCT_PROTOINFO, ct);
+ if (new_state == SCTP_CONNTRACK_ESTABLISHED &&
+ !test_and_set_bit(IPS_ASSURED_BIT, &ct->status))
+ nf_conntrack_event_cache(IPCT_ASSURED, ct);
+ }
}
spin_unlock_bh(&ct->lock);
@@ -516,14 +516,6 @@ int nf_conntrack_sctp_packet(struct nf_conn *ct,
nf_ct_refresh_acct(ct, ctinfo, skb, timeouts[new_state]);
- if (old_state == SCTP_CONNTRACK_COOKIE_ECHOED &&
- dir == IP_CT_DIR_REPLY &&
- new_state == SCTP_CONNTRACK_ESTABLISHED) {
- pr_debug("Setting assured bit\n");
- set_bit(IPS_ASSURED_BIT, &ct->status);
- nf_conntrack_event_cache(IPCT_ASSURED, ct);
- }
-
return NF_ACCEPT;
out_unlock:
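The SCTP conntrack tables above move to designated initializers while dropping the HEARTBEAT_ACKED state; keying each entry by its enum value means removing one state cannot silently shift later names or timeouts into the wrong slots. A small standalone example of that idiom, with an enum and values made up purely for illustration.

/* Designated initializers keep per-state tables aligned with the enum. */
#include <stdio.h>

enum ct_state {
	CT_NONE,
	CT_CLOSED,
	CT_ESTABLISHED,
	CT_HEARTBEAT_SENT,
	CT_MAX
};

static const char *const state_names[CT_MAX] = {
	[CT_NONE]		= "NONE",
	[CT_CLOSED]		= "CLOSED",
	[CT_ESTABLISHED]	= "ESTABLISHED",
	[CT_HEARTBEAT_SENT]	= "HEARTBEAT_SENT",
};

static const unsigned int state_timeouts[CT_MAX] = {
	[CT_CLOSED]		= 10,
	[CT_ESTABLISHED]	= 210,
	[CT_HEARTBEAT_SENT]	= 30,
};

int main(void)
{
	for (int i = 0; i < CT_MAX; i++)
		printf("%-16s %u\n",
		       state_names[i] ? state_names[i] : "-",
		       state_timeouts[i]);
	return 0;
}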
diff --git a/net/netfilter/nf_conntrack_standalone.c b/net/netfilter/nf_conntrack_standalone.c
index 4d85368..338f02a 100644
--- a/net/netfilter/nf_conntrack_standalone.c
+++ b/net/netfilter/nf_conntrack_standalone.c
@@ -599,7 +599,6 @@ enum nf_ct_sysctl_index {
NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_SHUTDOWN_RECD,
NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_SHUTDOWN_ACK_SENT,
NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_HEARTBEAT_SENT,
- NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_HEARTBEAT_ACKED,
#endif
#ifdef CONFIG_NF_CT_PROTO_DCCP
NF_SYSCTL_CT_PROTO_TIMEOUT_DCCP_REQUEST,
@@ -892,12 +891,6 @@ static struct ctl_table nf_ct_sysctl_table[] = {
.mode = 0644,
.proc_handler = proc_dointvec_jiffies,
},
- [NF_SYSCTL_CT_PROTO_TIMEOUT_SCTP_HEARTBEAT_ACKED] = {
- .procname = "nf_conntrack_sctp_timeout_heartbeat_acked",
- .maxlen = sizeof(unsigned int),
- .mode = 0644,
- .proc_handler = proc_dointvec_jiffies,
- },
#endif
#ifdef CONFIG_NF_CT_PROTO_DCCP
[NF_SYSCTL_CT_PROTO_TIMEOUT_DCCP_REQUEST] = {
@@ -1041,7 +1034,6 @@ static void nf_conntrack_standalone_init_sctp_sysctl(struct net *net,
XASSIGN(SHUTDOWN_RECD, sn);
XASSIGN(SHUTDOWN_ACK_SENT, sn);
XASSIGN(HEARTBEAT_SENT, sn);
- XASSIGN(HEARTBEAT_ACKED, sn);
#undef XASSIGN
#endif
}
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index 81bd13b..dc276b6 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -5342,7 +5342,7 @@ static int nf_tables_getsetelem(struct sk_buff *skb,
int rem, err = 0;
table = nft_table_lookup(net, nla[NFTA_SET_ELEM_LIST_TABLE], family,
- genmask, NETLINK_CB(skb).portid);
+ genmask, 0);
if (IS_ERR(table)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_SET_ELEM_LIST_TABLE]);
return PTR_ERR(table);
@@ -6794,6 +6794,9 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
return -EOPNOTSUPP;
type = __nft_obj_type_get(objtype);
+ if (WARN_ON_ONCE(!type))
+ return -ENOENT;
+
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
return nf_tables_updobj(&ctx, type, nla[NFTA_OBJ_DATA], obj);
diff --git a/net/netfilter/nft_masq.c b/net/netfilter/nft_masq.c
index 9953e80..1818dbf 100644
--- a/net/netfilter/nft_masq.c
+++ b/net/netfilter/nft_masq.c
@@ -43,7 +43,7 @@ static int nft_masq_init(const struct nft_ctx *ctx,
const struct nft_expr *expr,
const struct nlattr * const tb[])
{
- u32 plen = sizeof_field(struct nf_nat_range, min_addr.all);
+ u32 plen = sizeof_field(struct nf_nat_range, min_proto.all);
struct nft_masq *priv = nft_expr_priv(expr);
int err;
diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c
index db8f911..cd4eb49 100644
--- a/net/netfilter/nft_nat.c
+++ b/net/netfilter/nft_nat.c
@@ -226,7 +226,7 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
priv->flags |= NF_NAT_RANGE_MAP_IPS;
}
- plen = sizeof_field(struct nf_nat_range, min_addr.all);
+ plen = sizeof_field(struct nf_nat_range, min_proto.all);
if (tb[NFTA_NAT_REG_PROTO_MIN]) {
err = nft_parse_register_load(tb[NFTA_NAT_REG_PROTO_MIN],
&priv->sreg_proto_min, plen);
diff --git a/net/netfilter/nft_redir.c b/net/netfilter/nft_redir.c
index ba09890d..e64f531 100644
--- a/net/netfilter/nft_redir.c
+++ b/net/netfilter/nft_redir.c
@@ -48,7 +48,7 @@ static int nft_redir_init(const struct nft_ctx *ctx,
unsigned int plen;
int err;
- plen = sizeof_field(struct nf_nat_range, min_addr.all);
+ plen = sizeof_field(struct nf_nat_range, min_proto.all);
if (tb[NFTA_REDIR_REG_PROTO_MIN]) {
err = nft_parse_register_load(tb[NFTA_REDIR_REG_PROTO_MIN],
&priv->sreg_proto_min, plen);
@@ -232,7 +232,7 @@ static struct nft_expr_type nft_redir_inet_type __read_mostly = {
.name = "redir",
.ops = &nft_redir_inet_ops,
.policy = nft_redir_policy,
- .maxattr = NFTA_MASQ_MAX,
+ .maxattr = NFTA_REDIR_MAX,
.owner = THIS_MODULE,
};
diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
index 7325bee..19ea4d3 100644
--- a/net/netfilter/nft_set_rbtree.c
+++ b/net/netfilter/nft_set_rbtree.c
@@ -38,10 +38,12 @@ static bool nft_rbtree_interval_start(const struct nft_rbtree_elem *rbe)
return !nft_rbtree_interval_end(rbe);
}
-static bool nft_rbtree_equal(const struct nft_set *set, const void *this,
- const struct nft_rbtree_elem *interval)
+static int nft_rbtree_cmp(const struct nft_set *set,
+ const struct nft_rbtree_elem *e1,
+ const struct nft_rbtree_elem *e2)
{
- return memcmp(this, nft_set_ext_key(&interval->ext), set->klen) == 0;
+ return memcmp(nft_set_ext_key(&e1->ext), nft_set_ext_key(&e2->ext),
+ set->klen);
}
static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
@@ -52,7 +54,6 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
const struct nft_rbtree_elem *rbe, *interval = NULL;
u8 genmask = nft_genmask_cur(net);
const struct rb_node *parent;
- const void *this;
int d;
parent = rcu_dereference_raw(priv->root.rb_node);
@@ -62,12 +63,11 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
rbe = rb_entry(parent, struct nft_rbtree_elem, node);
- this = nft_set_ext_key(&rbe->ext);
- d = memcmp(this, key, set->klen);
+ d = memcmp(nft_set_ext_key(&rbe->ext), key, set->klen);
if (d < 0) {
parent = rcu_dereference_raw(parent->rb_left);
if (interval &&
- nft_rbtree_equal(set, this, interval) &&
+ !nft_rbtree_cmp(set, rbe, interval) &&
nft_rbtree_interval_end(rbe) &&
nft_rbtree_interval_start(interval))
continue;
@@ -215,154 +215,216 @@ static void *nft_rbtree_get(const struct net *net, const struct nft_set *set,
return rbe;
}
+static int nft_rbtree_gc_elem(const struct nft_set *__set,
+ struct nft_rbtree *priv,
+ struct nft_rbtree_elem *rbe)
+{
+ struct nft_set *set = (struct nft_set *)__set;
+ struct rb_node *prev = rb_prev(&rbe->node);
+ struct nft_rbtree_elem *rbe_prev;
+ struct nft_set_gc_batch *gcb;
+
+ gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC);
+ if (!gcb)
+ return -ENOMEM;
+
+ /* search for the expired end interval coming before this element. */
+ do {
+ rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
+ if (nft_rbtree_interval_end(rbe_prev))
+ break;
+
+ prev = rb_prev(prev);
+ } while (prev != NULL);
+
+ rb_erase(&rbe_prev->node, &priv->root);
+ rb_erase(&rbe->node, &priv->root);
+ atomic_sub(2, &set->nelems);
+
+ nft_set_gc_batch_add(gcb, rbe);
+ nft_set_gc_batch_complete(gcb);
+
+ return 0;
+}
+
+static bool nft_rbtree_update_first(const struct nft_set *set,
+ struct nft_rbtree_elem *rbe,
+ struct rb_node *first)
+{
+ struct nft_rbtree_elem *first_elem;
+
+ first_elem = rb_entry(first, struct nft_rbtree_elem, node);
+ /* this element is closest to where the new element is to be inserted:
+ * update the first element for the node list path.
+ */
+ if (nft_rbtree_cmp(set, rbe, first_elem) < 0)
+ return true;
+
+ return false;
+}
+
static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
struct nft_rbtree_elem *new,
struct nft_set_ext **ext)
{
- bool overlap = false, dup_end_left = false, dup_end_right = false;
+ struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
+ struct rb_node *node, *parent, **p, *first = NULL;
struct nft_rbtree *priv = nft_set_priv(set);
u8 genmask = nft_genmask_next(net);
- struct nft_rbtree_elem *rbe;
- struct rb_node *parent, **p;
- int d;
+ int d, err;
- /* Detect overlaps as we descend the tree. Set the flag in these cases:
- *
- * a1. _ _ __>| ?_ _ __| (insert end before existing end)
- * a2. _ _ ___| ?_ _ _>| (insert end after existing end)
- * a3. _ _ ___? >|_ _ __| (insert start before existing end)
- *
- * and clear it later on, as we eventually reach the points indicated by
- * '?' above, in the cases described below. We'll always meet these
- * later, locally, due to tree ordering, and overlaps for the intervals
- * that are the closest together are always evaluated last.
- *
- * b1. _ _ __>| !_ _ __| (insert end before existing start)
- * b2. _ _ ___| !_ _ _>| (insert end after existing start)
- * b3. _ _ ___! >|_ _ __| (insert start after existing end, as a leaf)
- * '--' no nodes falling in this range
- * b4. >|_ _ ! (insert start before existing start)
- *
- * Case a3. resolves to b3.:
- * - if the inserted start element is the leftmost, because the '0'
- * element in the tree serves as end element
- * - otherwise, if an existing end is found immediately to the left. If
- * there are existing nodes in between, we need to further descend the
- * tree before we can conclude the new start isn't causing an overlap
- *
- * or to b4., which, preceded by a3., means we already traversed one or
- * more existing intervals entirely, from the right.
- *
- * For a new, rightmost pair of elements, we'll hit cases b3. and b2.,
- * in that order.
- *
- * The flag is also cleared in two special cases:
- *
- * b5. |__ _ _!|<_ _ _ (insert start right before existing end)
- * b6. |__ _ >|!__ _ _ (insert end right after existing start)
- *
- * which always happen as last step and imply that no further
- * overlapping is possible.
- *
- * Another special case comes from the fact that start elements matching
- * an already existing start element are allowed: insertion is not
- * performed but we return -EEXIST in that case, and the error will be
- * cleared by the caller if NLM_F_EXCL is not present in the request.
- * This way, request for insertion of an exact overlap isn't reported as
- * error to userspace if not desired.
- *
- * However, if the existing start matches a pre-existing start, but the
- * end element doesn't match the corresponding pre-existing end element,
- * we need to report a partial overlap. This is a local condition that
- * can be noticed without need for a tracking flag, by checking for a
- * local duplicated end for a corresponding start, from left and right,
- * separately.
+ /* Descend the tree to search for an existing element greater than the
+ * key value to insert. This is the first element from which to walk the
+ * ordered elements to find a possible overlap.
*/
-
parent = NULL;
p = &priv->root.rb_node;
while (*p != NULL) {
parent = *p;
rbe = rb_entry(parent, struct nft_rbtree_elem, node);
- d = memcmp(nft_set_ext_key(&rbe->ext),
- nft_set_ext_key(&new->ext),
- set->klen);
+ d = nft_rbtree_cmp(set, rbe, new);
+
if (d < 0) {
p = &parent->rb_left;
-
- if (nft_rbtree_interval_start(new)) {
- if (nft_rbtree_interval_end(rbe) &&
- nft_set_elem_active(&rbe->ext, genmask) &&
- !nft_set_elem_expired(&rbe->ext) && !*p)
- overlap = false;
- } else {
- if (dup_end_left && !*p)
- return -ENOTEMPTY;
-
- overlap = nft_rbtree_interval_end(rbe) &&
- nft_set_elem_active(&rbe->ext,
- genmask) &&
- !nft_set_elem_expired(&rbe->ext);
-
- if (overlap) {
- dup_end_right = true;
- continue;
- }
- }
} else if (d > 0) {
+ if (!first ||
+ nft_rbtree_update_first(set, rbe, first))
+ first = &rbe->node;
+
p = &parent->rb_right;
-
- if (nft_rbtree_interval_end(new)) {
- if (dup_end_right && !*p)
- return -ENOTEMPTY;
-
- overlap = nft_rbtree_interval_end(rbe) &&
- nft_set_elem_active(&rbe->ext,
- genmask) &&
- !nft_set_elem_expired(&rbe->ext);
-
- if (overlap) {
- dup_end_left = true;
- continue;
- }
- } else if (nft_set_elem_active(&rbe->ext, genmask) &&
- !nft_set_elem_expired(&rbe->ext)) {
- overlap = nft_rbtree_interval_end(rbe);
- }
} else {
- if (nft_rbtree_interval_end(rbe) &&
- nft_rbtree_interval_start(new)) {
+ if (nft_rbtree_interval_end(rbe))
p = &parent->rb_left;
-
- if (nft_set_elem_active(&rbe->ext, genmask) &&
- !nft_set_elem_expired(&rbe->ext))
- overlap = false;
- } else if (nft_rbtree_interval_start(rbe) &&
- nft_rbtree_interval_end(new)) {
+ else
p = &parent->rb_right;
-
- if (nft_set_elem_active(&rbe->ext, genmask) &&
- !nft_set_elem_expired(&rbe->ext))
- overlap = false;
- } else if (nft_set_elem_active(&rbe->ext, genmask) &&
- !nft_set_elem_expired(&rbe->ext)) {
- *ext = &rbe->ext;
- return -EEXIST;
- } else {
- overlap = false;
- if (nft_rbtree_interval_end(rbe))
- p = &parent->rb_left;
- else
- p = &parent->rb_right;
- }
}
-
- dup_end_left = dup_end_right = false;
}
- if (overlap)
+ if (!first)
+ first = rb_first(&priv->root);
+
+ /* Detect overlap by going through the list of valid tree nodes.
+ * Values stored in the tree are in reversed order, starting from
+ * highest to lowest value.
+ */
+ for (node = first; node != NULL; node = rb_next(node)) {
+ rbe = rb_entry(node, struct nft_rbtree_elem, node);
+
+ if (!nft_set_elem_active(&rbe->ext, genmask))
+ continue;
+
+ /* perform garbage collection to avoid bogus overlap reports. */
+ if (nft_set_elem_expired(&rbe->ext)) {
+ err = nft_rbtree_gc_elem(set, priv, rbe);
+ if (err < 0)
+ return err;
+
+ continue;
+ }
+
+ d = nft_rbtree_cmp(set, rbe, new);
+ if (d == 0) {
+ /* Matching end element: no need to look for an
+ * overlapping greater or equal element.
+ */
+ if (nft_rbtree_interval_end(rbe)) {
+ rbe_le = rbe;
+ break;
+ }
+
+ /* first element that is greater or equal to key value. */
+ if (!rbe_ge) {
+ rbe_ge = rbe;
+ continue;
+ }
+
+ /* this is a closer, greater-or-equal element; update it. */
+ if (nft_rbtree_cmp(set, rbe_ge, new) != 0) {
+ rbe_ge = rbe;
+ continue;
+ }
+
+ /* element is equal to the key value; make sure the flags are
+ * the same, as an existing greater-or-equal start element
+ * must not be replaced by a greater-or-equal end element.
+ */
+ if ((nft_rbtree_interval_start(new) &&
+ nft_rbtree_interval_start(rbe_ge)) ||
+ (nft_rbtree_interval_end(new) &&
+ nft_rbtree_interval_end(rbe_ge))) {
+ rbe_ge = rbe;
+ continue;
+ }
+ } else if (d > 0) {
+ /* annotate element greater than the new element. */
+ rbe_ge = rbe;
+ continue;
+ } else if (d < 0) {
+ /* annotate element less than the new element. */
+ rbe_le = rbe;
+ break;
+ }
+ }
+
+ /* - new start element matching existing start element: full overlap
+ * reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
+ */
+ if (rbe_ge && !nft_rbtree_cmp(set, new, rbe_ge) &&
+ nft_rbtree_interval_start(rbe_ge) == nft_rbtree_interval_start(new)) {
+ *ext = &rbe_ge->ext;
+ return -EEXIST;
+ }
+
+ /* - new end element matching existing end element: full overlap
+ * reported as -EEXIST, cleared by caller if NLM_F_EXCL is not given.
+ */
+ if (rbe_le && !nft_rbtree_cmp(set, new, rbe_le) &&
+ nft_rbtree_interval_end(rbe_le) == nft_rbtree_interval_end(new)) {
+ *ext = &rbe_le->ext;
+ return -EEXIST;
+ }
+
+ /* - new start element with existing closest, less or equal key value
+ * being a start element: partial overlap, reported as -ENOTEMPTY.
+ * Anonymous sets allow for two consecutive start elements since they
+ * are constant; skip them to avoid bogus overlap reports.
+ */
+ if (!nft_set_is_anonymous(set) && rbe_le &&
+ nft_rbtree_interval_start(rbe_le) && nft_rbtree_interval_start(new))
return -ENOTEMPTY;
+ /* - new end element with existing closest, less or equal key value
+ * being an end element: partial overlap, reported as -ENOTEMPTY.
+ */
+ if (rbe_le &&
+ nft_rbtree_interval_end(rbe_le) && nft_rbtree_interval_end(new))
+ return -ENOTEMPTY;
+
+ /* - new end element with existing closest, greater or equal key value
+ * being an end element: partial overlap, reported as -ENOTEMPTY
+ */
+ if (rbe_ge &&
+ nft_rbtree_interval_end(rbe_ge) && nft_rbtree_interval_end(new))
+ return -ENOTEMPTY;
+
+ /* Accepted element: pick insertion point depending on key value */
+ parent = NULL;
+ p = &priv->root.rb_node;
+ while (*p != NULL) {
+ parent = *p;
+ rbe = rb_entry(parent, struct nft_rbtree_elem, node);
+ d = nft_rbtree_cmp(set, rbe, new);
+
+ if (d < 0)
+ p = &parent->rb_left;
+ else if (d > 0)
+ p = &parent->rb_right;
+ else if (nft_rbtree_interval_end(rbe))
+ p = &parent->rb_left;
+ else
+ p = &parent->rb_right;
+ }
+
rb_link_node_rcu(&new->node, parent, p);
rb_insert_color(&new->node, &priv->root);
return 0;
@@ -501,23 +563,37 @@ static void nft_rbtree_gc(struct work_struct *work)
struct nft_rbtree *priv;
struct rb_node *node;
struct nft_set *set;
+ struct net *net;
+ u8 genmask;
priv = container_of(work, struct nft_rbtree, gc_work.work);
set = nft_set_container_of(priv);
+ net = read_pnet(&set->net);
+ genmask = nft_genmask_cur(net);
write_lock_bh(&priv->lock);
write_seqcount_begin(&priv->count);
for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
rbe = rb_entry(node, struct nft_rbtree_elem, node);
+ if (!nft_set_elem_active(&rbe->ext, genmask))
+ continue;
+
+ /* elements are reversed in the rbtree for historical reasons,
+ * from highest to lowest value, which is why the end element is
+ * always visited before the start element.
+ */
if (nft_rbtree_interval_end(rbe)) {
rbe_end = rbe;
continue;
}
if (!nft_set_elem_expired(&rbe->ext))
continue;
- if (nft_set_elem_mark_busy(&rbe->ext))
+
+ if (nft_set_elem_mark_busy(&rbe->ext)) {
+ rbe_end = NULL;
continue;
+ }
if (rbe_prev) {
rb_erase(&rbe_prev->node, &priv->root);
diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c
index b5b09a9..9fea90e 100644
--- a/net/netfilter/nft_tproxy.c
+++ b/net/netfilter/nft_tproxy.c
@@ -312,6 +312,13 @@ static int nft_tproxy_dump(struct sk_buff *skb,
return 0;
}
+static int nft_tproxy_validate(const struct nft_ctx *ctx,
+ const struct nft_expr *expr,
+ const struct nft_data **data)
+{
+ return nft_chain_validate_hooks(ctx->chain, 1 << NF_INET_PRE_ROUTING);
+}
+
static struct nft_expr_type nft_tproxy_type;
static const struct nft_expr_ops nft_tproxy_ops = {
.type = &nft_tproxy_type,
@@ -320,6 +327,7 @@ static const struct nft_expr_ops nft_tproxy_ops = {
.init = nft_tproxy_init,
.destroy = nft_tproxy_destroy,
.dump = nft_tproxy_dump,
+ .validate = nft_tproxy_validate,
};
static struct nft_expr_type nft_tproxy_type __read_mostly = {
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index 974d326..011ec7d 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -578,7 +578,9 @@ static int netlink_insert(struct sock *sk, u32 portid)
if (nlk_sk(sk)->bound)
goto err;
- nlk_sk(sk)->portid = portid;
+ /* portid can be read locklessly from netlink_getname(). */
+ WRITE_ONCE(nlk_sk(sk)->portid, portid);
+
sock_hold(sk);
err = __netlink_insert(table, sk);
@@ -1087,9 +1089,11 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
return -EINVAL;
if (addr->sa_family == AF_UNSPEC) {
- sk->sk_state = NETLINK_UNCONNECTED;
- nlk->dst_portid = 0;
- nlk->dst_group = 0;
+ /* paired with READ_ONCE() in netlink_getsockbyportid() */
+ WRITE_ONCE(sk->sk_state, NETLINK_UNCONNECTED);
+ /* dst_portid and dst_group can be read locklessly */
+ WRITE_ONCE(nlk->dst_portid, 0);
+ WRITE_ONCE(nlk->dst_group, 0);
return 0;
}
if (addr->sa_family != AF_NETLINK)
@@ -1110,9 +1114,11 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
err = netlink_autobind(sock);
if (err == 0) {
- sk->sk_state = NETLINK_CONNECTED;
- nlk->dst_portid = nladdr->nl_pid;
- nlk->dst_group = ffs(nladdr->nl_groups);
+ /* paired with READ_ONCE() in netlink_getsockbyportid() */
+ WRITE_ONCE(sk->sk_state, NETLINK_CONNECTED);
+ /* dst_portid and dst_group can be read locklessly */
+ WRITE_ONCE(nlk->dst_portid, nladdr->nl_pid);
+ WRITE_ONCE(nlk->dst_group, ffs(nladdr->nl_groups));
}
return err;
@@ -1129,10 +1135,12 @@ static int netlink_getname(struct socket *sock, struct sockaddr *addr,
nladdr->nl_pad = 0;
if (peer) {
- nladdr->nl_pid = nlk->dst_portid;
- nladdr->nl_groups = netlink_group_mask(nlk->dst_group);
+ /* Paired with WRITE_ONCE() in netlink_connect() */
+ nladdr->nl_pid = READ_ONCE(nlk->dst_portid);
+ nladdr->nl_groups = netlink_group_mask(READ_ONCE(nlk->dst_group));
} else {
- nladdr->nl_pid = nlk->portid;
+ /* Paired with WRITE_ONCE() in netlink_insert() */
+ nladdr->nl_pid = READ_ONCE(nlk->portid);
netlink_lock_table();
nladdr->nl_groups = nlk->groups ? nlk->groups[0] : 0;
netlink_unlock_table();
@@ -1159,8 +1167,9 @@ static struct sock *netlink_getsockbyportid(struct sock *ssk, u32 portid)
/* Don't bother queuing skb if kernel socket has no input function */
nlk = nlk_sk(sock);
- if (sock->sk_state == NETLINK_CONNECTED &&
- nlk->dst_portid != nlk_sk(ssk)->portid) {
+ /* dst_portid and sk_state can be changed in netlink_connect() */
+ if (READ_ONCE(sock->sk_state) == NETLINK_CONNECTED &&
+ READ_ONCE(nlk->dst_portid) != nlk_sk(ssk)->portid) {
sock_put(sock);
return ERR_PTR(-ECONNREFUSED);
}
@@ -1896,8 +1905,9 @@ static int netlink_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
goto out;
netlink_skb_flags |= NETLINK_SKB_DST;
} else {
- dst_portid = nlk->dst_portid;
- dst_group = nlk->dst_group;
+ /* Paired with WRITE_ONCE() in netlink_connect() */
+ dst_portid = READ_ONCE(nlk->dst_portid);
+ dst_group = READ_ONCE(nlk->dst_group);
}
/* Paired with WRITE_ONCE() in netlink_insert() */
diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
index e5c8a29..5c04da4 100644
--- a/net/netrom/af_netrom.c
+++ b/net/netrom/af_netrom.c
@@ -400,6 +400,11 @@ static int nr_listen(struct socket *sock, int backlog)
struct sock *sk = sock->sk;
lock_sock(sk);
+ if (sock->state != SS_UNCONNECTED) {
+ release_sock(sk);
+ return -EINVAL;
+ }
+
if (sk->sk_state != TCP_LISTEN) {
memset(&nr_sk(sk)->user_addr, 0, AX25_ADDR_LEN);
sk->sk_max_ack_backlog = backlog;
diff --git a/net/netrom/nr_timer.c b/net/netrom/nr_timer.c
index a8da88d..4e7c968 100644
--- a/net/netrom/nr_timer.c
+++ b/net/netrom/nr_timer.c
@@ -121,6 +121,7 @@ static void nr_heartbeat_expiry(struct timer_list *t)
is accepted() it isn't 'dead' so doesn't get removed. */
if (sock_flag(sk, SOCK_DESTROY) ||
(sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) {
+ sock_hold(sk);
bh_unlock_sock(sk);
nr_destroy_socket(sk);
goto out;
diff --git a/net/nfc/llcp_core.c b/net/nfc/llcp_core.c
index eaeb2b1..fd43e75a 100644
--- a/net/nfc/llcp_core.c
+++ b/net/nfc/llcp_core.c
@@ -159,6 +159,7 @@ static void local_cleanup(struct nfc_llcp_local *local)
cancel_work_sync(&local->rx_work);
cancel_work_sync(&local->timeout_work);
kfree_skb(local->rx_pending);
+ local->rx_pending = NULL;
del_timer_sync(&local->sdreq_timer);
cancel_work_sync(&local->sdreq_timeout_work);
nfc_llcp_free_sdp_tlv_list(&local->pending_sdreqs);
diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
index d928d5a..9bc0ab7 100644
--- a/net/nfc/netlink.c
+++ b/net/nfc/netlink.c
@@ -1442,8 +1442,12 @@ static int nfc_se_io(struct nfc_dev *dev, u32 se_idx,
rc = dev->ops->se_io(dev, se_idx, apdu,
apdu_length, cb, cb_context);
+ device_unlock(&dev->dev);
+ return rc;
+
error:
device_unlock(&dev->dev);
+ kfree(cb_context);
return rc;
}
diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index 795a25e..0fc98e8 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -977,14 +977,14 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
key = kzalloc(sizeof(*key), GFP_KERNEL);
if (!key) {
error = -ENOMEM;
- goto err_kfree_key;
+ goto err_kfree_flow;
}
ovs_match_init(&match, key, false, &mask);
error = ovs_nla_get_match(net, &match, a[OVS_FLOW_ATTR_KEY],
a[OVS_FLOW_ATTR_MASK], log);
if (error)
- goto err_kfree_flow;
+ goto err_kfree_key;
ovs_flow_mask_key(&new_flow->key, key, true, &mask);
@@ -992,14 +992,14 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
error = ovs_nla_get_identifier(&new_flow->id, a[OVS_FLOW_ATTR_UFID],
key, log);
if (error)
- goto err_kfree_flow;
+ goto err_kfree_key;
/* Validate actions. */
error = ovs_nla_copy_actions(net, a[OVS_FLOW_ATTR_ACTIONS],
&new_flow->key, &acts, log);
if (error) {
OVS_NLERR(log, "Flow actions may not be safe on all matching packets.");
- goto err_kfree_flow;
+ goto err_kfree_key;
}
reply = ovs_flow_cmd_alloc_info(acts, &new_flow->id, info, false,
@@ -1099,10 +1099,10 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
kfree_skb(reply);
err_kfree_acts:
ovs_nla_free_flow_actions(acts);
-err_kfree_flow:
- ovs_flow_free(new_flow, false);
err_kfree_key:
kfree(key);
+err_kfree_flow:
+ ovs_flow_free(new_flow, false);
error:
return error;
}
diff --git a/net/openvswitch/meter.c b/net/openvswitch/meter.c
index 896b8f5..67b471c 100644
--- a/net/openvswitch/meter.c
+++ b/net/openvswitch/meter.c
@@ -450,7 +450,7 @@ static int ovs_meter_cmd_set(struct sk_buff *skb, struct genl_info *info)
err = attach_meter(meter_tbl, meter);
if (err)
- goto exit_unlock;
+ goto exit_free_old_meter;
ovs_unlock();
@@ -473,6 +473,8 @@ static int ovs_meter_cmd_set(struct sk_buff *skb, struct genl_info *info)
genlmsg_end(reply, ovs_reply_header);
return genlmsg_reply(reply, info);
+exit_free_old_meter:
+ ovs_meter_free(old_meter);
exit_unlock:
ovs_unlock();
nlmsg_free(reply);
diff --git a/net/qrtr/Makefile b/net/qrtr/Makefile
index 1b1411d..8e0605f 100644
--- a/net/qrtr/Makefile
+++ b/net/qrtr/Makefile
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_QRTR) := qrtr.o ns.o
+obj-$(CONFIG_QRTR) += qrtr.o
+qrtr-y := af_qrtr.o ns.o
obj-$(CONFIG_QRTR_SMD) += qrtr-smd.o
qrtr-smd-y := smd.o
diff --git a/net/qrtr/qrtr.c b/net/qrtr/af_qrtr.c
similarity index 99%
rename from net/qrtr/qrtr.c
rename to net/qrtr/af_qrtr.c
index ec23225..6e88ba8 100644
--- a/net/qrtr/qrtr.c
+++ b/net/qrtr/af_qrtr.c
@@ -393,10 +393,12 @@ static struct qrtr_node *qrtr_node_lookup(unsigned int nid)
struct qrtr_node *node;
unsigned long flags;
+ mutex_lock(&qrtr_node_lock);
spin_lock_irqsave(&qrtr_nodes_lock, flags);
node = radix_tree_lookup(&qrtr_nodes, nid);
node = qrtr_node_acquire(node);
spin_unlock_irqrestore(&qrtr_nodes_lock, flags);
+ mutex_unlock(&qrtr_node_lock);
return node;
}
diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
index 1990d49..3e40a1b 100644
--- a/net/qrtr/ns.c
+++ b/net/qrtr/ns.c
@@ -83,7 +83,10 @@ static struct qrtr_node *node_get(unsigned int node_id)
node->id = node_id;
- radix_tree_insert(&nodes, node_id, node);
+ if (radix_tree_insert(&nodes, node_id, node)) {
+ kfree(node);
+ return NULL;
+ }
return node;
}
@@ -270,7 +273,7 @@ static struct qrtr_server *server_add(unsigned int service,
return NULL;
}
-static int server_del(struct qrtr_node *node, unsigned int port)
+static int server_del(struct qrtr_node *node, unsigned int port, bool bcast)
{
struct qrtr_lookup *lookup;
struct qrtr_server *srv;
@@ -283,7 +286,7 @@ static int server_del(struct qrtr_node *node, unsigned int port)
radix_tree_delete(&node->servers, port);
/* Broadcast the removal of local servers */
- if (srv->node == qrtr_ns.local_node)
+ if (srv->node == qrtr_ns.local_node && bcast)
service_announce_del(&qrtr_ns.bcast_sq, srv);
/* Announce the service's disappearance to observers */
@@ -369,7 +372,7 @@ static int ctrl_cmd_bye(struct sockaddr_qrtr *from)
}
slot = radix_tree_iter_resume(slot, &iter);
rcu_read_unlock();
- server_del(node, srv->port);
+ server_del(node, srv->port, true);
rcu_read_lock();
}
rcu_read_unlock();
@@ -455,10 +458,13 @@ static int ctrl_cmd_del_client(struct sockaddr_qrtr *from,
kfree(lookup);
}
- /* Remove the server belonging to this port */
+ /* Remove the server belonging to this port but don't broadcast
+ * DEL_SERVER. Neighbours would've already removed the server belonging
+ * to this port due to the DEL_CLIENT broadcast from qrtr_port_remove().
+ */
node = node_get(node_id);
if (node)
- server_del(node, port);
+ server_del(node, port, false);
/* Advertise the removal of this client to all local servers */
local_node = node_get(qrtr_ns.local_node);
@@ -563,7 +569,7 @@ static int ctrl_cmd_del_server(struct sockaddr_qrtr *from,
if (!node)
return -ENOENT;
- return server_del(node, port);
+ return server_del(node, port, true);
}
static int ctrl_cmd_new_lookup(struct sockaddr_qrtr *from,
diff --git a/net/rds/message.c b/net/rds/message.c
index 799034e..8fa3d19 100644
--- a/net/rds/message.c
+++ b/net/rds/message.c
@@ -104,9 +104,9 @@ static void rds_rm_zerocopy_callback(struct rds_sock *rs,
spin_lock_irqsave(&q->lock, flags);
head = &q->zcookie_head;
if (!list_empty(head)) {
- info = list_entry(head, struct rds_msg_zcopy_info,
- rs_zcookie_next);
- if (info && rds_zcookie_add(info, cookie)) {
+ info = list_first_entry(head, struct rds_msg_zcopy_info,
+ rs_zcookie_next);
+ if (rds_zcookie_add(info, cookie)) {
spin_unlock_irqrestore(&q->lock, flags);
kfree(rds_info_from_znotifier(znotif));
/* caller invokes rds_wake_sk_sleep() */
@@ -118,7 +118,7 @@ static void rds_rm_zerocopy_callback(struct rds_sock *rs,
ck = &info->zcookies;
memset(ck, 0, sizeof(*ck));
WARN_ON(!rds_zcookie_add(info, cookie));
- list_add_tail(&q->zcookie_head, &info->rs_zcookie_next);
+ list_add_tail(&info->rs_zcookie_next, &q->zcookie_head);
spin_unlock_irqrestore(&q->lock, flags);
/* caller invokes rds_wake_sk_sleep() */
diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
index 29a208ed..86c93cf 100644
--- a/net/rose/af_rose.c
+++ b/net/rose/af_rose.c
@@ -487,6 +487,12 @@ static int rose_listen(struct socket *sock, int backlog)
{
struct sock *sk = sock->sk;
+ lock_sock(sk);
+ if (sock->state != SS_UNCONNECTED) {
+ release_sock(sk);
+ return -EINVAL;
+ }
+
if (sk->sk_state != TCP_LISTEN) {
struct rose_sock *rose = rose_sk(sk);
@@ -496,8 +502,10 @@ static int rose_listen(struct socket *sock, int backlog)
memset(rose->dest_digis, 0, AX25_ADDR_LEN * ROSE_MAX_DIGIS);
sk->sk_max_ack_backlog = backlog;
sk->sk_state = TCP_LISTEN;
+ release_sock(sk);
return 0;
}
+ release_sock(sk);
return -EOPNOTSUPP;
}
diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c
index 5c36013..2a05bad 100644
--- a/net/sched/act_bpf.c
+++ b/net/sched/act_bpf.c
@@ -305,7 +305,7 @@ static int tcf_bpf_init(struct net *net, struct nlattr *nla,
ret = tcf_idr_check_alloc(tn, &index, act, bind);
if (!ret) {
ret = tcf_idr_create(tn, index, est, act,
- &act_bpf_ops, bind, true, 0);
+ &act_bpf_ops, bind, true, flags);
if (ret < 0) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/act_connmark.c b/net/sched/act_connmark.c
index 032ef92..0deb4e9 100644
--- a/net/sched/act_connmark.c
+++ b/net/sched/act_connmark.c
@@ -124,7 +124,7 @@ static int tcf_connmark_init(struct net *net, struct nlattr *nla,
ret = tcf_idr_check_alloc(tn, &index, a, bind);
if (!ret) {
ret = tcf_idr_create(tn, index, est, a,
- &act_connmark_ops, bind, false, 0);
+ &act_connmark_ops, bind, false, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/act_ctinfo.c b/net/sched/act_ctinfo.c
index 2d75fe1..56e0a5e 100644
--- a/net/sched/act_ctinfo.c
+++ b/net/sched/act_ctinfo.c
@@ -92,7 +92,7 @@ static int tcf_ctinfo_act(struct sk_buff *skb, const struct tc_action *a,
cp = rcu_dereference_bh(ca->params);
tcf_lastuse_update(&ca->tcf_tm);
- bstats_update(&ca->tcf_bstats, skb);
+ tcf_action_update_bstats(&ca->common, skb);
action = READ_ONCE(ca->tcf_action);
wlen = skb_network_offset(skb);
@@ -211,8 +211,8 @@ static int tcf_ctinfo_init(struct net *net, struct nlattr *nla,
index = actparm->index;
err = tcf_idr_check_alloc(tn, &index, a, bind);
if (!err) {
- ret = tcf_idr_create(tn, index, est, a,
- &act_ctinfo_ops, bind, false, 0);
+ ret = tcf_idr_create_from_flags(tn, index, est, a,
+ &act_ctinfo_ops, bind, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/act_gate.c b/net/sched/act_gate.c
index 7df72a41..ac985c5 100644
--- a/net/sched/act_gate.c
+++ b/net/sched/act_gate.c
@@ -357,7 +357,7 @@ static int tcf_gate_init(struct net *net, struct nlattr *nla,
if (!err) {
ret = tcf_idr_create(tn, index, est, a,
- &act_gate_ops, bind, false, 0);
+ &act_gate_ops, bind, false, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/act_ife.c b/net/sched/act_ife.c
index 7064a365..ec987ec 100644
--- a/net/sched/act_ife.c
+++ b/net/sched/act_ife.c
@@ -553,7 +553,7 @@ static int tcf_ife_init(struct net *net, struct nlattr *nla,
if (!exists) {
ret = tcf_idr_create(tn, index, est, a, &act_ife_ops,
- bind, true, 0);
+ bind, true, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
kfree(p);
diff --git a/net/sched/act_ipt.c b/net/sched/act_ipt.c
index 265b144..2f3d507 100644
--- a/net/sched/act_ipt.c
+++ b/net/sched/act_ipt.c
@@ -145,7 +145,7 @@ static int __tcf_ipt_init(struct net *net, unsigned int id, struct nlattr *nla,
if (!exists) {
ret = tcf_idr_create(tn, index, est, a, ops, bind,
- false, 0);
+ false, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
index efc963a..6f39789 100644
--- a/net/sched/act_mirred.c
+++ b/net/sched/act_mirred.c
@@ -28,8 +28,8 @@
static LIST_HEAD(mirred_list);
static DEFINE_SPINLOCK(mirred_list_lock);
-#define MIRRED_RECURSION_LIMIT 4
-static DEFINE_PER_CPU(unsigned int, mirred_rec_level);
+#define MIRRED_NEST_LIMIT 4
+static DEFINE_PER_CPU(unsigned int, mirred_nest_level);
static bool tcf_mirred_is_act_redirect(int action)
{
@@ -204,12 +204,19 @@ static int tcf_mirred_init(struct net *net, struct nlattr *nla,
return err;
}
+static bool is_mirred_nested(void)
+{
+ return unlikely(__this_cpu_read(mirred_nest_level) > 1);
+}
+
static int tcf_mirred_forward(bool want_ingress, struct sk_buff *skb)
{
int err;
if (!want_ingress)
err = tcf_dev_queue_xmit(skb, dev_queue_xmit);
+ else if (is_mirred_nested())
+ err = netif_rx(skb);
else
err = netif_receive_skb(skb);
@@ -223,7 +230,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
struct sk_buff *skb2 = skb;
bool m_mac_header_xmit;
struct net_device *dev;
- unsigned int rec_level;
+ unsigned int nest_level;
int retval, err = 0;
bool use_reinsert;
bool want_ingress;
@@ -234,11 +241,11 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
int mac_len;
bool at_nh;
- rec_level = __this_cpu_inc_return(mirred_rec_level);
- if (unlikely(rec_level > MIRRED_RECURSION_LIMIT)) {
+ nest_level = __this_cpu_inc_return(mirred_nest_level);
+ if (unlikely(nest_level > MIRRED_NEST_LIMIT)) {
net_warn_ratelimited("Packet exceeded mirred recursion limit on dev %s\n",
netdev_name(skb->dev));
- __this_cpu_dec(mirred_rec_level);
+ __this_cpu_dec(mirred_nest_level);
return TC_ACT_SHOT;
}
@@ -308,7 +315,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
err = tcf_mirred_forward(res->ingress, skb);
if (err)
tcf_action_inc_overlimit_qstats(&m->common);
- __this_cpu_dec(mirred_rec_level);
+ __this_cpu_dec(mirred_nest_level);
return TC_ACT_CONSUMED;
}
}
@@ -320,7 +327,7 @@ static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
if (tcf_mirred_is_act_redirect(m_eaction))
retval = TC_ACT_SHOT;
}
- __this_cpu_dec(mirred_rec_level);
+ __this_cpu_dec(mirred_nest_level);
return retval;
}
diff --git a/net/sched/act_mpls.c b/net/sched/act_mpls.c
index db0ef04..d010c5b 100644
--- a/net/sched/act_mpls.c
+++ b/net/sched/act_mpls.c
@@ -189,40 +189,67 @@ static int tcf_mpls_init(struct net *net, struct nlattr *nla,
parm = nla_data(tb[TCA_MPLS_PARMS]);
index = parm->index;
+ err = tcf_idr_check_alloc(tn, &index, a, bind);
+ if (err < 0)
+ return err;
+ exists = err;
+ if (exists && bind)
+ return 0;
+
+ if (!exists) {
+ ret = tcf_idr_create(tn, index, est, a, &act_mpls_ops, bind,
+ true, flags);
+ if (ret) {
+ tcf_idr_cleanup(tn, index);
+ return ret;
+ }
+
+ ret = ACT_P_CREATED;
+ } else if (!(flags & TCA_ACT_FLAGS_REPLACE)) {
+ tcf_idr_release(*a, bind);
+ return -EEXIST;
+ }
+
/* Verify parameters against action type. */
switch (parm->m_action) {
case TCA_MPLS_ACT_POP:
if (!tb[TCA_MPLS_PROTO]) {
NL_SET_ERR_MSG_MOD(extack, "Protocol must be set for MPLS pop");
- return -EINVAL;
+ err = -EINVAL;
+ goto release_idr;
}
if (!eth_proto_is_802_3(nla_get_be16(tb[TCA_MPLS_PROTO]))) {
NL_SET_ERR_MSG_MOD(extack, "Invalid protocol type for MPLS pop");
- return -EINVAL;
+ err = -EINVAL;
+ goto release_idr;
}
if (tb[TCA_MPLS_LABEL] || tb[TCA_MPLS_TTL] || tb[TCA_MPLS_TC] ||
tb[TCA_MPLS_BOS]) {
NL_SET_ERR_MSG_MOD(extack, "Label, TTL, TC or BOS cannot be used with MPLS pop");
- return -EINVAL;
+ err = -EINVAL;
+ goto release_idr;
}
break;
case TCA_MPLS_ACT_DEC_TTL:
if (tb[TCA_MPLS_PROTO] || tb[TCA_MPLS_LABEL] ||
tb[TCA_MPLS_TTL] || tb[TCA_MPLS_TC] || tb[TCA_MPLS_BOS]) {
NL_SET_ERR_MSG_MOD(extack, "Label, TTL, TC, BOS or protocol cannot be used with MPLS dec_ttl");
- return -EINVAL;
+ err = -EINVAL;
+ goto release_idr;
}
break;
case TCA_MPLS_ACT_PUSH:
case TCA_MPLS_ACT_MAC_PUSH:
if (!tb[TCA_MPLS_LABEL]) {
NL_SET_ERR_MSG_MOD(extack, "Label is required for MPLS push");
- return -EINVAL;
+ err = -EINVAL;
+ goto release_idr;
}
if (tb[TCA_MPLS_PROTO] &&
!eth_p_mpls(nla_get_be16(tb[TCA_MPLS_PROTO]))) {
NL_SET_ERR_MSG_MOD(extack, "Protocol must be an MPLS type for MPLS push");
- return -EPROTONOSUPPORT;
+ err = -EPROTONOSUPPORT;
+ goto release_idr;
}
/* Push needs a TTL - if not specified, set a default value. */
if (!tb[TCA_MPLS_TTL]) {
@@ -237,33 +264,14 @@ static int tcf_mpls_init(struct net *net, struct nlattr *nla,
case TCA_MPLS_ACT_MODIFY:
if (tb[TCA_MPLS_PROTO]) {
NL_SET_ERR_MSG_MOD(extack, "Protocol cannot be used with MPLS modify");
- return -EINVAL;
+ err = -EINVAL;
+ goto release_idr;
}
break;
default:
NL_SET_ERR_MSG_MOD(extack, "Unknown MPLS action");
- return -EINVAL;
- }
-
- err = tcf_idr_check_alloc(tn, &index, a, bind);
- if (err < 0)
- return err;
- exists = err;
- if (exists && bind)
- return 0;
-
- if (!exists) {
- ret = tcf_idr_create(tn, index, est, a,
- &act_mpls_ops, bind, true, 0);
- if (ret) {
- tcf_idr_cleanup(tn, index);
- return ret;
- }
-
- ret = ACT_P_CREATED;
- } else if (!(flags & TCA_ACT_FLAGS_REPLACE)) {
- tcf_idr_release(*a, bind);
- return -EEXIST;
+ err = -EINVAL;
+ goto release_idr;
}
err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
diff --git a/net/sched/act_nat.c b/net/sched/act_nat.c
index 7dd6b58..2a39b37 100644
--- a/net/sched/act_nat.c
+++ b/net/sched/act_nat.c
@@ -61,7 +61,7 @@ static int tcf_nat_init(struct net *net, struct nlattr *nla, struct nlattr *est,
err = tcf_idr_check_alloc(tn, &index, a, bind);
if (!err) {
ret = tcf_idr_create(tn, index, est, a,
- &act_nat_ops, bind, false, 0);
+ &act_nat_ops, bind, false, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/act_pedit.c b/net/sched/act_pedit.c
index 1262a84..051cd20 100644
--- a/net/sched/act_pedit.c
+++ b/net/sched/act_pedit.c
@@ -134,6 +134,17 @@ static int tcf_pedit_key_ex_dump(struct sk_buff *skb,
return -EINVAL;
}
+static void tcf_pedit_cleanup_rcu(struct rcu_head *head)
+{
+ struct tcf_pedit_parms *parms =
+ container_of(head, struct tcf_pedit_parms, rcu);
+
+ kfree(parms->tcfp_keys_ex);
+ kfree(parms->tcfp_keys);
+
+ kfree(parms);
+}
+
static int tcf_pedit_init(struct net *net, struct nlattr *nla,
struct nlattr *est, struct tc_action **a,
struct tcf_proto *tp, u32 flags,
@@ -141,10 +152,9 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
{
struct tc_action_net *tn = net_generic(net, pedit_net_id);
bool bind = flags & TCA_ACT_FLAGS_BIND;
- struct nlattr *tb[TCA_PEDIT_MAX + 1];
struct tcf_chain *goto_ch = NULL;
- struct tc_pedit_key *keys = NULL;
- struct tcf_pedit_key_ex *keys_ex;
+ struct tcf_pedit_parms *oparms, *nparms;
+ struct nlattr *tb[TCA_PEDIT_MAX + 1];
struct tc_pedit *parm;
struct nlattr *pattr;
struct tcf_pedit *p;
@@ -171,109 +181,125 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
}
parm = nla_data(pattr);
- if (!parm->nkeys) {
- NL_SET_ERR_MSG_MOD(extack, "Pedit requires keys to be passed");
- return -EINVAL;
- }
- ksize = parm->nkeys * sizeof(struct tc_pedit_key);
- if (nla_len(pattr) < sizeof(*parm) + ksize) {
- NL_SET_ERR_MSG_ATTR(extack, pattr, "Length of TCA_PEDIT_PARMS or TCA_PEDIT_PARMS_EX pedit attribute is invalid");
- return -EINVAL;
- }
-
- keys_ex = tcf_pedit_keys_ex_parse(tb[TCA_PEDIT_KEYS_EX], parm->nkeys);
- if (IS_ERR(keys_ex))
- return PTR_ERR(keys_ex);
index = parm->index;
err = tcf_idr_check_alloc(tn, &index, a, bind);
if (!err) {
- ret = tcf_idr_create(tn, index, est, a,
- &act_pedit_ops, bind, false, 0);
+ ret = tcf_idr_create_from_flags(tn, index, est, a,
+ &act_pedit_ops, bind, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
- goto out_free;
+ return ret;
}
ret = ACT_P_CREATED;
} else if (err > 0) {
if (bind)
- goto out_free;
+ return 0;
if (!(flags & TCA_ACT_FLAGS_REPLACE)) {
ret = -EEXIST;
goto out_release;
}
} else {
- ret = err;
+ return err;
+ }
+
+ if (!parm->nkeys) {
+ NL_SET_ERR_MSG_MOD(extack, "Pedit requires keys to be passed");
+ ret = -EINVAL;
+ goto out_release;
+ }
+ ksize = parm->nkeys * sizeof(struct tc_pedit_key);
+ if (nla_len(pattr) < sizeof(*parm) + ksize) {
+ NL_SET_ERR_MSG_ATTR(extack, pattr, "Length of TCA_PEDIT_PARMS or TCA_PEDIT_PARMS_EX pedit attribute is invalid");
+ ret = -EINVAL;
+ goto out_release;
+ }
+
+ nparms = kzalloc(sizeof(*nparms), GFP_KERNEL);
+ if (!nparms) {
+ ret = -ENOMEM;
+ goto out_release;
+ }
+
+ nparms->tcfp_keys_ex =
+ tcf_pedit_keys_ex_parse(tb[TCA_PEDIT_KEYS_EX], parm->nkeys);
+ if (IS_ERR(nparms->tcfp_keys_ex)) {
+ ret = PTR_ERR(nparms->tcfp_keys_ex);
goto out_free;
}
err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
if (err < 0) {
ret = err;
- goto out_release;
+ goto out_free_ex;
}
- p = to_pedit(*a);
- spin_lock_bh(&p->tcf_lock);
- if (ret == ACT_P_CREATED ||
- (p->tcfp_nkeys && p->tcfp_nkeys != parm->nkeys)) {
- keys = kmalloc(ksize, GFP_ATOMIC);
- if (!keys) {
- spin_unlock_bh(&p->tcf_lock);
- ret = -ENOMEM;
- goto put_chain;
- }
- kfree(p->tcfp_keys);
- p->tcfp_keys = keys;
- p->tcfp_nkeys = parm->nkeys;
+ nparms->tcfp_off_max_hint = 0;
+ nparms->tcfp_flags = parm->flags;
+ nparms->tcfp_nkeys = parm->nkeys;
+
+ nparms->tcfp_keys = kmalloc(ksize, GFP_KERNEL);
+ if (!nparms->tcfp_keys) {
+ ret = -ENOMEM;
+ goto put_chain;
}
- memcpy(p->tcfp_keys, parm->keys, ksize);
- p->tcfp_off_max_hint = 0;
- for (i = 0; i < p->tcfp_nkeys; ++i) {
- u32 cur = p->tcfp_keys[i].off;
+
+ memcpy(nparms->tcfp_keys, parm->keys, ksize);
+
+ for (i = 0; i < nparms->tcfp_nkeys; ++i) {
+ u32 cur = nparms->tcfp_keys[i].off;
/* sanitize the shift value for any later use */
- p->tcfp_keys[i].shift = min_t(size_t, BITS_PER_TYPE(int) - 1,
- p->tcfp_keys[i].shift);
+ nparms->tcfp_keys[i].shift = min_t(size_t,
+ BITS_PER_TYPE(int) - 1,
+ nparms->tcfp_keys[i].shift);
/* The AT option can read a single byte, we can bound the actual
* value with uchar max.
*/
- cur += (0xff & p->tcfp_keys[i].offmask) >> p->tcfp_keys[i].shift;
+ cur += (0xff & nparms->tcfp_keys[i].offmask) >> nparms->tcfp_keys[i].shift;
/* Each key touches 4 bytes starting from the computed offset */
- p->tcfp_off_max_hint = max(p->tcfp_off_max_hint, cur + 4);
+ nparms->tcfp_off_max_hint =
+ max(nparms->tcfp_off_max_hint, cur + 4);
}
- p->tcfp_flags = parm->flags;
+ p = to_pedit(*a);
+
+ spin_lock_bh(&p->tcf_lock);
goto_ch = tcf_action_set_ctrlact(*a, parm->action, goto_ch);
-
- kfree(p->tcfp_keys_ex);
- p->tcfp_keys_ex = keys_ex;
-
+ oparms = rcu_replace_pointer(p->parms, nparms, 1);
spin_unlock_bh(&p->tcf_lock);
+
+ if (oparms)
+ call_rcu(&oparms->rcu, tcf_pedit_cleanup_rcu);
+
if (goto_ch)
tcf_chain_put_by_act(goto_ch);
+
return ret;
put_chain:
if (goto_ch)
tcf_chain_put_by_act(goto_ch);
+out_free_ex:
+ kfree(nparms->tcfp_keys_ex);
+out_free:
+ kfree(nparms);
out_release:
tcf_idr_release(*a, bind);
-out_free:
- kfree(keys_ex);
return ret;
-
}
static void tcf_pedit_cleanup(struct tc_action *a)
{
struct tcf_pedit *p = to_pedit(a);
- struct tc_pedit_key *keys = p->tcfp_keys;
+ struct tcf_pedit_parms *parms;
- kfree(keys);
- kfree(p->tcfp_keys_ex);
+ parms = rcu_dereference_protected(p->parms, 1);
+
+ if (parms)
+ call_rcu(&parms->rcu, tcf_pedit_cleanup_rcu);
}
static bool offset_valid(struct sk_buff *skb, int offset)
@@ -324,28 +350,30 @@ static int tcf_pedit_act(struct sk_buff *skb, const struct tc_action *a,
struct tcf_result *res)
{
struct tcf_pedit *p = to_pedit(a);
+ struct tcf_pedit_parms *parms;
u32 max_offset;
int i;
- spin_lock(&p->tcf_lock);
+ parms = rcu_dereference_bh(p->parms);
max_offset = (skb_transport_header_was_set(skb) ?
skb_transport_offset(skb) :
skb_network_offset(skb)) +
- p->tcfp_off_max_hint;
+ parms->tcfp_off_max_hint;
if (skb_ensure_writable(skb, min(skb->len, max_offset)))
- goto unlock;
+ goto done;
tcf_lastuse_update(&p->tcf_tm);
+ tcf_action_update_bstats(&p->common, skb);
- if (p->tcfp_nkeys > 0) {
- struct tc_pedit_key *tkey = p->tcfp_keys;
- struct tcf_pedit_key_ex *tkey_ex = p->tcfp_keys_ex;
+ if (parms->tcfp_nkeys > 0) {
+ struct tc_pedit_key *tkey = parms->tcfp_keys;
+ struct tcf_pedit_key_ex *tkey_ex = parms->tcfp_keys_ex;
enum pedit_header_type htype =
TCA_PEDIT_KEY_EX_HDR_TYPE_NETWORK;
enum pedit_cmd cmd = TCA_PEDIT_KEY_EX_CMD_SET;
- for (i = p->tcfp_nkeys; i > 0; i--, tkey++) {
+ for (i = parms->tcfp_nkeys; i > 0; i--, tkey++) {
u32 *ptr, hdata;
int offset = tkey->off;
int hoffset;
@@ -421,11 +449,10 @@ static int tcf_pedit_act(struct sk_buff *skb, const struct tc_action *a,
}
bad:
+ spin_lock(&p->tcf_lock);
p->tcf_qstats.overlimits++;
-done:
- bstats_update(&p->tcf_bstats, skb);
-unlock:
spin_unlock(&p->tcf_lock);
+done:
return p->tcf_action;
}
@@ -444,30 +471,33 @@ static int tcf_pedit_dump(struct sk_buff *skb, struct tc_action *a,
{
unsigned char *b = skb_tail_pointer(skb);
struct tcf_pedit *p = to_pedit(a);
+ struct tcf_pedit_parms *parms;
struct tc_pedit *opt;
struct tcf_t t;
int s;
- s = struct_size(opt, keys, p->tcfp_nkeys);
-
- /* netlink spinlocks held above us - must use ATOMIC */
- opt = kzalloc(s, GFP_ATOMIC);
- if (unlikely(!opt))
- return -ENOBUFS;
-
spin_lock_bh(&p->tcf_lock);
- memcpy(opt->keys, p->tcfp_keys, flex_array_size(opt, keys, p->tcfp_nkeys));
+ parms = rcu_dereference_protected(p->parms, 1);
+ s = struct_size(opt, keys, parms->tcfp_nkeys);
+
+ opt = kzalloc(s, GFP_ATOMIC);
+ if (unlikely(!opt)) {
+ spin_unlock_bh(&p->tcf_lock);
+ return -ENOBUFS;
+ }
+
+ memcpy(opt->keys, parms->tcfp_keys,
+ flex_array_size(opt, keys, parms->tcfp_nkeys));
opt->index = p->tcf_index;
- opt->nkeys = p->tcfp_nkeys;
- opt->flags = p->tcfp_flags;
+ opt->nkeys = parms->tcfp_nkeys;
+ opt->flags = parms->tcfp_flags;
opt->action = p->tcf_action;
opt->refcnt = refcount_read(&p->tcf_refcnt) - ref;
opt->bindcnt = atomic_read(&p->tcf_bindcnt) - bind;
- if (p->tcfp_keys_ex) {
- if (tcf_pedit_key_ex_dump(skb,
- p->tcfp_keys_ex,
- p->tcfp_nkeys))
+ if (parms->tcfp_keys_ex) {
+ if (tcf_pedit_key_ex_dump(skb, parms->tcfp_keys_ex,
+ parms->tcfp_nkeys))
goto nla_put_failure;
if (nla_put(skb, TCA_PEDIT_PARMS_EX, s, opt))
diff --git a/net/sched/act_police.c b/net/sched/act_police.c
index 5c0a3ea..d44b933 100644
--- a/net/sched/act_police.c
+++ b/net/sched/act_police.c
@@ -90,7 +90,7 @@ static int tcf_police_init(struct net *net, struct nlattr *nla,
if (!exists) {
ret = tcf_idr_create(tn, index, NULL, a,
- &act_police_ops, bind, true, 0);
+ &act_police_ops, bind, true, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
index 230501e..ca67d96 100644
--- a/net/sched/act_sample.c
+++ b/net/sched/act_sample.c
@@ -55,8 +55,8 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
sample_policy, NULL);
if (ret < 0)
return ret;
- if (!tb[TCA_SAMPLE_PARMS] || !tb[TCA_SAMPLE_RATE] ||
- !tb[TCA_SAMPLE_PSAMPLE_GROUP])
+
+ if (!tb[TCA_SAMPLE_PARMS])
return -EINVAL;
parm = nla_data(tb[TCA_SAMPLE_PARMS]);
@@ -70,7 +70,7 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
if (!exists) {
ret = tcf_idr_create(tn, index, est, a,
- &act_sample_ops, bind, true, 0);
+ &act_sample_ops, bind, true, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
@@ -80,6 +80,13 @@ static int tcf_sample_init(struct net *net, struct nlattr *nla,
tcf_idr_release(*a, bind);
return -EEXIST;
}
+
+ if (!tb[TCA_SAMPLE_RATE] || !tb[TCA_SAMPLE_PSAMPLE_GROUP]) {
+ NL_SET_ERR_MSG(extack, "sample rate and group are required");
+ err = -EINVAL;
+ goto release_idr;
+ }
+
err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
if (err < 0)
goto release_idr;
diff --git a/net/sched/act_simple.c b/net/sched/act_simple.c
index cbbe186..7885271 100644
--- a/net/sched/act_simple.c
+++ b/net/sched/act_simple.c
@@ -128,7 +128,7 @@ static int tcf_simp_init(struct net *net, struct nlattr *nla,
if (!exists) {
ret = tcf_idr_create(tn, index, est, a,
- &act_simp_ops, bind, false, 0);
+ &act_simp_ops, bind, false, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
index 6054185..6088cea 100644
--- a/net/sched/act_skbedit.c
+++ b/net/sched/act_skbedit.c
@@ -176,7 +176,7 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
if (!exists) {
ret = tcf_idr_create(tn, index, est, a,
- &act_skbedit_ops, bind, true, 0);
+ &act_skbedit_ops, bind, true, act_flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c
index ecb9ee6..ee9cc0a 100644
--- a/net/sched/act_skbmod.c
+++ b/net/sched/act_skbmod.c
@@ -168,7 +168,7 @@ static int tcf_skbmod_init(struct net *net, struct nlattr *nla,
if (!exists) {
ret = tcf_idr_create(tn, index, est, a,
- &act_skbmod_ops, bind, true, 0);
+ &act_skbmod_ops, bind, true, flags);
if (ret) {
tcf_idr_cleanup(tn, index);
return ret;
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index caabdaa..8ce999e 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -427,7 +427,10 @@ static void htb_activate_prios(struct htb_sched *q, struct htb_class *cl)
while (cl->cmode == HTB_MAY_BORROW && p && mask) {
m = mask;
while (m) {
- int prio = ffz(~m);
+ unsigned int prio = ffz(~m);
+
+ if (WARN_ON_ONCE(prio >= ARRAY_SIZE(p->inner.clprio)))
+ break;
m &= ~(1 << prio);
if (p->inner.clprio[prio].feed.rb_node)
@@ -1558,7 +1561,7 @@ static int htb_destroy_class_offload(struct Qdisc *sch, struct htb_class *cl,
struct tc_htb_qopt_offload offload_opt;
struct netdev_queue *dev_queue;
struct Qdisc *q = cl->leaf.q;
- struct Qdisc *old = NULL;
+ struct Qdisc *old;
int err;
if (cl->level)
@@ -1566,14 +1569,17 @@ static int htb_destroy_class_offload(struct Qdisc *sch, struct htb_class *cl,
WARN_ON(!q);
dev_queue = htb_offload_get_queue(cl);
- old = htb_graft_helper(dev_queue, NULL);
- if (destroying)
- /* Before HTB is destroyed, the kernel grafts noop_qdisc to
- * all queues.
+ /* When destroying, caller qdisc_graft grafts the new qdisc and invokes
+ * qdisc_put for the qdisc being destroyed. htb_destroy_class_offload
+ * does not need to graft or qdisc_put the qdisc being destroyed.
+ */
+ if (!destroying) {
+ old = htb_graft_helper(dev_queue, NULL);
+ /* Last qdisc grafted should be the same as cl->leaf.q when
+ * calling htb_delete.
*/
- WARN_ON(!(old->flags & TCQ_F_BUILTIN));
- else
WARN_ON(old != q);
+ }
if (cl->parent) {
cl->parent->bstats_bias.bytes += q->bstats.bytes;
@@ -1589,10 +1595,12 @@ static int htb_destroy_class_offload(struct Qdisc *sch, struct htb_class *cl,
};
err = htb_offload(qdisc_dev(sch), &offload_opt);
- if (!err || destroying)
- qdisc_put(old);
- else
- htb_graft_helper(dev_queue, old);
+ if (!destroying) {
+ if (!err)
+ qdisc_put(old);
+ else
+ htb_graft_helper(dev_queue, old);
+ }
if (last_child)
return err;
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index bd10a8e..e203dea 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -1632,6 +1632,7 @@ static void taprio_reset(struct Qdisc *sch)
int i;
hrtimer_cancel(&q->advance_timer);
+
if (q->qdiscs) {
for (i = 0; i < dev->num_tx_queues; i++)
if (q->qdiscs[i])
@@ -1653,6 +1654,7 @@ static void taprio_destroy(struct Qdisc *sch)
* happens in qdisc_create(), after taprio_init() has been called.
*/
hrtimer_cancel(&q->advance_timer);
+ qdisc_synchronize(sch);
taprio_disable_offload(dev, q, NULL);
@@ -1948,14 +1950,12 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb)
static struct Qdisc *taprio_leaf(struct Qdisc *sch, unsigned long cl)
{
- struct taprio_sched *q = qdisc_priv(sch);
- struct net_device *dev = qdisc_dev(sch);
- unsigned int ntx = cl - 1;
+ struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);
- if (ntx >= dev->num_tx_queues)
+ if (!dev_queue)
return NULL;
- return q->qdiscs[ntx];
+ return dev_queue->qdisc_sleeping;
}
static unsigned long taprio_find(struct Qdisc *sch, u32 classid)
diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
index 59e653b..6b95d3b 100644
--- a/net/sctp/bind_addr.c
+++ b/net/sctp/bind_addr.c
@@ -73,6 +73,12 @@ int sctp_bind_addr_copy(struct net *net, struct sctp_bind_addr *dest,
}
}
+ /* If somehow no addresses were found that can be used with this
+ * scope, it's an error.
+ */
+ if (list_empty(&dest->address_list))
+ error = -ENETUNREACH;
+
out:
if (error)
sctp_bind_addr_clean(dest);
diff --git a/net/sctp/diag.c b/net/sctp/diag.c
index d9c6d8f..b0ce108 100644
--- a/net/sctp/diag.c
+++ b/net/sctp/diag.c
@@ -343,11 +343,9 @@ static int sctp_sock_filter(struct sctp_endpoint *ep, struct sctp_transport *tsp
struct sctp_comm_param *commp = p;
struct sock *sk = ep->base.sk;
const struct inet_diag_req_v2 *r = commp->r;
- struct sctp_association *assoc =
- list_entry(ep->asocs.next, struct sctp_association, asocs);
/* find the ep only once through the transports by this condition */
- if (tsp->asoc != assoc)
+ if (!list_is_first(&tsp->asoc->asocs, &ep->asocs))
return 0;
if (r->sdiag_family != AF_UNSPEC && sk->sk_family != r->sdiag_family)
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 5f6e6a6..a5344fd 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -1831,6 +1831,10 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
err = sctp_wait_for_sndbuf(asoc, &timeo, msg_len);
if (err)
goto err;
+ if (unlikely(sinfo->sinfo_stream >= asoc->stream.outcnt)) {
+ err = -EINVAL;
+ goto err;
+ }
}
if (sctp_state(asoc, CLOSED)) {
diff --git a/net/sctp/stream_sched_prio.c b/net/sctp/stream_sched_prio.c
index 4fc9f29..7dd9f8b 100644
--- a/net/sctp/stream_sched_prio.c
+++ b/net/sctp/stream_sched_prio.c
@@ -25,6 +25,18 @@
static void sctp_sched_prio_unsched_all(struct sctp_stream *stream);
+static struct sctp_stream_priorities *sctp_sched_prio_head_get(struct sctp_stream_priorities *p)
+{
+ p->users++;
+ return p;
+}
+
+static void sctp_sched_prio_head_put(struct sctp_stream_priorities *p)
+{
+ if (p && --p->users == 0)
+ kfree(p);
+}
+
static struct sctp_stream_priorities *sctp_sched_prio_new_head(
struct sctp_stream *stream, int prio, gfp_t gfp)
{
@@ -38,6 +50,7 @@ static struct sctp_stream_priorities *sctp_sched_prio_new_head(
INIT_LIST_HEAD(&p->active);
p->next = NULL;
p->prio = prio;
+ p->users = 1;
return p;
}
@@ -53,7 +66,7 @@ static struct sctp_stream_priorities *sctp_sched_prio_get_head(
*/
list_for_each_entry(p, &stream->prio_list, prio_sched) {
if (p->prio == prio)
- return p;
+ return sctp_sched_prio_head_get(p);
if (p->prio > prio)
break;
}
@@ -70,7 +83,7 @@ static struct sctp_stream_priorities *sctp_sched_prio_get_head(
*/
break;
if (p->prio == prio)
- return p;
+ return sctp_sched_prio_head_get(p);
}
/* If not even there, allocate a new one. */
@@ -154,32 +167,21 @@ static int sctp_sched_prio_set(struct sctp_stream *stream, __u16 sid,
struct sctp_stream_out_ext *soute = sout->ext;
struct sctp_stream_priorities *prio_head, *old;
bool reschedule = false;
- int i;
+
+ old = soute->prio_head;
+ if (old && old->prio == prio)
+ return 0;
prio_head = sctp_sched_prio_get_head(stream, prio, gfp);
if (!prio_head)
return -ENOMEM;
reschedule = sctp_sched_prio_unsched(soute);
- old = soute->prio_head;
soute->prio_head = prio_head;
if (reschedule)
sctp_sched_prio_sched(stream, soute);
- if (!old)
- /* Happens when we set the priority for the first time */
- return 0;
-
- for (i = 0; i < stream->outcnt; i++) {
- soute = SCTP_SO(stream, i)->ext;
- if (soute && soute->prio_head == old)
- /* It's still in use, nothing else to do here. */
- return 0;
- }
-
- /* No hits, we are good to free it. */
- kfree(old);
-
+ sctp_sched_prio_head_put(old);
return 0;
}
@@ -206,20 +208,8 @@ static int sctp_sched_prio_init_sid(struct sctp_stream *stream, __u16 sid,
static void sctp_sched_prio_free_sid(struct sctp_stream *stream, __u16 sid)
{
- struct sctp_stream_priorities *prio = SCTP_SO(stream, sid)->ext->prio_head;
- int i;
-
- if (!prio)
- return;
-
+ sctp_sched_prio_head_put(SCTP_SO(stream, sid)->ext->prio_head);
SCTP_SO(stream, sid)->ext->prio_head = NULL;
- for (i = 0; i < stream->outcnt; i++) {
- if (SCTP_SO(stream, i)->ext &&
- SCTP_SO(stream, i)->ext->prio_head == prio)
- return;
- }
-
- kfree(prio);
}
static void sctp_sched_prio_free(struct sctp_stream *stream)
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index d5ddf28..9cdb7df 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -2172,16 +2172,14 @@ static int smc_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
{
struct sock *sk = sock->sk;
struct smc_sock *smc;
- int rc = -EPIPE;
+ int rc;
smc = smc_sk(sk);
lock_sock(sk);
- if ((sk->sk_state != SMC_ACTIVE) &&
- (sk->sk_state != SMC_APPCLOSEWAIT1) &&
- (sk->sk_state != SMC_INIT))
- goto out;
+ /* SMC does not support connect with fastopen */
if (msg->msg_flags & MSG_FASTOPEN) {
+ /* not connected yet, fallback */
if (sk->sk_state == SMC_INIT && !smc->connect_nonblock) {
rc = smc_switch_to_fallback(smc, SMC_CLC_DECL_OPTUNSUPP);
if (rc)
@@ -2190,6 +2188,11 @@ static int smc_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
rc = -EINVAL;
goto out;
}
+ } else if ((sk->sk_state != SMC_ACTIVE) &&
+ (sk->sk_state != SMC_APPCLOSEWAIT1) &&
+ (sk->sk_state != SMC_INIT)) {
+ rc = -EPIPE;
+ goto out;
}
if (smc->use_fallback) {
diff --git a/net/smc/smc_cdc.c b/net/smc/smc_cdc.c
index 5d180d2..41b23f7 100644
--- a/net/smc/smc_cdc.c
+++ b/net/smc/smc_cdc.c
@@ -104,6 +104,9 @@ int smc_cdc_msg_send(struct smc_connection *conn,
union smc_host_cursor cfed;
int rc;
+ if (unlikely(!READ_ONCE(conn->sndbuf_desc)))
+ return -ENOBUFS;
+
smc_cdc_add_pending_send(conn, pend);
conn->tx_cdc_seq++;
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 2eafefa..f08fcc5 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -1297,7 +1297,7 @@ static void __smc_lgr_terminate(struct smc_link_group *lgr, bool soft)
if (lgr->terminating)
return; /* lgr already terminating */
/* cancel free_work sync, will terminate when lgr->freeing is set */
- cancel_delayed_work_sync(&lgr->free_work);
+ cancel_delayed_work(&lgr->free_work);
lgr->terminating = 1;
/* kill remaining link group connections */
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index bbeb80e..ad3e9a4 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -3117,6 +3117,8 @@ rpc_clnt_swap_activate_callback(struct rpc_clnt *clnt,
int
rpc_clnt_swap_activate(struct rpc_clnt *clnt)
{
+ while (clnt != clnt->cl_parent)
+ clnt = clnt->cl_parent;
if (atomic_inc_return(&clnt->cl_swapper) == 1)
return rpc_clnt_iterate_for_each_xprt(clnt,
rpc_clnt_swap_activate_callback, NULL);
@@ -3136,6 +3138,8 @@ rpc_clnt_swap_deactivate_callback(struct rpc_clnt *clnt,
void
rpc_clnt_swap_deactivate(struct rpc_clnt *clnt)
{
+ while (clnt != clnt->cl_parent)
+ clnt = clnt->cl_parent;
if (atomic_dec_if_positive(&clnt->cl_swapper) == 0)
rpc_clnt_iterate_for_each_xprt(clnt,
rpc_clnt_swap_deactivate_callback, NULL);
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 08ca797..74a1c91 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -806,6 +806,7 @@ EXPORT_SYMBOL_GPL(svc_set_num_threads);
static int
svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
{
+ struct svc_rqst *rqstp;
struct task_struct *task;
unsigned int state = serv->sv_nrthreads-1;
@@ -814,7 +815,10 @@ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
task = choose_victim(serv, pool, &state);
if (task == NULL)
break;
- kthread_stop(task);
+ rqstp = kthread_data(task);
+ /* Did we lose a race to svo_function threadfn? */
+ if (kthread_stop(task) == -EINTR)
+ svc_exit_thread(rqstp);
nrservs++;
} while (nrservs < 0);
return 0;
diff --git a/net/sunrpc/svcauth_unix.c b/net/sunrpc/svcauth_unix.c
index d7ed7d4..a7d1071 100644
--- a/net/sunrpc/svcauth_unix.c
+++ b/net/sunrpc/svcauth_unix.c
@@ -415,14 +415,23 @@ static int unix_gid_hash(kuid_t uid)
return hash_long(from_kuid(&init_user_ns, uid), GID_HASHBITS);
}
+static void unix_gid_free(struct rcu_head *rcu)
+{
+ struct unix_gid *ug = container_of(rcu, struct unix_gid, rcu);
+ struct cache_head *item = &ug->h;
+
+ if (test_bit(CACHE_VALID, &item->flags) &&
+ !test_bit(CACHE_NEGATIVE, &item->flags))
+ put_group_info(ug->gi);
+ kfree(ug);
+}
+
static void unix_gid_put(struct kref *kref)
{
struct cache_head *item = container_of(kref, struct cache_head, ref);
struct unix_gid *ug = container_of(item, struct unix_gid, h);
- if (test_bit(CACHE_VALID, &item->flags) &&
- !test_bit(CACHE_NEGATIVE, &item->flags))
- put_group_info(ug->gi);
- kfree_rcu(ug, rcu);
+
+ call_rcu(&ug->rcu, unix_gid_free);
}
static int unix_gid_match(struct cache_head *corig, struct cache_head *cnew)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 897dfce..bf801adf 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -2140,6 +2140,7 @@ static void xs_tcp_shutdown(struct rpc_xprt *xprt)
switch (skst) {
case TCP_FIN_WAIT1:
case TCP_FIN_WAIT2:
+ case TCP_LAST_ACK:
break;
case TCP_ESTABLISHED:
case TCP_CLOSE_WAIT:
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index c0fea67..bc897ff 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -950,7 +950,9 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
MSG_CMSG_COMPAT))
return -EOPNOTSUPP;
- mutex_lock(&tls_ctx->tx_lock);
+ ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
+ if (ret)
+ return ret;
lock_sock(sk);
if (unlikely(msg->msg_controllen)) {
@@ -1284,7 +1286,9 @@ int tls_sw_sendpage(struct sock *sk, struct page *page,
MSG_SENDPAGE_NOTLAST | MSG_SENDPAGE_NOPOLICY))
return -EOPNOTSUPP;
- mutex_lock(&tls_ctx->tx_lock);
+ ret = mutex_lock_interruptible(&tls_ctx->tx_lock);
+ if (ret)
+ return ret;
lock_sock(sk);
ret = tls_sw_do_sendpage(sk, page, offset, size, flags);
release_sock(sk);
@@ -2284,11 +2288,19 @@ static void tx_work_handler(struct work_struct *work)
if (!test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask))
return;
- mutex_lock(&tls_ctx->tx_lock);
- lock_sock(sk);
- tls_tx_records(sk, -1);
- release_sock(sk);
- mutex_unlock(&tls_ctx->tx_lock);
+
+ if (mutex_trylock(&tls_ctx->tx_lock)) {
+ lock_sock(sk);
+ tls_tx_records(sk, -1);
+ release_sock(sk);
+ mutex_unlock(&tls_ctx->tx_lock);
+ } else if (!test_and_set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
+ /* Someone is holding the tx_lock, they will likely run Tx
+ * and cancel the work on their way out of the lock section.
+ * Schedule a long delay just in case.
+ */
+ schedule_delayed_work(&ctx->tx_work.work, msecs_to_jiffies(10));
+ }
}
void tls_sw_write_space(struct sock *sk, struct tls_context *ctx)
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 0a59a00..a96026d 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -1969,8 +1969,9 @@ static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
*/
#define UNIX_SKB_FRAGS_SZ (PAGE_SIZE << get_order(32768))
-#if (IS_ENABLED(CONFIG_AF_UNIX_OOB))
-static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other)
+#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
+static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other,
+ struct scm_cookie *scm, bool fds_sent)
{
struct unix_sock *ousk = unix_sk(other);
struct sk_buff *skb;
@@ -1981,6 +1982,11 @@ static int queue_oob(struct socket *sock, struct msghdr *msg, struct sock *other
if (!skb)
return err;
+ err = unix_scm_to_skb(scm, skb, !fds_sent);
+ if (err < 0) {
+ kfree_skb(skb);
+ return err;
+ }
skb_put(skb, 1);
err = skb_copy_datagram_from_iter(skb, 0, &msg->msg_iter, 1);
@@ -2035,7 +2041,7 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
err = -EOPNOTSUPP;
if (msg->msg_flags & MSG_OOB) {
-#if (IS_ENABLED(CONFIG_AF_UNIX_OOB))
+#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
if (len)
len--;
else
@@ -2106,9 +2112,9 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
sent += size;
}
-#if (IS_ENABLED(CONFIG_AF_UNIX_OOB))
+#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
if (msg->msg_flags & MSG_OOB) {
- err = queue_oob(sock, msg, other);
+ err = queue_oob(sock, msg, other, &scm, fds_sent);
if (err)
goto out_err;
sent++;
diff --git a/net/unix/unix_bpf.c b/net/unix/unix_bpf.c
index 452376c..5919d61 100644
--- a/net/unix/unix_bpf.c
+++ b/net/unix/unix_bpf.c
@@ -55,6 +55,9 @@ static int unix_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
struct sk_psock *psock;
int copied;
+ if (!len)
+ return 0;
+
psock = sk_psock_get(sk);
if (unlikely(!psock))
return __unix_recvmsg(sk, msg, len, flags);
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index bb46a6a3..1b91a9c 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -12922,7 +12922,7 @@ static int nl80211_set_rekey_data(struct sk_buff *skb, struct genl_info *info)
return -ERANGE;
if (nla_len(tb[NL80211_REKEY_DATA_KCK]) != NL80211_KCK_LEN &&
!(rdev->wiphy.flags & WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK &&
- nla_len(tb[NL80211_REKEY_DATA_KEK]) == NL80211_KCK_EXT_LEN))
+ nla_len(tb[NL80211_REKEY_DATA_KCK]) == NL80211_KCK_EXT_LEN))
return -ERANGE;
rekey_data.kek = nla_data(tb[NL80211_REKEY_DATA_KEK]);
diff --git a/net/wireless/sme.c b/net/wireless/sme.c
index 08a70b4..1591cd6 100644
--- a/net/wireless/sme.c
+++ b/net/wireless/sme.c
@@ -268,6 +268,15 @@ void cfg80211_conn_work(struct work_struct *work)
wiphy_unlock(&rdev->wiphy);
}
+static void cfg80211_step_auth_next(struct cfg80211_conn *conn,
+ struct cfg80211_bss *bss)
+{
+ memcpy(conn->bssid, bss->bssid, ETH_ALEN);
+ conn->params.bssid = conn->bssid;
+ conn->params.channel = bss->channel;
+ conn->state = CFG80211_CONN_AUTHENTICATE_NEXT;
+}
+
/* Returned bss is reference counted and must be cleaned up appropriately. */
static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev)
{
@@ -285,10 +294,7 @@ static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev)
if (!bss)
return NULL;
- memcpy(wdev->conn->bssid, bss->bssid, ETH_ALEN);
- wdev->conn->params.bssid = wdev->conn->bssid;
- wdev->conn->params.channel = bss->channel;
- wdev->conn->state = CFG80211_CONN_AUTHENTICATE_NEXT;
+ cfg80211_step_auth_next(wdev->conn, bss);
schedule_work(&rdev->conn_work);
return bss;
@@ -567,7 +573,12 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
wdev->conn->params.ssid_len = wdev->ssid_len;
/* see if we have the bss already */
- bss = cfg80211_get_conn_bss(wdev);
+ bss = cfg80211_get_bss(wdev->wiphy, wdev->conn->params.channel,
+ wdev->conn->params.bssid,
+ wdev->conn->params.ssid,
+ wdev->conn->params.ssid_len,
+ wdev->conn_bss_type,
+ IEEE80211_PRIVACY(wdev->conn->params.privacy));
if (prev_bssid) {
memcpy(wdev->conn->prev_bssid, prev_bssid, ETH_ALEN);
@@ -578,6 +589,7 @@ static int cfg80211_sme_connect(struct wireless_dev *wdev,
if (bss) {
enum nl80211_timeout_reason treason;
+ cfg80211_step_auth_next(wdev->conn, bss);
err = cfg80211_conn_do_work(wdev, &treason);
cfg80211_put_bss(wdev->wiphy, bss);
} else {
@@ -1244,6 +1256,13 @@ int cfg80211_connect(struct cfg80211_registered_device *rdev,
} else {
if (WARN_ON(connkeys))
return -EINVAL;
+
+ /* connect can point to wdev->wext.connect which
+ * can hold key data from a previous connection
+ */
+ connect->key = NULL;
+ connect->key_len = 0;
+ connect->key_idx = 0;
}
wdev->connect_keys = connkeys;
diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
index 3a17182..07f6206 100644
--- a/net/x25/af_x25.c
+++ b/net/x25/af_x25.c
@@ -482,6 +482,12 @@ static int x25_listen(struct socket *sock, int backlog)
int rc = -EOPNOTSUPP;
lock_sock(sk);
+ if (sock->state != SS_UNCONNECTED) {
+ rc = -EINVAL;
+ release_sock(sk);
+ return rc;
+ }
+
if (sk->sk_state != TCP_LISTEN) {
memset(&x25_sk(sk)->dest_addr, 0, X25_ADDR_LEN);
sk->sk_max_ack_backlog = backlog;
diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index f01ef6b..65f918d 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -152,10 +152,11 @@ static int xdp_umem_account_pages(struct xdp_umem *umem)
static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
{
- u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom;
bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
- u64 npgs, addr = mr->addr, size = mr->len;
- unsigned int chunks, chunks_rem;
+ u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
+ u64 addr = mr->addr, size = mr->len;
+ u32 chunks_rem, npgs_rem;
+ u64 chunks, npgs;
int err;
if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
@@ -190,8 +191,8 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
if (npgs > U32_MAX)
return -EINVAL;
- chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem);
- if (chunks == 0)
+ chunks = div_u64_rem(size, chunk_size, &chunks_rem);
+ if (!chunks || chunks > U32_MAX)
return -EINVAL;
if (!unaligned_chunks && chunks_rem)
@@ -204,7 +205,7 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
umem->headroom = headroom;
umem->chunk_size = chunk_size;
umem->chunks = chunks;
- umem->npgs = (u32)npgs;
+ umem->npgs = npgs;
umem->pgs = NULL;
umem->user = NULL;
umem->flags = mr->flags;
diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
index a0f62fa..8cbf45a 100644
--- a/net/xfrm/xfrm_compat.c
+++ b/net/xfrm/xfrm_compat.c
@@ -5,6 +5,7 @@
* Based on code and translator idea by: Florian Westphal <fw@strlen.de>
*/
#include <linux/compat.h>
+#include <linux/nospec.h>
#include <linux/xfrm.h>
#include <net/xfrm.h>
@@ -302,7 +303,7 @@ static int xfrm_xlate64(struct sk_buff *dst, const struct nlmsghdr *nlh_src)
nla_for_each_attr(nla, attrs, len, remaining) {
int err;
- switch (type) {
+ switch (nlh_src->nlmsg_type) {
case XFRM_MSG_NEWSPDINFO:
err = xfrm_nla_cpy(dst, nla, nla_len(nla));
break;
@@ -437,6 +438,7 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
NL_SET_ERR_MSG(extack, "Bad attribute");
return -EOPNOTSUPP;
}
+ type = array_index_nospec(type, XFRMA_MAX + 1);
if (nla_len(nla) < compat_policy[type].len) {
NL_SET_ERR_MSG(extack, "Attribute bad length");
return -EOPNOTSUPP;
diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
index 3d8668d..7c5958a 100644
--- a/net/xfrm/xfrm_input.c
+++ b/net/xfrm/xfrm_input.c
@@ -278,8 +278,7 @@ static int xfrm6_remove_tunnel_encap(struct xfrm_state *x, struct sk_buff *skb)
goto out;
if (x->props.flags & XFRM_STATE_DECAP_DSCP)
- ipv6_copy_dscp(ipv6_get_dsfield(ipv6_hdr(skb)),
- ipipv6_hdr(skb));
+ ipv6_copy_dscp(XFRM_MODE_SKB_CB(skb)->tos, ipipv6_hdr(skb));
if (!(x->props.flags & XFRM_STATE_NOECN))
ipip6_ecn_decapsulate(skb);
diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
index 1e8b26e..694eec6 100644
--- a/net/xfrm/xfrm_interface.c
+++ b/net/xfrm/xfrm_interface.c
@@ -207,6 +207,52 @@ static void xfrmi_scrub_packet(struct sk_buff *skb, bool xnet)
skb->mark = 0;
}
+static int xfrmi_input(struct sk_buff *skb, int nexthdr, __be32 spi,
+ int encap_type, unsigned short family)
+{
+ struct sec_path *sp;
+
+ sp = skb_sec_path(skb);
+ if (sp && (sp->len || sp->olen) &&
+ !xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family))
+ goto discard;
+
+ XFRM_SPI_SKB_CB(skb)->family = family;
+ if (family == AF_INET) {
+ XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct iphdr, daddr);
+ XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = NULL;
+ } else {
+ XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct ipv6hdr, daddr);
+ XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = NULL;
+ }
+
+ return xfrm_input(skb, nexthdr, spi, encap_type);
+discard:
+ kfree_skb(skb);
+ return 0;
+}
+
+static int xfrmi4_rcv(struct sk_buff *skb)
+{
+ return xfrmi_input(skb, ip_hdr(skb)->protocol, 0, 0, AF_INET);
+}
+
+static int xfrmi6_rcv(struct sk_buff *skb)
+{
+ return xfrmi_input(skb, skb_network_header(skb)[IP6CB(skb)->nhoff],
+ 0, 0, AF_INET6);
+}
+
+static int xfrmi4_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+{
+ return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET);
+}
+
+static int xfrmi6_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+{
+ return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET6);
+}
+
static int xfrmi_rcv_cb(struct sk_buff *skb, int err)
{
const struct xfrm_mode *inner_mode;
@@ -774,8 +820,8 @@ static struct pernet_operations xfrmi_net_ops = {
};
static struct xfrm6_protocol xfrmi_esp6_protocol __read_mostly = {
- .handler = xfrm6_rcv,
- .input_handler = xfrm_input,
+ .handler = xfrmi6_rcv,
+ .input_handler = xfrmi6_input,
.cb_handler = xfrmi_rcv_cb,
.err_handler = xfrmi6_err,
.priority = 10,
@@ -825,8 +871,8 @@ static struct xfrm6_tunnel xfrmi_ip6ip_handler __read_mostly = {
#endif
static struct xfrm4_protocol xfrmi_esp4_protocol __read_mostly = {
- .handler = xfrm4_rcv,
- .input_handler = xfrm_input,
+ .handler = xfrmi4_rcv,
+ .input_handler = xfrmi4_input,
.cb_handler = xfrmi_rcv_cb,
.err_handler = xfrmi4_err,
.priority = 10,
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index ba58b96..0540e9f 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -3669,6 +3669,9 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
goto reject;
}
+ if (if_id)
+ secpath_reset(skb);
+
xfrm_pols_put(pols, npols);
return 1;
}
diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
index 15132b0..60f3ea5 100644
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -2643,9 +2643,6 @@ int __xfrm_init_state(struct xfrm_state *x, bool init_replay, bool offload)
if (inner_mode == NULL)
goto error;
- if (!(inner_mode->flags & XFRM_MODE_FLAG_TUNNEL))
- goto error;
-
x->inner_mode = *inner_mode;
if (x->props.family == AF_INET)
diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
index 5fba827..eb0952d 100644
--- a/net/xfrm/xfrm_user.c
+++ b/net/xfrm/xfrm_user.c
@@ -906,7 +906,9 @@ static int copy_to_user_aead(struct xfrm_algo_aead *aead, struct sk_buff *skb)
return -EMSGSIZE;
ap = nla_data(nla);
- memcpy(ap, aead, sizeof(*aead));
+ strscpy_pad(ap->alg_name, aead->alg_name, sizeof(ap->alg_name));
+ ap->alg_key_len = aead->alg_key_len;
+ ap->alg_icv_len = aead->alg_icv_len;
if (redact_secret && aead->alg_key_len)
memset(ap->alg_key, 0, (aead->alg_key_len + 7) / 8);
@@ -926,7 +928,8 @@ static int copy_to_user_ealg(struct xfrm_algo *ealg, struct sk_buff *skb)
return -EMSGSIZE;
ap = nla_data(nla);
- memcpy(ap, ealg, sizeof(*ealg));
+ strscpy_pad(ap->alg_name, ealg->alg_name, sizeof(ap->alg_name));
+ ap->alg_key_len = ealg->alg_key_len;
if (redact_secret && ealg->alg_key_len)
memset(ap->alg_key, 0, (ealg->alg_key_len + 7) / 8);
@@ -937,6 +940,40 @@ static int copy_to_user_ealg(struct xfrm_algo *ealg, struct sk_buff *skb)
return 0;
}
+static int copy_to_user_calg(struct xfrm_algo *calg, struct sk_buff *skb)
+{
+ struct nlattr *nla = nla_reserve(skb, XFRMA_ALG_COMP, sizeof(*calg));
+ struct xfrm_algo *ap;
+
+ if (!nla)
+ return -EMSGSIZE;
+
+ ap = nla_data(nla);
+ strscpy_pad(ap->alg_name, calg->alg_name, sizeof(ap->alg_name));
+ ap->alg_key_len = 0;
+
+ return 0;
+}
+
+static int copy_to_user_encap(struct xfrm_encap_tmpl *ep, struct sk_buff *skb)
+{
+ struct nlattr *nla = nla_reserve(skb, XFRMA_ENCAP, sizeof(*ep));
+ struct xfrm_encap_tmpl *uep;
+
+ if (!nla)
+ return -EMSGSIZE;
+
+ uep = nla_data(nla);
+ memset(uep, 0, sizeof(*uep));
+
+ uep->encap_type = ep->encap_type;
+ uep->encap_sport = ep->encap_sport;
+ uep->encap_dport = ep->encap_dport;
+ uep->encap_oa = ep->encap_oa;
+
+ return 0;
+}
+
static int xfrm_smark_put(struct sk_buff *skb, struct xfrm_mark *m)
{
int ret = 0;
@@ -992,12 +1029,12 @@ static int copy_to_user_state_extra(struct xfrm_state *x,
goto out;
}
if (x->calg) {
- ret = nla_put(skb, XFRMA_ALG_COMP, sizeof(*(x->calg)), x->calg);
+ ret = copy_to_user_calg(x->calg, skb);
if (ret)
goto out;
}
if (x->encap) {
- ret = nla_put(skb, XFRMA_ENCAP, sizeof(*x->encap), x->encap);
+ ret = copy_to_user_encap(x->encap, skb);
if (ret)
goto out;
}
diff --git a/scripts/Makefile b/scripts/Makefile
index 9adb6d2..e2a2398 100644
--- a/scripts/Makefile
+++ b/scripts/Makefile
@@ -3,8 +3,8 @@
# scripts contains sources for various helper programs used throughout
# the kernel for the build process.
-CRYPTO_LIBS = $(shell pkg-config --libs libcrypto 2> /dev/null || echo -lcrypto)
-CRYPTO_CFLAGS = $(shell pkg-config --cflags libcrypto 2> /dev/null)
+CRYPTO_LIBS = $(shell $(HOSTPKG_CONFIG) --libs libcrypto 2> /dev/null || echo -lcrypto)
+CRYPTO_CFLAGS = $(shell $(HOSTPKG_CONFIG) --cflags libcrypto 2> /dev/null)
hostprogs-always-$(CONFIG_BUILD_BIN2C) += bin2c
hostprogs-always-$(CONFIG_KALLSYMS) += kallsyms
diff --git a/scripts/checkkconfigsymbols.py b/scripts/checkkconfigsymbols.py
index 217d21a..36c920e 100755
--- a/scripts/checkkconfigsymbols.py
+++ b/scripts/checkkconfigsymbols.py
@@ -115,7 +115,7 @@
return args
-def main():
+def print_undefined_symbols():
"""Main function of this module."""
args = parse_options()
@@ -467,5 +467,16 @@
return defined, references
+def main():
+ try:
+ print_undefined_symbols()
+ except BrokenPipeError:
+ # Python flushes standard streams on exit; redirect remaining output
+ # to devnull to avoid another BrokenPipeError at shutdown
+ devnull = os.open(os.devnull, os.O_WRONLY)
+ os.dup2(devnull, sys.stdout.fileno())
+ sys.exit(1) # Python exits with error code 1 on EPIPE
+
+
if __name__ == "__main__":
main()
diff --git a/scripts/clang-tools/run-clang-tools.py b/scripts/clang-tools/run-clang-tools.py
index f754415a..f426991 100755
--- a/scripts/clang-tools/run-clang-tools.py
+++ b/scripts/clang-tools/run-clang-tools.py
@@ -60,14 +60,21 @@
def main():
- args = parse_arguments()
+ try:
+ args = parse_arguments()
- lock = multiprocessing.Lock()
- pool = multiprocessing.Pool(initializer=init, initargs=(lock, args))
- # Read JSON data into the datastore variable
- with open(args.path, "r") as f:
- datastore = json.load(f)
- pool.map(run_analysis, datastore)
+ lock = multiprocessing.Lock()
+ pool = multiprocessing.Pool(initializer=init, initargs=(lock, args))
+ # Read JSON data into the datastore variable
+ with open(args.path, "r") as f:
+ datastore = json.load(f)
+ pool.map(run_analysis, datastore)
+ except BrokenPipeError:
+ # Python flushes standard streams on exit; redirect remaining output
+ # to devnull to avoid another BrokenPipeError at shutdown
+ devnull = os.open(os.devnull, os.O_WRONLY)
+ os.dup2(devnull, sys.stdout.fileno())
+ sys.exit(1) # Python exits with error code 1 on EPIPE
if __name__ == "__main__":
diff --git a/scripts/diffconfig b/scripts/diffconfig
index d5da5fa..43f0f3d 100755
--- a/scripts/diffconfig
+++ b/scripts/diffconfig
@@ -65,7 +65,7 @@
else:
print(" %s %s -> %s" % (config, value, new_value))
-def main():
+def show_diff():
global merge_style
# parse command line args
@@ -129,4 +129,16 @@
for config in new:
print_config("+", config, None, b[config])
-main()
+def main():
+ try:
+ show_diff()
+ except BrokenPipeError:
+ # Python flushes standard streams on exit; redirect remaining output
+ # to devnull to avoid another BrokenPipeError at shutdown
+ devnull = os.open(os.devnull, os.O_WRONLY)
+ os.dup2(devnull, sys.stdout.fileno())
+ sys.exit(1) # Python exits with error code 1 on EPIPE
+
+
+if __name__ == '__main__':
+ main()
diff --git a/scripts/dtc/Makefile b/scripts/dtc/Makefile
index 1cba78e..2d5f274 100644
--- a/scripts/dtc/Makefile
+++ b/scripts/dtc/Makefile
@@ -18,7 +18,7 @@
# Source files need to get at the userspace version of libfdt_env.h to compile
HOST_EXTRACFLAGS += -I $(srctree)/$(src)/libfdt
-ifeq ($(shell pkg-config --exists yaml-0.1 2>/dev/null && echo yes),)
+ifeq ($(shell $(HOSTPKG_CONFIG) --exists yaml-0.1 2>/dev/null && echo yes),)
ifneq ($(CHECK_DT_BINDING)$(CHECK_DTBS),)
$(error dtc needs libyaml for DT schema validation support. \
Install the necessary libyaml development package.)
@@ -27,9 +27,9 @@
else
dtc-objs += yamltree.o
# To include <yaml.h> installed in a non-default path
-HOSTCFLAGS_yamltree.o := $(shell pkg-config --cflags yaml-0.1)
+HOSTCFLAGS_yamltree.o := $(shell $(HOSTPKG_CONFIG) --cflags yaml-0.1)
# To link libyaml installed in a non-default path
-HOSTLDLIBS_dtc := $(shell pkg-config --libs yaml-0.1)
+HOSTLDLIBS_dtc := $(shell $(HOSTPKG_CONFIG) --libs yaml-0.1)
endif
# Generated files need one more search path to include headers in source tree
diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
index 4a828bc..797c8ba 100644
--- a/scripts/kconfig/confdata.c
+++ b/scripts/kconfig/confdata.c
@@ -1124,10 +1124,12 @@ static void (*conf_changed_callback)(void);
void conf_set_changed(bool val)
{
- if (conf_changed_callback && conf_changed != val)
- conf_changed_callback();
+ bool changed = conf_changed != val;
conf_changed = val;
+
+ if (conf_changed_callback && changed)
+ conf_changed_callback();
}
bool conf_get_changed(void)
diff --git a/scripts/kconfig/gconf-cfg.sh b/scripts/kconfig/gconf-cfg.sh
index 480ecd8..cbd90c2 100755
--- a/scripts/kconfig/gconf-cfg.sh
+++ b/scripts/kconfig/gconf-cfg.sh
@@ -3,14 +3,14 @@
PKG="gtk+-2.0 gmodule-2.0 libglade-2.0"
-if [ -z "$(command -v pkg-config)" ]; then
+if [ -z "$(command -v ${HOSTPKG_CONFIG})" ]; then
echo >&2 "*"
- echo >&2 "* 'make gconfig' requires 'pkg-config'. Please install it."
+ echo >&2 "* 'make gconfig' requires '${HOSTPKG_CONFIG}'. Please install it."
echo >&2 "*"
exit 1
fi
-if ! pkg-config --exists $PKG; then
+if ! ${HOSTPKG_CONFIG} --exists $PKG; then
echo >&2 "*"
echo >&2 "* Unable to find the GTK+ installation. Please make sure that"
echo >&2 "* the GTK+ 2.0 development package is correctly installed."
@@ -19,12 +19,12 @@
exit 1
fi
-if ! pkg-config --atleast-version=2.0.0 gtk+-2.0; then
+if ! ${HOSTPKG_CONFIG} --atleast-version=2.0.0 gtk+-2.0; then
echo >&2 "*"
echo >&2 "* GTK+ is present but version >= 2.0.0 is required."
echo >&2 "*"
exit 1
fi
-echo cflags=\"$(pkg-config --cflags $PKG)\"
-echo libs=\"$(pkg-config --libs $PKG)\"
+echo cflags=\"$(${HOSTPKG_CONFIG} --cflags $PKG)\"
+echo libs=\"$(${HOSTPKG_CONFIG} --libs $PKG)\"
diff --git a/scripts/kconfig/mconf-cfg.sh b/scripts/kconfig/mconf-cfg.sh
index b520e40..025b565 100755
--- a/scripts/kconfig/mconf-cfg.sh
+++ b/scripts/kconfig/mconf-cfg.sh
@@ -4,16 +4,16 @@
PKG="ncursesw"
PKG2="ncurses"
-if [ -n "$(command -v pkg-config)" ]; then
- if pkg-config --exists $PKG; then
- echo cflags=\"$(pkg-config --cflags $PKG)\"
- echo libs=\"$(pkg-config --libs $PKG)\"
+if [ -n "$(command -v ${HOSTPKG_CONFIG})" ]; then
+ if ${HOSTPKG_CONFIG} --exists $PKG; then
+ echo cflags=\"$(${HOSTPKG_CONFIG} --cflags $PKG)\"
+ echo libs=\"$(${HOSTPKG_CONFIG} --libs $PKG)\"
exit 0
fi
- if pkg-config --exists $PKG2; then
- echo cflags=\"$(pkg-config --cflags $PKG2)\"
- echo libs=\"$(pkg-config --libs $PKG2)\"
+ if ${HOSTPKG_CONFIG} --exists $PKG2; then
+ echo cflags=\"$(${HOSTPKG_CONFIG} --cflags $PKG2)\"
+ echo libs=\"$(${HOSTPKG_CONFIG} --libs $PKG2)\"
exit 0
fi
fi
@@ -46,7 +46,7 @@
echo >&2 "* Install ncurses (ncurses-devel or libncurses-dev"
echo >&2 "* depending on your distribution)."
echo >&2 "*"
-echo >&2 "* You may also need to install pkg-config to find the"
+echo >&2 "* You may also need to install ${HOSTPKG_CONFIG} to find the"
echo >&2 "* ncurses installed in a non-default location."
echo >&2 "*"
exit 1
diff --git a/scripts/kconfig/nconf-cfg.sh b/scripts/kconfig/nconf-cfg.sh
index c212255..3a10bac2 100755
--- a/scripts/kconfig/nconf-cfg.sh
+++ b/scripts/kconfig/nconf-cfg.sh
@@ -4,16 +4,16 @@
PKG="ncursesw menuw panelw"
PKG2="ncurses menu panel"
-if [ -n "$(command -v pkg-config)" ]; then
- if pkg-config --exists $PKG; then
- echo cflags=\"$(pkg-config --cflags $PKG)\"
- echo libs=\"$(pkg-config --libs $PKG)\"
+if [ -n "$(command -v ${HOSTPKG_CONFIG})" ]; then
+ if ${HOSTPKG_CONFIG} --exists $PKG; then
+ echo cflags=\"$(${HOSTPKG_CONFIG} --cflags $PKG)\"
+ echo libs=\"$(${HOSTPKG_CONFIG} --libs $PKG)\"
exit 0
fi
- if pkg-config --exists $PKG2; then
- echo cflags=\"$(pkg-config --cflags $PKG2)\"
- echo libs=\"$(pkg-config --libs $PKG2)\"
+ if ${HOSTPKG_CONFIG} --exists $PKG2; then
+ echo cflags=\"$(${HOSTPKG_CONFIG} --cflags $PKG2)\"
+ echo libs=\"$(${HOSTPKG_CONFIG} --libs $PKG2)\"
exit 0
fi
fi
@@ -44,7 +44,7 @@
echo >&2 "* Install ncurses (ncurses-devel or libncurses-dev"
echo >&2 "* depending on your distribution)."
echo >&2 "*"
-echo >&2 "* You may also need to install pkg-config to find the"
+echo >&2 "* You may also need to install ${HOSTPKG_CONFIG} to find the"
echo >&2 "* ncurses installed in a non-default location."
echo >&2 "*"
exit 1
diff --git a/scripts/kconfig/qconf-cfg.sh b/scripts/kconfig/qconf-cfg.sh
index fa564cd..9b695e5 100755
--- a/scripts/kconfig/qconf-cfg.sh
+++ b/scripts/kconfig/qconf-cfg.sh
@@ -3,22 +3,22 @@
PKG="Qt5Core Qt5Gui Qt5Widgets"
-if [ -z "$(command -v pkg-config)" ]; then
+if [ -z "$(command -v ${HOSTPKG_CONFIG})" ]; then
echo >&2 "*"
- echo >&2 "* 'make xconfig' requires 'pkg-config'. Please install it."
+ echo >&2 "* 'make xconfig' requires '${HOSTPKG_CONFIG}'. Please install it."
echo >&2 "*"
exit 1
fi
-if pkg-config --exists $PKG; then
- echo cflags=\"-std=c++11 -fPIC $(pkg-config --cflags $PKG)\"
- echo libs=\"$(pkg-config --libs $PKG)\"
- echo moc=\"$(pkg-config --variable=host_bins Qt5Core)/moc\"
+if ${HOSTPKG_CONFIG} --exists $PKG; then
+ echo cflags=\"-std=c++11 -fPIC $(${HOSTPKG_CONFIG} --cflags $PKG)\"
+ echo libs=\"$(${HOSTPKG_CONFIG} --libs $PKG)\"
+ echo moc=\"$(${HOSTPKG_CONFIG} --variable=host_bins Qt5Core)/moc\"
exit 0
fi
echo >&2 "*"
-echo >&2 "* Could not find Qt5 via pkg-config."
+echo >&2 "* Could not find Qt5 via ${HOSTPKG_CONFIG}."
echo >&2 "* Please install Qt5 and make sure it's in PKG_CONFIG_PATH"
echo >&2 "*"
exit 1
diff --git a/scripts/package/mkdebian b/scripts/package/mkdebian
index 60a2a63..32d528a 100755
--- a/scripts/package/mkdebian
+++ b/scripts/package/mkdebian
@@ -236,7 +236,7 @@
KBUILD_BUILD_VERSION=${revision} -f \$(srctree)/Makefile intdeb-pkg
clean:
- rm -rf debian/*tmp debian/files
+ rm -rf debian/files debian/linux-*
\$(MAKE) clean
binary: binary-arch
diff --git a/scripts/pahole-flags.sh b/scripts/pahole-flags.sh
index 7acee32..d38fa6d 100755
--- a/scripts/pahole-flags.sh
+++ b/scripts/pahole-flags.sh
@@ -7,7 +7,7 @@
exit 0
fi
-pahole_ver=$(${PAHOLE} --version | sed -E 's/v([0-9]+)\.([0-9]+)/\1\2/')
+pahole_ver=$($(dirname $0)/pahole-version.sh ${PAHOLE})
if [ "${pahole_ver}" -ge "118" ] && [ "${pahole_ver}" -le "121" ]; then
# pahole 1.18 through 1.21 can't handle zero-sized per-CPU vars
diff --git a/scripts/pahole-version.sh b/scripts/pahole-version.sh
new file mode 100755
index 0000000..f8a32ab
--- /dev/null
+++ b/scripts/pahole-version.sh
@@ -0,0 +1,13 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Usage: $ ./pahole-version.sh pahole
+#
+# Prints pahole's version in a 3-digit form, such as 119 for v1.19.
+
+if [ ! -x "$(command -v "$@")" ]; then
+ echo 0
+ exit 1
+fi
+
+"$@" --version | sed -E 's/v([0-9]+)\.([0-9]+)/\1\2/'
diff --git a/scripts/tags.sh b/scripts/tags.sh
index db8ba41..91413d4 100755
--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -95,10 +95,13 @@
all_compiled_sources()
{
- realpath -es $([ -z "$KBUILD_ABS_SRCTREE" ] && echo --relative-to=.) \
- include/generated/autoconf.h $(find $ignore -name "*.cmd" -exec \
- grep -Poh '(?(?=^source_.* \K).*|(?=^ \K\S).*(?= \\))' {} \+ |
- awk '!a[$0]++') | sort -u
+ {
+ echo include/generated/autoconf.h
+ find $ignore -name "*.cmd" -exec \
+ sed -n -E 's/^source_.* (.*)/\1/p; s/^ (\S.*) \\/\1/p' {} \+ |
+ awk '!a[$0]++'
+ } | xargs realpath -es $([ -z "$KBUILD_ABS_SRCTREE" ] && echo --relative-to=.) |
+ sort -u
}
all_target_sources()
diff --git a/scripts/tracing/ftrace-bisect.sh b/scripts/tracing/ftrace-bisect.sh
index 9267011..bb4f592 100755
--- a/scripts/tracing/ftrace-bisect.sh
+++ b/scripts/tracing/ftrace-bisect.sh
@@ -12,7 +12,7 @@
# (note, if this is a problem with function_graph tracing, then simply
# replace "function" with "function_graph" in the following steps).
#
-# # cd /sys/kernel/debug/tracing
+# # cd /sys/kernel/tracing
# # echo schedule > set_ftrace_filter
# # echo function > current_tracer
#
@@ -20,22 +20,40 @@
#
# # echo nop > current_tracer
#
-# # cat available_filter_functions > ~/full-file
+# Starting with v5.1 this can be done with numbers, making it much faster:
+#
+# The old (slow) way, for kernels before v5.1.
+#
+# [old-way] # cat available_filter_functions > ~/full-file
+#
+# [old-way] *** Note *** this process will take several minutes to update the
+# [old-way] filters. Setting multiple functions is an O(n^2) operation, and we
+# [old-way] are dealing with thousands of functions. So go have coffee, talk
+# [old-way] with your coworkers, read facebook. And eventually, this operation
+# [old-way] will end.
+#
+# The new way (using numbers) is an O(n) operation, and usually takes less than a second.
+#
+# seq `wc -l available_filter_functions | cut -d' ' -f1` > ~/full-file
+#
+# This will create a sequence of numbers that match the functions in
+# available_filter_functions, and when a number is echoed into the
+# set_ftrace_filter file, the corresponding function is enabled in
+# O(1) time, making enabling all functions O(n), where n is the number
+# of functions to enable.
+#
+# For either the new or old way, the rest of the operations remain the same.
+#
# # ftrace-bisect ~/full-file ~/test-file ~/non-test-file
# # cat ~/test-file > set_ftrace_filter
#
-# *** Note *** this will take several minutes. Setting multiple functions is
-# an O(n^2) operation, and we are dealing with thousands of functions. So go
-# have coffee, talk with your coworkers, read facebook. And eventually, this
-# operation will end.
-#
# # echo function > current_tracer
#
# If it crashes, we know that ~/test-file has a bad function.
#
# Reboot back to test kernel.
#
-# # cd /sys/kernel/debug/tracing
+# # cd /sys/kernel/tracing
# # mv ~/test-file ~/full-file
#
# If it didn't crash.
diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
index 4658654..e9a3611 100644
--- a/security/integrity/ima/ima_main.c
+++ b/security/integrity/ima/ima_main.c
@@ -395,7 +395,9 @@ static int process_measurement(struct file *file, const struct cred *cred,
/**
* ima_file_mmap - based on policy, collect/store measurement.
* @file: pointer to the file to be measured (May be NULL)
- * @prot: contains the protection that will be applied by the kernel.
+ * @reqprot: protection requested by the application
+ * @prot: protection that will be applied by the kernel
+ * @flags: operational flags
*
* Measure files being mmapped executable based on the ima_must_measure()
* policy decision.
@@ -403,7 +405,8 @@ static int process_measurement(struct file *file, const struct cred *cred,
* On success return 0. On integrity appraisal error, assuming the file
* is in policy and IMA-appraisal is in enforcing mode, return -EACCES.
*/
-int ima_file_mmap(struct file *file, unsigned long prot)
+int ima_file_mmap(struct file *file, unsigned long reqprot,
+ unsigned long prot, unsigned long flags)
{
u32 secid;
diff --git a/security/keys/request_key.c b/security/keys/request_key.c
index 2da4404..07a0ef2 100644
--- a/security/keys/request_key.c
+++ b/security/keys/request_key.c
@@ -38,9 +38,12 @@ static void cache_requested_key(struct key *key)
#ifdef CONFIG_KEYS_REQUEST_CACHE
struct task_struct *t = current;
- key_put(t->cached_requested_key);
- t->cached_requested_key = key_get(key);
- set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
+ /* Do not cache key if it is a kernel thread */
+ if (!(t->flags & PF_KTHREAD)) {
+ key_put(t->cached_requested_key);
+ t->cached_requested_key = key_get(key);
+ set_tsk_thread_flag(t, TIF_NOTIFY_RESUME);
+ }
#endif
}
diff --git a/security/security.c b/security/security.c
index 778ba03f..75a2daf 100644
--- a/security/security.c
+++ b/security/security.c
@@ -1597,12 +1597,13 @@ static inline unsigned long mmap_prot(struct file *file, unsigned long prot)
int security_mmap_file(struct file *file, unsigned long prot,
unsigned long flags)
{
+ unsigned long prot_adj = mmap_prot(file, prot);
int ret;
- ret = call_int_hook(mmap_file, 0, file, prot,
- mmap_prot(file, prot), flags);
+
+ ret = call_int_hook(mmap_file, 0, file, prot, prot_adj, flags);
if (ret)
return ret;
- return ima_file_mmap(file, prot);
+ return ima_file_mmap(file, prot, prot_adj, flags);
}
int security_mmap_addr(unsigned long addr)
diff --git a/security/tomoyo/Makefile b/security/tomoyo/Makefile
index cca5a30..221eaad 100644
--- a/security/tomoyo/Makefile
+++ b/security/tomoyo/Makefile
@@ -10,7 +10,7 @@
quiet_cmd_policy = POLICY $@
cmd_policy = ($(call do_policy,profile); $(call do_policy,exception_policy); $(call do_policy,domain_policy); $(call do_policy,manager); $(call do_policy,stat)) >$@
-$(obj)/builtin-policy.h: $(wildcard $(obj)/policy/*.conf $(src)/policy/*.conf.default) FORCE
+$(obj)/builtin-policy.h: $(wildcard $(obj)/policy/*.conf $(srctree)/$(src)/policy/*.conf.default) FORCE
$(call if_changed,policy)
$(obj)/common.o: $(obj)/builtin-policy.h
diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
index 491064f5..8947c98 100644
--- a/sound/core/pcm_lib.c
+++ b/sound/core/pcm_lib.c
@@ -2137,6 +2137,8 @@ int pcm_lib_apply_appl_ptr(struct snd_pcm_substream *substream,
ret = substream->ops->ack(substream);
if (ret < 0) {
runtime->control->appl_ptr = old_appl_ptr;
+ if (ret == -EPIPE)
+ __snd_pcm_xrun(substream);
return ret;
}
}
diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
index 4208fa8..513eadc 100644
--- a/sound/hda/intel-dsp-config.c
+++ b/sound/hda/intel-dsp-config.c
@@ -376,6 +376,15 @@ static const struct config_entry config_table[] = {
},
#endif
+/* Meteor Lake */
+#if IS_ENABLED(CONFIG_SND_SOC_SOF_METEORLAKE)
+ /* Meteorlake-P */
+ {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ .device = 0x7e28,
+ },
+#endif
+
};
static const struct config_entry *snd_intel_dsp_find_config
diff --git a/sound/pci/asihpi/hpi6205.c b/sound/pci/asihpi/hpi6205.c
index 3d6914c..4cdaeef 100644
--- a/sound/pci/asihpi/hpi6205.c
+++ b/sound/pci/asihpi/hpi6205.c
@@ -430,7 +430,7 @@ void HPI_6205(struct hpi_message *phm, struct hpi_response *phr)
pao = hpi_find_adapter(phm->adapter_index);
} else {
/* subsys messages don't address an adapter */
- _HPI_6205(NULL, phm, phr);
+ phr->error = HPI_ERROR_INVALID_OBJ_INDEX;
return;
}
diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
index 7af2515..8e35009 100644
--- a/sound/pci/hda/hda_bind.c
+++ b/sound/pci/hda/hda_bind.c
@@ -144,6 +144,7 @@ static int hda_codec_driver_probe(struct device *dev)
error:
snd_hda_codec_cleanup_for_unbind(codec);
+ codec->preset = NULL;
return err;
}
@@ -166,6 +167,7 @@ static int hda_codec_driver_remove(struct device *dev)
if (codec->patch_ops.free)
codec->patch_ops.free(codec);
snd_hda_codec_cleanup_for_unbind(codec);
+ codec->preset = NULL;
module_put(dev->driver->owner);
return 0;
}
diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
index f552785..19be60b 100644
--- a/sound/pci/hda/hda_codec.c
+++ b/sound/pci/hda/hda_codec.c
@@ -791,7 +791,6 @@ void snd_hda_codec_cleanup_for_unbind(struct hda_codec *codec)
snd_array_free(&codec->cvt_setups);
snd_array_free(&codec->spdif_out);
snd_array_free(&codec->verbs);
- codec->preset = NULL;
codec->follower_dig_outs = NULL;
codec->spdif_status_reset = 0;
snd_array_free(&codec->mixers);
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index c8042eb..5fce1ca 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -331,14 +331,15 @@ enum {
#define needs_eld_notify_link(chip) false
#endif
-#define CONTROLLER_IN_GPU(pci) (((pci)->device == 0x0a0c) || \
+#define CONTROLLER_IN_GPU(pci) (((pci)->vendor == 0x8086) && \
+ (((pci)->device == 0x0a0c) || \
((pci)->device == 0x0c0c) || \
((pci)->device == 0x0d0c) || \
((pci)->device == 0x160c) || \
((pci)->device == 0x490d) || \
((pci)->device == 0x4f90) || \
((pci)->device == 0x4f91) || \
- ((pci)->device == 0x4f92))
+ ((pci)->device == 0x4f92)))
#define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98)
diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
index 801dd8d..2646663 100644
--- a/sound/pci/hda/patch_ca0132.c
+++ b/sound/pci/hda/patch_ca0132.c
@@ -2455,7 +2455,7 @@ static int dspio_set_uint_param(struct hda_codec *codec, int mod_id,
static int dspio_alloc_dma_chan(struct hda_codec *codec, unsigned int *dma_chan)
{
int status = 0;
- unsigned int size = sizeof(dma_chan);
+ unsigned int size = sizeof(*dma_chan);
codec_dbg(codec, " dspio_alloc_dma_chan() -- begin\n");
status = dspio_scp(codec, MASTERCONTROL, 0x20,
@@ -4231,8 +4231,10 @@ static int tuning_ctl_set(struct hda_codec *codec, hda_nid_t nid,
for (i = 0; i < TUNING_CTLS_COUNT; i++)
if (nid == ca0132_tuning_ctls[i].nid)
- break;
+ goto found;
+ return -EINVAL;
+found:
snd_hda_power_up(codec);
dspio_set_param(codec, ca0132_tuning_ctls[i].mid, 0x20,
ca0132_tuning_ctls[i].req,
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index 2bc9274..deff4a3 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -980,7 +980,10 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
SND_PCI_QUIRK(0x17aa, 0x3905, "Lenovo G50-30", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC),
- SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_PINCFG_LENOVO_NOTEBOOK),
+ /* NOTE: we'd need to extend the quirk for 17aa:3977 as the same
+ * PCI SSID is used on multiple Lenovo models
+ */
+ SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo G50-70", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
@@ -1003,6 +1006,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
{ .id = CXT_FIXUP_MUTE_LED_GPIO, .name = "mute-led-gpio" },
{ .id = CXT_FIXUP_HP_ZBOOK_MUTE_LED, .name = "hp-zbook-mute-led" },
{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
+ { .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" },
{}
};
@@ -1124,6 +1128,7 @@ static const struct hda_device_id snd_hda_id_conexant[] = {
HDA_CODEC_ENTRY(0x14f11f86, "CX8070", patch_conexant_auto),
HDA_CODEC_ENTRY(0x14f12008, "CX8200", patch_conexant_auto),
HDA_CODEC_ENTRY(0x14f120d0, "CX11970", patch_conexant_auto),
+ HDA_CODEC_ENTRY(0x14f120d1, "SN6180", patch_conexant_auto),
HDA_CODEC_ENTRY(0x14f15045, "CX20549 (Venice)", patch_conexant_auto),
HDA_CODEC_ENTRY(0x14f15047, "CX20551 (Waikiki)", patch_conexant_auto),
HDA_CODEC_ENTRY(0x14f15051, "CX20561 (Hermosa)", patch_conexant_auto),
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index c7321f5..86d07d0 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -826,7 +826,7 @@ static int alc_subsystem_id(struct hda_codec *codec, const hda_nid_t *ports)
alc_setup_gpio(codec, 0x02);
break;
case 7:
- alc_setup_gpio(codec, 0x03);
+ alc_setup_gpio(codec, 0x04);
break;
case 5:
default:
@@ -2618,6 +2618,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
+ SND_PCI_QUIRK(0x1558, 0x3702, "Clevo X370SN[VW]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x65d2, "Clevo PB51R[CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
@@ -2625,6 +2626,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
SND_PCI_QUIRK(0x1558, 0x65e5, "Clevo PC50D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x65f1, "Clevo PC50HS", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x65f5, "Clevo PD50PN[NRT]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ SND_PCI_QUIRK(0x1558, 0x66a2, "Clevo PE60RNE", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
@@ -2645,6 +2647,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
SND_PCI_QUIRK(0x1558, 0x96e1, "Clevo P960[ER][CDFN]-K", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x97e1, "Clevo P970[ER][CDFN]", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1558, 0x97e2, "Clevo P970RC-M", ALC1220_FIXUP_CLEVO_P950),
+ SND_PCI_QUIRK(0x1558, 0xd502, "Clevo PD50SNE", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
@@ -8873,6 +8876,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
+ SND_PCI_QUIRK(0x1025, 0x1534, "Acer Predator PH315-54", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
SND_PCI_QUIRK(0x1028, 0x053c, "Dell Latitude E5430", ALC292_FIXUP_DELL_E7X),
SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS),
@@ -9071,11 +9075,15 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x89aa, "HP EliteBook 630 G9", ALC236_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x89c3, "HP", ALC285_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x89ca, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x89d3, "HP EliteBook 645 G9 (MB 89D2)", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
SND_PCI_QUIRK(0x103c, 0x8a78, "HP Dev One", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x103c, 0x8aa0, "HP ProBook 440 G9 (MB 8A9E)", ALC236_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8aa3, "HP ProBook 450 G9 (MB 8AA1)", ALC236_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8aa8, "HP EliteBook 640 G9 (MB 8AA6)", ALC236_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x8aab, "HP EliteBook 650 G9 (MB 8AA9)", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8b5d, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8b5e, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8bf0, "HP", ALC236_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
@@ -9160,6 +9168,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x144d, 0xc812, "Samsung Notebook Pen S (NT950SBE-X58)", ALC298_FIXUP_SAMSUNG_AMP),
SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xca03, "Samsung Galaxy Book2 Pro 360 (NP930QED)", ALC298_FIXUP_SAMSUNG_AMP),
+ SND_PCI_QUIRK(0x144d, 0xc868, "Samsung Galaxy Book2 Pro (NP930XED)", ALC298_FIXUP_SAMSUNG_AMP),
SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
@@ -9194,6 +9204,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1558, 0x5101, "Clevo S510WU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x5157, "Clevo W517GU1", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x51a1, "Clevo NS50MU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0x5630, "Clevo NP50RNJS", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x70a1, "Clevo NB70T[HJK]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x70b3, "Clevo NK70SB", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0x70f2, "Clevo NH79EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
@@ -9228,6 +9239,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1558, 0x971d, "Clevo N970T[CDF]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0xa500, "Clevo NL5[03]RU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0xa600, "Clevo NL50NU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1558, 0xa671, "Clevo NP70SN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1558, 0xb022, "Clevo NH77D[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
@@ -9321,6 +9333,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK(0x17aa, 0x9e56, "Lenovo ZhaoYang CF4620Z", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1849, 0x1233, "ASRock NUC Box 1100", ALC233_FIXUP_NO_AUDIO_JACK),
SND_PCI_QUIRK(0x1849, 0xa233, "Positivo Master C6300", ALC269_FIXUP_HEADSET_MIC),
SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
@@ -9331,6 +9344,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
SND_PCI_QUIRK(0x1c06, 0x2015, "Lemote A190X", ALC269_FIXUP_LEMOTE_A190X),
+ SND_PCI_QUIRK(0x1c6c, 0x1251, "Positivo N14KP6-TG", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1d05, 0x1132, "TongFang PHxTxX1", ALC256_FIXUP_SET_COEF_DEFAULTS),
SND_PCI_QUIRK(0x1d05, 0x1096, "TongFang GMxMRxx", ALC269_FIXUP_NO_SHUTUP),
SND_PCI_QUIRK(0x1d05, 0x1100, "TongFang GKxNRxx", ALC269_FIXUP_NO_SHUTUP),
@@ -11229,6 +11243,7 @@ static const struct snd_pci_quirk alc662_fixup_tbl[] = {
SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ SND_PCI_QUIRK(0x103c, 0x870c, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB),
SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2),
SND_PCI_QUIRK(0x103c, 0x877e, "HP 288 Pro G6", ALC671_FIXUP_HP_HEADSET_MIC2),
diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
index a188901..29abc96 100644
--- a/sound/pci/hda/patch_via.c
+++ b/sound/pci/hda/patch_via.c
@@ -821,6 +821,9 @@ static int add_secret_dac_path(struct hda_codec *codec)
return 0;
nums = snd_hda_get_connections(codec, spec->gen.mixer_nid, conn,
ARRAY_SIZE(conn) - 1);
+ if (nums < 0)
+ return nums;
+
for (i = 0; i < nums; i++) {
if (get_wcaps_type(get_wcaps(codec, conn[i])) == AC_WID_AUD_OUT)
return 0;
diff --git a/sound/pci/ice1712/aureon.c b/sound/pci/ice1712/aureon.c
index 9a30f6d..40a0e00 100644
--- a/sound/pci/ice1712/aureon.c
+++ b/sound/pci/ice1712/aureon.c
@@ -1892,6 +1892,7 @@ static int aureon_add_controls(struct snd_ice1712 *ice)
unsigned char id;
snd_ice1712_save_gpio_status(ice);
id = aureon_cs8415_get(ice, CS8415_ID);
+ snd_ice1712_restore_gpio_status(ice);
if (id != 0x41)
dev_info(ice->card->dev,
"No CS8415 chip. Skipping CS8415 controls.\n");
@@ -1909,7 +1910,6 @@ static int aureon_add_controls(struct snd_ice1712 *ice)
kctl->id.device = ice->pcm->device;
}
}
- snd_ice1712_restore_gpio_status(ice);
}
return 0;
diff --git a/sound/pci/lx6464es/lx_core.c b/sound/pci/lx6464es/lx_core.c
index d3f58a3..b5b0d43 100644
--- a/sound/pci/lx6464es/lx_core.c
+++ b/sound/pci/lx6464es/lx_core.c
@@ -493,12 +493,11 @@ int lx_buffer_ask(struct lx6464es *chip, u32 pipe, int is_capture,
dev_dbg(chip->card->dev,
"CMD_08_ASK_BUFFERS: needed %d, freed %d\n",
*r_needed, *r_freed);
- for (i = 0; i < MAX_STREAM_BUFFER; ++i) {
- for (i = 0; i != chip->rmh.stat_len; ++i)
- dev_dbg(chip->card->dev,
- " stat[%d]: %x, %x\n", i,
- chip->rmh.stat[i],
- chip->rmh.stat[i] & MASK_DATA_SIZE);
+ for (i = 0; i < MAX_STREAM_BUFFER && i < chip->rmh.stat_len;
+ ++i) {
+ dev_dbg(chip->card->dev, " stat[%d]: %x, %x\n", i,
+ chip->rmh.stat[i],
+ chip->rmh.stat[i] & MASK_DATA_SIZE);
}
}
diff --git a/sound/pci/ymfpci/ymfpci.c b/sound/pci/ymfpci/ymfpci.c
index 1e198e4..82d4e0f 100644
--- a/sound/pci/ymfpci/ymfpci.c
+++ b/sound/pci/ymfpci/ymfpci.c
@@ -170,7 +170,7 @@ static int snd_card_ymfpci_probe(struct pci_dev *pci,
return -ENOENT;
}
- err = snd_card_new(&pci->dev, index[dev], id[dev], THIS_MODULE,
+ err = snd_devm_card_new(&pci->dev, index[dev], id[dev], THIS_MODULE,
sizeof(*chip), &card);
if (err < 0)
return err;
diff --git a/sound/pci/ymfpci/ymfpci_main.c b/sound/pci/ymfpci/ymfpci_main.c
index c80114c..b492c32 100644
--- a/sound/pci/ymfpci/ymfpci_main.c
+++ b/sound/pci/ymfpci/ymfpci_main.c
@@ -2165,7 +2165,7 @@ static int snd_ymfpci_memalloc(struct snd_ymfpci *chip)
chip->work_base = ptr;
chip->work_base_addr = ptr_addr;
- snd_BUG_ON(ptr + chip->work_size !=
+ snd_BUG_ON(ptr + PAGE_ALIGN(chip->work_size) !=
chip->work_ptr->area + chip->work_ptr->bytes);
snd_ymfpci_writel(chip, YDSXGR_PLAYCTRLBASE, chip->bank_base_playback_addr);
diff --git a/sound/soc/atmel/mchp-spdifrx.c b/sound/soc/atmel/mchp-spdifrx.c
index 2a62d9a..39a3c2a 100644
--- a/sound/soc/atmel/mchp-spdifrx.c
+++ b/sound/soc/atmel/mchp-spdifrx.c
@@ -217,7 +217,6 @@ struct mchp_spdifrx_ch_stat {
struct mchp_spdifrx_user_data {
unsigned char data[SPDIFRX_UD_BITS / 8];
struct completion done;
- spinlock_t lock; /* protect access to user data */
};
struct mchp_spdifrx_mixer_control {
@@ -231,13 +230,13 @@ struct mchp_spdifrx_mixer_control {
struct mchp_spdifrx_dev {
struct snd_dmaengine_dai_dma_data capture;
struct mchp_spdifrx_mixer_control control;
- spinlock_t blockend_lock; /* protect access to blockend_refcount */
- int blockend_refcount;
+ struct mutex mlock;
struct device *dev;
struct regmap *regmap;
struct clk *pclk;
struct clk *gclk;
unsigned int fmt;
+ unsigned int trigger_enabled;
unsigned int gclk_enabled:1;
};
@@ -275,37 +274,11 @@ static void mchp_spdifrx_channel_user_data_read(struct mchp_spdifrx_dev *dev,
}
}
-/* called from non-atomic context only */
-static void mchp_spdifrx_isr_blockend_en(struct mchp_spdifrx_dev *dev)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&dev->blockend_lock, flags);
- dev->blockend_refcount++;
- /* don't enable BLOCKEND interrupt if it's already enabled */
- if (dev->blockend_refcount == 1)
- regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_BLOCKEND);
- spin_unlock_irqrestore(&dev->blockend_lock, flags);
-}
-
-/* called from atomic/non-atomic context */
-static void mchp_spdifrx_isr_blockend_dis(struct mchp_spdifrx_dev *dev)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&dev->blockend_lock, flags);
- dev->blockend_refcount--;
- /* don't enable BLOCKEND interrupt if it's already enabled */
- if (dev->blockend_refcount == 0)
- regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
- spin_unlock_irqrestore(&dev->blockend_lock, flags);
-}
-
static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
{
struct mchp_spdifrx_dev *dev = dev_id;
struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
- u32 sr, imr, pending, idr = 0;
+ u32 sr, imr, pending;
irqreturn_t ret = IRQ_NONE;
int ch;
@@ -320,13 +293,10 @@ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
if (pending & SPDIFRX_IR_BLOCKEND) {
for (ch = 0; ch < SPDIFRX_CHANNELS; ch++) {
- spin_lock(&ctrl->user_data[ch].lock);
mchp_spdifrx_channel_user_data_read(dev, ch);
- spin_unlock(&ctrl->user_data[ch].lock);
-
complete(&ctrl->user_data[ch].done);
}
- mchp_spdifrx_isr_blockend_dis(dev);
+ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
ret = IRQ_HANDLED;
}
@@ -334,7 +304,7 @@ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
if (pending & SPDIFRX_IR_CSC(ch)) {
mchp_spdifrx_channel_status_read(dev, ch);
complete(&ctrl->ch_stat[ch].done);
- idr |= SPDIFRX_IR_CSC(ch);
+ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_CSC(ch));
ret = IRQ_HANDLED;
}
}
@@ -344,8 +314,6 @@ static irqreturn_t mchp_spdif_interrupt(int irq, void *dev_id)
ret = IRQ_HANDLED;
}
- regmap_write(dev->regmap, SPDIFRX_IDR, idr);
-
return ret;
}
@@ -353,47 +321,40 @@ static int mchp_spdifrx_trigger(struct snd_pcm_substream *substream, int cmd,
struct snd_soc_dai *dai)
{
struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
- u32 mr;
- int running;
- int ret;
-
- regmap_read(dev->regmap, SPDIFRX_MR, &mr);
- running = !!(mr & SPDIFRX_MR_RXEN_ENABLE);
+ int ret = 0;
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
case SNDRV_PCM_TRIGGER_RESUME:
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
- if (!running) {
- mr &= ~SPDIFRX_MR_RXEN_MASK;
- mr |= SPDIFRX_MR_RXEN_ENABLE;
- /* enable overrun interrupts */
- regmap_write(dev->regmap, SPDIFRX_IER,
- SPDIFRX_IR_OVERRUN);
- }
+ mutex_lock(&dev->mlock);
+ /* Enable overrun interrupts */
+ regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_OVERRUN);
+
+ /* Enable receiver. */
+ regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
+ SPDIFRX_MR_RXEN_ENABLE);
+ dev->trigger_enabled = true;
+ mutex_unlock(&dev->mlock);
break;
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_SUSPEND:
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
- if (running) {
- mr &= ~SPDIFRX_MR_RXEN_MASK;
- mr |= SPDIFRX_MR_RXEN_DISABLE;
- /* disable overrun interrupts */
- regmap_write(dev->regmap, SPDIFRX_IDR,
- SPDIFRX_IR_OVERRUN);
- }
+ mutex_lock(&dev->mlock);
+ /* Disable overrun interrupts */
+ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_OVERRUN);
+
+ /* Disable receiver. */
+ regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
+ SPDIFRX_MR_RXEN_DISABLE);
+ dev->trigger_enabled = false;
+ mutex_unlock(&dev->mlock);
break;
default:
- return -EINVAL;
+ ret = -EINVAL;
}
- ret = regmap_write(dev->regmap, SPDIFRX_MR, mr);
- if (ret) {
- dev_err(dev->dev, "unable to enable/disable RX: %d\n", ret);
- return ret;
- }
-
- return 0;
+ return ret;
}
static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
@@ -401,7 +362,7 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
struct snd_soc_dai *dai)
{
struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
- u32 mr;
+ u32 mr = 0;
int ret;
dev_dbg(dev->dev, "%s() rate=%u format=%#x width=%u channels=%u\n",
@@ -413,13 +374,6 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
return -EINVAL;
}
- regmap_read(dev->regmap, SPDIFRX_MR, &mr);
-
- if (mr & SPDIFRX_MR_RXEN_ENABLE) {
- dev_err(dev->dev, "PCM already running\n");
- return -EBUSY;
- }
-
if (params_channels(params) != SPDIFRX_CHANNELS) {
dev_err(dev->dev, "unsupported number of channels: %d\n",
params_channels(params));
@@ -445,6 +399,13 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
return -EINVAL;
}
+ mutex_lock(&dev->mlock);
+ if (dev->trigger_enabled) {
+ dev_err(dev->dev, "PCM already running\n");
+ ret = -EBUSY;
+ goto unlock;
+ }
+
if (dev->gclk_enabled) {
clk_disable_unprepare(dev->gclk);
dev->gclk_enabled = 0;
@@ -455,19 +416,24 @@ static int mchp_spdifrx_hw_params(struct snd_pcm_substream *substream,
dev_err(dev->dev,
"unable to set gclk min rate: rate %u * ratio %u + 1\n",
params_rate(params), SPDIFRX_GCLK_RATIO_MIN);
- return ret;
+ goto unlock;
}
ret = clk_prepare_enable(dev->gclk);
if (ret) {
dev_err(dev->dev, "unable to enable gclk: %d\n", ret);
- return ret;
+ goto unlock;
}
dev->gclk_enabled = 1;
dev_dbg(dev->dev, "GCLK range min set to %d\n",
params_rate(params) * SPDIFRX_GCLK_RATIO_MIN + 1);
- return regmap_write(dev->regmap, SPDIFRX_MR, mr);
+ ret = regmap_write(dev->regmap, SPDIFRX_MR, mr);
+
+unlock:
+ mutex_unlock(&dev->mlock);
+
+ return ret;
}
static int mchp_spdifrx_hw_free(struct snd_pcm_substream *substream,
@@ -475,10 +441,12 @@ static int mchp_spdifrx_hw_free(struct snd_pcm_substream *substream,
{
struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
+ mutex_lock(&dev->mlock);
if (dev->gclk_enabled) {
clk_disable_unprepare(dev->gclk);
dev->gclk_enabled = 0;
}
+ mutex_unlock(&dev->mlock);
return 0;
}
@@ -515,22 +483,51 @@ static int mchp_spdifrx_cs_get(struct mchp_spdifrx_dev *dev,
{
struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
struct mchp_spdifrx_ch_stat *ch_stat = &ctrl->ch_stat[channel];
- int ret;
+ int ret = 0;
- regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_CSC(channel));
- /* check for new data available */
- ret = wait_for_completion_interruptible_timeout(&ch_stat->done,
- msecs_to_jiffies(100));
- /* IP might not be started or valid stream might not be present */
- if (ret < 0) {
- dev_dbg(dev->dev, "channel status for channel %d timeout\n",
- channel);
+ mutex_lock(&dev->mlock);
+
+ /*
+ * We may reach this point with both clocks enabled but the receiver
+	 * still disabled. To avoid waiting for the completion only to return
+	 * with a timeout, check dev->trigger_enabled first.
+	 *
+	 * To retrieve data:
+	 * - if the receiver is enabled, the CSC IRQ will update the data in
+	 *   the software caches (ch_stat->data)
+	 * - otherwise we just update the software caches here with the latest
+ * available information and return it; in this case we don't need
+ * spin locking as the IRQ is disabled and will not be raised from
+ * anywhere else.
+ */
+
+ if (dev->trigger_enabled) {
+ reinit_completion(&ch_stat->done);
+ regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_CSC(channel));
+ /* Check for new data available */
+ ret = wait_for_completion_interruptible_timeout(&ch_stat->done,
+ msecs_to_jiffies(100));
+ /* Valid stream might not be present */
+ if (ret <= 0) {
+ dev_dbg(dev->dev, "channel status for channel %d timeout\n",
+ channel);
+ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_CSC(channel));
+ ret = ret ? : -ETIMEDOUT;
+ goto unlock;
+ } else {
+ ret = 0;
+ }
+ } else {
+ /* Update software cache with latest channel status. */
+ mchp_spdifrx_channel_status_read(dev, channel);
}
memcpy(uvalue->value.iec958.status, ch_stat->data,
sizeof(ch_stat->data));
- return 0;
+unlock:
+ mutex_unlock(&dev->mlock);
+ return ret;
}
static int mchp_spdifrx_cs1_get(struct snd_kcontrol *kcontrol,
@@ -564,29 +561,49 @@ static int mchp_spdifrx_subcode_ch_get(struct mchp_spdifrx_dev *dev,
int channel,
struct snd_ctl_elem_value *uvalue)
{
- unsigned long flags;
struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
struct mchp_spdifrx_user_data *user_data = &ctrl->user_data[channel];
- int ret;
+ int ret = 0;
- reinit_completion(&user_data->done);
- mchp_spdifrx_isr_blockend_en(dev);
- ret = wait_for_completion_interruptible_timeout(&user_data->done,
- msecs_to_jiffies(100));
- /* IP might not be started or valid stream might not be present */
- if (ret <= 0) {
- dev_dbg(dev->dev, "user data for channel %d timeout\n",
- channel);
- mchp_spdifrx_isr_blockend_dis(dev);
- return ret;
+ mutex_lock(&dev->mlock);
+
+ /*
+ * We may reach this point with both clocks enabled but the receiver
+ * still disabled. To avoid waiting for the completion only to time
+ * out, we check the dev->trigger_enabled flag here.
+ *
+ * To retrieve data:
+ * - if the receiver is enabled we need to wait for the blockend IRQ to
+ * read the data and update the software caches for us
+ * - otherwise reading the SPDIFRX_CHUD() registers is enough.
+ */
+
+ if (dev->trigger_enabled) {
+ reinit_completion(&user_data->done);
+ regmap_write(dev->regmap, SPDIFRX_IER, SPDIFRX_IR_BLOCKEND);
+ ret = wait_for_completion_interruptible_timeout(&user_data->done,
+ msecs_to_jiffies(100));
+ /* Valid stream might not be present. */
+ if (ret <= 0) {
+ dev_dbg(dev->dev, "user data for channel %d timeout\n",
+ channel);
+ regmap_write(dev->regmap, SPDIFRX_IDR, SPDIFRX_IR_BLOCKEND);
+ ret = ret ? : -ETIMEDOUT;
+ goto unlock;
+ } else {
+ ret = 0;
+ }
+ } else {
+ /* Update software cache with last available data. */
+ mchp_spdifrx_channel_user_data_read(dev, channel);
}
- spin_lock_irqsave(&user_data->lock, flags);
memcpy(uvalue->value.iec958.subcode, user_data->data,
sizeof(user_data->data));
- spin_unlock_irqrestore(&user_data->lock, flags);
- return 0;
+unlock:
+ mutex_unlock(&dev->mlock);
+ return ret;
}
static int mchp_spdifrx_subcode_ch1_get(struct snd_kcontrol *kcontrol,
@@ -627,10 +644,24 @@ static int mchp_spdifrx_ulock_get(struct snd_kcontrol *kcontrol,
u32 val;
bool ulock_old = ctrl->ulock;
- regmap_read(dev->regmap, SPDIFRX_RSR, &val);
- ctrl->ulock = !(val & SPDIFRX_RSR_ULOCK);
+ mutex_lock(&dev->mlock);
+
+ /*
+ * The RSR.ULOCK has a wrong value if both pclk and gclk are enabled
+ * and the receiver is disabled. Thus we take dev->trigger_enabled
+ * into account here to return the real status.
+ */
+ if (dev->trigger_enabled) {
+ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+ ctrl->ulock = !(val & SPDIFRX_RSR_ULOCK);
+ } else {
+ ctrl->ulock = 0;
+ }
+
uvalue->value.integer.value[0] = ctrl->ulock;
+ mutex_unlock(&dev->mlock);
+
return ulock_old != ctrl->ulock;
}
@@ -643,8 +674,22 @@ static int mchp_spdifrx_badf_get(struct snd_kcontrol *kcontrol,
u32 val;
bool badf_old = ctrl->badf;
- regmap_read(dev->regmap, SPDIFRX_RSR, &val);
- ctrl->badf = !!(val & SPDIFRX_RSR_BADF);
+ mutex_lock(&dev->mlock);
+
+ /*
+ * The RSR.ULOCK has a wrong value if both pclk and gclk are enabled
+ * and the receiver is disabled. Thus we take dev->trigger_enabled
+ * into account here to return the real status.
+ */
+ if (dev->trigger_enabled) {
+ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+ ctrl->badf = !!(val & SPDIFRX_RSR_BADF);
+ } else {
+ ctrl->badf = 0;
+ }
+
+ mutex_unlock(&dev->mlock);
+
uvalue->value.integer.value[0] = ctrl->badf;
return badf_old != ctrl->badf;
@@ -656,11 +701,48 @@ static int mchp_spdifrx_signal_get(struct snd_kcontrol *kcontrol,
struct snd_soc_dai *dai = snd_kcontrol_chip(kcontrol);
struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
struct mchp_spdifrx_mixer_control *ctrl = &dev->control;
- u32 val;
+ u32 val = ~0U, loops = 10;
+ int ret;
bool signal_old = ctrl->signal;
- regmap_read(dev->regmap, SPDIFRX_RSR, &val);
- ctrl->signal = !(val & SPDIFRX_RSR_NOSIGNAL);
+ mutex_lock(&dev->mlock);
+
+ /*
+ * To get the signal we need to have the receiver enabled. It
+ * could also be enabled from the trigger() function, so we need to
+ * take care not to disable the receiver while it runs.
+ */
+ if (!dev->trigger_enabled) {
+ ret = clk_prepare_enable(dev->gclk);
+ if (ret)
+ goto unlock;
+
+ regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
+ SPDIFRX_MR_RXEN_ENABLE);
+
+ /* Wait for RSR.ULOCK bit. */
+ while (--loops) {
+ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+ if (!(val & SPDIFRX_RSR_ULOCK))
+ break;
+ usleep_range(100, 150);
+ }
+
+ regmap_update_bits(dev->regmap, SPDIFRX_MR, SPDIFRX_MR_RXEN_MASK,
+ SPDIFRX_MR_RXEN_DISABLE);
+
+ clk_disable_unprepare(dev->gclk);
+ } else {
+ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+ }
+
+unlock:
+ mutex_unlock(&dev->mlock);
+
+ if (!(val & SPDIFRX_RSR_ULOCK))
+ ctrl->signal = !(val & SPDIFRX_RSR_NOSIGNAL);
+ else
+ ctrl->signal = 0;
uvalue->value.integer.value[0] = ctrl->signal;
return signal_old != ctrl->signal;
@@ -685,18 +767,32 @@ static int mchp_spdifrx_rate_get(struct snd_kcontrol *kcontrol,
u32 val;
int rate;
- regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+ mutex_lock(&dev->mlock);
- /* if the receiver is not locked, ISF data is invalid */
- if (val & SPDIFRX_RSR_ULOCK || !(val & SPDIFRX_RSR_IFS_MASK)) {
+ /*
+ * The RSR.ULOCK has a wrong value if both pclk and gclk are enabled
+ * and the receiver is disabled. Thus we take dev->trigger_enabled
+ * into account here to return the real status.
+ */
+ if (dev->trigger_enabled) {
+ regmap_read(dev->regmap, SPDIFRX_RSR, &val);
+ /* If the receiver is not locked, ISF data is invalid. */
+ if (val & SPDIFRX_RSR_ULOCK || !(val & SPDIFRX_RSR_IFS_MASK)) {
+ ucontrol->value.integer.value[0] = 0;
+ goto unlock;
+ }
+ } else {
+ /* Receiver is not locked, IFS data is invalid. */
ucontrol->value.integer.value[0] = 0;
- return 0;
+ goto unlock;
}
rate = clk_get_rate(dev->gclk);
ucontrol->value.integer.value[0] = rate / (32 * SPDIFRX_RSR_IFS(val));
+unlock:
+ mutex_unlock(&dev->mlock);
return 0;
}
@@ -808,11 +904,9 @@ static int mchp_spdifrx_dai_probe(struct snd_soc_dai *dai)
SPDIFRX_MR_AUTORST_NOACTION |
SPDIFRX_MR_PACK_DISABLED);
- dev->blockend_refcount = 0;
for (ch = 0; ch < SPDIFRX_CHANNELS; ch++) {
init_completion(&ctrl->ch_stat[ch].done);
init_completion(&ctrl->user_data[ch].done);
- spin_lock_init(&ctrl->user_data[ch].lock);
}
/* Add controls */
@@ -827,7 +921,7 @@ static int mchp_spdifrx_dai_remove(struct snd_soc_dai *dai)
struct mchp_spdifrx_dev *dev = snd_soc_dai_get_drvdata(dai);
/* Disable interrupts */
- regmap_write(dev->regmap, SPDIFRX_IDR, 0xFF);
+ regmap_write(dev->regmap, SPDIFRX_IDR, GENMASK(14, 0));
clk_disable_unprepare(dev->pclk);
@@ -912,7 +1006,17 @@ static int mchp_spdifrx_probe(struct platform_device *pdev)
"failed to get the PMC generated clock: %d\n", err);
return err;
}
- spin_lock_init(&dev->blockend_lock);
+
+ /*
+ * The signal control needs a valid rate on gclk. hw_params() configures
+ * it properly, but requesting the signal before any hw_params() has
+ * been called leads to an invalid value being returned. Thus, configure
+ * gclk at a valid rate here, at initialization, to simplify the
+ * control path.
+ */
+ clk_set_min_rate(dev->gclk, 48000 * SPDIFRX_GCLK_RATIO_MIN + 1);
+
+ mutex_init(&dev->mlock);
dev->dev = &pdev->dev;
dev->regmap = regmap;
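The mchp_spdifrx hunks above replace the per-channel spinlocks and the blockend refcount with a single dev->mlock, and every control handler first checks dev->trigger_enabled so it never waits on a receiver that is not running. A minimal sketch of that pattern, using hypothetical names (my_dev, my_read_hw_status) rather than the driver's real functions:

/* Illustrative only: one mutex serializes controls against trigger(),
 * and hardware is only waited on when the receiver actually runs. */
struct my_dev {
	struct mutex mlock;		/* protects trigger_enabled and HW access */
	bool trigger_enabled;
};

static int my_get_status(struct my_dev *dev, u32 *status)
{
	int ret = 0;

	mutex_lock(&dev->mlock);
	if (dev->trigger_enabled)
		ret = my_read_hw_status(dev, status);	/* hypothetical HW read */
	else
		*status = 0;				/* receiver idle: nothing to report */
	mutex_unlock(&dev->mlock);

	return ret;
}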
diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
index d59a7e9..c3deb82 100644
--- a/sound/soc/codecs/Kconfig
+++ b/sound/soc/codecs/Kconfig
@@ -1830,7 +1830,7 @@
config SND_SOC_ZL38060
tristate "Microsemi ZL38060 Connected Home Audio Processor"
depends on SPI_MASTER
- select GPIOLIB
+ depends on GPIOLIB
select REGMAP
help
Support for ZL38060 Connected Home Audio Processor from Microsemi,
diff --git a/sound/soc/codecs/adau7118.c b/sound/soc/codecs/adau7118.c
index 841229d..305f294 100644
--- a/sound/soc/codecs/adau7118.c
+++ b/sound/soc/codecs/adau7118.c
@@ -445,22 +445,6 @@ static const struct snd_soc_component_driver adau7118_component_driver = {
.non_legacy_dai_naming = 1,
};
-static void adau7118_regulator_disable(void *data)
-{
- struct adau7118_data *st = data;
- int ret;
- /*
- * If we fail to disable DVDD, don't bother in trying IOVDD. We
- * actually don't want to be left in the situation where DVDD
- * is enabled and IOVDD is disabled.
- */
- ret = regulator_disable(st->dvdd);
- if (ret)
- return;
-
- regulator_disable(st->iovdd);
-}
-
static int adau7118_regulator_setup(struct adau7118_data *st)
{
st->iovdd = devm_regulator_get(st->dev, "iovdd");
@@ -482,8 +466,7 @@ static int adau7118_regulator_setup(struct adau7118_data *st)
regcache_cache_only(st->map, true);
}
- return devm_add_action_or_reset(st->dev, adau7118_regulator_disable,
- st);
+ return 0;
}
static int adau7118_parset_dt(const struct adau7118_data *st)
diff --git a/sound/soc/codecs/cs42l56.c b/sound/soc/codecs/cs42l56.c
index b39c254..f0af8c1 100644
--- a/sound/soc/codecs/cs42l56.c
+++ b/sound/soc/codecs/cs42l56.c
@@ -1193,18 +1193,12 @@ static int cs42l56_i2c_probe(struct i2c_client *i2c_client,
if (pdata) {
cs42l56->pdata = *pdata;
} else {
- pdata = devm_kzalloc(&i2c_client->dev, sizeof(*pdata),
- GFP_KERNEL);
- if (!pdata)
- return -ENOMEM;
-
if (i2c_client->dev.of_node) {
ret = cs42l56_handle_of_data(i2c_client,
&cs42l56->pdata);
if (ret != 0)
return ret;
}
- cs42l56->pdata = *pdata;
}
if (cs42l56->pdata.gpio_nreset) {
diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c
index 66408a9..1acd82f 100644
--- a/sound/soc/codecs/hdac_hdmi.c
+++ b/sound/soc/codecs/hdac_hdmi.c
@@ -436,23 +436,28 @@ static int hdac_hdmi_setup_audio_infoframe(struct hdac_device *hdev,
return 0;
}
-static int hdac_hdmi_set_tdm_slot(struct snd_soc_dai *dai,
- unsigned int tx_mask, unsigned int rx_mask,
- int slots, int slot_width)
+static int hdac_hdmi_set_stream(struct snd_soc_dai *dai,
+ void *stream, int direction)
{
struct hdac_hdmi_priv *hdmi = snd_soc_dai_get_drvdata(dai);
struct hdac_device *hdev = hdmi->hdev;
struct hdac_hdmi_dai_port_map *dai_map;
struct hdac_hdmi_pcm *pcm;
+ struct hdac_stream *hstream;
- dev_dbg(&hdev->dev, "%s: strm_tag: %d\n", __func__, tx_mask);
+ if (!stream)
+ return -EINVAL;
+
+ hstream = (struct hdac_stream *)stream;
+
+ dev_dbg(&hdev->dev, "%s: strm_tag: %d\n", __func__, hstream->stream_tag);
dai_map = &hdmi->dai_map[dai->id];
pcm = hdac_hdmi_get_pcm_from_cvt(hdmi, dai_map->cvt);
if (pcm)
- pcm->stream_tag = (tx_mask << 4);
+ pcm->stream_tag = (hstream->stream_tag << 4);
return 0;
}
@@ -1544,7 +1549,7 @@ static const struct snd_soc_dai_ops hdmi_dai_ops = {
.startup = hdac_hdmi_pcm_open,
.shutdown = hdac_hdmi_pcm_close,
.hw_params = hdac_hdmi_set_hw_params,
- .set_tdm_slot = hdac_hdmi_set_tdm_slot,
+ .set_stream = hdac_hdmi_set_stream,
};
/*
diff --git a/sound/soc/codecs/lpass-rx-macro.c b/sound/soc/codecs/lpass-rx-macro.c
index 2345290..72a0db0 100644
--- a/sound/soc/codecs/lpass-rx-macro.c
+++ b/sound/soc/codecs/lpass-rx-macro.c
@@ -363,7 +363,7 @@
#define CDC_RX_DSD1_CFG2 (0x0F8C)
#define RX_MAX_OFFSET (0x0F8C)
-#define MCLK_FREQ 9600000
+#define MCLK_FREQ 19200000
#define RX_MACRO_RATES (SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_16000 |\
SNDRV_PCM_RATE_32000 | SNDRV_PCM_RATE_48000 |\
@@ -608,7 +608,11 @@ struct rx_macro {
int softclip_clk_users;
struct regmap *regmap;
- struct clk_bulk_data clks[RX_NUM_CLKS_MAX];
+ struct clk *mclk;
+ struct clk *npl;
+ struct clk *macro;
+ struct clk *dcodec;
+ struct clk *fsgen;
struct clk_hw hw;
};
#define to_rx_macro(_hw) container_of(_hw, struct rx_macro, hw)
@@ -3479,17 +3483,16 @@ static const struct clk_ops swclk_gate_ops = {
};
-static struct clk *rx_macro_register_mclk_output(struct rx_macro *rx)
+static int rx_macro_register_mclk_output(struct rx_macro *rx)
{
struct device *dev = rx->dev;
- struct device_node *np = dev->of_node;
const char *parent_clk_name = NULL;
const char *clk_name = "lpass-rx-mclk";
struct clk_hw *hw;
struct clk_init_data init;
int ret;
- parent_clk_name = __clk_get_name(rx->clks[2].clk);
+ parent_clk_name = __clk_get_name(rx->mclk);
init.name = clk_name;
init.ops = &swclk_gate_ops;
@@ -3498,13 +3501,11 @@ static struct clk *rx_macro_register_mclk_output(struct rx_macro *rx)
init.num_parents = 1;
rx->hw.init = &init;
hw = &rx->hw;
- ret = clk_hw_register(rx->dev, hw);
+ ret = devm_clk_hw_register(rx->dev, hw);
if (ret)
- return ERR_PTR(ret);
+ return ret;
- of_clk_add_provider(np, of_clk_src_simple_get, hw->clk);
-
- return NULL;
+ return devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, hw);
}
static const struct snd_soc_component_driver rx_macro_component_drv = {
@@ -3529,17 +3530,25 @@ static int rx_macro_probe(struct platform_device *pdev)
if (!rx)
return -ENOMEM;
- rx->clks[0].id = "macro";
- rx->clks[1].id = "dcodec";
- rx->clks[2].id = "mclk";
- rx->clks[3].id = "npl";
- rx->clks[4].id = "fsgen";
+ rx->macro = devm_clk_get_optional(dev, "macro");
+ if (IS_ERR(rx->macro))
+ return PTR_ERR(rx->macro);
- ret = devm_clk_bulk_get(dev, RX_NUM_CLKS_MAX, rx->clks);
- if (ret) {
- dev_err(dev, "Error getting RX Clocks (%d)\n", ret);
- return ret;
- }
+ rx->dcodec = devm_clk_get_optional(dev, "dcodec");
+ if (IS_ERR(rx->dcodec))
+ return PTR_ERR(rx->dcodec);
+
+ rx->mclk = devm_clk_get(dev, "mclk");
+ if (IS_ERR(rx->mclk))
+ return PTR_ERR(rx->mclk);
+
+ rx->npl = devm_clk_get(dev, "npl");
+ if (IS_ERR(rx->npl))
+ return PTR_ERR(rx->npl);
+
+ rx->fsgen = devm_clk_get(dev, "fsgen");
+ if (IS_ERR(rx->fsgen))
+ return PTR_ERR(rx->fsgen);
base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(base))
@@ -3555,21 +3564,52 @@ static int rx_macro_probe(struct platform_device *pdev)
rx->dev = dev;
/* set MCLK and NPL rates */
- clk_set_rate(rx->clks[2].clk, MCLK_FREQ);
- clk_set_rate(rx->clks[3].clk, 2 * MCLK_FREQ);
+ clk_set_rate(rx->mclk, MCLK_FREQ);
+ clk_set_rate(rx->npl, MCLK_FREQ);
- ret = clk_bulk_prepare_enable(RX_NUM_CLKS_MAX, rx->clks);
+ ret = clk_prepare_enable(rx->macro);
if (ret)
- return ret;
+ goto err;
- rx_macro_register_mclk_output(rx);
+ ret = clk_prepare_enable(rx->dcodec);
+ if (ret)
+ goto err_dcodec;
+
+ ret = clk_prepare_enable(rx->mclk);
+ if (ret)
+ goto err_mclk;
+
+ ret = clk_prepare_enable(rx->npl);
+ if (ret)
+ goto err_npl;
+
+ ret = clk_prepare_enable(rx->fsgen);
+ if (ret)
+ goto err_fsgen;
+
+ ret = rx_macro_register_mclk_output(rx);
+ if (ret)
+ goto err_clkout;
ret = devm_snd_soc_register_component(dev, &rx_macro_component_drv,
rx_macro_dai,
ARRAY_SIZE(rx_macro_dai));
if (ret)
- clk_bulk_disable_unprepare(RX_NUM_CLKS_MAX, rx->clks);
+ goto err_clkout;
+ return 0;
+
+err_clkout:
+ clk_disable_unprepare(rx->fsgen);
+err_fsgen:
+ clk_disable_unprepare(rx->npl);
+err_npl:
+ clk_disable_unprepare(rx->mclk);
+err_mclk:
+ clk_disable_unprepare(rx->dcodec);
+err_dcodec:
+ clk_disable_unprepare(rx->macro);
+err:
return ret;
}
@@ -3577,8 +3617,12 @@ static int rx_macro_remove(struct platform_device *pdev)
{
struct rx_macro *rx = dev_get_drvdata(&pdev->dev);
- of_clk_del_provider(pdev->dev.of_node);
- clk_bulk_disable_unprepare(RX_NUM_CLKS_MAX, rx->clks);
+ clk_disable_unprepare(rx->mclk);
+ clk_disable_unprepare(rx->npl);
+ clk_disable_unprepare(rx->fsgen);
+ clk_disable_unprepare(rx->macro);
+ clk_disable_unprepare(rx->dcodec);
+
return 0;
}
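The rx_macro probe above drops clk_bulk handling in favour of individually named clocks so that a failure part-way through enabling only unwinds what was already turned on. The general shape of such a goto ladder, reduced to two placeholder clocks:

/* Illustrative unwind ladder for two clocks; extend with one label per clock. */
static int example_probe_clocks(struct clk *a, struct clk *b)
{
	int ret;

	ret = clk_prepare_enable(a);
	if (ret)
		return ret;

	ret = clk_prepare_enable(b);
	if (ret)
		goto err_a;		/* undo only what already succeeded */

	return 0;

err_a:
	clk_disable_unprepare(a);
	return ret;
}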
diff --git a/sound/soc/codecs/lpass-tx-macro.c b/sound/soc/codecs/lpass-tx-macro.c
index feafb8a..35d148e 100644
--- a/sound/soc/codecs/lpass-tx-macro.c
+++ b/sound/soc/codecs/lpass-tx-macro.c
@@ -6,6 +6,7 @@
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <sound/soc.h>
#include <sound/soc-dapm.h>
@@ -199,7 +200,7 @@
#define TX_MACRO_AMIC_UNMUTE_DELAY_MS 100
#define TX_MACRO_DMIC_HPF_DELAY_MS 300
#define TX_MACRO_AMIC_HPF_DELAY_MS 300
-#define MCLK_FREQ 9600000
+#define MCLK_FREQ 19200000
enum {
TX_MACRO_AIF_INVALID = 0,
@@ -238,7 +239,7 @@ enum {
struct tx_mute_work {
struct tx_macro *tx;
- u32 decimator;
+ u8 decimator;
struct delayed_work dwork;
};
@@ -258,7 +259,11 @@ struct tx_macro {
unsigned long active_ch_cnt[TX_MACRO_MAX_DAIS];
unsigned long active_decimator[TX_MACRO_MAX_DAIS];
struct regmap *regmap;
- struct clk_bulk_data clks[TX_NUM_CLKS_MAX];
+ struct clk *mclk;
+ struct clk *npl;
+ struct clk *macro;
+ struct clk *dcodec;
+ struct clk *fsgen;
struct clk_hw hw;
bool dec_active[NUM_DECIMATORS];
bool reset_swr;
@@ -627,7 +632,7 @@ static int tx_macro_mclk_enable(struct tx_macro *tx,
return 0;
}
-static bool is_amic_enabled(struct snd_soc_component *component, int decimator)
+static bool is_amic_enabled(struct snd_soc_component *component, u8 decimator)
{
u16 adc_mux_reg, adc_reg, adc_n;
@@ -838,7 +843,7 @@ static int tx_macro_enable_dec(struct snd_soc_dapm_widget *w,
struct snd_kcontrol *kcontrol, int event)
{
struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
- unsigned int decimator;
+ u8 decimator;
u16 tx_vol_ctl_reg, dec_cfg_reg, hpf_gate_reg, tx_gain_ctl_reg;
u8 hpf_cut_off_freq;
int hpf_delay = TX_MACRO_DMIC_HPF_DELAY_MS;
@@ -1053,7 +1058,8 @@ static int tx_macro_hw_params(struct snd_pcm_substream *substream,
struct snd_soc_dai *dai)
{
struct snd_soc_component *component = dai->component;
- u32 decimator, sample_rate;
+ u32 sample_rate;
+ u8 decimator;
int tx_fs_rate;
struct tx_macro *tx = snd_soc_component_get_drvdata(component);
@@ -1117,7 +1123,7 @@ static int tx_macro_digital_mute(struct snd_soc_dai *dai, int mute, int stream)
{
struct snd_soc_component *component = dai->component;
struct tx_macro *tx = snd_soc_component_get_drvdata(component);
- u16 decimator;
+ u8 decimator;
decimator = tx->active_decimator[dai->id];
@@ -1745,17 +1751,16 @@ static const struct clk_ops swclk_gate_ops = {
};
-static struct clk *tx_macro_register_mclk_output(struct tx_macro *tx)
+static int tx_macro_register_mclk_output(struct tx_macro *tx)
{
struct device *dev = tx->dev;
- struct device_node *np = dev->of_node;
const char *parent_clk_name = NULL;
const char *clk_name = "lpass-tx-mclk";
struct clk_hw *hw;
struct clk_init_data init;
int ret;
- parent_clk_name = __clk_get_name(tx->clks[2].clk);
+ parent_clk_name = __clk_get_name(tx->mclk);
init.name = clk_name;
init.ops = &swclk_gate_ops;
@@ -1764,13 +1769,11 @@ static struct clk *tx_macro_register_mclk_output(struct tx_macro *tx)
init.num_parents = 1;
tx->hw.init = &init;
hw = &tx->hw;
- ret = clk_hw_register(tx->dev, hw);
+ ret = devm_clk_hw_register(dev, hw);
if (ret)
- return ERR_PTR(ret);
+ return ret;
- of_clk_add_provider(np, of_clk_src_simple_get, hw->clk);
-
- return NULL;
+ return devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, hw);
}
static const struct snd_soc_component_driver tx_macro_component_drv = {
@@ -1795,17 +1798,25 @@ static int tx_macro_probe(struct platform_device *pdev)
if (!tx)
return -ENOMEM;
- tx->clks[0].id = "macro";
- tx->clks[1].id = "dcodec";
- tx->clks[2].id = "mclk";
- tx->clks[3].id = "npl";
- tx->clks[4].id = "fsgen";
+ tx->macro = devm_clk_get_optional(dev, "macro");
+ if (IS_ERR(tx->macro))
+ return PTR_ERR(tx->macro);
- ret = devm_clk_bulk_get(dev, TX_NUM_CLKS_MAX, tx->clks);
- if (ret) {
- dev_err(dev, "Error getting RX Clocks (%d)\n", ret);
- return ret;
- }
+ tx->dcodec = devm_clk_get_optional(dev, "dcodec");
+ if (IS_ERR(tx->dcodec))
+ return PTR_ERR(tx->dcodec);
+
+ tx->mclk = devm_clk_get(dev, "mclk");
+ if (IS_ERR(tx->mclk))
+ return PTR_ERR(tx->mclk);
+
+ tx->npl = devm_clk_get(dev, "npl");
+ if (IS_ERR(tx->npl))
+ return PTR_ERR(tx->npl);
+
+ tx->fsgen = devm_clk_get(dev, "fsgen");
+ if (IS_ERR(tx->fsgen))
+ return PTR_ERR(tx->fsgen);
base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(base))
@@ -1821,24 +1832,52 @@ static int tx_macro_probe(struct platform_device *pdev)
tx->dev = dev;
/* set MCLK and NPL rates */
- clk_set_rate(tx->clks[2].clk, MCLK_FREQ);
- clk_set_rate(tx->clks[3].clk, 2 * MCLK_FREQ);
+ clk_set_rate(tx->mclk, MCLK_FREQ);
+ clk_set_rate(tx->npl, MCLK_FREQ);
- ret = clk_bulk_prepare_enable(TX_NUM_CLKS_MAX, tx->clks);
+ ret = clk_prepare_enable(tx->macro);
if (ret)
- return ret;
+ goto err;
- tx_macro_register_mclk_output(tx);
+ ret = clk_prepare_enable(tx->dcodec);
+ if (ret)
+ goto err_dcodec;
+
+ ret = clk_prepare_enable(tx->mclk);
+ if (ret)
+ goto err_mclk;
+
+ ret = clk_prepare_enable(tx->npl);
+ if (ret)
+ goto err_npl;
+
+ ret = clk_prepare_enable(tx->fsgen);
+ if (ret)
+ goto err_fsgen;
+
+ ret = tx_macro_register_mclk_output(tx);
+ if (ret)
+ goto err_clkout;
ret = devm_snd_soc_register_component(dev, &tx_macro_component_drv,
tx_macro_dai,
ARRAY_SIZE(tx_macro_dai));
if (ret)
- goto err;
- return ret;
-err:
- clk_bulk_disable_unprepare(TX_NUM_CLKS_MAX, tx->clks);
+ goto err_clkout;
+ return 0;
+
+err_clkout:
+ clk_disable_unprepare(tx->fsgen);
+err_fsgen:
+ clk_disable_unprepare(tx->npl);
+err_npl:
+ clk_disable_unprepare(tx->mclk);
+err_mclk:
+ clk_disable_unprepare(tx->dcodec);
+err_dcodec:
+ clk_disable_unprepare(tx->macro);
+err:
return ret;
}
@@ -1846,9 +1885,11 @@ static int tx_macro_remove(struct platform_device *pdev)
{
struct tx_macro *tx = dev_get_drvdata(&pdev->dev);
- of_clk_del_provider(pdev->dev.of_node);
-
- clk_bulk_disable_unprepare(TX_NUM_CLKS_MAX, tx->clks);
+ clk_disable_unprepare(tx->macro);
+ clk_disable_unprepare(tx->dcodec);
+ clk_disable_unprepare(tx->mclk);
+ clk_disable_unprepare(tx->npl);
+ clk_disable_unprepare(tx->fsgen);
return 0;
}
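Both macro drivers also move the exported MCLK to devm_clk_hw_register() plus devm_of_clk_add_hw_provider(), which is why remove() no longer calls of_clk_del_provider(). A hedged sketch of that registration; swclk_gate_ops and the embedded clk_hw are assumed to come from the driver, as in the patch:

/* Sketch: register a gated clock output whose lifetime is tied to the device. */
static int example_register_clk_output(struct device *dev, struct clk_hw *hw,
				       const char *parent)
{
	const char * const parents[] = { parent };
	struct clk_init_data init = {
		.name = "example-mclk",
		.ops = &swclk_gate_ops,		/* driver-provided clk_ops */
		.parent_names = parents,
		.num_parents = 1,
	};
	int ret;

	hw->init = &init;

	ret = devm_clk_hw_register(dev, hw);	/* undone automatically on detach */
	if (ret)
		return ret;

	/* provider is also removed automatically; no of_clk_del_provider() */
	return devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get, hw);
}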
diff --git a/sound/soc/codecs/lpass-va-macro.c b/sound/soc/codecs/lpass-va-macro.c
index 08702a2..9b9bae9 100644
--- a/sound/soc/codecs/lpass-va-macro.c
+++ b/sound/soc/codecs/lpass-va-macro.c
@@ -1408,7 +1408,7 @@ static int va_macro_probe(struct platform_device *pdev)
va->clks[1].id = "dcodec";
va->clks[2].id = "mclk";
- ret = devm_clk_bulk_get(dev, VA_NUM_CLKS_MAX, va->clks);
+ ret = devm_clk_bulk_get_optional(dev, VA_NUM_CLKS_MAX, va->clks);
if (ret) {
dev_err(dev, "Error getting VA Clocks (%d)\n", ret);
return ret;
diff --git a/sound/soc/codecs/rt715-sdca-sdw.c b/sound/soc/codecs/rt715-sdca-sdw.c
index 0f4354e..85abf80 100644
--- a/sound/soc/codecs/rt715-sdca-sdw.c
+++ b/sound/soc/codecs/rt715-sdca-sdw.c
@@ -167,7 +167,7 @@ static int rt715_sdca_read_prop(struct sdw_slave *slave)
}
/* set the timeout values */
- prop->clk_stop_timeout = 20;
+ prop->clk_stop_timeout = 200;
return 0;
}
diff --git a/sound/soc/codecs/tlv320adcx140.c b/sound/soc/codecs/tlv320adcx140.c
index 32b120d..06d2502b 100644
--- a/sound/soc/codecs/tlv320adcx140.c
+++ b/sound/soc/codecs/tlv320adcx140.c
@@ -870,7 +870,7 @@ static int adcx140_configure_gpio(struct adcx140_priv *adcx140)
gpio_count = device_property_count_u32(adcx140->dev,
"ti,gpio-config");
- if (gpio_count == 0)
+ if (gpio_count <= 0)
return 0;
if (gpio_count != ADCX140_NUM_GPIO_CFGS)
diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
index c72a156..5000d77 100644
--- a/sound/soc/fsl/fsl-asoc-card.c
+++ b/sound/soc/fsl/fsl-asoc-card.c
@@ -120,11 +120,11 @@ static const struct snd_soc_dapm_route audio_map[] = {
static const struct snd_soc_dapm_route audio_map_ac97[] = {
/* 1st half -- Normal DAPM routes */
- {"Playback", NULL, "AC97 Playback"},
- {"AC97 Capture", NULL, "Capture"},
+ {"AC97 Playback", NULL, "CPU AC97 Playback"},
+ {"CPU AC97 Capture", NULL, "AC97 Capture"},
/* 2nd half -- ASRC DAPM routes */
- {"AC97 Playback", NULL, "ASRC-Playback"},
- {"ASRC-Capture", NULL, "AC97 Capture"},
+ {"CPU AC97 Playback", NULL, "ASRC-Playback"},
+ {"ASRC-Capture", NULL, "CPU AC97 Capture"},
};
static const struct snd_soc_dapm_route audio_map_tx[] = {
diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
index d1cd104..38d4d1b 100644
--- a/sound/soc/fsl/fsl_micfil.c
+++ b/sound/soc/fsl/fsl_micfil.c
@@ -88,21 +88,21 @@ static DECLARE_TLV_DB_SCALE(gain_tlv, 0, 100, 0);
static const struct snd_kcontrol_new fsl_micfil_snd_controls[] = {
SOC_SINGLE_SX_TLV("CH0 Volume", REG_MICFIL_OUT_CTRL,
- MICFIL_OUTGAIN_CHX_SHIFT(0), 0xF, 0x7, gain_tlv),
+ MICFIL_OUTGAIN_CHX_SHIFT(0), 0x8, 0xF, gain_tlv),
SOC_SINGLE_SX_TLV("CH1 Volume", REG_MICFIL_OUT_CTRL,
- MICFIL_OUTGAIN_CHX_SHIFT(1), 0xF, 0x7, gain_tlv),
+ MICFIL_OUTGAIN_CHX_SHIFT(1), 0x8, 0xF, gain_tlv),
SOC_SINGLE_SX_TLV("CH2 Volume", REG_MICFIL_OUT_CTRL,
- MICFIL_OUTGAIN_CHX_SHIFT(2), 0xF, 0x7, gain_tlv),
+ MICFIL_OUTGAIN_CHX_SHIFT(2), 0x8, 0xF, gain_tlv),
SOC_SINGLE_SX_TLV("CH3 Volume", REG_MICFIL_OUT_CTRL,
- MICFIL_OUTGAIN_CHX_SHIFT(3), 0xF, 0x7, gain_tlv),
+ MICFIL_OUTGAIN_CHX_SHIFT(3), 0x8, 0xF, gain_tlv),
SOC_SINGLE_SX_TLV("CH4 Volume", REG_MICFIL_OUT_CTRL,
- MICFIL_OUTGAIN_CHX_SHIFT(4), 0xF, 0x7, gain_tlv),
+ MICFIL_OUTGAIN_CHX_SHIFT(4), 0x8, 0xF, gain_tlv),
SOC_SINGLE_SX_TLV("CH5 Volume", REG_MICFIL_OUT_CTRL,
- MICFIL_OUTGAIN_CHX_SHIFT(5), 0xF, 0x7, gain_tlv),
+ MICFIL_OUTGAIN_CHX_SHIFT(5), 0x8, 0xF, gain_tlv),
SOC_SINGLE_SX_TLV("CH6 Volume", REG_MICFIL_OUT_CTRL,
- MICFIL_OUTGAIN_CHX_SHIFT(6), 0xF, 0x7, gain_tlv),
+ MICFIL_OUTGAIN_CHX_SHIFT(6), 0x8, 0xF, gain_tlv),
SOC_SINGLE_SX_TLV("CH7 Volume", REG_MICFIL_OUT_CTRL,
- MICFIL_OUTGAIN_CHX_SHIFT(7), 0xF, 0x7, gain_tlv),
+ MICFIL_OUTGAIN_CHX_SHIFT(7), 0x8, 0xF, gain_tlv),
SOC_ENUM_EXT("MICFIL Quality Select",
fsl_micfil_quality_enum,
snd_soc_get_enum_double, snd_soc_put_enum_double),
diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
index 5ba06df..6a12cbd 100644
--- a/sound/soc/fsl/fsl_sai.c
+++ b/sound/soc/fsl/fsl_sai.c
@@ -231,6 +231,7 @@ static int fsl_sai_set_dai_fmt_tr(struct snd_soc_dai *cpu_dai,
if (!sai->is_lsb_first)
val_cr4 |= FSL_SAI_CR4_MF;
+ sai->is_dsp_mode = false;
/* DAI mode */
switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
case SND_SOC_DAIFMT_I2S:
@@ -297,23 +298,23 @@ static int fsl_sai_set_dai_fmt_tr(struct snd_soc_dai *cpu_dai,
return -EINVAL;
}
- /* DAI clock master masks */
- switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
- case SND_SOC_DAIFMT_CBS_CFS:
+ /* DAI clock provider masks */
+ switch (fmt & SND_SOC_DAIFMT_CLOCK_PROVIDER_MASK) {
+ case SND_SOC_DAIFMT_CBC_CFC:
val_cr2 |= FSL_SAI_CR2_BCD_MSTR;
val_cr4 |= FSL_SAI_CR4_FSD_MSTR;
- sai->is_slave_mode = false;
+ sai->is_consumer_mode = false;
break;
- case SND_SOC_DAIFMT_CBM_CFM:
- sai->is_slave_mode = true;
+ case SND_SOC_DAIFMT_CBP_CFP:
+ sai->is_consumer_mode = true;
break;
- case SND_SOC_DAIFMT_CBS_CFM:
+ case SND_SOC_DAIFMT_CBC_CFP:
val_cr2 |= FSL_SAI_CR2_BCD_MSTR;
- sai->is_slave_mode = false;
+ sai->is_consumer_mode = false;
break;
- case SND_SOC_DAIFMT_CBM_CFS:
+ case SND_SOC_DAIFMT_CBP_CFC:
val_cr4 |= FSL_SAI_CR4_FSD_MSTR;
- sai->is_slave_mode = true;
+ sai->is_consumer_mode = true;
break;
default:
return -EINVAL;
@@ -356,8 +357,8 @@ static int fsl_sai_set_bclk(struct snd_soc_dai *dai, bool tx, u32 freq)
u32 id;
int ret = 0;
- /* Don't apply to slave mode */
- if (sai->is_slave_mode)
+ /* Don't apply to consumer mode */
+ if (sai->is_consumer_mode)
return 0;
/*
@@ -462,7 +463,7 @@ static int fsl_sai_hw_params(struct snd_pcm_substream *substream,
pins = DIV_ROUND_UP(channels, slots);
- if (!sai->is_slave_mode) {
+ if (!sai->is_consumer_mode) {
if (sai->bclk_ratio)
ret = fsl_sai_set_bclk(cpu_dai, tx,
sai->bclk_ratio *
@@ -502,12 +503,12 @@ static int fsl_sai_hw_params(struct snd_pcm_substream *substream,
val_cr4 |= FSL_SAI_CR4_CHMOD;
/*
- * For SAI master mode, when Tx(Rx) sync with Rx(Tx) clock, Rx(Tx) will
+ * For SAI provider mode, when Tx(Rx) sync with Rx(Tx) clock, Rx(Tx) will
* generate bclk and frame clock for Tx(Rx), we should set RCR4(TCR4),
* RCR5(TCR5) for playback(capture), or there will be sync error.
*/
- if (!sai->is_slave_mode && fsl_sai_dir_is_synced(sai, adir)) {
+ if (!sai->is_consumer_mode && fsl_sai_dir_is_synced(sai, adir)) {
regmap_update_bits(sai->regmap, FSL_SAI_xCR4(!tx, ofs),
FSL_SAI_CR4_SYWD_MASK | FSL_SAI_CR4_FRSZ_MASK |
FSL_SAI_CR4_CHMOD_MASK,
@@ -543,7 +544,7 @@ static int fsl_sai_hw_free(struct snd_pcm_substream *substream,
regmap_update_bits(sai->regmap, FSL_SAI_xCR3(tx, ofs),
FSL_SAI_CR3_TRCE_MASK, 0);
- if (!sai->is_slave_mode &&
+ if (!sai->is_consumer_mode &&
sai->mclk_streams & BIT(substream->stream)) {
clk_disable_unprepare(sai->mclk_clk[sai->mclk_id[tx]]);
sai->mclk_streams &= ~BIT(substream->stream);
@@ -577,7 +578,7 @@ static void fsl_sai_config_disable(struct fsl_sai *sai, int dir)
* This is a hardware bug, and will be fix in the
* next sai version.
*/
- if (!sai->is_slave_mode) {
+ if (!sai->is_consumer_mode) {
/* Software Reset */
regmap_write(sai->regmap, FSL_SAI_xCSR(tx, ofs), FSL_SAI_CSR_SR);
/* Clear SR bit to finish the reset */
diff --git a/sound/soc/fsl/fsl_sai.h b/sound/soc/fsl/fsl_sai.h
index f471467..93da860 100644
--- a/sound/soc/fsl/fsl_sai.h
+++ b/sound/soc/fsl/fsl_sai.h
@@ -259,7 +259,7 @@ struct fsl_sai {
struct clk *bus_clk;
struct clk *mclk_clk[FSL_SAI_MCLK_MAX];
- bool is_slave_mode;
+ bool is_consumer_mode;
bool is_lsb_first;
bool is_dsp_mode;
bool synchronous[2];
diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c
index ecbc1c3..0c73c2e 100644
--- a/sound/soc/fsl/fsl_ssi.c
+++ b/sound/soc/fsl/fsl_ssi.c
@@ -1160,14 +1160,14 @@ static struct snd_soc_dai_driver fsl_ssi_ac97_dai = {
.symmetric_channels = 1,
.probe = fsl_ssi_dai_probe,
.playback = {
- .stream_name = "AC97 Playback",
+ .stream_name = "CPU AC97 Playback",
.channels_min = 2,
.channels_max = 2,
.rates = SNDRV_PCM_RATE_8000_48000,
.formats = SNDRV_PCM_FMTBIT_S16 | SNDRV_PCM_FMTBIT_S20,
},
.capture = {
- .stream_name = "AC97 Capture",
+ .stream_name = "CPU AC97 Capture",
.channels_min = 2,
.channels_max = 2,
.rates = SNDRV_PCM_RATE_48000,
diff --git a/sound/soc/intel/boards/bdw-rt5650.c b/sound/soc/intel/boards/bdw-rt5650.c
index c5122d3..7c8c255 100644
--- a/sound/soc/intel/boards/bdw-rt5650.c
+++ b/sound/soc/intel/boards/bdw-rt5650.c
@@ -299,7 +299,7 @@ static int bdw_rt5650_probe(struct platform_device *pdev)
if (!bdw_rt5650)
return -ENOMEM;
- /* override plaform name, if required */
+ /* override platform name, if required */
mach = pdev->dev.platform_data;
ret = snd_soc_fixup_dai_links_platform_name(&bdw_rt5650_card,
mach->mach_params.platform);
diff --git a/sound/soc/intel/boards/bdw-rt5677.c b/sound/soc/intel/boards/bdw-rt5677.c
index e01b7a9..e990940 100644
--- a/sound/soc/intel/boards/bdw-rt5677.c
+++ b/sound/soc/intel/boards/bdw-rt5677.c
@@ -426,7 +426,7 @@ static int bdw_rt5677_probe(struct platform_device *pdev)
if (!bdw_rt5677)
return -ENOMEM;
- /* override plaform name, if required */
+ /* override platform name, if required */
mach = pdev->dev.platform_data;
ret = snd_soc_fixup_dai_links_platform_name(&bdw_rt5677_card,
mach->mach_params.platform);
diff --git a/sound/soc/intel/boards/broadwell.c b/sound/soc/intel/boards/broadwell.c
index 3c3aff9..f18dcda 100644
--- a/sound/soc/intel/boards/broadwell.c
+++ b/sound/soc/intel/boards/broadwell.c
@@ -292,7 +292,7 @@ static int broadwell_audio_probe(struct platform_device *pdev)
broadwell_rt286.dev = &pdev->dev;
- /* override plaform name, if required */
+ /* override platform name, if required */
mach = pdev->dev.platform_data;
ret = snd_soc_fixup_dai_links_platform_name(&broadwell_rt286,
mach->mach_params.platform);
diff --git a/sound/soc/intel/boards/bxt_da7219_max98357a.c b/sound/soc/intel/boards/bxt_da7219_max98357a.c
index e67ddfb..e49c64f 100644
--- a/sound/soc/intel/boards/bxt_da7219_max98357a.c
+++ b/sound/soc/intel/boards/bxt_da7219_max98357a.c
@@ -825,7 +825,7 @@ static int broxton_audio_probe(struct platform_device *pdev)
}
}
- /* override plaform name, if required */
+ /* override platform name, if required */
mach = pdev->dev.platform_data;
platform_name = mach->mach_params.platform;
diff --git a/sound/soc/intel/boards/bxt_rt298.c b/sound/soc/intel/boards/bxt_rt298.c
index 47f6b15..0d1df37 100644
--- a/sound/soc/intel/boards/bxt_rt298.c
+++ b/sound/soc/intel/boards/bxt_rt298.c
@@ -628,7 +628,7 @@ static int broxton_audio_probe(struct platform_device *pdev)
card->dev = &pdev->dev;
snd_soc_card_set_drvdata(card, ctx);
- /* override plaform name, if required */
+ /* override platform name, if required */
mach = pdev->dev.platform_data;
platform_name = mach->mach_params.platform;
diff --git a/sound/soc/intel/boards/bytcht_cx2072x.c b/sound/soc/intel/boards/bytcht_cx2072x.c
index a9e51bb..0fc57db 100644
--- a/sound/soc/intel/boards/bytcht_cx2072x.c
+++ b/sound/soc/intel/boards/bytcht_cx2072x.c
@@ -257,7 +257,7 @@ static int snd_byt_cht_cx2072x_probe(struct platform_device *pdev)
byt_cht_cx2072x_dais[dai_index].codecs->name = codec_name;
}
- /* override plaform name, if required */
+ /* override platform name, if required */
ret = snd_soc_fixup_dai_links_platform_name(&byt_cht_cx2072x_card,
mach->mach_params.platform);
if (ret)
diff --git a/sound/soc/intel/boards/bytcht_da7213.c b/sound/soc/intel/boards/bytcht_da7213.c
index a28773f..21b6beb 100644
--- a/sound/soc/intel/boards/bytcht_da7213.c
+++ b/sound/soc/intel/boards/bytcht_da7213.c
@@ -260,7 +260,7 @@ static int bytcht_da7213_probe(struct platform_device *pdev)
dailink[dai_index].codecs->name = codec_name;
}
- /* override plaform name, if required */
+ /* override platform name, if required */
platform_name = mach->mach_params.platform;
ret_val = snd_soc_fixup_dai_links_platform_name(card, platform_name);
diff --git a/sound/soc/intel/boards/bytcht_es8316.c b/sound/soc/intel/boards/bytcht_es8316.c
index 950457b..b5c97d3 100644
--- a/sound/soc/intel/boards/bytcht_es8316.c
+++ b/sound/soc/intel/boards/bytcht_es8316.c
@@ -497,21 +497,28 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
if (adev) {
snprintf(codec_name, sizeof(codec_name),
"i2c-%s", acpi_dev_name(adev));
- put_device(&adev->dev);
byt_cht_es8316_dais[dai_index].codecs->name = codec_name;
} else {
dev_err(dev, "Error cannot find '%s' dev\n", mach->id);
return -ENXIO;
}
- /* override plaform name, if required */
+ codec_dev = acpi_get_first_physical_node(adev);
+ acpi_dev_put(adev);
+ if (!codec_dev)
+ return -EPROBE_DEFER;
+ priv->codec_dev = get_device(codec_dev);
+
+ /* override platform name, if required */
byt_cht_es8316_card.dev = dev;
platform_name = mach->mach_params.platform;
ret = snd_soc_fixup_dai_links_platform_name(&byt_cht_es8316_card,
platform_name);
- if (ret)
+ if (ret) {
+ put_device(codec_dev);
return ret;
+ }
/* Check for BYTCR or other platform and setup quirks */
dmi_id = dmi_first_match(byt_cht_es8316_quirk_table);
@@ -539,14 +546,10 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
/* get the clock */
priv->mclk = devm_clk_get(dev, "pmc_plt_clk_3");
- if (IS_ERR(priv->mclk))
+ if (IS_ERR(priv->mclk)) {
+ put_device(codec_dev);
return dev_err_probe(dev, PTR_ERR(priv->mclk), "clk_get pmc_plt_clk_3 failed\n");
-
- /* get speaker enable GPIO */
- codec_dev = acpi_get_first_physical_node(adev);
- if (!codec_dev)
- return -EPROBE_DEFER;
- priv->codec_dev = get_device(codec_dev);
+ }
if (quirk & BYT_CHT_ES8316_JD_INVERTED)
props[cnt++] = PROPERTY_ENTRY_BOOL("everest,jack-detect-inverted");
@@ -568,6 +571,7 @@ static int snd_byt_cht_es8316_mc_probe(struct platform_device *pdev)
}
}
+ /* get speaker enable GPIO */
devm_acpi_dev_add_driver_gpios(codec_dev, byt_cht_es8316_gpios);
priv->speaker_en_gpio =
gpiod_get_optional(codec_dev, "speaker-enable",
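The Baytrail/Cherrytrail board probes above (es8316, rt5640, rt5651, wm5102) are reordered so the acpi_device reference is dropped with acpi_dev_put() right after the codec's physical device is looked up, and the codec_dev reference taken with get_device() is released on every later error path. Condensed to its essentials, with adev assumed to come from an earlier ACPI match as in those probes:

/* Sketch of the reference handling: drop the ACPI handle early,
 * keep a reference on the codec's physical device until remove(). */
static int example_find_codec(struct acpi_device *adev,
			      struct device **codec_dev_out)
{
	struct device *codec_dev;

	codec_dev = acpi_get_first_physical_node(adev);
	acpi_dev_put(adev);			/* ACPI handle no longer needed */
	if (!codec_dev)
		return -EPROBE_DEFER;		/* driver core hasn't bound it yet */

	*codec_dev_out = get_device(codec_dev);	/* put_device() on error/remove */
	return 0;
}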
diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
index 888e04c..5f6e2bb 100644
--- a/sound/soc/intel/boards/bytcr_rt5640.c
+++ b/sound/soc/intel/boards/bytcr_rt5640.c
@@ -1561,13 +1561,18 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
if (adev) {
snprintf(byt_rt5640_codec_name, sizeof(byt_rt5640_codec_name),
"i2c-%s", acpi_dev_name(adev));
- put_device(&adev->dev);
byt_rt5640_dais[dai_index].codecs->name = byt_rt5640_codec_name;
} else {
dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
return -ENXIO;
}
+ codec_dev = acpi_get_first_physical_node(adev);
+ acpi_dev_put(adev);
+ if (!codec_dev)
+ return -EPROBE_DEFER;
+ priv->codec_dev = get_device(codec_dev);
+
/*
* swap SSP0 if bytcr is detected
* (will be overridden if DMI quirk is detected)
@@ -1642,11 +1647,6 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
byt_rt5640_quirk = quirk_override;
}
- codec_dev = acpi_get_first_physical_node(adev);
- if (!codec_dev)
- return -EPROBE_DEFER;
- priv->codec_dev = get_device(codec_dev);
-
if (byt_rt5640_quirk & BYT_RT5640_JD_HP_ELITEP_1000G2) {
acpi_dev_add_driver_gpios(ACPI_COMPANION(priv->codec_dev),
byt_rt5640_hp_elitepad_1000g2_gpios);
@@ -1733,7 +1733,7 @@ static int snd_byt_rt5640_mc_probe(struct platform_device *pdev)
byt_rt5640_card.long_name = byt_rt5640_long_name;
#endif
- /* override plaform name, if required */
+ /* override platform name, if required */
platform_name = mach->mach_params.platform;
ret_val = snd_soc_fixup_dai_links_platform_name(&byt_rt5640_card,
diff --git a/sound/soc/intel/boards/bytcr_rt5651.c b/sound/soc/intel/boards/bytcr_rt5651.c
index e94c912..93cec4d 100644
--- a/sound/soc/intel/boards/bytcr_rt5651.c
+++ b/sound/soc/intel/boards/bytcr_rt5651.c
@@ -930,7 +930,6 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
if (adev) {
snprintf(byt_rt5651_codec_name, sizeof(byt_rt5651_codec_name),
"i2c-%s", acpi_dev_name(adev));
- put_device(&adev->dev);
byt_rt5651_dais[dai_index].codecs->name = byt_rt5651_codec_name;
} else {
dev_err(&pdev->dev, "Error cannot find '%s' dev\n", mach->id);
@@ -938,6 +937,7 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
}
codec_dev = acpi_get_first_physical_node(adev);
+ acpi_dev_put(adev);
if (!codec_dev)
return -EPROBE_DEFER;
priv->codec_dev = get_device(codec_dev);
@@ -1104,7 +1104,7 @@ static int snd_byt_rt5651_mc_probe(struct platform_device *pdev)
byt_rt5651_card.long_name = byt_rt5651_long_name;
#endif
- /* override plaform name, if required */
+ /* override platform name, if required */
platform_name = mach->mach_params.platform;
ret_val = snd_soc_fixup_dai_links_platform_name(&byt_rt5651_card,
diff --git a/sound/soc/intel/boards/bytcr_wm5102.c b/sound/soc/intel/boards/bytcr_wm5102.c
index bb669d5..9a4126f 100644
--- a/sound/soc/intel/boards/bytcr_wm5102.c
+++ b/sound/soc/intel/boards/bytcr_wm5102.c
@@ -411,9 +411,9 @@ static int snd_byt_wm5102_mc_probe(struct platform_device *pdev)
return -ENOENT;
}
snprintf(codec_name, sizeof(codec_name), "spi-%s", acpi_dev_name(adev));
- put_device(&adev->dev);
codec_dev = bus_find_device_by_name(&spi_bus_type, NULL, codec_name);
+ acpi_dev_put(adev);
if (!codec_dev)
return -EPROBE_DEFER;
diff --git a/sound/soc/intel/boards/cht_bsw_max98090_ti.c b/sound/soc/intel/boards/cht_bsw_max98090_ti.c
index 1318823..ba6de1e 100644
--- a/sound/soc/intel/boards/cht_bsw_max98090_ti.c
+++ b/sound/soc/intel/boards/cht_bsw_max98090_ti.c
@@ -296,7 +296,7 @@ static int cht_max98090_headset_init(struct snd_soc_component *component)
int ret;
/*
- * TI supports 4 butons headset detection
+ * TI supports 4 buttons headset detection
* KEY_MEDIA
* KEY_VOICECOMMAND
* KEY_VOLUMEUP
@@ -558,7 +558,7 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
dev_dbg(dev, "Unable to add GPIO mapping table\n");
}
- /* override plaform name, if required */
+ /* override platform name, if required */
snd_soc_card_cht.dev = &pdev->dev;
mach = pdev->dev.platform_data;
platform_name = mach->mach_params.platform;
diff --git a/sound/soc/intel/boards/cht_bsw_nau8824.c b/sound/soc/intel/boards/cht_bsw_nau8824.c
index da5a5cb..779b388 100644
--- a/sound/soc/intel/boards/cht_bsw_nau8824.c
+++ b/sound/soc/intel/boards/cht_bsw_nau8824.c
@@ -100,7 +100,7 @@ static int cht_codec_init(struct snd_soc_pcm_runtime *runtime)
struct snd_soc_component *component = codec_dai->component;
int ret, jack_type;
- /* NAU88L24 supports 4 butons headset detection
+ /* NAU88L24 supports 4 buttons headset detection
* KEY_PLAYPAUSE
* KEY_VOICECOMMAND
* KEY_VOLUMEUP
@@ -257,7 +257,7 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
return -ENOMEM;
snd_soc_card_set_drvdata(&snd_soc_card_cht, drv);
- /* override plaform name, if required */
+ /* override platform name, if required */
snd_soc_card_cht.dev = &pdev->dev;
mach = pdev->dev.platform_data;
platform_name = mach->mach_params.platform;
diff --git a/sound/soc/intel/boards/cht_bsw_rt5645.c b/sound/soc/intel/boards/cht_bsw_rt5645.c
index 804dbc7..381bf60 100644
--- a/sound/soc/intel/boards/cht_bsw_rt5645.c
+++ b/sound/soc/intel/boards/cht_bsw_rt5645.c
@@ -653,7 +653,7 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
(cht_rt5645_quirk & CHT_RT5645_SSP0_AIF2))
cht_dailink[dai_index].cpus->dai_name = "ssp0-port";
- /* override plaform name, if required */
+ /* override platform name, if required */
platform_name = mach->mach_params.platform;
ret_val = snd_soc_fixup_dai_links_platform_name(card,
diff --git a/sound/soc/intel/boards/cht_bsw_rt5672.c b/sound/soc/intel/boards/cht_bsw_rt5672.c
index 9509b6e..ba96741 100644
--- a/sound/soc/intel/boards/cht_bsw_rt5672.c
+++ b/sound/soc/intel/boards/cht_bsw_rt5672.c
@@ -483,7 +483,7 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
drv->use_ssp0 = true;
}
- /* override plaform name, if required */
+ /* override platform name, if required */
snd_soc_card_cht.dev = &pdev->dev;
platform_name = mach->mach_params.platform;
diff --git a/sound/soc/intel/boards/glk_rt5682_max98357a.c b/sound/soc/intel/boards/glk_rt5682_max98357a.c
index 71fe26a..99b3d76 100644
--- a/sound/soc/intel/boards/glk_rt5682_max98357a.c
+++ b/sound/soc/intel/boards/glk_rt5682_max98357a.c
@@ -604,7 +604,7 @@ static int geminilake_audio_probe(struct platform_device *pdev)
card->dev = &pdev->dev;
snd_soc_card_set_drvdata(card, ctx);
- /* override plaform name, if required */
+ /* override platform name, if required */
mach = pdev->dev.platform_data;
platform_name = mach->mach_params.platform;
diff --git a/sound/soc/intel/boards/haswell.c b/sound/soc/intel/boards/haswell.c
index c763bfe..b5ca317 100644
--- a/sound/soc/intel/boards/haswell.c
+++ b/sound/soc/intel/boards/haswell.c
@@ -175,7 +175,7 @@ static int haswell_audio_probe(struct platform_device *pdev)
haswell_rt5640.dev = &pdev->dev;
- /* override plaform name, if required */
+ /* override platform name, if required */
mach = pdev->dev.platform_data;
ret = snd_soc_fixup_dai_links_platform_name(&haswell_rt5640,
mach->mach_params.platform);
diff --git a/sound/soc/intel/boards/sof_cs42l42.c b/sound/soc/intel/boards/sof_cs42l42.c
index ce78c18..8061082 100644
--- a/sound/soc/intel/boards/sof_cs42l42.c
+++ b/sound/soc/intel/boards/sof_cs42l42.c
@@ -311,6 +311,9 @@ static int create_spk_amp_dai_links(struct device *dev,
links[*id].platforms = platform_component;
links[*id].num_platforms = ARRAY_SIZE(platform_component);
links[*id].dpcm_playback = 1;
+ /* firmware-generated echo reference */
+ links[*id].dpcm_capture = 1;
+
links[*id].no_pcm = 1;
links[*id].cpus = &cpus[*id];
links[*id].num_cpus = 1;
diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
index f096bd6..d0ce2f0 100644
--- a/sound/soc/intel/boards/sof_rt5682.c
+++ b/sound/soc/intel/boards/sof_rt5682.c
@@ -737,8 +737,6 @@ static struct snd_soc_dai_link *sof_card_dai_links_create(struct device *dev,
links[id].num_codecs = ARRAY_SIZE(max_98373_components);
links[id].init = max_98373_spk_codec_init;
links[id].ops = &max_98373_ops;
- /* feedback stream */
- links[id].dpcm_capture = 1;
} else if (sof_rt5682_quirk &
SOF_MAX98360A_SPEAKER_AMP_PRESENT) {
max_98360a_dai_link(&links[id]);
@@ -751,6 +749,9 @@ static struct snd_soc_dai_link *sof_card_dai_links_create(struct device *dev,
links[id].platforms = platform_component;
links[id].num_platforms = ARRAY_SIZE(platform_component);
links[id].dpcm_playback = 1;
+ /* feedback stream or firmware-generated echo reference */
+ links[id].dpcm_capture = 1;
+
links[id].no_pcm = 1;
links[id].cpus = &cpus[id];
links[id].num_cpus = 1;
diff --git a/sound/soc/kirkwood/kirkwood-dma.c b/sound/soc/kirkwood/kirkwood-dma.c
index 700a185..640cebd 100644
--- a/sound/soc/kirkwood/kirkwood-dma.c
+++ b/sound/soc/kirkwood/kirkwood-dma.c
@@ -86,7 +86,7 @@ kirkwood_dma_conf_mbus_windows(void __iomem *base, int win,
/* try to find matching cs for current dma address */
for (i = 0; i < dram->num_cs; i++) {
- const struct mbus_dram_window *cs = dram->cs + i;
+ const struct mbus_dram_window *cs = &dram->cs[i];
if ((cs->base & 0xffff0000) < (dma & 0xffff0000)) {
writel(cs->base & 0xffff0000,
base + KIRKWOOD_AUDIO_WIN_BASE_REG(win));
diff --git a/sound/soc/mediatek/mt8195/mt8195-dai-etdm.c b/sound/soc/mediatek/mt8195/mt8195-dai-etdm.c
index 7378e42..9031d41 100644
--- a/sound/soc/mediatek/mt8195/mt8195-dai-etdm.c
+++ b/sound/soc/mediatek/mt8195/mt8195-dai-etdm.c
@@ -2567,6 +2567,9 @@ static void mt8195_dai_etdm_parse_of(struct mtk_base_afe *afe)
/* etdm in only */
for (i = 0; i < 2; i++) {
+ dai_id = ETDM_TO_DAI_ID(i);
+ etdm_data = afe_priv->dai_priv[dai_id];
+
ret = snprintf(prop, sizeof(prop),
"mediatek,%s-chn-disabled",
of_afe_etdms[i].name);
diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
index d9cd190..f8ef683 100644
--- a/sound/soc/sh/rcar/rsnd.h
+++ b/sound/soc/sh/rcar/rsnd.h
@@ -901,8 +901,6 @@ void rsnd_mod_make_sure(struct rsnd_mod *mod, enum rsnd_mod_type type);
if (!IS_BUILTIN(RSND_DEBUG_NO_DAI_CALL)) \
dev_dbg(dev, param)
-#endif
-
#ifdef CONFIG_DEBUG_FS
int rsnd_debugfs_probe(struct snd_soc_component *component);
void rsnd_debugfs_reg_show(struct seq_file *m, phys_addr_t _addr,
@@ -913,3 +911,5 @@ void rsnd_debugfs_mod_reg_show(struct seq_file *m, struct rsnd_mod *mod,
#else
#define rsnd_debugfs_probe NULL
#endif
+
+#endif /* RSND_H */
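The rsnd.h hunk moves the include guard's closing #endif back to the end of the file so the debugfs declarations are covered by the guard again. The intended layout is simply:

/* Shape of the header after the fix: the guard wraps everything. */
#ifndef RSND_H
#define RSND_H

/* ... all declarations, including the CONFIG_DEBUG_FS block ... */

#endif /* RSND_H */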
diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
index 2050728..c2703a7 100644
--- a/sound/soc/soc-compress.c
+++ b/sound/soc/soc-compress.c
@@ -116,6 +116,8 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
if (ret < 0)
goto be_err;
+ mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
+
/* calculate valid and active FE <-> BE dpcms */
dpcm_process_paths(fe, stream, &list, 1);
fe->dpcm[stream].runtime = fe_substream->runtime;
@@ -151,7 +153,6 @@ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
fe->dpcm[stream].state = SND_SOC_DPCM_STATE_OPEN;
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_NO;
- mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
snd_soc_runtime_activate(fe, stream);
mutex_unlock(&fe->card->pcm_mutex);
@@ -182,7 +183,6 @@ static int soc_compr_free_fe(struct snd_compr_stream *cstream)
mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
snd_soc_runtime_deactivate(fe, stream);
- mutex_unlock(&fe->card->pcm_mutex);
fe->dpcm[stream].runtime_update = SND_SOC_DPCM_UPDATE_FE;
@@ -201,6 +201,8 @@ static int soc_compr_free_fe(struct snd_compr_stream *cstream)
dpcm_be_disconnect(fe, stream);
+ mutex_unlock(&fe->card->pcm_mutex);
+
fe->dpcm[stream].runtime = NULL;
snd_soc_link_compr_shutdown(cstream, 0);
@@ -376,8 +378,9 @@ static int soc_compr_set_params_fe(struct snd_compr_stream *cstream,
ret = snd_soc_link_compr_set_params(cstream);
if (ret < 0)
goto out;
-
+ mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);
dpcm_dapm_stream_event(fe, stream, SND_SOC_DAPM_STREAM_START);
+ mutex_unlock(&fe->card->pcm_mutex);
fe->dpcm[stream].state = SND_SOC_DPCM_STATE_PREPARE;
out:
@@ -590,7 +593,7 @@ int snd_soc_new_compress(struct snd_soc_pcm_runtime *rtd, int num)
rtd->fe_compr = 1;
if (rtd->dai_link->dpcm_playback)
be_pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream->private_data = rtd;
- else if (rtd->dai_link->dpcm_capture)
+ if (rtd->dai_link->dpcm_capture)
be_pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream->private_data = rtd;
memcpy(compr->ops, &soc_compr_dyn_ops, sizeof(soc_compr_dyn_ops));
} else {
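The soc-compress hunks widen card->pcm_mutex so the DPCM path walk, the DAPM stream event, and the back-end disconnect run under the same lock soc-pcm uses, instead of taking it only around activate/deactivate. As a rough schematic (not the full function, error handling omitted), the FE open path now looks like:

/* Rough shape of soc_compr_open_fe() after the change. */
mutex_lock_nested(&fe->card->pcm_mutex, fe->card->pcm_subclass);

dpcm_process_paths(fe, stream, &list, 1);	/* now under pcm_mutex */
/* ... start back-ends, set DPCM state ... */
snd_soc_runtime_activate(fe, stream);

mutex_unlock(&fe->card->pcm_mutex);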
diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
index eff8d4f..55b69e3 100644
--- a/sound/soc/soc-topology.c
+++ b/sound/soc/soc-topology.c
@@ -1480,13 +1480,17 @@ static int soc_tplg_dapm_widget_create(struct soc_tplg *tplg,
template.num_kcontrols = le32_to_cpu(w->num_kcontrols);
kc = devm_kcalloc(tplg->dev, le32_to_cpu(w->num_kcontrols), sizeof(*kc), GFP_KERNEL);
- if (!kc)
+ if (!kc) {
+ ret = -ENOMEM;
goto hdr_err;
+ }
kcontrol_type = devm_kcalloc(tplg->dev, le32_to_cpu(w->num_kcontrols), sizeof(unsigned int),
GFP_KERNEL);
- if (!kcontrol_type)
+ if (!kcontrol_type) {
+ ret = -ENOMEM;
goto hdr_err;
+ }
for (i = 0; i < w->num_kcontrols; i++) {
control_hdr = (struct snd_soc_tplg_ctl_hdr *)tplg->pos;
diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
index 5f355b8..56653d7 100644
--- a/sound/soc/sof/intel/hda-dai.c
+++ b/sound/soc/sof/intel/hda-dai.c
@@ -212,6 +212,10 @@ static int hda_link_hw_params(struct snd_pcm_substream *substream,
int stream_tag;
int ret;
+ link = snd_hdac_ext_bus_get_link(bus, codec_dai->component->name);
+ if (!link)
+ return -EINVAL;
+
/* get stored dma data if resuming from system suspend */
link_dev = snd_soc_dai_get_dma_data(dai, substream);
if (!link_dev) {
@@ -232,10 +236,6 @@ static int hda_link_hw_params(struct snd_pcm_substream *substream,
if (ret < 0)
return ret;
- link = snd_hdac_ext_bus_get_link(bus, codec_dai->component->name);
- if (!link)
- return -EINVAL;
-
/* set the hdac_stream in the codec dai */
snd_soc_dai_set_stream(codec_dai, hdac_stream(link_dev), substream->stream);
diff --git a/sound/synth/emux/emux_nrpn.c b/sound/synth/emux/emux_nrpn.c
index 8056422e..0d6b82a 100644
--- a/sound/synth/emux/emux_nrpn.c
+++ b/sound/synth/emux/emux_nrpn.c
@@ -349,6 +349,9 @@ int
snd_emux_xg_control(struct snd_emux_port *port, struct snd_midi_channel *chan,
int param)
{
+ if (param >= ARRAY_SIZE(chan->control))
+ return -EINVAL;
+
return send_converted_effect(xg_effects, ARRAY_SIZE(xg_effects),
port, chan, param,
chan->control[param],
diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
index 092350e..6c7d842 100644
--- a/sound/usb/endpoint.c
+++ b/sound/usb/endpoint.c
@@ -444,8 +444,8 @@ static void push_back_to_ready_list(struct snd_usb_endpoint *ep,
* This function is used both for implicit feedback endpoints and in low-
* latency playback mode.
*/
-void snd_usb_queue_pending_output_urbs(struct snd_usb_endpoint *ep,
- bool in_stream_lock)
+int snd_usb_queue_pending_output_urbs(struct snd_usb_endpoint *ep,
+ bool in_stream_lock)
{
bool implicit_fb = snd_usb_endpoint_implicit_feedback_sink(ep);
@@ -469,7 +469,7 @@ void snd_usb_queue_pending_output_urbs(struct snd_usb_endpoint *ep,
spin_unlock_irqrestore(&ep->lock, flags);
if (ctx == NULL)
- return;
+ break;
/* copy over the length information */
if (implicit_fb) {
@@ -484,11 +484,14 @@ void snd_usb_queue_pending_output_urbs(struct snd_usb_endpoint *ep,
break;
if (err < 0) {
/* push back to ready list again for -EAGAIN */
- if (err == -EAGAIN)
+ if (err == -EAGAIN) {
push_back_to_ready_list(ep, ctx);
- else
+ break;
+ }
+
+ if (!in_stream_lock)
notify_xrun(ep);
- return;
+ return -EPIPE;
}
err = usb_submit_urb(ctx->urb, GFP_ATOMIC);
@@ -496,13 +499,16 @@ void snd_usb_queue_pending_output_urbs(struct snd_usb_endpoint *ep,
usb_audio_err(ep->chip,
"Unable to submit urb #%d: %d at %s\n",
ctx->index, err, __func__);
- notify_xrun(ep);
- return;
+ if (!in_stream_lock)
+ notify_xrun(ep);
+ return -EPIPE;
}
set_bit(ctx->index, &ep->active_mask);
atomic_inc(&ep->submitted_urbs);
}
+
+ return 0;
}
/*
diff --git a/sound/usb/endpoint.h b/sound/usb/endpoint.h
index 6a9af04..daa7ba0 100644
--- a/sound/usb/endpoint.h
+++ b/sound/usb/endpoint.h
@@ -49,7 +49,7 @@ int snd_usb_endpoint_implicit_feedback_sink(struct snd_usb_endpoint *ep);
int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep,
struct snd_urb_ctx *ctx, int idx,
unsigned int avail);
-void snd_usb_queue_pending_output_urbs(struct snd_usb_endpoint *ep,
- bool in_stream_lock);
+int snd_usb_queue_pending_output_urbs(struct snd_usb_endpoint *ep,
+ bool in_stream_lock);
#endif /* __USBAUDIO_ENDPOINT_H */
diff --git a/sound/usb/format.c b/sound/usb/format.c
index 405dc0b..4b1c5ba 100644
--- a/sound/usb/format.c
+++ b/sound/usb/format.c
@@ -39,8 +39,12 @@ static u64 parse_audio_format_i_type(struct snd_usb_audio *chip,
case UAC_VERSION_1:
default: {
struct uac_format_type_i_discrete_descriptor *fmt = _fmt;
- if (format >= 64)
- return 0; /* invalid format */
+ if (format >= 64) {
+ usb_audio_info(chip,
+ "%u:%d: invalid format type 0x%llx is detected, processed as PCM\n",
+ fp->iface, fp->altsetting, format);
+ format = UAC_FORMAT_TYPE_I_PCM;
+ }
sample_width = fmt->bBitResolution;
sample_bytes = fmt->bSubframeSize;
format = 1ULL << format;
diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
index 87a30be..de0964d 100644
--- a/sound/usb/pcm.c
+++ b/sound/usb/pcm.c
@@ -1557,7 +1557,7 @@ static int snd_usb_pcm_playback_ack(struct snd_pcm_substream *substream)
* outputs here
*/
if (!ep->active_mask)
- snd_usb_queue_pending_output_urbs(ep, true);
+ return snd_usb_queue_pending_output_urbs(ep, true);
return 0;
}
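With snd_usb_queue_pending_output_urbs() now returning an int, the low-latency ack callback can report -EPIPE itself, so the XRUN is raised by the PCM core under the stream lock instead of via notify_xrun() from the URB path. As a generic illustration (example_ep and example_queue_pending() are hypothetical), an .ack handler is allowed to return a negative error to have the core stop the stream:

/* Sketch: a snd_pcm_ops .ack callback may return an error such as -EPIPE. */
static int example_playback_ack(struct snd_pcm_substream *substream)
{
	struct example_ep *ep = substream->runtime->private_data;

	if (!ep->active_mask)				/* nothing in flight: kick the queue */
		return example_queue_pending(ep, true);	/* may return -EPIPE */

	return 0;
}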
diff --git a/tools/bootconfig/scripts/ftrace2bconf.sh b/tools/bootconfig/scripts/ftrace2bconf.sh
index 6183b36..1603801 100755
--- a/tools/bootconfig/scripts/ftrace2bconf.sh
+++ b/tools/bootconfig/scripts/ftrace2bconf.sh
@@ -93,7 +93,7 @@
}
event_is_enabled() { # enable-file
- test -f $1 & grep -q "1" $1
+ test -f $1 && grep -q "1" $1
}
per_event_options() { # event-dir
diff --git a/tools/bootconfig/test-bootconfig.sh b/tools/bootconfig/test-bootconfig.sh
index f68e2e9..a2c484c 100755
--- a/tools/bootconfig/test-bootconfig.sh
+++ b/tools/bootconfig/test-bootconfig.sh
@@ -87,10 +87,14 @@
echo "Max node number check"
-echo -n > $TEMPCONF
-for i in `seq 1 1024` ; do
- echo "node$i" >> $TEMPCONF
-done
+awk '
+BEGIN {
+ for (i = 0; i < 26; i += 1)
+ printf("%c\n", 65 + i % 26)
+ for (i = 26; i < 8192; i += 1)
+ printf("%c%c%c\n", 65 + i % 26, 65 + (i / 26) % 26, 65 + (i / 26 / 26))
+}
+' > $TEMPCONF
xpass $BOOTCONF -a $TEMPCONF $INITRD
echo "badnode" >> $TEMPCONF
diff --git a/tools/bpf/Makefile b/tools/bpf/Makefile
index b11cfc8..664601a 100644
--- a/tools/bpf/Makefile
+++ b/tools/bpf/Makefile
@@ -34,7 +34,7 @@
endif
FEATURE_USER = .bpf
-FEATURE_TESTS = libbfd disassembler-four-args
+FEATURE_TESTS = libbfd disassembler-four-args disassembler-init-styled
FEATURE_DISPLAY = libbfd disassembler-four-args
check_feat := 1
@@ -56,6 +56,9 @@
ifeq ($(feature-disassembler-four-args), 1)
CFLAGS += -DDISASM_FOUR_ARGS_SIGNATURE
endif
+ifeq ($(feature-disassembler-init-styled), 1)
+CFLAGS += -DDISASM_INIT_STYLED
+endif
$(OUTPUT)%.yacc.c: $(srctree)/tools/bpf/%.y
$(QUIET_BISON)$(YACC) -o $@ -d $<
diff --git a/tools/bpf/bpf_jit_disasm.c b/tools/bpf/bpf_jit_disasm.c
index c8ae958..a90a5d1 100644
--- a/tools/bpf/bpf_jit_disasm.c
+++ b/tools/bpf/bpf_jit_disasm.c
@@ -28,6 +28,7 @@
#include <sys/types.h>
#include <sys/stat.h>
#include <limits.h>
+#include <tools/dis-asm-compat.h>
#define CMD_ACTION_SIZE_BUFFER 10
#define CMD_ACTION_READ_ALL 3
@@ -64,7 +65,9 @@ static void get_asm_insns(uint8_t *image, size_t len, int opcodes)
assert(bfdf);
assert(bfd_check_format(bfdf, bfd_object));
- init_disassemble_info(&info, stdout, (fprintf_ftype) fprintf);
+ init_disassemble_info_compat(&info, stdout,
+ (fprintf_ftype) fprintf,
+ fprintf_styled);
info.arch = bfd_get_arch(bfdf);
info.mach = bfd_get_mach(bfdf);
info.buffer = image;
diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile
index cce52df..11266c7 100644
--- a/tools/bpf/bpftool/Makefile
+++ b/tools/bpf/bpftool/Makefile
@@ -76,7 +76,7 @@
RM ?= rm -f
FEATURE_USER = .bpftool
-FEATURE_TESTS = libbfd disassembler-four-args reallocarray zlib libcap \
+FEATURE_TESTS = libbfd disassembler-four-args disassembler-init-styled reallocarray zlib libcap \
clang-bpf-co-re
FEATURE_DISPLAY = libbfd disassembler-four-args zlib libcap \
clang-bpf-co-re
@@ -111,6 +111,9 @@
CFLAGS += -DUSE_LIBCAP
LIBS += -lcap
endif
+ifeq ($(feature-disassembler-init-styled), 1)
+ CFLAGS += -DDISASM_INIT_STYLED
+endif
include $(wildcard $(OUTPUT)*.d)
diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
index 24734f2..aaf99a0 100644
--- a/tools/bpf/bpftool/jit_disasm.c
+++ b/tools/bpf/bpftool/jit_disasm.c
@@ -24,6 +24,7 @@
#include <sys/stat.h>
#include <limits.h>
#include <bpf/libbpf.h>
+#include <tools/dis-asm-compat.h>
#include "json_writer.h"
#include "main.h"
@@ -39,15 +40,12 @@ static void get_exec_path(char *tpath, size_t size)
}
static int oper_count;
-static int fprintf_json(void *out, const char *fmt, ...)
+static int printf_json(void *out, const char *fmt, va_list ap)
{
- va_list ap;
char *s;
int err;
- va_start(ap, fmt);
err = vasprintf(&s, fmt, ap);
- va_end(ap);
if (err < 0)
return -1;
@@ -73,6 +71,32 @@ static int fprintf_json(void *out, const char *fmt, ...)
return 0;
}
+static int fprintf_json(void *out, const char *fmt, ...)
+{
+ va_list ap;
+ int r;
+
+ va_start(ap, fmt);
+ r = printf_json(out, fmt, ap);
+ va_end(ap);
+
+ return r;
+}
+
+static int fprintf_json_styled(void *out,
+ enum disassembler_style style __maybe_unused,
+ const char *fmt, ...)
+{
+ va_list ap;
+ int r;
+
+ va_start(ap, fmt);
+ r = printf_json(out, fmt, ap);
+ va_end(ap);
+
+ return r;
+}
+
void disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
const char *arch, const char *disassembler_options,
const struct btf *btf,
@@ -99,11 +123,13 @@ void disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
assert(bfd_check_format(bfdf, bfd_object));
if (json_output)
- init_disassemble_info(&info, stdout,
- (fprintf_ftype) fprintf_json);
+ init_disassemble_info_compat(&info, stdout,
+ (fprintf_ftype) fprintf_json,
+ fprintf_json_styled);
else
- init_disassemble_info(&info, stdout,
- (fprintf_ftype) fprintf);
+ init_disassemble_info_compat(&info, stdout,
+ (fprintf_ftype) fprintf,
+ fprintf_styled);
/* Update architecture info for offload. */
if (arch) {
diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index f8755beb..bdd4d3b 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -2064,10 +2064,38 @@ static void profile_close_perf_events(struct profiler_bpf *obj)
profile_perf_event_cnt = 0;
}
+static int profile_open_perf_event(int mid, int cpu, int map_fd)
+{
+ int pmu_fd;
+
+ pmu_fd = syscall(__NR_perf_event_open, &metrics[mid].attr,
+ -1 /*pid*/, cpu, -1 /*group_fd*/, 0);
+ if (pmu_fd < 0) {
+ if (errno == ENODEV) {
+ p_info("cpu %d may be offline, skip %s profiling.",
+ cpu, metrics[mid].name);
+ profile_perf_event_cnt++;
+ return 0;
+ }
+ return -1;
+ }
+
+ if (bpf_map_update_elem(map_fd,
+ &profile_perf_event_cnt,
+ &pmu_fd, BPF_ANY) ||
+ ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)) {
+ close(pmu_fd);
+ return -1;
+ }
+
+ profile_perf_events[profile_perf_event_cnt++] = pmu_fd;
+ return 0;
+}
+
static int profile_open_perf_events(struct profiler_bpf *obj)
{
unsigned int cpu, m;
- int map_fd, pmu_fd;
+ int map_fd;
profile_perf_events = calloc(
sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric);
@@ -2086,17 +2114,11 @@ static int profile_open_perf_events(struct profiler_bpf *obj)
if (!metrics[m].selected)
continue;
for (cpu = 0; cpu < obj->rodata->num_cpu; cpu++) {
- pmu_fd = syscall(__NR_perf_event_open, &metrics[m].attr,
- -1/*pid*/, cpu, -1/*group_fd*/, 0);
- if (pmu_fd < 0 ||
- bpf_map_update_elem(map_fd, &profile_perf_event_cnt,
- &pmu_fd, BPF_ANY) ||
- ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)) {
+ if (profile_open_perf_event(m, cpu, map_fd)) {
p_err("failed to create event %s on cpu %d",
metrics[m].name, cpu);
return -1;
}
- profile_perf_events[profile_perf_event_cnt++] = pmu_fd;
}
}
return 0;
diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
index 6abde48..f027281 100644
--- a/tools/build/Makefile.feature
+++ b/tools/build/Makefile.feature
@@ -69,6 +69,7 @@
libaio \
libzstd \
disassembler-four-args \
+ disassembler-init-styled \
file-handle
# FEATURE_TESTS_BASIC + FEATURE_TESTS_EXTRA is the complete list
diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
index 69a43d9..aa3b0d7 100644
--- a/tools/build/feature/Makefile
+++ b/tools/build/feature/Makefile
@@ -18,6 +18,7 @@
test-libbfd.bin \
test-libbfd-buildid.bin \
test-disassembler-four-args.bin \
+ test-disassembler-init-styled.bin \
test-reallocarray.bin \
test-libbfd-liberty.bin \
test-libbfd-liberty-z.bin \
@@ -239,6 +240,9 @@
$(OUTPUT)test-disassembler-four-args.bin:
$(BUILD) -DPACKAGE='"perf"' -lbfd -lopcodes
+$(OUTPUT)test-disassembler-init-styled.bin:
+ $(BUILD) -DPACKAGE='"perf"' -lbfd -lopcodes
+
$(OUTPUT)test-reallocarray.bin:
$(BUILD)
diff --git a/tools/build/feature/test-all.c b/tools/build/feature/test-all.c
index 5ffafb9..957c02c 100644
--- a/tools/build/feature/test-all.c
+++ b/tools/build/feature/test-all.c
@@ -166,6 +166,10 @@
# include "test-disassembler-four-args.c"
#undef main
+#define main main_test_disassembler_init_styled
+# include "test-disassembler-init-styled.c"
+#undef main
+
#define main main_test_libzstd
# include "test-libzstd.c"
#undef main
diff --git a/tools/build/feature/test-disassembler-init-styled.c b/tools/build/feature/test-disassembler-init-styled.c
new file mode 100644
index 0000000..f1ce0ec
--- /dev/null
+++ b/tools/build/feature/test-disassembler-init-styled.c
@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <stdio.h>
+#include <dis-asm.h>
+
+int main(void)
+{
+ struct disassemble_info info;
+
+ init_disassemble_info(&info, stdout,
+ NULL, NULL);
+
+ return 0;
+}
diff --git a/tools/gpio/gpio-event-mon.c b/tools/gpio/gpio-event-mon.c
index a2b233f..6670199 100644
--- a/tools/gpio/gpio-event-mon.c
+++ b/tools/gpio/gpio-event-mon.c
@@ -86,6 +86,7 @@ int monitor_device(const char *device_name,
gpiotools_test_bit(values.bits, i));
}
+ i = 0;
while (1) {
struct gpio_v2_line_event event;
diff --git a/tools/iio/iio_utils.c b/tools/iio/iio_utils.c
index 8d35893..6a00a6e 100644
--- a/tools/iio/iio_utils.c
+++ b/tools/iio/iio_utils.c
@@ -264,6 +264,7 @@ int iioutils_get_param_float(float *output, const char *param_name,
if (fscanf(sysfsfp, "%f", output) != 1)
ret = errno ? -errno : -ENODATA;
+ fclose(sysfsfp);
break;
}
error_free_filename:
@@ -345,9 +346,9 @@ int build_channel_array(const char *device_dir, int buffer_idx,
}
sysfsfp = fopen(filename, "r");
+ free(filename);
if (!sysfsfp) {
ret = -errno;
- free(filename);
goto error_close_dir;
}
@@ -357,7 +358,6 @@ int build_channel_array(const char *device_dir, int buffer_idx,
if (fclose(sysfsfp))
perror("build_channel_array(): Failed to close file");
- free(filename);
goto error_close_dir;
}
if (ret == 1)
@@ -365,11 +365,9 @@ int build_channel_array(const char *device_dir, int buffer_idx,
if (fclose(sysfsfp)) {
ret = -errno;
- free(filename);
goto error_close_dir;
}
- free(filename);
}
*ci_array = malloc(sizeof(**ci_array) * (*counter));
@@ -395,9 +393,9 @@ int build_channel_array(const char *device_dir, int buffer_idx,
}
sysfsfp = fopen(filename, "r");
+ free(filename);
if (!sysfsfp) {
ret = -errno;
- free(filename);
count--;
goto error_cleanup_array;
}
@@ -405,20 +403,17 @@ int build_channel_array(const char *device_dir, int buffer_idx,
errno = 0;
if (fscanf(sysfsfp, "%i", &current_enabled) != 1) {
ret = errno ? -errno : -ENODATA;
- free(filename);
count--;
goto error_cleanup_array;
}
if (fclose(sysfsfp)) {
ret = -errno;
- free(filename);
count--;
goto error_cleanup_array;
}
if (!current_enabled) {
- free(filename);
count--;
continue;
}
@@ -429,7 +424,6 @@ int build_channel_array(const char *device_dir, int buffer_idx,
strlen(ent->d_name) -
strlen("_en"));
if (!current->name) {
- free(filename);
ret = -ENOMEM;
count--;
goto error_cleanup_array;
@@ -439,7 +433,6 @@ int build_channel_array(const char *device_dir, int buffer_idx,
ret = iioutils_break_up_name(current->name,
&current->generic_name);
if (ret) {
- free(filename);
free(current->name);
count--;
goto error_cleanup_array;
@@ -450,17 +443,16 @@ int build_channel_array(const char *device_dir, int buffer_idx,
scan_el_dir,
current->name);
if (ret < 0) {
- free(filename);
ret = -ENOMEM;
goto error_cleanup_array;
}
sysfsfp = fopen(filename, "r");
+ free(filename);
if (!sysfsfp) {
ret = -errno;
- fprintf(stderr, "failed to open %s\n",
- filename);
- free(filename);
+ fprintf(stderr, "failed to open %s/%s_index\n",
+ scan_el_dir, current->name);
goto error_cleanup_array;
}
@@ -470,17 +462,14 @@ int build_channel_array(const char *device_dir, int buffer_idx,
if (fclose(sysfsfp))
perror("build_channel_array(): Failed to close file");
- free(filename);
goto error_cleanup_array;
}
if (fclose(sysfsfp)) {
ret = -errno;
- free(filename);
goto error_cleanup_array;
}
- free(filename);
/* Find the scale */
ret = iioutils_get_param_float(&current->scale,
"scale",
diff --git a/tools/include/linux/kernel.h b/tools/include/linux/kernel.h
index c2e1098..5a79572 100644
--- a/tools/include/linux/kernel.h
+++ b/tools/include/linux/kernel.h
@@ -108,7 +108,9 @@ int vscnprintf(char *buf, size_t size, const char *fmt, va_list args);
int scnprintf(char * buf, size_t size, const char * fmt, ...);
int scnprintf_pad(char * buf, size_t size, const char * fmt, ...);
+#ifndef ARRAY_SIZE
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
+#endif
/*
* This looks more complex than it should be. But we need to
diff --git a/tools/include/tools/dis-asm-compat.h b/tools/include/tools/dis-asm-compat.h
new file mode 100644
index 0000000..70f331e
--- /dev/null
+++ b/tools/include/tools/dis-asm-compat.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause */
+#ifndef _TOOLS_DIS_ASM_COMPAT_H
+#define _TOOLS_DIS_ASM_COMPAT_H
+
+#include <stdio.h>
+#include <dis-asm.h>
+
+/* define types for older binutils version, to centralize ifdef'ery a bit */
+#ifndef DISASM_INIT_STYLED
+enum disassembler_style {DISASSEMBLER_STYLE_NOT_EMPTY};
+typedef int (*fprintf_styled_ftype) (void *, enum disassembler_style, const char*, ...);
+#endif
+
+/*
+ * Trivial fprintf wrapper to be used as the fprintf_styled_func argument to
+ * init_disassemble_info_compat() when normal fprintf suffices.
+ */
+static inline int fprintf_styled(void *out,
+ enum disassembler_style style,
+ const char *fmt, ...)
+{
+ va_list args;
+ int r;
+
+ (void)style;
+
+ va_start(args, fmt);
+ r = vfprintf(out, fmt, args);
+ va_end(args);
+
+ return r;
+}
+
+/*
+ * Wrapper for init_disassemble_info() that hides version
+ * differences. Depending on binutils version and architecture either
+ * fprintf_func or fprintf_styled_func will be called.
+ */
+static inline void init_disassemble_info_compat(struct disassemble_info *info,
+ void *stream,
+ fprintf_ftype unstyled_func,
+ fprintf_styled_ftype styled_func)
+{
+#ifdef DISASM_INIT_STYLED
+ init_disassemble_info(info, stream,
+ unstyled_func,
+ styled_func);
+#else
+ (void)styled_func;
+ init_disassemble_info(info, stream,
+ unstyled_func);
+#endif
+}
+
+#endif /* _TOOLS_DIS_ASM_COMPAT_H */
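Mirroring how the bpf_jit_disasm.c and bpftool hunks above adopt this header, a minimal usage sketch follows; only init_disassemble_info_compat(), fprintf_styled() and the binutils disassemble_info fields are real, the surrounding function is illustrative:

  #include <stdio.h>
  #include <dis-asm.h>
  #include <tools/dis-asm-compat.h>

  static void setup_disasm(struct disassemble_info *info,
                           unsigned char *image, size_t len)
  {
          /* one call, regardless of whether binutils expects the styled callback */
          init_disassemble_info_compat(info, stdout,
                                       (fprintf_ftype) fprintf,
                                       fprintf_styled);
          info->buffer = image;          /* code bytes to disassemble */
          info->buffer_length = len;
  }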
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 3ed759f..fd23095 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -647,8 +647,21 @@ int btf__align_of(const struct btf *btf, __u32 id)
if (align <= 0)
return libbpf_err(align);
max_align = max(max_align, align);
+
+ /* if field offset isn't aligned according to field
+ * type's alignment, then struct must be packed
+ */
+ if (btf_member_bitfield_size(t, i) == 0 &&
+ (m->offset % (8 * align)) != 0)
+ return 1;
}
+ /* if struct/union size isn't a multiple of its alignment,
+ * then struct must be packed
+ */
+ if ((t->size % max_align) != 0)
+ return 1;
+
return max_align;
}
default:
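Two hypothetical structs (not taken from the patch) that the checks added to btf__align_of() above now classify as packed, i.e. for which it returns 1:

  /* first check: a member offset that is not a multiple of its type's alignment */
  struct misaligned_member {
          char c;
          int  x;      /* sits at byte offset 1, so its bit offset is not int-aligned */
  } __attribute__((packed));

  /* second check: size (5) that is not a multiple of the largest member alignment (4) */
  struct short_tail {
          int  x;
          char c;
  } __attribute__((packed));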
diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
index f620911ad..a9f974e 100644
--- a/tools/lib/bpf/btf_dump.c
+++ b/tools/lib/bpf/btf_dump.c
@@ -821,14 +821,9 @@ static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
const struct btf_type *t)
{
const struct btf_member *m;
- int align, i, bit_sz;
+ int max_align = 1, align, i, bit_sz;
__u16 vlen;
- align = btf__align_of(btf, id);
- /* size of a non-packed struct has to be a multiple of its alignment*/
- if (align && t->size % align)
- return true;
-
m = btf_members(t);
vlen = btf_vlen(t);
/* all non-bitfield fields have to be naturally aligned */
@@ -837,8 +832,11 @@ static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
bit_sz = btf_member_bitfield_size(t, i);
if (align && bit_sz == 0 && m->offset % (8 * align) != 0)
return true;
+ max_align = max(align, max_align);
}
-
+ /* size of a non-packed struct has to be a multiple of its alignment */
+ if (t->size % max_align != 0)
+ return true;
/*
* if original struct was marked as packed, but its layout is
* naturally aligned, we'll detect that it's not packed
@@ -846,44 +844,97 @@ static bool btf_is_struct_packed(const struct btf *btf, __u32 id,
return false;
}
-static int chip_away_bits(int total, int at_most)
-{
- return total % at_most ? : at_most;
-}
-
static void btf_dump_emit_bit_padding(const struct btf_dump *d,
- int cur_off, int m_off, int m_bit_sz,
- int align, int lvl)
+ int cur_off, int next_off, int next_align,
+ bool in_bitfield, int lvl)
{
- int off_diff = m_off - cur_off;
- int ptr_bits = d->ptr_sz * 8;
+ const struct {
+ const char *name;
+ int bits;
+ } pads[] = {
+ {"long", d->ptr_sz * 8}, {"int", 32}, {"short", 16}, {"char", 8}
+ };
+ int new_off, pad_bits, bits, i;
+ const char *pad_type;
- if (off_diff <= 0)
- /* no gap */
- return;
- if (m_bit_sz == 0 && off_diff < align * 8)
- /* natural padding will take care of a gap */
- return;
+ if (cur_off >= next_off)
+ return; /* no gap */
- while (off_diff > 0) {
- const char *pad_type;
- int pad_bits;
+ /* For filling out padding we want to take advantage of
+ * natural alignment rules to minimize unnecessary explicit
+ * padding. First, we find the largest type (among long, int,
+ * short, or char) that can be used to force naturally aligned
+ * boundary. Once determined, we'll use such type to fill in
+ * the remaining padding gap. In some cases we can rely on
+ * compiler filling some gaps, but sometimes we need to force
+ * alignment to close natural alignment with markers like
+ * `long: 0` (this is always the case for bitfields). Note
+ * that even if struct itself has, let's say 4-byte alignment
+ * (i.e., it only uses up to int-aligned types), using `long:
+ * X;` explicit padding doesn't actually change struct's
+ * overall alignment requirements, but compiler does take into
+ * account that type's (long, in this example) natural
+ * alignment requirements when adding implicit padding. We use
+ * this fact heavily and don't worry about ruining correct
+ * struct alignment requirement.
+ */
+ for (i = 0; i < ARRAY_SIZE(pads); i++) {
+ pad_bits = pads[i].bits;
+ pad_type = pads[i].name;
- if (ptr_bits > 32 && off_diff > 32) {
- pad_type = "long";
- pad_bits = chip_away_bits(off_diff, ptr_bits);
- } else if (off_diff > 16) {
- pad_type = "int";
- pad_bits = chip_away_bits(off_diff, 32);
- } else if (off_diff > 8) {
- pad_type = "short";
- pad_bits = chip_away_bits(off_diff, 16);
- } else {
- pad_type = "char";
- pad_bits = chip_away_bits(off_diff, 8);
+ new_off = roundup(cur_off, pad_bits);
+ if (new_off <= next_off)
+ break;
+ }
+
+ if (new_off > cur_off && new_off <= next_off) {
+ /* We need explicit `<type>: 0` aligning mark if next
+ * field is right on alignment offset and its
+ * alignment requirement is less strict than <type>'s
+ * alignment (so compiler won't naturally align to the
+ * offset we expect), or if subsequent `<type>: X`,
+ * will actually completely fit in the remaining hole,
+ * making compiler basically ignore `<type>: X`
+ * completely.
+ */
+ if (in_bitfield ||
+ (new_off == next_off && roundup(cur_off, next_align * 8) != new_off) ||
+ (new_off != next_off && next_off - new_off <= new_off - cur_off))
+ /* but for bitfields we'll emit explicit bit count */
+ btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type,
+ in_bitfield ? new_off - cur_off : 0);
+ cur_off = new_off;
+ }
+
+ /* Now we know we start at naturally aligned offset for a chosen
+ * padding type (long, int, short, or char), and so the rest is just
+ * a straightforward filling of remaining padding gap with full
+ * `<type>: sizeof(<type>);` markers, except for the last one, which
+ * might need smaller than sizeof(<type>) padding.
+ */
+ while (cur_off != next_off) {
+ bits = min(next_off - cur_off, pad_bits);
+ if (bits == pad_bits) {
+ btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type, pad_bits);
+ cur_off += bits;
+ continue;
}
- btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type, pad_bits);
- off_diff -= pad_bits;
+ /* For the remainder padding that doesn't cover entire
+ * pad_type bit length, we pick the smallest necessary type.
+ * This is pure aesthetics, we could have just used `long`,
+ * but having smallest necessary one communicates better the
+ * scale of the padding gap.
+ */
+ for (i = ARRAY_SIZE(pads) - 1; i >= 0; i--) {
+ pad_type = pads[i].name;
+ pad_bits = pads[i].bits;
+ if (pad_bits < bits)
+ continue;
+
+ btf_dump_printf(d, "\n%s%s: %d;", pfx(lvl), pad_type, bits);
+ cur_off += bits;
+ break;
+ }
}
}
@@ -903,9 +954,11 @@ static void btf_dump_emit_struct_def(struct btf_dump *d,
{
const struct btf_member *m = btf_members(t);
bool is_struct = btf_is_struct(t);
- int align, i, packed, off = 0;
+ bool packed, prev_bitfield = false;
+ int align, i, off = 0;
__u16 vlen = btf_vlen(t);
+ align = btf__align_of(d->btf, id);
packed = is_struct ? btf_is_struct_packed(d->btf, id, t) : 0;
btf_dump_printf(d, "%s%s%s {",
@@ -915,35 +968,42 @@ static void btf_dump_emit_struct_def(struct btf_dump *d,
for (i = 0; i < vlen; i++, m++) {
const char *fname;
- int m_off, m_sz;
+ int m_off, m_sz, m_align;
+ bool in_bitfield;
fname = btf_name_of(d, m->name_off);
m_sz = btf_member_bitfield_size(t, i);
m_off = btf_member_bit_offset(t, i);
- align = packed ? 1 : btf__align_of(d->btf, m->type);
+ m_align = packed ? 1 : btf__align_of(d->btf, m->type);
- btf_dump_emit_bit_padding(d, off, m_off, m_sz, align, lvl + 1);
+ in_bitfield = prev_bitfield && m_sz != 0;
+
+ btf_dump_emit_bit_padding(d, off, m_off, m_align, in_bitfield, lvl + 1);
btf_dump_printf(d, "\n%s", pfx(lvl + 1));
btf_dump_emit_type_decl(d, m->type, fname, lvl + 1);
if (m_sz) {
btf_dump_printf(d, ": %d", m_sz);
off = m_off + m_sz;
+ prev_bitfield = true;
} else {
m_sz = max((__s64)0, btf__resolve_size(d->btf, m->type));
off = m_off + m_sz * 8;
+ prev_bitfield = false;
}
+
btf_dump_printf(d, ";");
}
/* pad at the end, if necessary */
- if (is_struct) {
- align = packed ? 1 : btf__align_of(d->btf, id);
- btf_dump_emit_bit_padding(d, off, t->size * 8, 0, align,
- lvl + 1);
- }
+ if (is_struct)
+ btf_dump_emit_bit_padding(d, off, t->size * 8, align, false, lvl + 1);
- if (vlen)
+ /*
+ * Keep `struct empty {}` on a single line,
+ * only print newline when there are regular or padding fields.
+ */
+ if (vlen || t->size)
btf_dump_printf(d, "\n");
btf_dump_printf(d, "%s}", pfx(lvl));
if (packed)
diff --git a/tools/lib/bpf/nlattr.c b/tools/lib/bpf/nlattr.c
index f57e77a..2dbe7b9 100644
--- a/tools/lib/bpf/nlattr.c
+++ b/tools/lib/bpf/nlattr.c
@@ -178,7 +178,7 @@ int libbpf_nla_dump_errormsg(struct nlmsghdr *nlh)
hlen += nlmsg_len(&err->msg);
attr = (struct nlattr *) ((void *) err + hlen);
- alen = nlh->nlmsg_len - hlen;
+ alen = (void *)nlh + nlh->nlmsg_len - (void *)attr;
if (libbpf_nla_parse(tb, NLMSGERR_ATTR_MAX, attr, alen,
extack_policy) != 0) {
diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
index 92ce4fc..549acc5 100644
--- a/tools/objtool/Makefile
+++ b/tools/objtool/Makefile
@@ -19,8 +19,8 @@
OBJTOOL := $(OUTPUT)objtool
OBJTOOL_IN := $(OBJTOOL)-in.o
-LIBELF_FLAGS := $(shell pkg-config libelf --cflags 2>/dev/null)
-LIBELF_LIBS := $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
+LIBELF_FLAGS := $(shell $(HOSTPKG_CONFIG) libelf --cflags 2>/dev/null)
+LIBELF_LIBS := $(shell $(HOSTPKG_CONFIG) libelf --libs 2>/dev/null || echo -lelf)
all: $(OBJTOOL)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 308c880..32f119e 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -169,6 +169,7 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
"panic",
"do_exit",
"do_task_dead",
+ "make_task_dead",
"__module_put_and_exit",
"complete_and_exit",
"__reiserfs_panic",
@@ -176,10 +177,11 @@ static bool __dead_end_function(struct objtool_file *file, struct symbol *func,
"fortify_panic",
"usercopy_abort",
"machine_real_restart",
- "rewind_stack_do_exit",
+ "rewind_stack_and_make_dead",
"kunit_try_catch_throw",
"xen_start_kernel",
"cpu_bringup_and_idle",
+ "stop_this_cpu",
};
if (!func)
@@ -571,6 +573,7 @@ static int create_static_call_sections(struct objtool_file *file)
if (strncmp(key_name, STATIC_CALL_TRAMP_PREFIX_STR,
STATIC_CALL_TRAMP_PREFIX_LEN)) {
WARN("static_call: trampoline name malformed: %s", key_name);
+ free(key_name);
return -1;
}
tmp = key_name + STATIC_CALL_TRAMP_PREFIX_LEN - STATIC_CALL_KEY_PREFIX_LEN;
@@ -580,6 +583,7 @@ static int create_static_call_sections(struct objtool_file *file)
if (!key_sym) {
if (!module) {
WARN("static_call: can't find static_call_key symbol: %s", tmp);
+ free(key_name);
return -1;
}
@@ -906,6 +910,8 @@ static const char *uaccess_safe_builtin[] = {
"__tsan_atomic64_compare_exchange_val",
"__tsan_atomic_thread_fence",
"__tsan_atomic_signal_fence",
+ "__tsan_unaligned_read16",
+ "__tsan_unaligned_write16",
/* KCOV */
"write_comp_data",
"check_kcov_mode",
diff --git a/tools/perf/Documentation/perf-intel-pt.txt b/tools/perf/Documentation/perf-intel-pt.txt
index db465fa..de63c41 100644
--- a/tools/perf/Documentation/perf-intel-pt.txt
+++ b/tools/perf/Documentation/perf-intel-pt.txt
@@ -108,9 +108,10 @@
perf script --itrace=ibxwpe -F+flags
-The flags are "bcrosyiABExgh" which stand for branch, call, return, conditional,
+The flags are "bcrosyiABExghDt" which stand for branch, call, return, conditional,
system, asynchronous, interrupt, transaction abort, trace begin, trace end,
-in transaction, VM-entry, and VM-exit respectively.
+in transaction, VM-entry, VM-exit, interrupt disabled, and interrupt disable
+toggle respectively.
perf script also supports higher level ways to dump instruction traces:
@@ -456,6 +457,8 @@
which contains "1" if the feature is supported and
"0" otherwise.
+ As an alternative, refer to "Emulated PTWRITE" further below.
+
fup_on_ptw Enable a FUP packet to follow the PTWRITE packet. The FUP packet
provides the address of the ptwrite instruction. In the absence of
fup_on_ptw, the decoder will use the address of the previous branch
@@ -472,6 +475,30 @@
which contains "1" if the feature is supported and
"0" otherwise.
+event Enable Event Trace. The events provide information about asynchronous
+ events.
+
+ Support for this feature is indicated by:
+
+ /sys/bus/event_source/devices/intel_pt/caps/event_trace
+
+ which contains "1" if the feature is supported and
+ "0" otherwise.
+
+notnt Disable TNT packets. Without TNT packets, it is not possible to walk
+ executable code to reconstruct control flow, however FUP, TIP, TIP.PGE
+ and TIP.PGD packets still indicate asynchronous control flow, and (if
+ return compression is disabled - see noretcomp) return statements.
+ The advantage of eliminating TNT packets is reducing the size of the
+ trace and corresponding tracing overhead.
+
+ Support for this feature is indicated by:
+
+ /sys/bus/event_source/devices/intel_pt/caps/tnt_disable
+
+ which contains "1" if the feature is supported and
+ "0" otherwise.
+
AUX area sampling option
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -865,6 +892,8 @@
p synthesize "power" events (incl. PSB events)
c synthesize branches events (calls only)
r synthesize branches events (returns only)
+ o synthesize PEBS-via-PT events
+ I synthesize Event Trace events
e synthesize tracing error events
d create a debug log
g synthesize a call chain (use with i or x)
@@ -1338,6 +1367,202 @@
:17006 17006 [001] 11500.262869216: ffffffff8220116e error_entry+0xe ([guest.kernel.kallsyms]) pushq %rax
+Event Trace
+-----------
+
+Event Trace records information about asynchronous events, for example interrupts,
+faults, VM exits and entries. The information is recorded in CFE and EVD packets,
+and also the Interrupt Flag is recorded on the MODE.Exec packet. The CFE packet
+contains a type field to identify one of the following:
+
+ 1 INTR interrupt, fault, exception, NMI
+ 2 IRET interrupt return
+ 3 SMI system management interrupt
+ 4 RSM resume from system management mode
+ 5 SIPI startup interprocessor interrupt
+ 6 INIT INIT signal
+ 7 VMENTRY VM-Entry
+ 8 VMEXIT VM-Exit
+ 9 VMEXIT_INTR VM-Exit due to interrupt
+ 10 SHUTDOWN Shutdown
+
+For more details, refer to the Intel 64 and IA-32 Architectures Software
+Developer Manuals (version 076 or later).
+
+The capability to do Event Trace is indicated by the
+/sys/bus/event_source/devices/intel_pt/caps/event_trace file.
+
+Event trace is selected for recording using the "event" config term. e.g.
+
+ perf record -e intel_pt/event/u uname
+
+Event trace events are output using the --itrace I option. e.g.
+
+ perf script --itrace=Ie
+
+perf script displays events containing CFE type, vector and event data,
+in the form:
+
+ evt: hw int (t) cfe: INTR IP: 1 vector: 3 PFA: 0x8877665544332211
+
+The IP flag indicates if the event binds to an IP, which includes any case where
+flow control packet generation is enabled, as well as when CFE packet IP bit is
+set.
+
+perf script displays events containing changes to the Interrupt Flag in the form:
+
+ iflag: t IFLAG: 1->0 via branch
+
+where "via branch" indicates a branch (interrupt or return from interrupt) and
+"non branch" indicates an instruction such as CFI, STI or POPF).
+
+In addition, the current state of the interrupt flag is indicated by the presence
+or absence of the "D" (interrupt disabled) perf script flag. If the interrupt
+flag is changed, then the "t" flag is also included i.e.
+
+ no flag, interrupts enabled IF=1
+ t interrupts become disabled IF=1 -> IF=0
+ D interrupts are disabled IF=0
+ Dt interrupts become enabled IF=0 -> IF=1
+
+The intel-pt-events.py script illustrates how to access Event Trace information
+using a Python script.
+
+
+TNT Disable
+-----------
+
+TNT packets are disabled using the "notnt" config term. e.g.
+
+ perf record -e intel_pt/notnt/u uname
+
+In that case the --itrace q option is forced because walking executable code
+to reconstruct the control flow is not possible.
+
+
+Emulated PTWRITE
+----------------
+
+Later perf tools support a method to emulate the ptwrite instruction, which
+can be useful if hardware does not support the ptwrite instruction.
+
+Instead of using the ptwrite instruction, a function is used which produces
+a trace that encodes the payload data into TNT packets. Here is an example
+of the function:
+
+ #include <stdint.h>
+
+ void perf_emulate_ptwrite(uint64_t x)
+ __attribute__((externally_visible, noipa, no_instrument_function, naked));
+
+ #define PERF_EMULATE_PTWRITE_8_BITS \
+ "1: shl %rax\n" \
+ " jc 1f\n" \
+ "1: shl %rax\n" \
+ " jc 1f\n" \
+ "1: shl %rax\n" \
+ " jc 1f\n" \
+ "1: shl %rax\n" \
+ " jc 1f\n" \
+ "1: shl %rax\n" \
+ " jc 1f\n" \
+ "1: shl %rax\n" \
+ " jc 1f\n" \
+ "1: shl %rax\n" \
+ " jc 1f\n" \
+ "1: shl %rax\n" \
+ " jc 1f\n"
+
+ /* Undefined instruction */
+ #define PERF_EMULATE_PTWRITE_UD2 ".byte 0x0f, 0x0b\n"
+
+ #define PERF_EMULATE_PTWRITE_MAGIC PERF_EMULATE_PTWRITE_UD2 ".ascii \"perf,ptwrite \"\n"
+
+ void perf_emulate_ptwrite(uint64_t x __attribute__ ((__unused__)))
+ {
+ /* Assumes SysV ABI : x passed in rdi */
+ __asm__ volatile (
+ "jmp 1f\n"
+ PERF_EMULATE_PTWRITE_MAGIC
+ "1: mov %rdi, %rax\n"
+ PERF_EMULATE_PTWRITE_8_BITS
+ PERF_EMULATE_PTWRITE_8_BITS
+ PERF_EMULATE_PTWRITE_8_BITS
+ PERF_EMULATE_PTWRITE_8_BITS
+ PERF_EMULATE_PTWRITE_8_BITS
+ PERF_EMULATE_PTWRITE_8_BITS
+ PERF_EMULATE_PTWRITE_8_BITS
+ PERF_EMULATE_PTWRITE_8_BITS
+ "1: ret\n"
+ );
+ }
+
+For example, a test program with the function above:
+
+ #include <stdio.h>
+ #include <stdint.h>
+ #include <stdlib.h>
+
+ #include "perf_emulate_ptwrite.h"
+
+ int main(int argc, char *argv[])
+ {
+ uint64_t x = 0;
+
+ if (argc > 1)
+ x = strtoull(argv[1], NULL, 0);
+ perf_emulate_ptwrite(x);
+ return 0;
+ }
+
+Can be compiled and traced:
+
+ $ gcc -Wall -Wextra -O3 -g -o eg_ptw eg_ptw.c
+ $ perf record -e intel_pt//u ./eg_ptw 0x1234567890abcdef
+ [ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 0.017 MB perf.data ]
+ $ perf script --itrace=ew
+ eg_ptw 19875 [007] 8061.235912: ptwrite: IP: 0 payload: 0x1234567890abcdef 55701249a196 perf_emulate_ptwrite+0x16 (/home/user/eg_ptw)
+ $
+
+
+Pipe mode
+---------
+Pipe mode is a problem for Intel PT and possibly other auxtrace users.
+It's not recommended to use a pipe as data output with Intel PT because
+of the following reason.
+
+Essentially the auxtrace buffers do not behave like the regular perf
+event buffers. That is because the head and tail are updated by
+software, but in the auxtrace case the data is written by hardware.
+So the head and tail do not get updated as data is written.
+
+In the Intel PT case, the head and tail are updated only when the trace
+is disabled by software, for example:
+ - full-trace, system wide : when buffer passes watermark
+ - full-trace, not system-wide : when buffer passes watermark or
+ context switches
+ - snapshot mode : as above but also when a snapshot is made
+ - sample mode : as above but also when a sample is made
+
+That means finished-round ordering doesn't work. An auxtrace buffer
+can turn up that has data that extends back in time, possibly to the
+very beginning of tracing.
+
+For a perf.data file, that problem is solved by going through the trace
+and queuing up the auxtrace buffers in advance.
+
+For pipe mode, the order of events and timestamps can presumably
+be messed up.
+
+
+EXAMPLE
+-------
+
+Examples can be found on perf wiki page "Perf tools support for Intel® Processor Trace":
+
+https://perf.wiki.kernel.org/index.php/Perf_tools_support_for_Intel%C2%AE_Processor_Trace
+
SEE ALSO
--------
diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
index 35e1f2a..2c30a2b 100644
--- a/tools/perf/Makefile.config
+++ b/tools/perf/Makefile.config
@@ -296,6 +296,7 @@
FEATURE_CHECK_LDFLAGS-libaio = -lrt
FEATURE_CHECK_LDFLAGS-disassembler-four-args = -lbfd -lopcodes -ldl
+FEATURE_CHECK_LDFLAGS-disassembler-init-styled = -lbfd -lopcodes -ldl
CORE_CFLAGS += -fno-omit-frame-pointer
CORE_CFLAGS += -ggdb3
@@ -872,13 +873,16 @@
ifeq ($(feature-libbfd-liberty), 1)
EXTLIBS += -lbfd -lopcodes -liberty
FEATURE_CHECK_LDFLAGS-disassembler-four-args += -liberty -ldl
+ FEATURE_CHECK_LDFLAGS-disassembler-init-styled += -liberty -ldl
else
ifeq ($(feature-libbfd-liberty-z), 1)
EXTLIBS += -lbfd -lopcodes -liberty -lz
FEATURE_CHECK_LDFLAGS-disassembler-four-args += -liberty -lz -ldl
+ FEATURE_CHECK_LDFLAGS-disassembler-init-styled += -liberty -lz -ldl
endif
endif
$(call feature_check,disassembler-four-args)
+ $(call feature_check,disassembler-init-styled)
endif
ifeq ($(feature-libbfd-buildid), 1)
@@ -992,6 +996,10 @@
CFLAGS += -DDISASM_FOUR_ARGS_SIGNATURE
endif
+ifeq ($(feature-disassembler-init-styled), 1)
+ CFLAGS += -DDISASM_INIT_STYLED
+endif
+
ifeq (${IS_64_BIT}, 1)
ifndef NO_PERF_READ_VDSO32
$(call feature_check,compile-32)
diff --git a/tools/perf/arch/x86/tests/intel-pt-pkt-decoder-test.c b/tools/perf/arch/x86/tests/intel-pt-pkt-decoder-test.c
index c933e3d..9589314 100644
--- a/tools/perf/arch/x86/tests/intel-pt-pkt-decoder-test.c
+++ b/tools/perf/arch/x86/tests/intel-pt-pkt-decoder-test.c
@@ -166,6 +166,14 @@ struct test_data {
{2, {0x02, 0xb3}, INTEL_PT_BLK_4_CTX, {INTEL_PT_BEP_IP, 0, 0}, 0, 0 },
{2, {0x02, 0x33}, INTEL_PT_BLK_8_CTX, {INTEL_PT_BEP, 0, 0}, 0, 0 },
{2, {0x02, 0xb3}, INTEL_PT_BLK_8_CTX, {INTEL_PT_BEP_IP, 0, 0}, 0, 0 },
+ /* Control Flow Event Packet */
+ {4, {0x02, 0x13, 0x01, 0x03}, 0, {INTEL_PT_CFE, 1, 3}, 0, 0 },
+ {4, {0x02, 0x13, 0x81, 0x03}, 0, {INTEL_PT_CFE_IP, 1, 3}, 0, 0 },
+ {4, {0x02, 0x13, 0x1f, 0x00}, 0, {INTEL_PT_CFE, 0x1f, 0}, 0, 0 },
+ {4, {0x02, 0x13, 0x9f, 0xff}, 0, {INTEL_PT_CFE_IP, 0x1f, 0xff}, 0, 0 },
+ /* Event Data Packet */
+ {11, {0x02, 0x53, 0x09, 1, 2, 3, 4, 5, 6, 7}, 0, {INTEL_PT_EVD, 0x09, 0x7060504030201}, 0, 0 },
+ {11, {0x02, 0x53, 0x3f, 2, 3, 4, 5, 6, 7, 8}, 0, {INTEL_PT_EVD, 0x3f, 0x8070605040302}, 0, 0 },
/* Terminator */
{0, {0}, 0, {0, 0, 0}, 0, 0 },
};
diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index 50c2e68..8e7a65a8 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -142,14 +142,14 @@ static int perf_event__repipe_event_update(struct perf_tool *tool,
#ifdef HAVE_AUXTRACE_SUPPORT
-static int copy_bytes(struct perf_inject *inject, int fd, off_t size)
+static int copy_bytes(struct perf_inject *inject, struct perf_data *data, off_t size)
{
char buf[4096];
ssize_t ssz;
int ret;
while (size > 0) {
- ssz = read(fd, buf, min(size, (off_t)sizeof(buf)));
+ ssz = perf_data__read(data, buf, min(size, (off_t)sizeof(buf)));
if (ssz < 0)
return -errno;
ret = output_bytes(inject, buf, ssz);
@@ -187,7 +187,7 @@ static s64 perf_event__repipe_auxtrace(struct perf_session *session,
ret = output_bytes(inject, event, event->header.size);
if (ret < 0)
return ret;
- ret = copy_bytes(inject, perf_data__fd(session->data),
+ ret = copy_bytes(inject, session->data,
event->auxtrace.size);
} else {
ret = output_bytes(inject, event,
@@ -463,6 +463,7 @@ static int perf_event__repipe_buildid_mmap2(struct perf_tool *tool,
dso->hit = 1;
}
dso__put(dso);
+ perf_event__repipe(tool, event, sample, machine);
return 0;
}
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index aad65c9..0b709e3 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -558,12 +558,7 @@ static int enable_counters(void)
return err;
}
- /*
- * We need to enable counters only if:
- * - we don't have tracee (attaching to task or cpu)
- * - we have initial delay configured
- */
- if (!target__none(&target)) {
+ if (!target__enable_on_exec(&target)) {
if (!all_counters_use_bpf)
evlist__enable(evsel_list);
}
@@ -941,7 +936,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
return err;
}
- if (stat_config.initial_delay) {
+ if (target.initial_delay) {
pr_info(EVLIST_DISABLED_MSG);
} else {
err = enable_counters();
@@ -953,8 +948,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
if (forks)
evlist__start_workload(evsel_list);
- if (stat_config.initial_delay > 0) {
- usleep(stat_config.initial_delay * USEC_PER_MSEC);
+ if (target.initial_delay > 0) {
+ usleep(target.initial_delay * USEC_PER_MSEC);
err = enable_counters();
if (err)
return -1;
@@ -1244,7 +1239,7 @@ static struct option stat_options[] = {
"aggregate counts per thread", AGGR_THREAD),
OPT_SET_UINT(0, "per-node", &stat_config.aggr_mode,
"aggregate counts per numa node", AGGR_NODE),
- OPT_INTEGER('D', "delay", &stat_config.initial_delay,
+ OPT_INTEGER('D', "delay", &target.initial_delay,
"ms to wait before starting measurement after program start (-1: start with events disabled)"),
OPT_CALLBACK_NOOPT(0, "metric-only", &stat_config.metric_only, NULL,
"Only print computed metrics. No raw values", enable_metric_only),
diff --git a/tools/perf/perf-completion.sh b/tools/perf/perf-completion.sh
index fdf75d4..978249d 100644
--- a/tools/perf/perf-completion.sh
+++ b/tools/perf/perf-completion.sh
@@ -165,7 +165,12 @@
local cur1=${COMP_WORDS[COMP_CWORD]}
local raw_evts=$($cmd list --raw-dump)
- local arr s tmp result
+ local arr s tmp result cpu_evts
+
+ # aarch64 doesn't have /sys/bus/event_source/devices/cpu/events
+ if [[ `uname -m` != aarch64 ]]; then
+ cpu_evts=$(ls /sys/bus/event_source/devices/cpu/events)
+ fi
if [[ "$cur1" == */* && ${cur1#*/} =~ ^[A-Z] ]]; then
OLD_IFS="$IFS"
@@ -183,9 +188,9 @@
fi
done
- evts=${result}" "$(ls /sys/bus/event_source/devices/cpu/events)
+ evts=${result}" "${cpu_evts}
else
- evts=${raw_evts}" "$(ls /sys/bus/event_source/devices/cpu/events)
+ evts=${raw_evts}" "${cpu_evts}
fi
if [[ "$cur1" == , ]]; then
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 0bae061..a5e87c7 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1694,6 +1694,7 @@ static int dso__disassemble_filename(struct dso *dso, char *filename, size_t fil
#include <bpf/btf.h>
#include <bpf/libbpf.h>
#include <linux/btf.h>
+#include <tools/dis-asm-compat.h>
static int symbol__disassemble_bpf(struct symbol *sym,
struct annotate_args *args)
@@ -1736,9 +1737,9 @@ static int symbol__disassemble_bpf(struct symbol *sym,
ret = errno;
goto out;
}
- init_disassemble_info(&info, s,
- (fprintf_ftype) fprintf);
-
+ init_disassemble_info_compat(&info, s,
+ (fprintf_ftype) fprintf,
+ fprintf_styled);
info.arch = bfd_get_arch(bfdf);
info.mach = bfd_get_mach(bfdf);
diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index 0ef4cbf..344b65a 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -1107,6 +1107,9 @@ int auxtrace_queue_data(struct perf_session *session, bool samples, bool events)
if (auxtrace__dont_decode(session))
return 0;
+ if (perf_data__is_pipe(session->data))
+ return 0;
+
if (!session->auxtrace || !session->auxtrace->queue_data)
return -EINVAL;
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
index b0034ee..372ffec 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
@@ -137,6 +137,7 @@ struct intel_pt_decoder {
bool in_psb;
bool hop;
bool leap;
+ bool emulated_ptwrite;
bool vm_time_correlation;
bool vm_tm_corr_dry_run;
bool vm_tm_corr_reliable;
@@ -473,6 +474,8 @@ static int intel_pt_ext_err(int code)
return INTEL_PT_ERR_LOST;
case -ELOOP:
return INTEL_PT_ERR_NELOOP;
+ case -ECONNRESET:
+ return INTEL_PT_ERR_EPTW;
default:
return INTEL_PT_ERR_UNK;
}
@@ -489,6 +492,7 @@ static const char *intel_pt_err_msgs[] = {
[INTEL_PT_ERR_LOST] = "Lost trace data",
[INTEL_PT_ERR_UNK] = "Unknown error!",
[INTEL_PT_ERR_NELOOP] = "Never-ending loop (refer perf config intel-pt.max-loops)",
+ [INTEL_PT_ERR_EPTW] = "Broken emulated ptwrite",
};
int intel_pt__strerror(int code, char *buf, size_t buflen)
@@ -819,6 +823,9 @@ static int intel_pt_calc_cyc_cb(struct intel_pt_pkt_info *pkt_info)
case INTEL_PT_BIP:
case INTEL_PT_BEP:
case INTEL_PT_BEP_IP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
return 0;
case INTEL_PT_MTC:
@@ -1402,17 +1409,108 @@ static int intel_pt_walk_tip(struct intel_pt_decoder *decoder)
return intel_pt_bug(decoder);
}
+struct eptw_data {
+ int bit_countdown;
+ uint64_t payload;
+};
+
+static int intel_pt_eptw_lookahead_cb(struct intel_pt_pkt_info *pkt_info)
+{
+ struct eptw_data *data = pkt_info->data;
+ int nr_bits;
+
+ switch (pkt_info->packet.type) {
+ case INTEL_PT_PAD:
+ case INTEL_PT_MNT:
+ case INTEL_PT_MODE_EXEC:
+ case INTEL_PT_MODE_TSX:
+ case INTEL_PT_MTC:
+ case INTEL_PT_FUP:
+ case INTEL_PT_CYC:
+ case INTEL_PT_CBR:
+ case INTEL_PT_TSC:
+ case INTEL_PT_TMA:
+ case INTEL_PT_PIP:
+ case INTEL_PT_VMCS:
+ case INTEL_PT_PSB:
+ case INTEL_PT_PSBEND:
+ case INTEL_PT_PTWRITE:
+ case INTEL_PT_PTWRITE_IP:
+ case INTEL_PT_EXSTOP:
+ case INTEL_PT_EXSTOP_IP:
+ case INTEL_PT_MWAIT:
+ case INTEL_PT_PWRE:
+ case INTEL_PT_PWRX:
+ case INTEL_PT_BBP:
+ case INTEL_PT_BIP:
+ case INTEL_PT_BEP:
+ case INTEL_PT_BEP_IP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
+ break;
+
+ case INTEL_PT_TNT:
+ nr_bits = data->bit_countdown;
+ if (nr_bits > pkt_info->packet.count)
+ nr_bits = pkt_info->packet.count;
+ data->payload <<= nr_bits;
+ data->payload |= pkt_info->packet.payload >> (64 - nr_bits);
+ data->bit_countdown -= nr_bits;
+ return !data->bit_countdown;
+
+ case INTEL_PT_TIP_PGE:
+ case INTEL_PT_TIP_PGD:
+ case INTEL_PT_TIP:
+ case INTEL_PT_BAD:
+ case INTEL_PT_OVF:
+ case INTEL_PT_TRACESTOP:
+ default:
+ return 1;
+ }
+
+ return 0;
+}
+
+static int intel_pt_emulated_ptwrite(struct intel_pt_decoder *decoder)
+{
+ int n = 64 - decoder->tnt.count;
+ struct eptw_data data = {
+ .bit_countdown = n,
+ .payload = decoder->tnt.payload >> n,
+ };
+
+ decoder->emulated_ptwrite = false;
+ intel_pt_log("Emulated ptwrite detected\n");
+
+ intel_pt_pkt_lookahead(decoder, intel_pt_eptw_lookahead_cb, &data);
+ if (data.bit_countdown)
+ return -ECONNRESET;
+
+ decoder->state.type = INTEL_PT_PTW;
+ decoder->state.from_ip = decoder->ip;
+ decoder->state.to_ip = 0;
+ decoder->state.ptw_payload = data.payload;
+ return 0;
+}
+
static int intel_pt_walk_tnt(struct intel_pt_decoder *decoder)
{
struct intel_pt_insn intel_pt_insn;
int err;
while (1) {
+ if (decoder->emulated_ptwrite)
+ return intel_pt_emulated_ptwrite(decoder);
err = intel_pt_walk_insn(decoder, &intel_pt_insn, 0);
- if (err == INTEL_PT_RETURN)
+ if (err == INTEL_PT_RETURN) {
+ decoder->emulated_ptwrite = intel_pt_insn.emulated_ptwrite;
return 0;
- if (err)
+ }
+ if (err) {
+ decoder->emulated_ptwrite = false;
return err;
+ }
if (intel_pt_insn.op == INTEL_PT_OP_RET) {
if (!decoder->return_compression) {
@@ -1872,6 +1970,9 @@ static int intel_pt_walk_psbend(struct intel_pt_decoder *decoder)
case INTEL_PT_BIP:
case INTEL_PT_BEP:
case INTEL_PT_BEP_IP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
decoder->have_tma = false;
intel_pt_log("ERROR: Unexpected packet\n");
err = -EAGAIN;
@@ -1974,6 +2075,9 @@ static int intel_pt_walk_fup_tip(struct intel_pt_decoder *decoder)
case INTEL_PT_BIP:
case INTEL_PT_BEP:
case INTEL_PT_BEP_IP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
intel_pt_log("ERROR: Missing TIP after FUP\n");
decoder->pkt_state = INTEL_PT_STATE_ERR3;
decoder->pkt_step = 0;
@@ -2133,6 +2237,9 @@ static int intel_pt_vm_psb_lookahead_cb(struct intel_pt_pkt_info *pkt_info)
case INTEL_PT_TIP:
case INTEL_PT_PSB:
case INTEL_PT_TRACESTOP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
default:
return 1;
}
@@ -2652,6 +2759,9 @@ static int intel_pt_vm_time_correlation(struct intel_pt_decoder *decoder)
decoder->blk_type = 0;
break;
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
case INTEL_PT_MODE_EXEC:
case INTEL_PT_MODE_TSX:
case INTEL_PT_MNT:
@@ -2788,6 +2898,9 @@ static int intel_pt_hop_trace(struct intel_pt_decoder *decoder, bool *no_tip, in
case INTEL_PT_BIP:
case INTEL_PT_BEP:
case INTEL_PT_BEP_IP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
default:
return HOP_PROCESS;
}
@@ -2856,6 +2969,9 @@ static int intel_pt_psb_lookahead_cb(struct intel_pt_pkt_info *pkt_info)
case INTEL_PT_BIP:
case INTEL_PT_BEP:
case INTEL_PT_BEP_IP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
if (data->after_psbend) {
data->after_psbend -= 1;
if (!data->after_psbend)
@@ -3222,6 +3338,11 @@ static int intel_pt_walk_trace(struct intel_pt_decoder *decoder)
}
goto next;
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
+ break;
+
default:
return intel_pt_bug(decoder);
}
@@ -3264,6 +3385,9 @@ static int intel_pt_walk_psb(struct intel_pt_decoder *decoder)
case INTEL_PT_BIP:
case INTEL_PT_BEP:
case INTEL_PT_BEP_IP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
intel_pt_log("ERROR: Unexpected packet\n");
err = -ENOENT;
goto out;
@@ -3475,6 +3599,9 @@ static int intel_pt_walk_to_ip(struct intel_pt_decoder *decoder)
case INTEL_PT_BIP:
case INTEL_PT_BEP:
case INTEL_PT_BEP_IP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
default:
break;
}
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
index 4b5e79f..0a641ab 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
@@ -55,6 +55,7 @@ enum {
INTEL_PT_ERR_LOST,
INTEL_PT_ERR_UNK,
INTEL_PT_ERR_NELOOP,
+ INTEL_PT_ERR_EPTW,
INTEL_PT_ERR_MAX,
};
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.c
index 593f20e..9f29cf7 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.c
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.c
@@ -32,6 +32,7 @@ static void intel_pt_insn_decoder(struct insn *insn,
int ext;
intel_pt_insn->rel = 0;
+ intel_pt_insn->emulated_ptwrite = false;
if (insn_is_avx(insn)) {
intel_pt_insn->op = INTEL_PT_OP_OTHER;
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.h
index c2861cf..e3338b5 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.h
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-insn-decoder.h
@@ -37,6 +37,7 @@ enum intel_pt_insn_branch {
struct intel_pt_insn {
enum intel_pt_insn_op op;
enum intel_pt_insn_branch branch;
+ bool emulated_ptwrite;
int length;
int32_t rel;
unsigned char buf[INTEL_PT_INSN_BUF_SZ];
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c
index 02a3395..6ff97b6 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.c
@@ -64,6 +64,9 @@ static const char * const packet_name[] = {
[INTEL_PT_BIP] = "BIP",
[INTEL_PT_BEP] = "BEP",
[INTEL_PT_BEP_IP] = "BEP",
+ [INTEL_PT_CFE] = "CFE",
+ [INTEL_PT_CFE_IP] = "CFE",
+ [INTEL_PT_EVD] = "EVD",
};
const char *intel_pt_pkt_name(enum intel_pt_pkt_type type)
@@ -329,6 +332,29 @@ static int intel_pt_get_bep_ip(size_t len, struct intel_pt_pkt *packet)
return 2;
}
+static int intel_pt_get_cfe(const unsigned char *buf, size_t len,
+ struct intel_pt_pkt *packet)
+{
+ if (len < 4)
+ return INTEL_PT_NEED_MORE_BYTES;
+ packet->type = buf[2] & 0x80 ? INTEL_PT_CFE_IP : INTEL_PT_CFE;
+ packet->count = buf[2] & 0x1f;
+ packet->payload = buf[3];
+ return 4;
+}
+
+static int intel_pt_get_evd(const unsigned char *buf, size_t len,
+ struct intel_pt_pkt *packet)
+{
+ if (len < 11)
+ return INTEL_PT_NEED_MORE_BYTES;
+ packet->type = INTEL_PT_EVD;
+ packet->count = buf[2] & 0x3f;
+ packet->payload = buf[3];
+ memcpy_le64(&packet->payload, buf + 3, 8);
+ return 11;
+}
+
static int intel_pt_get_ext(const unsigned char *buf, size_t len,
struct intel_pt_pkt *packet)
{
@@ -375,6 +401,10 @@ static int intel_pt_get_ext(const unsigned char *buf, size_t len,
return intel_pt_get_bep(len, packet);
case 0xb3: /* BEP with IP */
return intel_pt_get_bep_ip(len, packet);
+ case 0x13: /* CFE */
+ return intel_pt_get_cfe(buf, len, packet);
+ case 0x53: /* EVD */
+ return intel_pt_get_evd(buf, len, packet);
default:
return INTEL_PT_BAD_PACKET;
}
@@ -624,6 +654,9 @@ void intel_pt_upd_pkt_ctx(const struct intel_pt_pkt *packet,
case INTEL_PT_MWAIT:
case INTEL_PT_BEP:
case INTEL_PT_BEP_IP:
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ case INTEL_PT_EVD:
*ctx = INTEL_PT_NO_CTX;
break;
case INTEL_PT_BBP:
@@ -751,6 +784,13 @@ int intel_pt_pkt_desc(const struct intel_pt_pkt *packet, char *buf,
case INTEL_PT_BIP:
return snprintf(buf, buf_len, "%s ID 0x%02x Value 0x%llx",
name, packet->count, payload);
+ case INTEL_PT_CFE:
+ case INTEL_PT_CFE_IP:
+ return snprintf(buf, buf_len, "%s IP:%d Type 0x%02x Vector 0x%llx",
+ name, packet->type == INTEL_PT_CFE_IP, packet->count, payload);
+ case INTEL_PT_EVD:
+ return snprintf(buf, buf_len, "%s Type 0x%02x Payload 0x%llx",
+ name, packet->count, payload);
default:
break;
}
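For reference, a standalone sketch (not part of the patch) that applies the same CFE bit layout as intel_pt_get_cfe() above to the raw bytes used in the packet-decoder test, {0x02, 0x13, 0x81, 0x03}:

  #include <stdio.h>

  int main(void)
  {
          const unsigned char buf[4] = { 0x02, 0x13, 0x81, 0x03 };
          int ip     = !!(buf[2] & 0x80);   /* 1: CFE with IP */
          int type   = buf[2] & 0x1f;       /* 1: INTR        */
          int vector = buf[3];              /* 3              */

          printf("CFE IP:%d type:%d vector:%d\n", ip, type, vector);
          return 0;
  }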
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.h
index 996090c..496ba4b 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.h
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-pkt-decoder.h
@@ -56,6 +56,9 @@ enum intel_pt_pkt_type {
INTEL_PT_BIP,
INTEL_PT_BEP,
INTEL_PT_BEP_IP,
+ INTEL_PT_CFE,
+ INTEL_PT_CFE_IP,
+ INTEL_PT_EVD,
};
struct intel_pt_pkt {
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index 6324195..7a2ce387 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -506,6 +506,7 @@ struct intel_pt_cache_entry {
u64 byte_cnt;
enum intel_pt_insn_op op;
enum intel_pt_insn_branch branch;
+ bool emulated_ptwrite;
int length;
int32_t rel;
char insn[INTEL_PT_INSN_BUF_SZ];
@@ -592,6 +593,7 @@ static int intel_pt_cache_add(struct dso *dso, struct machine *machine,
e->byte_cnt = byte_cnt;
e->op = intel_pt_insn->op;
e->branch = intel_pt_insn->branch;
+ e->emulated_ptwrite = intel_pt_insn->emulated_ptwrite;
e->length = intel_pt_insn->length;
e->rel = intel_pt_insn->rel;
memcpy(e->insn, intel_pt_insn->buf, INTEL_PT_INSN_BUF_SZ);
@@ -678,6 +680,28 @@ static int intel_pt_get_guest(struct intel_pt_queue *ptq)
return 0;
}
+static inline bool intel_pt_jmp_16(struct intel_pt_insn *intel_pt_insn)
+{
+ return intel_pt_insn->rel == 16 && intel_pt_insn->branch == INTEL_PT_BR_UNCONDITIONAL;
+}
+
+#define PTWRITE_MAGIC "\x0f\x0bperf,ptwrite "
+#define PTWRITE_MAGIC_LEN 16
+
+static bool intel_pt_emulated_ptwrite(struct dso *dso, struct machine *machine, u64 offset)
+{
+ unsigned char buf[PTWRITE_MAGIC_LEN];
+ ssize_t len;
+
+ len = dso__data_read_offset(dso, machine, offset, buf, PTWRITE_MAGIC_LEN);
+ if (len == PTWRITE_MAGIC_LEN && !memcmp(buf, PTWRITE_MAGIC, PTWRITE_MAGIC_LEN)) {
+ intel_pt_log("Emulated ptwrite signature found\n");
+ return true;
+ }
+ intel_pt_log("Emulated ptwrite signature not found\n");
+ return false;
+}
+
static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
uint64_t *insn_cnt_ptr, uint64_t *ip,
uint64_t to_ip, uint64_t max_insn_cnt,
@@ -740,6 +764,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
*ip += e->byte_cnt;
intel_pt_insn->op = e->op;
intel_pt_insn->branch = e->branch;
+ intel_pt_insn->emulated_ptwrite = e->emulated_ptwrite;
intel_pt_insn->length = e->length;
intel_pt_insn->rel = e->rel;
memcpy(intel_pt_insn->buf, e->insn,
@@ -771,8 +796,18 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
insn_cnt += 1;
- if (intel_pt_insn->branch != INTEL_PT_BR_NO_BRANCH)
+ if (intel_pt_insn->branch != INTEL_PT_BR_NO_BRANCH) {
+ bool eptw;
+ u64 offs;
+
+ if (!intel_pt_jmp_16(intel_pt_insn))
+ goto out;
+ /* Check for emulated ptwrite */
+ offs = offset + intel_pt_insn->length;
+ eptw = intel_pt_emulated_ptwrite(al.map->dso, machine, offs);
+ intel_pt_insn->emulated_ptwrite = eptw;
goto out;
+ }
if (max_insn_cnt && insn_cnt >= max_insn_cnt)
goto out_no_cache;
@@ -3907,6 +3942,12 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
intel_pt_setup_pebs_events(pt);
+ if (perf_data__is_pipe(session->data)) {
+ pr_warning("WARNING: Intel PT with pipe mode is not recommended.\n"
+ " The output cannot relied upon. In particular,\n"
+ " timestamps and the order of events may be incorrect.\n");
+ }
+
if (pt->sampling_mode || list_empty(&session->auxtrace_index))
err = auxtrace_queue_data(session, true, true);
else
diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
index 96c8ef6..8ee3a94 100644
--- a/tools/perf/util/llvm-utils.c
+++ b/tools/perf/util/llvm-utils.c
@@ -531,14 +531,37 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
pr_debug("llvm compiling command template: %s\n", template);
+ /*
+ * Below, substitute control characters for values that can cause the
+ * echo to misbehave, then substitute the values back.
+ */
err = -ENOMEM;
- if (asprintf(&command_echo, "echo -n \"%s\"", template) < 0)
+ if (asprintf(&command_echo, "echo -n \a%s\a", template) < 0)
goto errout;
+#define SWAP_CHAR(a, b) do { if (*p == a) *p = b; } while (0)
+ for (char *p = command_echo; *p; p++) {
+ SWAP_CHAR('<', '\001');
+ SWAP_CHAR('>', '\002');
+ SWAP_CHAR('"', '\003');
+ SWAP_CHAR('\'', '\004');
+ SWAP_CHAR('|', '\005');
+ SWAP_CHAR('&', '\006');
+ SWAP_CHAR('\a', '"');
+ }
err = read_from_pipe(command_echo, (void **) &command_out, NULL);
if (err)
goto errout;
+ for (char *p = command_out; *p; p++) {
+ SWAP_CHAR('\001', '<');
+ SWAP_CHAR('\002', '>');
+ SWAP_CHAR('\003', '"');
+ SWAP_CHAR('\004', '\'');
+ SWAP_CHAR('\005', '|');
+ SWAP_CHAR('\006', '&');
+ }
+#undef SWAP_CHAR
pr_debug("llvm compiling command : %s\n", command_out);
err = read_from_pipe(template, &obj_buf, &obj_buf_sz);
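The hunk above hides shell metacharacters from the intermediate echo by swapping them for control characters and swapping them back afterwards. A stripped-down sketch of that round trip (illustrative only; the command string is made up):

  #include <stdio.h>

  /* replace every occurrence of 'from' with 'to', in place */
  static void swap_char(char *s, char from, char to)
  {
          for (; *s; s++)
                  if (*s == from)
                          *s = to;
  }

  int main(void)
  {
          char cmd[] = "clang -o - \"$CLANG_SOURCE\" | llc";

          swap_char(cmd, '"', '\003');    /* hide quotes before echoing */
          swap_char(cmd, '|', '\005');    /* hide the pipe as well      */
          /* ... pass cmd through the shell echo here ... */
          swap_char(cmd, '\003', '"');    /* restore the original text  */
          swap_char(cmd, '\005', '|');

          printf("%s\n", cmd);
          return 0;
  }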
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index 09ea334..5a0b3db 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -576,11 +576,7 @@ int create_perf_stat_counter(struct evsel *evsel,
if (evsel__is_group_leader(evsel)) {
attr->disabled = 1;
- /*
- * In case of initial_delay we enable tracee
- * events manually.
- */
- if (target__none(target) && !config->initial_delay)
+ if (target__enable_on_exec(target))
attr->enable_on_exec = 1;
}
diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
index 32c8527..977616c 100644
--- a/tools/perf/util/stat.h
+++ b/tools/perf/util/stat.h
@@ -137,7 +137,6 @@ struct perf_stat_config {
FILE *output;
unsigned int interval;
unsigned int timeout;
- int initial_delay;
unsigned int unit_width;
unsigned int metric_only_len;
int times;
diff --git a/tools/perf/util/target.h b/tools/perf/util/target.h
index daec6cb..880f1af 100644
--- a/tools/perf/util/target.h
+++ b/tools/perf/util/target.h
@@ -18,6 +18,7 @@ struct target {
bool per_thread;
bool use_bpf;
bool hybrid;
+ int initial_delay;
const char *attr_map;
};
@@ -72,6 +73,17 @@ static inline bool target__none(struct target *target)
return !target__has_task(target) && !target__has_cpu(target);
}
+static inline bool target__enable_on_exec(struct target *target)
+{
+ /*
+ * Normally enable_on_exec should be set if:
+ * 1) The tracee process is forked (not attaching to existed task or cpu).
+ * 2) And initial_delay is not configured.
+ * Otherwise, we enable tracee events manually.
+ */
+ return target__none(target) && !target->initial_delay;
+}
+
static inline bool target__has_per_thread(struct target *target)
{
return target->system_wide && target->per_thread;
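A short sketch (assumes perf's util/target.h is in scope; not code from the patch) of the two cases the comment in target__enable_on_exec() describes:

  #include <stdbool.h>
  #include "util/target.h"

  static void choose_enable_mode(void)
  {
          struct target forked  = { 0 };                     /* perf stat -- cmd        */
          struct target delayed = { .initial_delay = 100 };  /* perf stat -D 100 -- cmd */

          bool on_exec = target__enable_on_exec(&forked);    /* true: set enable_on_exec */
          bool manual  = target__enable_on_exec(&delayed);   /* false: enable manually   */

          (void)on_exec;
          (void)manual;
  }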
diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c
index bf9fd35..cd08ffe0 100644
--- a/tools/power/x86/intel-speed-select/isst-config.c
+++ b/tools/power/x86/intel-speed-select/isst-config.c
@@ -108,7 +108,7 @@ int is_skx_based_platform(void)
int is_spr_platform(void)
{
- if (cpu_model == 0x8F)
+ if (cpu_model == 0x8F || cpu_model == 0xCF)
return 1;
return 0;
diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
index 9b17097..b3d4bf0 100644
--- a/tools/power/x86/turbostat/turbostat.8
+++ b/tools/power/x86/turbostat/turbostat.8
@@ -296,6 +296,8 @@
# chmod +r /dev/cpu/*/msr
+# chmod +r /dev/cpu_dma_latency
+
.B "turbostat "
reads hardware counters, but doesn't write them.
So it will not interfere with the OS or other programs, including
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
index 4f176bb..a3197ef 100644
--- a/tools/power/x86/turbostat/turbostat.c
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -4241,7 +4241,7 @@ int print_hwp(struct thread_data *t, struct core_data *c, struct pkg_data *p)
fprintf(outf, "cpu%d: MSR_HWP_STATUS: 0x%08llx "
"(%sGuaranteed_Perf_Change, %sExcursion_Min)\n",
- cpu, msr, ((msr) & 0x1) ? "" : "No-", ((msr) & 0x2) ? "" : "No-");
+ cpu, msr, ((msr) & 0x1) ? "" : "No-", ((msr) & 0x4) ? "" : "No-");
return 0;
}
@@ -5237,7 +5237,7 @@ void print_dev_latency(void)
retval = read(fd, (void *)&value, sizeof(int));
if (retval != sizeof(int)) {
- warn("read %s\n", path);
+ warn("read failed %s\n", path);
close(fd);
return;
}
diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
index 1737c59..e6c3814 100755
--- a/tools/testing/ktest/ktest.pl
+++ b/tools/testing/ktest/ktest.pl
@@ -178,6 +178,7 @@
my $store_successes;
my $test_name;
my $timeout;
+my $run_timeout;
my $connect_timeout;
my $config_bisect_exec;
my $booted_timeout;
@@ -340,6 +341,7 @@
"STORE_SUCCESSES" => \$store_successes,
"TEST_NAME" => \$test_name,
"TIMEOUT" => \$timeout,
+ "RUN_TIMEOUT" => \$run_timeout,
"CONNECT_TIMEOUT" => \$connect_timeout,
"CONFIG_BISECT_EXEC" => \$config_bisect_exec,
"BOOTED_TIMEOUT" => \$booted_timeout,
@@ -1488,7 +1490,8 @@
# Still need to wait for the reboot to finish
wait_for_monitor($time, $reboot_success_line);
-
+ }
+ if ($powercycle || $time) {
end_monitor;
}
}
@@ -1850,6 +1853,14 @@
$command =~ s/\$SSH_USER/$ssh_user/g;
$command =~ s/\$MACHINE/$machine/g;
+ if (!defined($timeout)) {
+ $timeout = $run_timeout;
+ }
+
+ if (!defined($timeout)) {
+ $timeout = -1; # tell wait_for_input to wait indefinitely
+ }
+
doprint("$command ... ");
$start_time = time;
@@ -1876,13 +1887,10 @@
while (1) {
my $fp = \*CMD;
- if (defined($timeout)) {
- doprint "timeout = $timeout\n";
- }
my $line = wait_for_input($fp, $timeout);
if (!defined($line)) {
my $now = time;
- if (defined($timeout) && (($now - $start_time) >= $timeout)) {
+ if ($timeout >= 0 && (($now - $start_time) >= $timeout)) {
doprint "Hit timeout of $timeout, killing process\n";
$hit_timeout = 1;
kill 9, $pid;
@@ -2054,6 +2062,11 @@
$time = $timeout;
}
+ if ($time < 0) {
+ # Negative number means wait indefinitely
+ undef $time;
+ }
+
$rin = '';
vec($rin, fileno($fp), 1) = 1;
vec($rin, fileno(\*STDIN), 1) = 1;
@@ -4193,6 +4206,9 @@
}
sub cancel_test {
+ if ($monitor_cnt) {
+ end_monitor;
+ }
if ($email_when_canceled) {
my $name = get_test_name;
send_email("KTEST: Your [$name] test was cancelled",
diff --git a/tools/testing/ktest/sample.conf b/tools/testing/ktest/sample.conf
index 5e7d1d72..65957a9 100644
--- a/tools/testing/ktest/sample.conf
+++ b/tools/testing/ktest/sample.conf
@@ -809,6 +809,11 @@
# is issued instead of a reboot.
# CONNECT_TIMEOUT = 25
+# The timeout in seconds before any running command is timed out and
+# killed. If not defined, commands may run indefinitely.
+# (default undefined)
+#RUN_TIMEOUT = 600
+
# In between tests, a reboot of the box may occur, and this
# is the time to wait for the console after it stops producing
# output. Some machines may not produce a large lag on reboot
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 638966a..0d845a0 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -144,8 +144,6 @@
# NOTE: Semicolon at the end is critical to override lib.mk's default static
# rule for binaries.
$(notdir $(TEST_GEN_PROGS) \
- $(TEST_PROGS) \
- $(TEST_PROGS_EXTENDED) \
$(TEST_GEN_PROGS_EXTENDED) \
$(TEST_CUSTOM_PROGS)): %: $(OUTPUT)/% ;
diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
index 50afa75..2a04dbe 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf.c
@@ -882,6 +882,34 @@ static struct btf_raw_test raw_tests[] = {
.btf_load_err = true,
.err_str = "Invalid elem",
},
+{
+ .descr = "var after datasec, ptr followed by modifier",
+ .raw_types = {
+ /* .bss section */ /* [1] */
+ BTF_TYPE_ENC(NAME_TBD, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 2),
+ sizeof(void*)+4),
+ BTF_VAR_SECINFO_ENC(4, 0, sizeof(void*)),
+ BTF_VAR_SECINFO_ENC(6, sizeof(void*), 4),
+ /* int */ /* [2] */
+ BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4),
+ /* int* */ /* [3] */
+ BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_PTR, 0, 0), 2),
+ BTF_VAR_ENC(NAME_TBD, 3, 0), /* [4] */
+ /* const int */ /* [5] */
+ BTF_TYPE_ENC(0, BTF_INFO_ENC(BTF_KIND_CONST, 0, 0), 2),
+ BTF_VAR_ENC(NAME_TBD, 5, 0), /* [6] */
+ BTF_END_RAW,
+ },
+ .str_sec = "\0a\0b\0c\0",
+ .str_sec_size = sizeof("\0a\0b\0c\0"),
+ .map_type = BPF_MAP_TYPE_ARRAY,
+ .map_name = ".bss",
+ .key_size = sizeof(int),
+ .value_size = sizeof(void*)+4,
+ .key_type_id = 0,
+ .value_type_id = 1,
+ .max_entries = 1,
+},
/* Test member exceeds the size of struct.
*
* struct A {
diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_bitfields.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_bitfields.c
index 8f44767..22a7cd8 100644
--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_bitfields.c
+++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_bitfields.c
@@ -53,7 +53,7 @@ struct bitfields_only_mixed_types {
*/
/* ------ END-EXPECTED-OUTPUT ------ */
struct bitfield_mixed_with_others {
- long: 4; /* char is enough as a backing field */
+ char: 4; /* char is enough as a backing field */
int a: 4;
/* 8-bit implicit padding */
short b; /* combined with previous bitfield */
diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_packing.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_packing.c
index 1cef3be..22dbd12 100644
--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_packing.c
+++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_packing.c
@@ -58,7 +58,81 @@ union jump_code_union {
} __attribute__((packed));
};
-/*------ END-EXPECTED-OUTPUT ------ */
+/* ----- START-EXPECTED-OUTPUT ----- */
+/*
+ *struct nested_packed_but_aligned_struct {
+ * int x1;
+ * int x2;
+ *};
+ *
+ *struct outer_implicitly_packed_struct {
+ * char y1;
+ * struct nested_packed_but_aligned_struct y2;
+ *} __attribute__((packed));
+ *
+ */
+/* ------ END-EXPECTED-OUTPUT ------ */
+
+struct nested_packed_but_aligned_struct {
+ int x1;
+ int x2;
+} __attribute__((packed));
+
+struct outer_implicitly_packed_struct {
+ char y1;
+ struct nested_packed_but_aligned_struct y2;
+};
+/* ----- START-EXPECTED-OUTPUT ----- */
+/*
+ *struct usb_ss_ep_comp_descriptor {
+ * char: 8;
+ * char bDescriptorType;
+ * char bMaxBurst;
+ * short wBytesPerInterval;
+ *};
+ *
+ *struct usb_host_endpoint {
+ * long: 64;
+ * char: 8;
+ * struct usb_ss_ep_comp_descriptor ss_ep_comp;
+ * long: 0;
+ *} __attribute__((packed));
+ *
+ */
+/* ------ END-EXPECTED-OUTPUT ------ */
+
+struct usb_ss_ep_comp_descriptor {
+ char: 8;
+ char bDescriptorType;
+ char bMaxBurst;
+ int: 0;
+ short wBytesPerInterval;
+} __attribute__((packed));
+
+struct usb_host_endpoint {
+ long: 64;
+ char: 8;
+ struct usb_ss_ep_comp_descriptor ss_ep_comp;
+ long: 0;
+};
+
+/* ----- START-EXPECTED-OUTPUT ----- */
+struct nested_packed_struct {
+ int a;
+ char b;
+} __attribute__((packed));
+
+struct outer_nonpacked_struct {
+ short a;
+ struct nested_packed_struct b;
+};
+
+struct outer_packed_struct {
+ short a;
+ struct nested_packed_struct b;
+} __attribute__((packed));
+
+/* ------ END-EXPECTED-OUTPUT ------ */
int f(struct {
struct packed_trailing_space _1;
@@ -69,6 +143,10 @@ int f(struct {
union union_is_never_packed _6;
union union_does_not_need_packing _7;
union jump_code_union _8;
+ struct outer_implicitly_packed_struct _9;
+ struct usb_host_endpoint _10;
+ struct outer_nonpacked_struct _11;
+ struct outer_packed_struct _12;
} *_)
{
return 0;
diff --git a/tools/testing/selftests/bpf/progs/btf_dump_test_case_padding.c b/tools/testing/selftests/bpf/progs/btf_dump_test_case_padding.c
index 35c5128..0b3cdff 100644
--- a/tools/testing/selftests/bpf/progs/btf_dump_test_case_padding.c
+++ b/tools/testing/selftests/bpf/progs/btf_dump_test_case_padding.c
@@ -19,7 +19,7 @@ struct padded_implicitly {
/*
*struct padded_explicitly {
* int a;
- * int: 32;
+ * long: 0;
* int b;
*};
*
@@ -28,41 +28,28 @@ struct padded_implicitly {
struct padded_explicitly {
int a;
- int: 1; /* algo will explicitly pad with full 32 bits here */
+ int: 1; /* algo will emit aligning `long: 0;` here */
int b;
};
/* ----- START-EXPECTED-OUTPUT ----- */
-/*
- *struct padded_a_lot {
- * int a;
- * long: 32;
- * long: 64;
- * long: 64;
- * int b;
- *};
- *
- */
-/* ------ END-EXPECTED-OUTPUT ------ */
-
struct padded_a_lot {
int a;
- /* 32 bit of implicit padding here, which algo will make explicit */
long: 64;
long: 64;
int b;
};
+/* ------ END-EXPECTED-OUTPUT ------ */
+
/* ----- START-EXPECTED-OUTPUT ----- */
/*
*struct padded_cache_line {
* int a;
- * long: 32;
* long: 64;
* long: 64;
* long: 64;
* int b;
- * long: 32;
* long: 64;
* long: 64;
* long: 64;
@@ -85,7 +72,7 @@ struct padded_cache_line {
*struct zone {
* int a;
* short b;
- * short: 16;
+ * long: 0;
* struct zone_padding __pad__;
*};
*
@@ -102,12 +89,160 @@ struct zone {
struct zone_padding __pad__;
};
+/* ----- START-EXPECTED-OUTPUT ----- */
+struct padding_wo_named_members {
+ long: 64;
+ long: 64;
+};
+
+struct padding_weird_1 {
+ int a;
+ long: 64;
+ short: 16;
+ short b;
+};
+
+/* ------ END-EXPECTED-OUTPUT ------ */
+
+/* ----- START-EXPECTED-OUTPUT ----- */
+/*
+ *struct padding_weird_2 {
+ * long: 56;
+ * char a;
+ * long: 56;
+ * char b;
+ * char: 8;
+ *};
+ *
+ */
+/* ------ END-EXPECTED-OUTPUT ------ */
+struct padding_weird_2 {
+ int: 32; /* these paddings will be collapsed into `long: 56;` */
+ short: 16;
+ char: 8;
+ char a;
+ int: 32; /* these paddings will be collapsed into `long: 56;` */
+ short: 16;
+ char: 8;
+ char b;
+ char: 8;
+};
+
+/* ----- START-EXPECTED-OUTPUT ----- */
+struct exact_1byte {
+ char x;
+};
+
+struct padded_1byte {
+ char: 8;
+};
+
+struct exact_2bytes {
+ short x;
+};
+
+struct padded_2bytes {
+ short: 16;
+};
+
+struct exact_4bytes {
+ int x;
+};
+
+struct padded_4bytes {
+ int: 32;
+};
+
+struct exact_8bytes {
+ long x;
+};
+
+struct padded_8bytes {
+ long: 64;
+};
+
+struct ff_periodic_effect {
+ int: 32;
+ short magnitude;
+ long: 0;
+ short phase;
+ long: 0;
+ int: 32;
+ int custom_len;
+ short *custom_data;
+};
+
+struct ib_wc {
+ long: 64;
+ long: 64;
+ int: 32;
+ int byte_len;
+ void *qp;
+ union {} ex;
+ long: 64;
+ int slid;
+ int wc_flags;
+ long: 64;
+ char smac[6];
+ long: 0;
+ char network_hdr_type;
+};
+
+struct acpi_object_method {
+ long: 64;
+ char: 8;
+ char type;
+ short reference_count;
+ char flags;
+ short: 0;
+ char: 8;
+ char sync_level;
+ long: 64;
+ void *node;
+ void *aml_start;
+ union {} dispatch;
+ long: 64;
+ int aml_length;
+};
+
+struct nested_unpacked {
+ int x;
+};
+
+struct nested_packed {
+ struct nested_unpacked a;
+ char c;
+} __attribute__((packed));
+
+struct outer_mixed_but_unpacked {
+ struct nested_packed b1;
+ short a1;
+ struct nested_packed b2;
+};
+
+/* ------ END-EXPECTED-OUTPUT ------ */
+
int f(struct {
struct padded_implicitly _1;
struct padded_explicitly _2;
struct padded_a_lot _3;
struct padded_cache_line _4;
struct zone _5;
+ struct padding_wo_named_members _6;
+ struct padding_weird_1 _7;
+ struct padding_weird_2 _8;
+ struct exact_1byte _100;
+ struct padded_1byte _101;
+ struct exact_2bytes _102;
+ struct padded_2bytes _103;
+ struct exact_4bytes _104;
+ struct padded_4bytes _105;
+ struct exact_8bytes _106;
+ struct padded_8bytes _107;
+ struct ff_periodic_effect _200;
+ struct ib_wc _201;
+ struct acpi_object_method _202;
+ struct outer_mixed_but_unpacked _203;
} *_)
{
return 0;
diff --git a/tools/testing/selftests/bpf/verifier/search_pruning.c b/tools/testing/selftests/bpf/verifier/search_pruning.c
index 7e50cb8..7e36078 100644
--- a/tools/testing/selftests/bpf/verifier/search_pruning.c
+++ b/tools/testing/selftests/bpf/verifier/search_pruning.c
@@ -154,3 +154,39 @@
.result_unpriv = ACCEPT,
.insn_processed = 15,
},
+/* The test performs a conditional 64-bit write to a stack location
+ * fp[-8]; this is followed by an unconditional 8-bit write to fp[-8],
+ * and then data is read from fp[-8]. This sequence is unsafe.
+ *
+ * The test would be mistakenly marked as safe w/o dst register parent
+ * preservation in verifier.c:copy_register_state() function.
+ *
+ * Note the usage of BPF_F_TEST_STATE_FREQ to force creation of the
+ * checkpoint state after conditional 64-bit assignment.
+ */
+{
+ "write tracking and register parent chain bug",
+ .insns = {
+ /* r6 = ktime_get_ns() */
+ BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
+ BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
+ /* r0 = ktime_get_ns() */
+ BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
+ /* if r0 > r6 goto +1 */
+ BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_6, 1),
+ /* *(u64 *)(r10 - 8) = 0xdeadbeef */
+ BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0xdeadbeef),
+ /* r1 = 42 */
+ BPF_MOV64_IMM(BPF_REG_1, 42),
+ /* *(u8 *)(r10 - 8) = r1 */
+ BPF_STX_MEM(BPF_B, BPF_REG_FP, BPF_REG_1, -8),
+ /* r2 = *(u64 *)(r10 - 8) */
+ BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_FP, -8),
+ /* exit(0) */
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+ .flags = BPF_F_TEST_STATE_FREQ,
+ .errstr = "invalid read from stack off -8+1 size 8",
+ .result = REJECT,
+},
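Rendered as ordinary C, the unsafe sequence the new test encodes is roughly the sketch below (hypothetical, for illustration only; the local variable stands in for the stack slot at fp[-8]):

#include <stdint.h>

long unsafe_pattern(uint64_t r0, uint64_t r6)
{
        long slot;                 /* models the stack slot at fp[-8] */

        if (r0 <= r6)              /* the conditional 64-bit write may be skipped */
                slot = 0xdeadbeef;
        *(char *)&slot = 42;       /* unconditional 8-bit write covers byte 0 only */
        return slot;               /* 64-bit read: bytes 1..7 may be uninitialized */
}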
diff --git a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
index a08c02a..7f7d20f 100755
--- a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
+++ b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
@@ -17,6 +17,18 @@
DEBUGFS_DIR=/sys/kernel/debug/netdevsim/$DEV_NAME/
DL_HANDLE=netdevsim/$DEV_NAME
+wait_for_devlink()
+{
+ "$@" | grep -q $DL_HANDLE
+}
+
+devlink_wait()
+{
+ local timeout=$1
+
+ busywait "$timeout" wait_for_devlink devlink dev
+}
+
fw_flash_test()
{
RET=0
@@ -256,6 +268,9 @@
ip netns del testns2
ip netns del testns1
+ # Wait until netns async cleanup is done.
+ devlink_wait 2000
+
log_test "netns reload test"
}
@@ -348,6 +363,9 @@
ip netns del testns2
ip netns del testns1
+ # Wait until netns async cleanup is done.
+ devlink_wait 2000
+
log_test "resource test"
}
diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
index 27a68bb..d9b8127 100644
--- a/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
+++ b/tools/testing/selftests/ftrace/test.d/ftrace/func_event_triggers.tc
@@ -42,7 +42,7 @@
while [ $check_times -ne 0 ]; do
e=`cat $EVENT_ENABLE`
- if [ "$e" == $val ]; then
+ if [ "$e" = $val ]; then
return 0
fi
sleep $SLEEP_TIME
diff --git a/tools/testing/selftests/kselftest.h b/tools/testing/selftests/kselftest.h
index 8d50483..898d7b2 100644
--- a/tools/testing/selftests/kselftest.h
+++ b/tools/testing/selftests/kselftest.h
@@ -48,6 +48,25 @@
#include <stdarg.h>
#include <stdio.h>
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+#endif
+
+/*
+ * gcc cpuid.h provides __cpuid_count() since v4.4.
+ * Clang/LLVM cpuid.h provides __cpuid_count() since v3.4.0.
+ *
+ * Provide local define for tests needing __cpuid_count() because
+ * selftests need to work in older environments that do not yet
+ * have __cpuid_count().
+ */
+#ifndef __cpuid_count
+#define __cpuid_count(level, count, a, b, c, d) \
+ __asm__ __volatile__ ("cpuid\n\t" \
+ : "=a" (a), "=b" (b), "=c" (c), "=d" (d) \
+ : "0" (level), "2" (count))
+#endif
+
/* define kselftest exit codes */
#define KSFT_PASS 0
#define KSFT_FAIL 1
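A minimal usage sketch of the fallback macro, assuming an x86 selftest that includes kselftest.h and queries CPUID leaf 7, subleaf 0 (illustration only, not part of the patch):

#include <stdio.h>
#include "kselftest.h"

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* Structured extended feature flags: leaf 7, subleaf 0. */
        __cpuid_count(7, 0, eax, ebx, ecx, edx);
        printf("CPUID.(EAX=7,ECX=0): ebx=%#x ecx=%#x\n", ebx, ecx);
        return 0;
}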
diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
index 78e5962..1177940 100644
--- a/tools/testing/selftests/kselftest_harness.h
+++ b/tools/testing/selftests/kselftest_harness.h
@@ -671,7 +671,9 @@
#define EXPECT_STRNE(expected, seen) \
__EXPECT_STR(expected, seen, !=, 0)
+#ifndef ARRAY_SIZE
#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
+#endif
/* Support an optional handler after an ASSERT_* or EXPECT_*. The approach is
* not thread-safe, but it should be fine in most sane test scenarios.
diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
index a4fdcda..ea988b3 100644
--- a/tools/testing/selftests/landlock/fs_test.c
+++ b/tools/testing/selftests/landlock/fs_test.c
@@ -11,6 +11,7 @@
#include <fcntl.h>
#include <linux/landlock.h>
#include <sched.h>
+#include <stdio.h>
#include <string.h>
#include <sys/capability.h>
#include <sys/mount.h>
@@ -87,6 +88,40 @@ static const char dir_s3d3[] = TMP_DIR "/s3d1/s3d2/s3d3";
* └── s3d3
*/
+static bool fgrep(FILE *const inf, const char *const str)
+{
+ char line[32];
+ const int slen = strlen(str);
+
+ while (!feof(inf)) {
+ if (!fgets(line, sizeof(line), inf))
+ break;
+ if (strncmp(line, str, slen))
+ continue;
+
+ return true;
+ }
+
+ return false;
+}
+
+static bool supports_overlayfs(void)
+{
+ bool res;
+ FILE *const inf = fopen("/proc/filesystems", "r");
+
+ /*
+ * Consider that the filesystem is supported if we cannot get the
+ * supported ones.
+ */
+ if (!inf)
+ return true;
+
+ res = fgrep(inf, "nodev\toverlay\n");
+ fclose(inf);
+ return res;
+}
+
static void mkdir_parents(struct __test_metadata *const _metadata,
const char *const path)
{
@@ -2650,6 +2685,9 @@ FIXTURE(layout2_overlay) {};
FIXTURE_SETUP(layout2_overlay)
{
+ if (!supports_overlayfs())
+ SKIP(return, "overlayfs is not supported");
+
prepare_layout(_metadata);
create_directory(_metadata, LOWER_BASE);
@@ -2686,6 +2724,9 @@ FIXTURE_SETUP(layout2_overlay)
FIXTURE_TEARDOWN(layout2_overlay)
{
+ if (!supports_overlayfs())
+ SKIP(return, "overlayfs is not supported");
+
EXPECT_EQ(0, remove_path(lower_do1_fl3));
EXPECT_EQ(0, remove_path(lower_dl1_fl2));
EXPECT_EQ(0, remove_path(lower_fl1));
@@ -2717,6 +2758,9 @@ FIXTURE_TEARDOWN(layout2_overlay)
TEST_F_FORK(layout2_overlay, no_restriction)
{
+ if (!supports_overlayfs())
+ SKIP(return, "overlayfs is not supported");
+
ASSERT_EQ(0, test_open(lower_fl1, O_RDONLY));
ASSERT_EQ(0, test_open(lower_dl1, O_RDONLY));
ASSERT_EQ(0, test_open(lower_dl1_fl2, O_RDONLY));
@@ -2880,6 +2924,9 @@ TEST_F_FORK(layout2_overlay, same_content_different_file)
size_t i;
const char *path_entry;
+ if (!supports_overlayfs())
+ SKIP(return, "overlayfs is not supported");
+
/* Sets rules on base directories (i.e. outside overlay scope). */
ruleset_fd = create_ruleset(_metadata, ACCESS_RW, layer1_base);
ASSERT_LE(0, ruleset_fd);
diff --git a/tools/testing/selftests/landlock/ptrace_test.c b/tools/testing/selftests/landlock/ptrace_test.c
index c28ef98..55e7871 100644
--- a/tools/testing/selftests/landlock/ptrace_test.c
+++ b/tools/testing/selftests/landlock/ptrace_test.c
@@ -19,6 +19,12 @@
#include "common.h"
+/* Copied from security/yama/yama_lsm.c */
+#define YAMA_SCOPE_DISABLED 0
+#define YAMA_SCOPE_RELATIONAL 1
+#define YAMA_SCOPE_CAPABILITY 2
+#define YAMA_SCOPE_NO_ATTACH 3
+
static void create_domain(struct __test_metadata *const _metadata)
{
int ruleset_fd;
@@ -60,6 +66,25 @@ static int test_ptrace_read(const pid_t pid)
return 0;
}
+static int get_yama_ptrace_scope(void)
+{
+ int ret;
+ char buf[2] = {};
+ const int fd = open("/proc/sys/kernel/yama/ptrace_scope", O_RDONLY);
+
+ if (fd < 0)
+ return 0;
+
+ if (read(fd, buf, 1) < 0) {
+ close(fd);
+ return -1;
+ }
+
+ ret = atoi(buf);
+ close(fd);
+ return ret;
+}
+
/* clang-format off */
FIXTURE(hierarchy) {};
/* clang-format on */
@@ -232,8 +257,51 @@ TEST_F(hierarchy, trace)
pid_t child, parent;
int status, err_proc_read;
int pipe_child[2], pipe_parent[2];
+ int yama_ptrace_scope;
char buf_parent;
long ret;
+ bool can_read_child, can_trace_child, can_read_parent, can_trace_parent;
+
+ yama_ptrace_scope = get_yama_ptrace_scope();
+ ASSERT_LE(0, yama_ptrace_scope);
+
+ if (yama_ptrace_scope > YAMA_SCOPE_DISABLED)
+ TH_LOG("Incomplete tests due to Yama restrictions (scope %d)",
+ yama_ptrace_scope);
+
+ /*
+ * can_read_child is true if a parent process can read its child
+ * process, which is only the case when the parent process is not
+ * isolated from the child with a dedicated Landlock domain.
+ */
+ can_read_child = !variant->domain_parent;
+
+ /*
+ * can_trace_child is true if a parent process can trace its child
+ * process. This depends on two conditions:
+ * - The parent process is not isolated from the child with a dedicated
+ * Landlock domain.
+ * - Yama allows tracing children (up to YAMA_SCOPE_RELATIONAL).
+ */
+ can_trace_child = can_read_child &&
+ yama_ptrace_scope <= YAMA_SCOPE_RELATIONAL;
+
+ /*
+ * can_read_parent is true if a child process can read its parent
+ * process, which is only the case when the child process is not
+ * isolated from the parent with a dedicated Landlock domain.
+ */
+ can_read_parent = !variant->domain_child;
+
+ /*
+ * can_trace_parent is true if a child process can trace its parent
+ * process. This depends on two conditions:
+ * - The child process is not isolated from the parent with a dedicated
+ * Landlock domain.
+ * - Yama is disabled (YAMA_SCOPE_DISABLED).
+ */
+ can_trace_parent = can_read_parent &&
+ yama_ptrace_scope <= YAMA_SCOPE_DISABLED;
/*
* Removes all effective and permitted capabilities to not interfere
@@ -264,16 +332,21 @@ TEST_F(hierarchy, trace)
/* Waits for the parent to be in a domain, if any. */
ASSERT_EQ(1, read(pipe_parent[0], &buf_child, 1));
- /* Tests PTRACE_ATTACH and PTRACE_MODE_READ on the parent. */
+ /* Tests PTRACE_MODE_READ on the parent. */
err_proc_read = test_ptrace_read(parent);
+ if (can_read_parent) {
+ EXPECT_EQ(0, err_proc_read);
+ } else {
+ EXPECT_EQ(EACCES, err_proc_read);
+ }
+
+ /* Tests PTRACE_ATTACH on the parent. */
ret = ptrace(PTRACE_ATTACH, parent, NULL, 0);
- if (variant->domain_child) {
+ if (can_trace_parent) {
+ EXPECT_EQ(0, ret);
+ } else {
EXPECT_EQ(-1, ret);
EXPECT_EQ(EPERM, errno);
- EXPECT_EQ(EACCES, err_proc_read);
- } else {
- EXPECT_EQ(0, ret);
- EXPECT_EQ(0, err_proc_read);
}
if (ret == 0) {
ASSERT_EQ(parent, waitpid(parent, &status, 0));
@@ -283,11 +356,11 @@ TEST_F(hierarchy, trace)
/* Tests child PTRACE_TRACEME. */
ret = ptrace(PTRACE_TRACEME);
- if (variant->domain_parent) {
+ if (can_trace_child) {
+ EXPECT_EQ(0, ret);
+ } else {
EXPECT_EQ(-1, ret);
EXPECT_EQ(EPERM, errno);
- } else {
- EXPECT_EQ(0, ret);
}
/*
@@ -296,7 +369,7 @@ TEST_F(hierarchy, trace)
*/
ASSERT_EQ(1, write(pipe_child[1], ".", 1));
- if (!variant->domain_parent) {
+ if (can_trace_child) {
ASSERT_EQ(0, raise(SIGSTOP));
}
@@ -321,7 +394,7 @@ TEST_F(hierarchy, trace)
ASSERT_EQ(1, read(pipe_child[0], &buf_parent, 1));
/* Tests child PTRACE_TRACEME. */
- if (!variant->domain_parent) {
+ if (can_trace_child) {
ASSERT_EQ(child, waitpid(child, &status, 0));
ASSERT_EQ(1, WIFSTOPPED(status));
ASSERT_EQ(0, ptrace(PTRACE_DETACH, child, NULL, 0));
@@ -331,17 +404,23 @@ TEST_F(hierarchy, trace)
EXPECT_EQ(ESRCH, errno);
}
- /* Tests PTRACE_ATTACH and PTRACE_MODE_READ on the child. */
+ /* Tests PTRACE_MODE_READ on the child. */
err_proc_read = test_ptrace_read(child);
+ if (can_read_child) {
+ EXPECT_EQ(0, err_proc_read);
+ } else {
+ EXPECT_EQ(EACCES, err_proc_read);
+ }
+
+ /* Tests PTRACE_ATTACH on the child. */
ret = ptrace(PTRACE_ATTACH, child, NULL, 0);
- if (variant->domain_parent) {
+ if (can_trace_child) {
+ EXPECT_EQ(0, ret);
+ } else {
EXPECT_EQ(-1, ret);
EXPECT_EQ(EPERM, errno);
- EXPECT_EQ(EACCES, err_proc_read);
- } else {
- EXPECT_EQ(0, ret);
- EXPECT_EQ(0, err_proc_read);
}
+
if (ret == 0) {
ASSERT_EQ(child, waitpid(child, &status, 0));
ASSERT_EQ(1, WIFSTOPPED(status));
diff --git a/tools/testing/selftests/lkdtm/stack-entropy.sh b/tools/testing/selftests/lkdtm/stack-entropy.sh
index 1b4d95d..14fedee 100755
--- a/tools/testing/selftests/lkdtm/stack-entropy.sh
+++ b/tools/testing/selftests/lkdtm/stack-entropy.sh
@@ -4,13 +4,27 @@
# Measure kernel stack entropy by sampling via LKDTM's REPORT_STACK test.
set -e
samples="${1:-1000}"
+TRIGGER=/sys/kernel/debug/provoke-crash/DIRECT
+KSELFTEST_SKIP_TEST=4
+
+# Verify we have LKDTM available in the kernel.
+if [ ! -r $TRIGGER ] ; then
+ /sbin/modprobe -q lkdtm || true
+ if [ ! -r $TRIGGER ] ; then
+ echo "Cannot find $TRIGGER (missing CONFIG_LKDTM?)"
+ else
+ echo "Cannot write $TRIGGER (need to run as root?)"
+ fi
+ # Skip this test
+ exit $KSELFTEST_SKIP_TEST
+fi
# Capture dmesg continuously since it may fill up depending on sample size.
log=$(mktemp -t stack-entropy-XXXXXX)
dmesg --follow >"$log" & pid=$!
report=-1
for i in $(seq 1 $samples); do
- echo "REPORT_STACK" >/sys/kernel/debug/provoke-crash/DIRECT
+ echo "REPORT_STACK" > $TRIGGER
if [ -t 1 ]; then
percent=$(( 100 * $i / $samples ))
if [ "$percent" -ne "$report" ]; then
diff --git a/tools/testing/selftests/net/devlink_port_split.py b/tools/testing/selftests/net/devlink_port_split.py
index 2b5d6ff..2d84c7a 100755
--- a/tools/testing/selftests/net/devlink_port_split.py
+++ b/tools/testing/selftests/net/devlink_port_split.py
@@ -59,6 +59,8 @@
assert stderr == ""
ports = json.loads(stdout)['port']
+ validate_devlink_output(ports, 'flavour')
+
for port in ports:
if dev in port:
if ports[port]['flavour'] == 'physical':
@@ -220,6 +222,27 @@
unsplit(port.bus_info)
+def validate_devlink_output(devlink_data, target_property=None):
+ """
+ Determine if the test should be skipped by checking:
+ 1. devlink_data contains values
+ 2. The target_property exists in devlink_data
+ """
+ skip_reason = None
+ if any(devlink_data.values()):
+ if target_property:
+ skip_reason = "{} not found in devlink output, test skipped".format(target_property)
+ for key in devlink_data:
+ if target_property in devlink_data[key]:
+ skip_reason = None
+ else:
+ skip_reason = 'devlink output is empty, test skipped'
+
+ if skip_reason:
+ print(skip_reason)
+ sys.exit(KSFT_SKIP)
+
+
def make_parser():
parser = argparse.ArgumentParser(description='A test for port splitting.')
parser.add_argument('--dev',
@@ -240,12 +263,9 @@
stdout, stderr = run_command(cmd)
assert stderr == ""
+ validate_devlink_output(json.loads(stdout))
devs = json.loads(stdout)['dev']
- if devs:
- dev = list(devs.keys())[0]
- else:
- print("no devlink device was found, test skipped")
- sys.exit(KSFT_SKIP)
+ dev = list(devs.keys())[0]
cmd = "devlink dev show %s" % dev
stdout, stderr = run_command(cmd)
@@ -255,6 +275,7 @@
ports = devlink_ports(dev)
+ found_max_lanes = False
for port in ports.if_names:
max_lanes = get_max_lanes(port.name)
@@ -277,6 +298,11 @@
split_splittable_port(port, lane, max_lanes, dev)
lane //= 2
+ found_max_lanes = True
+
+ if not found_max_lanes:
+ print(f"Test not started, no port of device {dev} reports max_lanes")
+ sys.exit(KSFT_SKIP)
if __name__ == "__main__":
diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
index 7df066b..c64b0b1 100755
--- a/tools/testing/selftests/net/fib_tests.sh
+++ b/tools/testing/selftests/net/fib_tests.sh
@@ -1921,6 +1921,8 @@
################################################################################
# main
+trap cleanup EXIT
+
while getopts :t:pPhv o
do
case $o in
diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh
index c9507df..b7d946c 100644
--- a/tools/testing/selftests/net/forwarding/lib.sh
+++ b/tools/testing/selftests/net/forwarding/lib.sh
@@ -817,14 +817,14 @@
local value=$1; shift
SYSCTL_ORIG[$key]=$(sysctl -n $key)
- sysctl -qw $key=$value
+ sysctl -qw $key="$value"
}
sysctl_restore()
{
local key=$1; shift
- sysctl -qw $key=${SYSCTL_ORIG["$key"]}
+ sysctl -qw $key="${SYSCTL_ORIG[$key]}"
}
forwarding_enable()
diff --git a/tools/testing/selftests/net/forwarding/tc_actions.sh b/tools/testing/selftests/net/forwarding/tc_actions.sh
index d9eca22..22a1e4c 100755
--- a/tools/testing/selftests/net/forwarding/tc_actions.sh
+++ b/tools/testing/selftests/net/forwarding/tc_actions.sh
@@ -3,7 +3,7 @@
ALL_TESTS="gact_drop_and_ok_test mirred_egress_redirect_test \
mirred_egress_mirror_test matchall_mirred_egress_mirror_test \
- gact_trap_test"
+ gact_trap_test mirred_egress_to_ingress_tcp_test"
NUM_NETIFS=4
source tc_common.sh
source lib.sh
@@ -153,6 +153,53 @@
log_test "trap ($tcflags)"
}
+mirred_egress_to_ingress_tcp_test()
+{
+ local tmpfile=$(mktemp) tmpfile1=$(mktemp)
+
+ RET=0
+ dd conv=sparse status=none if=/dev/zero bs=1M count=2 of=$tmpfile
+ tc filter add dev $h1 protocol ip pref 100 handle 100 egress flower \
+ $tcflags ip_proto tcp src_ip 192.0.2.1 dst_ip 192.0.2.2 \
+ action ct commit nat src addr 192.0.2.2 pipe \
+ action ct clear pipe \
+ action ct commit nat dst addr 192.0.2.1 pipe \
+ action ct clear pipe \
+ action skbedit ptype host pipe \
+ action mirred ingress redirect dev $h1
+ tc filter add dev $h1 protocol ip pref 101 handle 101 egress flower \
+ $tcflags ip_proto icmp \
+ action mirred ingress redirect dev $h1
+ tc filter add dev $h1 protocol ip pref 102 handle 102 ingress flower \
+ ip_proto icmp \
+ action drop
+
+ ip vrf exec v$h1 nc --recv-only -w10 -l -p 12345 -o $tmpfile1 &
+ local rpid=$!
+ ip vrf exec v$h1 nc -w1 --send-only 192.0.2.2 12345 <$tmpfile
+ wait -n $rpid
+ cmp -s $tmpfile $tmpfile1
+ check_err $? "server output check failed"
+
+ $MZ $h1 -c 10 -p 64 -a $h1mac -b $h1mac -A 192.0.2.1 -B 192.0.2.1 \
+ -t icmp "ping,id=42,seq=5" -q
+ tc_check_packets "dev $h1 egress" 101 10
+ check_err $? "didn't mirred redirect ICMP"
+ tc_check_packets "dev $h1 ingress" 102 10
+ check_err $? "didn't drop mirred ICMP"
+ local overlimits=$(tc_rule_stats_get ${h1} 101 egress .overlimits)
+ test ${overlimits} = 10
+ check_err $? "wrong overlimits, expected 10 got ${overlimits}"
+
+ tc filter del dev $h1 egress protocol ip pref 100 handle 100 flower
+ tc filter del dev $h1 egress protocol ip pref 101 handle 101 flower
+ tc filter del dev $h1 ingress protocol ip pref 102 handle 102 flower
+
+ rm -f $tmpfile $tmpfile1
+ log_test "mirred_egress_to_ingress_tcp ($tcflags)"
+}
+
setup_prepare()
{
h1=${NETIFS[p1]}
diff --git a/tools/testing/selftests/net/toeplitz.c b/tools/testing/selftests/net/toeplitz.c
index c548934..8ce9602 100644
--- a/tools/testing/selftests/net/toeplitz.c
+++ b/tools/testing/selftests/net/toeplitz.c
@@ -213,7 +213,7 @@ static char *recv_frame(const struct ring_state *ring, char *frame)
}
/* A single TPACKET_V3 block can hold multiple frames */
-static void recv_block(struct ring_state *ring)
+static bool recv_block(struct ring_state *ring)
{
struct tpacket_block_desc *block;
char *frame;
@@ -221,7 +221,7 @@ static void recv_block(struct ring_state *ring)
block = (void *)(ring->mmap + ring->idx * ring_block_sz);
if (!(block->hdr.bh1.block_status & TP_STATUS_USER))
- return;
+ return false;
frame = (char *)block;
frame += block->hdr.bh1.offset_to_first_pkt;
@@ -233,6 +233,8 @@ static void recv_block(struct ring_state *ring)
block->hdr.bh1.block_status = TP_STATUS_KERNEL;
ring->idx = (ring->idx + 1) % ring_block_nr;
+
+ return true;
}
/* simple test: sleep once unconditionally and then process all rings */
@@ -243,7 +245,7 @@ static void process_rings(void)
usleep(1000 * cfg_timeout_msec);
for (i = 0; i < num_cpus; i++)
- recv_block(&rings[i]);
+ do {} while (recv_block(&rings[i]));
fprintf(stderr, "count: pass=%u nohash=%u fail=%u\n",
frames_received - frames_nohash - frames_error,
@@ -255,12 +257,12 @@ static char *setup_ring(int fd)
struct tpacket_req3 req3 = {0};
void *ring;
- req3.tp_retire_blk_tov = cfg_timeout_msec;
+ req3.tp_retire_blk_tov = cfg_timeout_msec / 8;
req3.tp_feature_req_word = TP_FT_REQ_FILL_RXHASH;
req3.tp_frame_size = 2048;
req3.tp_frame_nr = 1 << 10;
- req3.tp_block_nr = 2;
+ req3.tp_block_nr = 16;
req3.tp_block_size = req3.tp_frame_size * req3.tp_frame_nr;
req3.tp_block_size /= req3.tp_block_nr;
diff --git a/tools/testing/selftests/net/udpgso_bench.sh b/tools/testing/selftests/net/udpgso_bench.sh
index dc932fd..640bc43 100755
--- a/tools/testing/selftests/net/udpgso_bench.sh
+++ b/tools/testing/selftests/net/udpgso_bench.sh
@@ -7,6 +7,7 @@
readonly YELLOW='\033[0;33m'
readonly RED='\033[0;31m'
readonly NC='\033[0m' # No Color
+readonly TESTPORT=8000
readonly KSFT_PASS=0
readonly KSFT_FAIL=1
@@ -56,11 +57,26 @@
run_one() {
local -r args=$@
+ local nr_socks=0
+ local i=0
+ local -r timeout=10
- ./udpgso_bench_rx &
- ./udpgso_bench_rx -t &
+ ./udpgso_bench_rx -p "$TESTPORT" &
+ ./udpgso_bench_rx -p "$TESTPORT" -t &
- ./udpgso_bench_tx ${args}
+ # Wait for the above test program to get ready to receive connections.
+ while [ "$i" -lt "$timeout" ]; do
+ nr_socks="$(ss -lnHi | grep -c "\*:${TESTPORT}")"
+ [ "$nr_socks" -eq 2 ] && break
+ i=$((i + 1))
+ sleep 1
+ done
+ if [ "$nr_socks" -ne 2 ]; then
+ echo "timed out while waiting for udpgso_bench_rx"
+ exit 1
+ fi
+
+ ./udpgso_bench_tx -p "$TESTPORT" ${args}
}
run_in_netns() {
diff --git a/tools/testing/selftests/net/udpgso_bench_rx.c b/tools/testing/selftests/net/udpgso_bench_rx.c
index 6a19342..f35a924 100644
--- a/tools/testing/selftests/net/udpgso_bench_rx.c
+++ b/tools/testing/selftests/net/udpgso_bench_rx.c
@@ -214,11 +214,10 @@ static void do_verify_udp(const char *data, int len)
static int recv_msg(int fd, char *buf, int len, int *gso_size)
{
- char control[CMSG_SPACE(sizeof(uint16_t))] = {0};
+ char control[CMSG_SPACE(sizeof(int))] = {0};
struct msghdr msg = {0};
struct iovec iov = {0};
struct cmsghdr *cmsg;
- uint16_t *gsosizeptr;
int ret;
iov.iov_base = buf;
@@ -237,8 +236,7 @@ static int recv_msg(int fd, char *buf, int len, int *gso_size)
cmsg = CMSG_NXTHDR(&msg, cmsg)) {
if (cmsg->cmsg_level == SOL_UDP
&& cmsg->cmsg_type == UDP_GRO) {
- gsosizeptr = (uint16_t *) CMSG_DATA(cmsg);
- *gso_size = *gsosizeptr;
+ *gso_size = *(int *)CMSG_DATA(cmsg);
break;
}
}
@@ -250,7 +248,7 @@ static int recv_msg(int fd, char *buf, int len, int *gso_size)
static void do_flush_udp(int fd)
{
static char rbuf[ETH_MAX_MTU];
- int ret, len, gso_size, budget = 256;
+ int ret, len, gso_size = 0, budget = 256;
len = cfg_read_all ? sizeof(rbuf) : 0;
while (budget--) {
@@ -336,6 +334,8 @@ static void parse_opts(int argc, char **argv)
cfg_verify = true;
cfg_read_all = true;
break;
+ default:
+ exit(1);
}
}
diff --git a/tools/testing/selftests/net/udpgso_bench_tx.c b/tools/testing/selftests/net/udpgso_bench_tx.c
index f1fdaa2..4773927 100644
--- a/tools/testing/selftests/net/udpgso_bench_tx.c
+++ b/tools/testing/selftests/net/udpgso_bench_tx.c
@@ -62,6 +62,7 @@ static int cfg_payload_len = (1472 * 42);
static int cfg_port = 8000;
static int cfg_runtime_ms = -1;
static bool cfg_poll;
+static int cfg_poll_loop_timeout_ms = 2000;
static bool cfg_segment;
static bool cfg_sendmmsg;
static bool cfg_tcp;
@@ -235,16 +236,17 @@ static void flush_errqueue_recv(int fd)
}
}
-static void flush_errqueue(int fd, const bool do_poll)
+static void flush_errqueue(int fd, const bool do_poll,
+ unsigned long poll_timeout, const bool poll_err)
{
if (do_poll) {
struct pollfd fds = {0};
int ret;
fds.fd = fd;
- ret = poll(&fds, 1, 500);
+ ret = poll(&fds, 1, poll_timeout);
if (ret == 0) {
- if (cfg_verbose)
+ if ((cfg_verbose) && (poll_err))
fprintf(stderr, "poll timeout\n");
} else if (ret < 0) {
error(1, errno, "poll");
@@ -254,6 +256,20 @@ static void flush_errqueue(int fd, const bool do_poll)
flush_errqueue_recv(fd);
}
+static void flush_errqueue_retry(int fd, unsigned long num_sends)
+{
+ unsigned long tnow, tstop;
+ bool first_try = true;
+
+ tnow = gettimeofday_ms();
+ tstop = tnow + cfg_poll_loop_timeout_ms;
+ do {
+ flush_errqueue(fd, true, tstop - tnow, first_try);
+ first_try = false;
+ tnow = gettimeofday_ms();
+ } while ((stat_zcopies != num_sends) && (tnow < tstop));
+}
+
static int send_tcp(int fd, char *data)
{
int ret, done = 0, count = 0;
@@ -413,7 +429,8 @@ static int send_udp_segment(int fd, char *data)
static void usage(const char *filepath)
{
- error(1, 0, "Usage: %s [-46acmHPtTuvz] [-C cpu] [-D dst ip] [-l secs] [-M messagenr] [-p port] [-s sendsize] [-S gsosize]",
+ error(1, 0, "Usage: %s [-46acmHPtTuvz] [-C cpu] [-D dst ip] [-l secs] "
+ "[-L secs] [-M messagenr] [-p port] [-s sendsize] [-S gsosize]",
filepath);
}
@@ -423,7 +440,7 @@ static void parse_opts(int argc, char **argv)
int max_len, hdrlen;
int c;
- while ((c = getopt(argc, argv, "46acC:D:Hl:mM:p:s:PS:tTuvz")) != -1) {
+ while ((c = getopt(argc, argv, "46acC:D:Hl:L:mM:p:s:PS:tTuvz")) != -1) {
switch (c) {
case '4':
if (cfg_family != PF_UNSPEC)
@@ -452,6 +469,9 @@ static void parse_opts(int argc, char **argv)
case 'l':
cfg_runtime_ms = strtoul(optarg, NULL, 10) * 1000;
break;
+ case 'L':
+ cfg_poll_loop_timeout_ms = strtoul(optarg, NULL, 10) * 1000;
+ break;
case 'm':
cfg_sendmmsg = true;
break;
@@ -490,6 +510,8 @@ static void parse_opts(int argc, char **argv)
case 'z':
cfg_zerocopy = true;
break;
+ default:
+ exit(1);
}
}
@@ -677,7 +699,7 @@ int main(int argc, char **argv)
num_sends += send_udp(fd, buf[i]);
num_msgs++;
if ((cfg_zerocopy && ((num_msgs & 0xF) == 0)) || cfg_tx_tstamp)
- flush_errqueue(fd, cfg_poll);
+ flush_errqueue(fd, cfg_poll, 500, true);
if (cfg_msg_nr && num_msgs >= cfg_msg_nr)
break;
@@ -696,7 +718,7 @@ int main(int argc, char **argv)
} while (!interrupted && (cfg_runtime_ms == -1 || tnow < tstop));
if (cfg_zerocopy || cfg_tx_tstamp)
- flush_errqueue(fd, true);
+ flush_errqueue_retry(fd, num_sends);
if (close(fd))
error(1, errno, "close");
diff --git a/tools/testing/selftests/netfilter/nft_nat.sh b/tools/testing/selftests/netfilter/nft_nat.sh
index 032f2de..462dc47 100755
--- a/tools/testing/selftests/netfilter/nft_nat.sh
+++ b/tools/testing/selftests/netfilter/nft_nat.sh
@@ -404,6 +404,8 @@
echo SERVER-$family | ip netns exec "$ns1" timeout 5 socat -u STDIN TCP-LISTEN:2000 &
sc_s=$!
+ sleep 1
+
result=$(ip netns exec "$ns0" timeout 1 socat TCP:$daddr:2000 STDOUT)
if [ "$result" = "SERVER-inet" ];then
diff --git a/tools/testing/selftests/vm/mremap_test.c b/tools/testing/selftests/vm/mremap_test.c
index e3ce33a..58775da 100644
--- a/tools/testing/selftests/vm/mremap_test.c
+++ b/tools/testing/selftests/vm/mremap_test.c
@@ -22,7 +22,6 @@
#define VALIDATION_DEFAULT_THRESHOLD 4 /* 4MB */
#define VALIDATION_NO_THRESHOLD 0 /* Verify the entire region */
-#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
struct config {
diff --git a/tools/testing/selftests/vm/pkey-helpers.h b/tools/testing/selftests/vm/pkey-helpers.h
index 622a858..92f3be3 100644
--- a/tools/testing/selftests/vm/pkey-helpers.h
+++ b/tools/testing/selftests/vm/pkey-helpers.h
@@ -13,6 +13,8 @@
#include <ucontext.h>
#include <sys/mman.h>
+#include "../kselftest.h"
+
/* Define some kernel-like types */
#define u8 __u8
#define u16 __u16
@@ -175,7 +177,6 @@ static inline void __pkey_write_allow(int pkey, int do_allow_write)
dprintf4("pkey_reg now: %016llx\n", read_pkey_reg());
}
-#define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x)))
#define ALIGN_UP(x, align_to) (((x) + ((align_to)-1)) & ~((align_to)-1))
#define ALIGN_DOWN(x, align_to) ((x) & ~((align_to)-1))
#define ALIGN_PTR_UP(p, ptr_align_to) \
diff --git a/tools/testing/selftests/vm/va_128TBswitch.c b/tools/testing/selftests/vm/va_128TBswitch.c
index 83acdff..da6ec3b 100644
--- a/tools/testing/selftests/vm/va_128TBswitch.c
+++ b/tools/testing/selftests/vm/va_128TBswitch.c
@@ -9,7 +9,7 @@
#include <sys/mman.h>
#include <string.h>
-#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
+#include "../kselftest.h"
#ifdef __powerpc64__
#define PAGE_SIZE (64 << 10)
diff --git a/tools/virtio/linux/bug.h b/tools/virtio/linux/bug.h
index 813baf1..51a919083 100644
--- a/tools/virtio/linux/bug.h
+++ b/tools/virtio/linux/bug.h
@@ -1,13 +1,11 @@
/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef BUG_H
-#define BUG_H
+#ifndef _LINUX_BUG_H
+#define _LINUX_BUG_H
#include <asm/bug.h>
#define BUG_ON(__BUG_ON_cond) assert(!(__BUG_ON_cond))
-#define BUILD_BUG_ON(x)
-
#define BUG() abort()
-#endif /* BUG_H */
+#endif /* _LINUX_BUG_H */
diff --git a/tools/virtio/linux/build_bug.h b/tools/virtio/linux/build_bug.h
new file mode 100644
index 0000000..cdbb75e
--- /dev/null
+++ b/tools/virtio/linux/build_bug.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_BUILD_BUG_H
+#define _LINUX_BUILD_BUG_H
+
+#define BUILD_BUG_ON(x)
+
+#endif /* _LINUX_BUILD_BUG_H */
diff --git a/tools/virtio/linux/cpumask.h b/tools/virtio/linux/cpumask.h
new file mode 100644
index 0000000..307da69
--- /dev/null
+++ b/tools/virtio/linux/cpumask.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_CPUMASK_H
+#define _LINUX_CPUMASK_H
+
+#include <linux/kernel.h>
+
+#endif /* _LINUX_CPUMASK_H */
diff --git a/tools/virtio/linux/gfp.h b/tools/virtio/linux/gfp.h
new file mode 100644
index 0000000..43d146f
--- /dev/null
+++ b/tools/virtio/linux/gfp.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_GFP_H
+#define __LINUX_GFP_H
+
+#include <linux/topology.h>
+
+#endif
diff --git a/tools/virtio/linux/kernel.h b/tools/virtio/linux/kernel.h
index 0b49354..a4beb719 100644
--- a/tools/virtio/linux/kernel.h
+++ b/tools/virtio/linux/kernel.h
@@ -10,6 +10,7 @@
#include <stdarg.h>
#include <linux/compiler.h>
+#include <linux/log2.h>
#include <linux/types.h>
#include <linux/overflow.h>
#include <linux/list.h>
diff --git a/tools/virtio/linux/kmsan.h b/tools/virtio/linux/kmsan.h
new file mode 100644
index 0000000..272b5aa
--- /dev/null
+++ b/tools/virtio/linux/kmsan.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_KMSAN_H
+#define _LINUX_KMSAN_H
+
+#include <linux/gfp.h>
+
+inline void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+ enum dma_data_direction dir)
+{
+}
+
+#endif /* _LINUX_KMSAN_H */
diff --git a/tools/virtio/linux/scatterlist.h b/tools/virtio/linux/scatterlist.h
index 369ee30..74d9e18 100644
--- a/tools/virtio/linux/scatterlist.h
+++ b/tools/virtio/linux/scatterlist.h
@@ -2,6 +2,7 @@
#ifndef SCATTERLIST_H
#define SCATTERLIST_H
#include <linux/kernel.h>
+#include <linux/bug.h>
struct scatterlist {
unsigned long page_link;
diff --git a/tools/virtio/linux/topology.h b/tools/virtio/linux/topology.h
new file mode 100644
index 0000000..910794a
--- /dev/null
+++ b/tools/virtio/linux/topology.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_TOPOLOGY_H
+#define _LINUX_TOPOLOGY_H
+
+#include <linux/cpumask.h>
+
+#endif /* _LINUX_TOPOLOGY_H */
diff --git a/tools/virtio/vringh_test.c b/tools/virtio/vringh_test.c
index fa87b58..98ff808 100644
--- a/tools/virtio/vringh_test.c
+++ b/tools/virtio/vringh_test.c
@@ -308,6 +308,7 @@ static int parallel_test(u64 features,
gvdev.vdev.features = features;
INIT_LIST_HEAD(&gvdev.vdev.vqs);
+ spin_lock_init(&gvdev.vdev.vqs_list_lock);
gvdev.to_host_fd = to_host[1];
gvdev.notifies = 0;
@@ -455,6 +456,7 @@ int main(int argc, char *argv[])
getrange = getrange_iov;
vdev.features = 0;
INIT_LIST_HEAD(&vdev.vqs);
+ spin_lock_init(&vdev.vqs_list_lock);
while (argv[1]) {
if (strcmp(argv[1], "--indirect") == 0)
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 0be80c2..5ef88f5 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -187,15 +187,17 @@ int kvm_vm_ioctl_unregister_coalesced_mmio(struct kvm *kvm,
r = kvm_io_bus_unregister_dev(kvm,
zone->pio ? KVM_PIO_BUS : KVM_MMIO_BUS, &dev->dev);
+ kvm_iodevice_destructor(&dev->dev);
+
/*
* On failure, unregister destroys all devices on the
* bus _except_ the target device, i.e. coalesced_zones
- * has been modified. No need to restart the walk as
- * there aren't any zones left.
+ * has been modified. Bail after destroying the target
+ * device, there's no need to restart the walk as there
+ * aren't any zones left.
*/
if (r)
break;
- kvm_iodevice_destructor(&dev->dev);
}
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3ffed09..1cb2530 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -157,6 +157,8 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm);
static unsigned long long kvm_createvm_count;
static unsigned long long kvm_active_vms;
+static DEFINE_PER_CPU(cpumask_var_t, cpu_kick_mask);
+
__weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
unsigned long start, unsigned long end)
{
@@ -257,50 +259,57 @@ static inline bool kvm_kick_many_cpus(cpumask_var_t tmp, bool wait)
return true;
}
+static void kvm_make_vcpu_request(struct kvm *kvm, struct kvm_vcpu *vcpu,
+ unsigned int req, cpumask_var_t tmp,
+ int current_cpu)
+{
+ int cpu;
+
+ kvm_make_request(req, vcpu);
+
+ if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
+ return;
+
+ /*
+ * tmp can be "unavailable" if cpumasks are allocated off stack as
+ * allocation of the mask is deliberately not fatal and is handled by
+ * falling back to kicking all online CPUs.
+ */
+ if (!cpumask_available(tmp))
+ return;
+
+ /*
+ * Note, the vCPU could get migrated to a different pCPU at any point
+ * after kvm_request_needs_ipi(), which could result in sending an IPI
+ * to the previous pCPU. But, that's OK because the purpose of the IPI
+ * is to ensure the vCPU returns to OUTSIDE_GUEST_MODE, which is
+ * satisfied if the vCPU migrates. Entering READING_SHADOW_PAGE_TABLES
+ * after this point is also OK, as the requirement is only that KVM wait
+ * for vCPUs that were reading SPTEs _before_ any changes were
+ * finalized. See kvm_vcpu_kick() for more details on handling requests.
+ */
+ if (kvm_request_needs_ipi(vcpu, req)) {
+ cpu = READ_ONCE(vcpu->cpu);
+ if (cpu != -1 && cpu != current_cpu)
+ __cpumask_set_cpu(cpu, tmp);
+ }
+}
+
bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
struct kvm_vcpu *except,
unsigned long *vcpu_bitmap, cpumask_var_t tmp)
{
- int i, cpu, me;
struct kvm_vcpu *vcpu;
+ int i, me;
bool called;
me = get_cpu();
- kvm_for_each_vcpu(i, vcpu, kvm) {
- if ((vcpu_bitmap && !test_bit(i, vcpu_bitmap)) ||
- vcpu == except)
+ for_each_set_bit(i, vcpu_bitmap, KVM_MAX_VCPUS) {
+ vcpu = kvm_get_vcpu(kvm, i);
+ if (!vcpu || vcpu == except)
continue;
-
- kvm_make_request(req, vcpu);
-
- if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
- continue;
-
- /*
- * tmp can be "unavailable" if cpumasks are allocated off stack
- * as allocation of the mask is deliberately not fatal and is
- * handled by falling back to kicking all online CPUs.
- */
- if (!cpumask_available(tmp))
- continue;
-
- /*
- * Note, the vCPU could get migrated to a different pCPU at any
- * point after kvm_request_needs_ipi(), which could result in
- * sending an IPI to the previous pCPU. But, that's ok because
- * the purpose of the IPI is to ensure the vCPU returns to
- * OUTSIDE_GUEST_MODE, which is satisfied if the vCPU migrates.
- * Entering READING_SHADOW_PAGE_TABLES after this point is also
- * ok, as the requirement is only that KVM wait for vCPUs that
- * were reading SPTEs _before_ any changes were finalized. See
- * kvm_vcpu_kick() for more details on handling requests.
- */
- if (kvm_request_needs_ipi(vcpu, req)) {
- cpu = READ_ONCE(vcpu->cpu);
- if (cpu != -1 && cpu != me)
- __cpumask_set_cpu(cpu, tmp);
- }
+ kvm_make_vcpu_request(kvm, vcpu, req, tmp, me);
}
called = kvm_kick_many_cpus(tmp, !!(req & KVM_REQUEST_WAIT));
@@ -312,14 +321,25 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req,
struct kvm_vcpu *except)
{
- cpumask_var_t cpus;
+ struct kvm_vcpu *vcpu;
+ struct cpumask *cpus;
bool called;
+ int i, me;
- zalloc_cpumask_var(&cpus, GFP_ATOMIC);
+ me = get_cpu();
- called = kvm_make_vcpus_request_mask(kvm, req, except, NULL, cpus);
+ cpus = this_cpu_cpumask_var_ptr(cpu_kick_mask);
+ cpumask_clear(cpus);
- free_cpumask_var(cpus);
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ if (vcpu == except)
+ continue;
+ kvm_make_vcpu_request(kvm, vcpu, req, cpus, me);
+ }
+
+ called = kvm_kick_many_cpus(cpus, !!(req & KVM_REQUEST_WAIT));
+ put_cpu();
+
return called;
}
@@ -5619,20 +5639,22 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
goto out_free_3;
}
+ for_each_possible_cpu(cpu) {
+ if (!alloc_cpumask_var_node(&per_cpu(cpu_kick_mask, cpu),
+ GFP_KERNEL, cpu_to_node(cpu))) {
+ r = -ENOMEM;
+ goto out_free_4;
+ }
+ }
+
r = kvm_async_pf_init();
if (r)
- goto out_free;
+ goto out_free_4;
kvm_chardev_ops.owner = module;
kvm_vm_fops.owner = module;
kvm_vcpu_fops.owner = module;
- r = misc_register(&kvm_dev);
- if (r) {
- pr_err("kvm: misc device register failed\n");
- goto out_unreg;
- }
-
register_syscore_ops(&kvm_syscore_ops);
kvm_preempt_ops.sched_in = kvm_sched_in;
@@ -5641,13 +5663,28 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
kvm_init_debug();
r = kvm_vfio_ops_init();
- WARN_ON(r);
+ if (WARN_ON_ONCE(r))
+ goto err_vfio;
+
+ /*
+ * Registration _must_ be the very last thing done, as this exposes
+ * /dev/kvm to userspace, i.e. all infrastructure must be setup!
+ */
+ r = misc_register(&kvm_dev);
+ if (r) {
+ pr_err("kvm: misc device register failed\n");
+ goto err_register;
+ }
return 0;
-out_unreg:
+err_register:
+ kvm_vfio_ops_exit();
+err_vfio:
kvm_async_pf_deinit();
-out_free:
+out_free_4:
+ for_each_possible_cpu(cpu)
+ free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
kmem_cache_destroy(kvm_vcpu_cache);
out_free_3:
unregister_reboot_notifier(&kvm_reboot_notifier);
@@ -5667,8 +5704,18 @@ EXPORT_SYMBOL_GPL(kvm_init);
void kvm_exit(void)
{
- debugfs_remove_recursive(kvm_debugfs_dir);
+ int cpu;
+
+ /*
+ * Note, unregistering /dev/kvm doesn't strictly need to come first,
+ * fops_get(), a.k.a. try_module_get(), prevents acquiring references
+ * to KVM while the module is being stopped.
+ */
misc_deregister(&kvm_dev);
+
+ debugfs_remove_recursive(kvm_debugfs_dir);
+ for_each_possible_cpu(cpu)
+ free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
kmem_cache_destroy(kvm_vcpu_cache);
kvm_async_pf_deinit();
unregister_syscore_ops(&kvm_syscore_ops);